title | abstract
---|---
DCE-MRI analysis methods for predicting the response of breast cancer to neoadjuvant chemotherapy: pilot study findings.
|
PURPOSE
The purpose of this pilot study is to determine (1) if early changes in both semiquantitative and quantitative DCE-MRI parameters, observed after the first cycle of neoadjuvant chemotherapy in breast cancer patients, show a significant difference between responders and nonresponders and (2) if these parameters can be used as a prognostic indicator of the eventual response.
METHODS
Twenty-eight patients were examined using DCE-MRI pre-, post-one cycle, and just prior to surgery. The semiquantitative parameters included longest dimension, tumor volume, initial area under the curve, and signal enhancement ratio related parameters, while quantitative parameters included K(trans), v(e), k(ep), v(p), and τ(i) estimated using the standard Tofts-Kety, extended Tofts-Kety, and fast exchange regime models.
RESULTS
Our preliminary results indicated that the signal enhancement ratio washout volume and k(ep) were significantly different between pathologic complete responders and nonresponders (P < 0.05) after a single cycle of chemotherapy. Receiver operating characteristic analysis showed that the AUC of the signal enhancement ratio washout volume was 0.75, and the AUCs of k(ep) estimated by the three models were 0.78, 0.76, and 0.73, respectively.
CONCLUSION
In summary, the signal enhancement ratio washout volume and k(ep) appear to predict breast cancer response after one cycle of neoadjuvant chemotherapy. This observation should be confirmed with additional prospective studies.
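For readers unfamiliar with how AUCs like those reported above are computed, the ROC AUC equals the Mann-Whitney probability that a randomly chosen responder's parameter value exceeds a randomly chosen nonresponder's. A minimal sketch with invented values, not the study's data:

```python
# Mann-Whitney formulation of the ROC AUC: the probability that a
# randomly chosen responder scores higher than a nonresponder
# (ties count as half).
def roc_auc(responders, nonresponders):
    wins = 0.0
    for r in responders:
        for n in nonresponders:
            if r > n:
                wins += 1.0
            elif r == n:
                wins += 0.5
    return wins / (len(responders) * len(nonresponders))

# Hypothetical post-cycle k_ep values, purely illustrative.
resp = [0.9, 0.8, 0.7, 0.6]
nonresp = [0.5, 0.65, 0.4, 0.3]
print(roc_auc(resp, nonresp))
```

An AUC of 0.5 corresponds to chance-level discrimination; 1.0 to perfect separation of the two groups.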
|
Competitive priorities and managerial performance : a taxonomy of small manufacturers
|
Much of the research in manufacturing strategy has focused on specific relationships between a few constructs, with relatively little emphasis on typologies and taxonomies [Bozarth, C., McDermott, C., 1998. Configurations in manufacturing strategy: a review and directions for future research. Journal of Operations Management 16 (4), 427–439]. Using data from 196 respondents in 98 manufacturing units, this study develops a taxonomy of small manufacturers based on their emphasis on several competitive priorities. The annual sales for 64% of the participating units in this study are below US$50 million, which is on the lower side as compared to other studies in this area [cf., Miller, J.G., Roth, A.V., 1994. A taxonomy of manufacturing strategies. Management Science 40 (3), 285–304]. The study findings indicate that different groups of manufacturers (Do All, Speedy Conformers, Efficient Conformers, and Starters) emphasize different sets of competitive priorities, even within the same industry. Further, the Do All types, who emphasize all four competitive priorities, seem to perform better on customer satisfaction than their counterparts in the Starters group. The above findings lend support to the sandcone model but contradict the traditional trade-off model. © 2000 Elsevier Science B.V. All rights reserved.
|
Intimacy of Friendship , Interpersonal Competence , and Adjustment during Preadolescence and Adolescence
|
|
Inferior clinical outcome of the CD4+ cell count-guided antiretroviral treatment interruption strategy in the SMART study: role of CD4+ Cell counts and HIV RNA levels during follow-up.
|
BACKGROUND AND METHODS
The SMART study compared 2 strategies for using antiretroviral therapy, drug conservation (DC) and viral suppression (VS), in 5,472 human immunodeficiency virus (HIV)-infected patients with CD4+ cell counts >350 cells/microL. Rates and predictors of opportunistic disease or death (OD/death) and the relative risk (RR) in the DC versus VS groups according to the latest CD4+ cell count and HIV RNA level are reported.
RESULTS
During a mean of 16 months of follow-up, DC patients spent more time with a latest CD4+ cell count <350 cells/microL (for DC vs. VS, 31% vs. 8%) and with a latest HIV RNA level >400 copies/mL (71% vs. 28%), and had a higher rate of OD/death (3.4 vs. 1.3/100 person-years) than VS patients. For periods of follow-up with a CD4+ cell count <350 cells/microL, rates of OD/death were increased but similar in the 2 groups (5.7 vs. 4.6/100 person-years), whereas the rates were higher in DC versus VS patients (2.3 vs. 1.0/100 person-years; RR, 2.3 [95% confidence interval, 1.5-3.4]) for periods with the latest CD4+ cell count ≥350 cells/microL, an increase explained by the higher HIV RNA levels in the DC group.
CONCLUSIONS
The higher risk of OD/death in DC patients was associated with (1) spending more follow-up time with relative immunodeficiency and (2) living longer with uncontrolled HIV replication even at higher CD4+ cell counts. Ongoing HIV replication at a given CD4+ cell count places patients at an excess risk of OD/death.
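The person-year rates and relative risk quoted above follow from simple arithmetic. A sketch; the event and exposure counts below are invented to reproduce the reported rates, not the study's actual counts:

```python
# Event rate per 100 person-years, as used in the DC-vs-VS comparison.
def rate_per_100py(events, person_years):
    return 100.0 * events / person_years

# Hypothetical counts chosen to match the reported rates for periods
# with the latest CD4+ count >= 350 cells/microL.
dc_rate = rate_per_100py(46, 2000)   # 2.3 per 100 person-years
vs_rate = rate_per_100py(20, 2000)   # 1.0 per 100 person-years
rr = dc_rate / vs_rate               # relative risk (rate ratio)
print(dc_rate, vs_rate, rr)
```

The reported 95% confidence interval additionally requires the underlying event counts, which the abstract does not give, so it is not reproduced here.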
|
DJIA stock selection assisted by neural network
|
This paper presents methodologies to select equities based on soft-computing models which focus on applying fundamental analysis for equities screening. This paper compares the performance of three soft-computing models, namely multi-layer perceptrons (MLP), adaptive neuro-fuzzy inference systems (ANFIS), and the generalized growing and pruning radial basis function (GGAP-RBF) network. It studies their computational time complexity and applies several benchmark metrics to compare their performance, such as generalization rate, recall rate, confusion matrices, and correlation to appreciation. This paper also suggests how equities can be picked systematically by using the relative operating characteristic (ROC) curve. © 2007 Elsevier Ltd. All rights reserved.
|
Really Uncertain Business Cycles
|
We propose uncertainty shocks as a new impulse driving business cycles. First, we demonstrate that uncertainty, measured by a number of proxies, is strongly countercyclical. Second, we build a dynamic stochastic general equilibrium model that extends the benchmark neoclassical growth model along two dimensions. It allows for the existence of heterogeneous firms with non-convex adjustment costs in both capital and labor, and time-variation in uncertainty that is modeled as a change in the variance of innovations to productivity. We find that increases in uncertainty lead to large drops in economic activity. This occurs because a rise in uncertainty makes firms cautious, leading them to pause hiring and investment. It also reduces the reallocation of capital and labor across firms, leading to large falls in productivity growth. Finally, we show that because uncertainty makes firms cautious, it significantly reduces the response of the economy to stimulative policy, leading to pro-cyclical policy multipliers. Keywords: uncertainty, adjustment costs and business cycles. JEL Classification: D92, E22, D8, C23. Disclaimer: Any opinions and conclusions expressed herein are those of the authors and do not necessarily represent the views of the U.S. Census Bureau. All results have been reviewed to ensure that no confidential information is disclosed. We would like to thank our formal discussants Eduardo Engel at the AEA (2009) and Frank Smets at the ECB (2009) for their comments and suggestions. We thank Gadi Barlevy, Patrick Kehoe, and Narayana Kocherlakota for most helpful comments and suggestions. We would like to thank seminar participants at the AEA, ECB, UC-Irvine, IMF, Warwick University, Kellogg School of Management, and the NBER Summer Institute (2009). Correspondence: Nick Bloom, Department of Economics, Stanford University, Stanford, CA 94305, [email protected].
|
A GIS data model for landmark-based pedestrian navigation
|
Landmarks provide the most predominant navigation cue for pedestrian navigation. Very few navigation data models in the geographical information science and transportation communities support modeling of landmarks and use of landmark-based route instructions for pedestrian navigation services. This article proposes a landmark-based pedestrian navigation data model to fill this gap. This data model can model landmarks in several pedestrian navigation scenarios (buildings, open spaces, multimodal transportation systems, and urban streets). This article implements the proposed model in the ArcGIS software environment and demonstrates two typical pedestrian navigation scenarios: (1) a multimodal pedestrian navigation environment involving bus lines, parks, and indoor spaces and (2) a subway system in a metropolitan environment. These two scenarios illustrate the feasibility of the proposed data model in real-world environments. Further improvements of this model could lead to more intuitive and user-friendly landmark-based pedestrian navigation services than the functions supported by current map-based navigation systems.
|
Trustworthiness in electronic commerce: the role of privacy, security, and site attributes
|
While the growth of business-to-consumer electronic commerce seems phenomenal in recent years, several studies suggest that a large number of individuals using the Internet have serious privacy concerns, and that winning public trust is the primary hurdle to continued growth in e-commerce. This research investigated the relative importance, when purchasing goods and services over the Web, of four common trust indices (i.e. (1) third party privacy seals, (2) privacy statements, (3) third party security seals, and (4) security features). The results indicate consumers valued security features significantly more than the three other trust indices. We also investigated the relationship between these trust indices and the consumer’s perceptions of a marketer’s trustworthiness. The findings indicate that consumers’ ratings of trustworthiness of Web merchants did not parallel experts’ evaluation of sites’ use of the trust indices. This study also examined the extent to which consumers are willing to provide private information to electronic and land merchants. The results revealed that when making the decision to provide private information, consumers rely on their perceptions of trustworthiness irrespective of whether the merchant is electronic only or land and electronic. Finally, we investigated the relative importance of three types of Web attributes: security, privacy and pleasure features (convenience, ease of use, cosmetics). Privacy and security features were of lesser importance than pleasure features when considering consumers’ intention to purchase. A discussion of the implications of these results and an agenda for future research are provided. © 2002 Elsevier Science B.V. All rights reserved.
|
Sparse Coding From a Bayesian Perspective
|
Sparse coding is a promising theme in computer vision. Most of the existing sparse coding methods are based on either <i>l</i><sub>0</sub> or <i>l</i><sub>1</sub> penalty, which often leads to unstable solution or biased estimation. This is because of the nonconvexity and discontinuity of the <i>l</i><sub>0</sub> penalty and the over-penalization on the true large coefficients of the <i>l</i><sub>1</sub> penalty. In this paper, sparse coding is interpreted from a novel Bayesian perspective, which results in a new objective function through maximum a posteriori estimation. The obtained solution of the objective function can generate more stable results than the <i>l</i><sub>0</sub> penalty and smaller reconstruction errors than the <i>l</i><sub>1</sub> penalty. In addition, the convergence property of the proposed algorithm for sparse coding is also established. The experiments on applications in single image super-resolution and visual tracking demonstrate that the proposed method is more effective than other state-of-the-art methods.
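To make the l1 bias concrete, here is a minimal ISTA (proximal gradient) sketch of the l1-penalized baseline that the Bayesian formulation above improves on; the data are invented for illustration:

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the l1 penalty: every coefficient is pulled
    # toward zero by lam -- the over-penalization of large true
    # coefficients criticized in the abstract.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(D, y, lam, steps=100):
    # Minimize 0.5*||y - D a||^2 + lam*||a||_1 by proximal gradient (ISTA).
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ a - y)
        a = soft_threshold(a - grad / L, lam / L)
    return a

# With an identity dictionary the solution is exactly soft-thresholding:
# the large coefficient 3.0 is biased down to 2.5 even though it is "true",
# while the small 0.2 is zeroed out.
y = np.array([3.0, 0.2, -1.0])
a = ista(np.eye(3), y, lam=0.5)
print(a)   # [ 2.5  0.  -0.5]
```

The nonconvex alternatives discussed in the paper aim to keep the zeroing behavior while removing this constant shrinkage of large coefficients.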
|
Abnormal Structure and Function of the Esophagogastric Junction and Proximal Stomach in Gastroesophageal Reflux Disease
|
OBJECTIVES
This study applies concurrent magnetic resonance imaging (MRI) and high-resolution manometry (HRM) to test the hypothesis that structural factors involved in reflux protection, in particular, the acute insertion angle of the esophagus into the stomach, are impaired in gastroesophageal reflux disease (GERD) patients.
METHODS
A total of 24 healthy volunteers and 24 patients with mild-moderate GERD ingested a test meal. Three-dimensional models of the esophagogastric junction (EGJ) were reconstructed from MRI images. Measurements of the esophagogastric insertion angle, gastric orientation, and volume change were obtained. Esophageal function was assessed by HRM. The number of reflux events and EGJ opening during reflux events were assessed by HRM and cine-MRI. Statistical analysis applied mixed-effects modeling.
RESULTS
The esophagogastric insertion angle was wider in GERD patients than in healthy subjects (+7°±3°; P=0.03). EGJ opening during reflux events was greater in GERD patients than in healthy subjects (19.3 mm vs. 16.8 mm; P=0.04). The position of insertion and gastric orientation within the abdomen were also altered (both P<0.05). The median number of reflux events was 3 (95% CI: 2.5–4.6) in GERD patients and 2 (95% CI: 1.8–3.3) in healthy subjects (P=0.09). Lower esophageal sphincter (LES) pressure was lower (−11±2 mm Hg; P<0.0001) and intra-abdominal LES length was shorter (−1.0±0.3 cm; P<0.0006) in GERD patients.
CONCLUSIONS
GERD patients have a wider esophagogastric insertion angle and altered gastric morphology: structural changes that could compromise reflux protection by the “flap valve” mechanism. In addition, the EGJ opens wider during reflux in GERD patients than in healthy volunteers: an effect that facilitates volume reflux of gastric contents.
|
Evaluation of the effect of D-002, a mixture of beeswax alcohols, on osteoarthritis symptoms
|
BACKGROUND/AIMS
Nonsteroidal anti-inflammatory drugs relieve osteoarthritis (OA) symptoms but cause adverse effects. D-002, a mixture of beeswax alcohols, is effective against experimental OA. A pilot study found that D-002 (50 mg/day) for 8 weeks improves OA symptoms. The aim of this study was to investigate the effects of D-002 (50 to 100 mg/day) administered for 6 weeks on OA symptoms.
METHODS
Patients with OA symptoms were double-blindly randomized to D-002 (50 mg) or placebo for 6 weeks. Symptoms were assessed by the Western Ontario and McMaster Individual Osteoarthritis Index (WOMAC) and the visual analog scale (VAS) scores. Patients without symptom improvement at week 3 were titrated to two daily tablets. The primary outcome was the total WOMAC score. WOMAC pain, joint stiffness and physical function scores, VAS score, and use of rescue medications were secondary outcomes.
RESULTS
All randomized patients (n = 60) completed the study, and 23 experienced dose titration (two in the D-002 group and 21 in the placebo group). At study completion, D-002 reduced the total WOMAC score (65.4%), the WOMAC pain (54.9%), joint stiffness (76.8%), and physical function (66.9%) scores, and the VAS score (46.8%) versus placebo. These reductions were significant beginning in the second week and became more pronounced during the trial. The use of rescue medication in the D-002 group (6/30) was lower than that in the placebo group (17/30). The treatment was well tolerated. Seven patients (two in the D-002 group and five in the placebo group) reported adverse events.
CONCLUSIONS
These results indicate that D-002 (50 to 100 mg/day) for 6 weeks ameliorated arthritic symptoms and was well tolerated.
|
FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification
|
Person re-identification (reID) is an important task that requires retrieving a person’s images from an image dataset, given one image of the person of interest. For learning robust person features, the pose variation of person images is one of the key challenges. Existing works targeting the problem either perform human alignment or learn human-region-based representations. Extra pose information and computational cost are generally required for inference. To solve this issue, a Feature Distilling Generative Adversarial Network (FD-GAN) is proposed for learning identity-related and pose-unrelated representations. It is a novel framework based on a Siamese structure with multiple novel discriminators on human poses and identities. In addition to the discriminators, a novel same-pose loss is also integrated, which requires the appearance of a same person’s generated images to be similar. After learning pose-unrelated person features with pose guidance, no auxiliary pose information or additional computational cost is required during testing. Our proposed FD-GAN achieves state-of-the-art performance on three person reID datasets, which demonstrates the effectiveness and robust feature distilling capability of the proposed FD-GAN.
|
Unsupervised Graph-based Word Sense Disambiguation Using lexical relation of WordNet
|
Word Sense Disambiguation (WSD) is a Natural Language Processing task that identifies the sense of a word in context. Many approaches can be used to select the correct sense. This paper uses a tree- and graph-connectivity structure for finding the correct senses. The algorithm has few parameters and does not require sense-annotated data for training. Performance evaluation on standard datasets showed that it achieves better accuracy than many previous graph-based algorithms and decreases elapsed time.
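A toy sketch of connectivity-based sense selection in this spirit. The graph below is an invented adjacency list, not real WordNet data; in practice the edges come from WordNet lexical relations (hypernymy, similarity, etc.), and scoring can use richer connectivity measures than plain degree:

```python
# Invented sense graph linking candidate senses of "bank" to the senses
# of surrounding context words.
graph = {
    "bank#finance": ["money#1", "loan#1", "deposit#1"],
    "bank#river":   ["water#1"],
    "money#1":   ["bank#finance", "loan#1"],
    "loan#1":    ["bank#finance", "money#1"],
    "deposit#1": ["bank#finance"],
    "water#1":   ["bank#river"],
}

def best_sense(candidates, graph):
    # Degree centrality: the candidate sense most connected to the
    # senses of surrounding context words wins.
    return max(candidates, key=lambda s: len(graph.get(s, [])))

print(best_sense(["bank#finance", "bank#river"], graph))  # bank#finance
```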
|
Word embeddings for idiolect identification
|
The term idiolect refers to the unique and distinctive use of language of an individual and it is the theoretical foundation of Authorship Attribution. In this paper we are focusing on learning distributed representations (embeddings) of social media users that reflect their writing style. These representations can be considered as stylistic fingerprints of the authors. We are exploring the performance of the two main flavours of distributed representations, namely embeddings produced by Neural Probabilistic Language models (such as word2vec) and matrix factorization (such as GloVe).
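As an illustration of the fingerprint idea, a user's representation can be taken as the average of their word or post vectors and compared by cosine similarity. The tiny two-dimensional vectors below are invented stand-ins for word2vec or GloVe embeddings:

```python
import math

# A user's stylistic fingerprint: the mean of their embedding vectors.
def mean_vector(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

# Cosine similarity between two fingerprints.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

user_a = [[1.0, 0.0], [0.8, 0.2]]   # posts by author A (toy embeddings)
user_b = [[0.9, 0.1]]               # disputed post
user_c = [[0.0, 1.0]]               # posts by author C
fp_a = mean_vector(user_a)
# Attribution: the disputed post is closer to A's fingerprint than C's.
print(cosine(fp_a, mean_vector(user_b)) > cosine(fp_a, mean_vector(user_c)))  # True
```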
|
The Polar Parallel Coordinates Method for Time-Series Data Visualization
|
Parallel coordinates is a very important visualization method, but the number of dimensions it can express is limited by the length or width of the screen. In this paper, we present the polar parallel coordinates method. First, we define polar parallel coordinates as a coordinate system whose axes are separated by equal angles, and show the expression of the axes. We give algorithms for generating the polar parallel coordinates and for displaying data in them. Then we point out two features of polar parallel coordinates: they can express more axes than parallel coordinates, and they reflect the characteristics of time-series data. Based on this, we compare parallel coordinates with polar parallel coordinates on time-series data visualization.
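One plausible reading of the construction can be sketched as follows; the equal-angle axis placement and the radius-encodes-value convention below are our assumptions for illustration, not the paper's exact formulas:

```python
import math

# Place k axes at equal angular intervals around the pole.
def axis_angles(k):
    return [2 * math.pi * i / k for i in range(k)]

# Map one record (values assumed normalized to [0, 1]) to one vertex
# per axis; connecting the vertices gives the record's polyline.
def to_polyline(record, max_radius=1.0):
    pts = []
    for value, theta in zip(record, axis_angles(len(record))):
        r = value * max_radius
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

pts = to_polyline([1.0, 0.5, 0.25, 1.0])
print(len(pts))   # 4 vertices, one per axis
```

Because the axes wrap around the pole rather than marching across the screen, the number of axes is bounded by angular resolution instead of screen width, which is the advantage claimed above.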
|
Fast Deep Matting for Portrait Animation on Mobile Phone
|
Image matting plays an important role in image and video editing. However, the formulation of image matting is inherently ill-posed. Traditional methods usually employ interaction, such as trimaps and strokes, to deal with the image matting problem, and cannot run on a mobile phone in real time. In this paper, we propose a real-time automatic deep matting approach for mobile devices. By leveraging densely connected blocks and dilated convolution, a light fully convolutional network is designed to predict a coarse binary mask for the portrait image. A feathering block, which is edge-preserving and matting-adaptive, is further developed to learn the guided filter and transform the binary mask into an alpha matte. Finally, an automatic portrait animation system based on fast deep matting is built on mobile devices, which does not need any interaction and can realize real-time matting at 15 fps. The experiments show that the proposed approach achieves comparable results with the state-of-the-art matting solvers.
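A crude stand-in for the feathering step: smoothing a coarse binary mask into a soft alpha matte with a fixed box filter. The paper's feathering block is a learned, edge-preserving guided filter; this 1-D blur only illustrates the mask-to-matte transformation, not the actual method:

```python
# Box-blur a 1-D binary mask into fractional alpha values.
def box_blur_1d(mask, radius=1):
    n = len(mask)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(mask[lo:hi]) / (hi - lo))
    return out

coarse = [0, 0, 1, 1, 1, 0]          # coarse portrait mask (1 = person)
alpha = box_blur_1d(coarse)
print([round(a, 2) for a in alpha])  # [0.0, 0.33, 0.67, 1.0, 0.67, 0.5]
```

The soft values at the boundary are what allow hair and edges to composite smoothly, which a hard binary mask cannot do.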
|
A comparative analysis of the Iris iQ200 with manual microscopy as a diagnostic tool for dysmorphic erythrocytes in urine.
|
Microscopic analysis of erythrocytes in urine is a valuable diagnostic tool for identifying glomerular hematuria. Indicative of glomerular hematuria is the presence of erythrocyte casts and poly- and dysmorphic erythrocytes. In contrast, in non-glomerular hematuria, urine sediment erythrocytes are mono- and isomorphic, and erythrocyte casts are absent (1, 2). To date, various variant forms of dysmorphic erythrocyte morphology have been defined and classified. They are categorized as D1, D2, and D3 cells (2). D1 and D2 cells are also referred to as acanthocytes or G1 cells, which are Mickey Mouse-like cells with membrane protrusions and severe (D1) to mild (D2) loss of cytoplasmic color (2). D3 cells are doughnut-like or other poly- and dysmorphic forms that include discocytes, knizocytes, anulocytes, stomatocytes, codocytes, and schizocytes (2, 3). The cellular morphology of these cells is observed to have mild cytoplasmic loss and symmetrically shaped membranes free of protrusions. Echinocytes and pseudo-acanthocytes (bite cells) are not considered to be dysmorphic erythrocytes. Glomerular hematuria is likely if more than 40% of erythrocytes are dysmorphic or 5% are D1-D2 cells, and nephrologic work-up should then be considered (2). For over 20 years, manual microscopy has been the prevailing technique for examining dysmorphic erythrocytes in urine sediments when glomerular pathology is suspected (4, 5). This labor-intensive method requires significant expertise and experience to ensure consistent and accurate analysis. A more immediate and definitive automated technique that classifies dysmorphic erythrocytes at least as well as the manual method would be an invaluable asset in routine clinical laboratory practice. Therefore, the aim of the study was to investigate the use of the Iris Diagnostics automated iQ200 (Instrumentation Laboratory, Brussels, Belgium) as an automated platform for screening of dysmorphic erythrocytes.
The iQ200 has proven to be an efficient and reliable asset for our urinalysis (5), but has not been used for the quantification of dysmorphic erythrocytes. In total, 207 urine specimens of patients with suspected glomerular pathology were initially examined using manual phase contrast microscopy by two independent experienced laboratory technicians at a university medical center. The same specimens were re-evaluated using the Iris iQ200 instrument at our facility, which is a teaching hospital. The accuracy of the iQ200 was compared to the results of manual microscopy for detecting dysmorphic erythrocytes. Urine samples were processed within 2 h of voiding. Upon receipt, uncentrifuged urine samples were used for strip analysis using the AutionMax Urine Analyzer (Menarini, Valkenswaard, The Netherlands). For analysis of dysmorphic erythrocytes, 20 mL urine was fixed with CellFIX™ (a formaldehyde-containing fixative solution; BD Biosciences, Breda, The Netherlands) at a dilution of 100:1 (6). One half of the fixed urine was centrifuged at 500 × g for 10 min and the pellet analyzed by two independent experienced technicians using phase-contrast microscopy. The other half was analyzed by automated urine sediment analyzer using the iQ200. The iQ200 uses a flow cell that hydrodynamically orients the particles within the focal plane of a microscopic lens coupled to a 1.3 megapixel CCD digital camera. Each particle image is digitized and sent to the instrument processor. For our study, the instrument’s cell-recognition function for classifying erythrocytes was used. Although the iQ200 can easily recognize and classify normal erythrocytes, it cannot automatically classify dysmorphic erythrocytes. Instead, two independent and experienced technicians review the images in the categories ‘normal erythrocytes’ and ‘unclassified’ and reclassify dysmorphic erythrocytes to a separate ‘dysmorphic’ category. To minimize
*Corresponding author: Ayşe Y. Demir, MD, PhD, Department of Clinical Chemistry and Haematology, Meander Medical Center, Utrechtseweg 160, 3818 ES Amersfoort, The Netherlands. Phone: +31 33 8504344, Fax: +31 33 8502035, E-mail: [email protected]. Received September 20, 2011; accepted November 15, 2011; previously published online December 7, 2011
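The screening threshold quoted in the text above (glomerular hematuria is likely if more than 40% of erythrocytes are dysmorphic or 5% are D1-D2 cells) can be written as a one-line rule; reading the 5% cutoff as "at least 5%" is our assumption:

```python
# Decision rule for suspecting glomerular hematuria from an erythrocyte
# differential count. Threshold semantics (> 40%, >= 5%) are an
# interpretation of the cited criteria, not validated cutoff logic.
def glomerular_hematuria_likely(total, dysmorphic, d1_d2):
    return (dysmorphic / total > 0.40) or (d1_d2 / total >= 0.05)

print(glomerular_hematuria_likely(100, 45, 2))   # True  (45% dysmorphic)
print(glomerular_hematuria_likely(100, 10, 6))   # True  (6% D1-D2 cells)
print(glomerular_hematuria_likely(100, 10, 2))   # False
```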
|
Feasibility and acceptability of a school-based coping intervention for Latina adolescents.
|
Latino girls (Latinas) experience disproportionate rates of emotional distress, including suicidal ideation, which may be indicative of inadequate coping abilities. Prevention of mental health problems, a U.S. public health priority, is particularly critical for Latina adolescents due to lack of access to mental health treatments. The purpose of this study was to examine the feasibility of Project Wings, a 14-session stress management/coping intervention. Latinas in school (ages 15-21) met weekly for 2 hours with two bilingual experienced facilitators to participate in sharing circles, relaxation exercises, and skill building. Intervention participation and post-intervention focus group data were analyzed. The fall semester intervention (n = 10) occurred during school (72% attendance rate); the spring semester intervention (n = 11) was after school (84% attendance rate). Focus group data confirmed acceptability. Latina adolescents will participate in a school-based, group-based stress management/coping intervention. The findings offer insights about intervention recruitment and retention that are specifically relevant to school nurses. Future research includes intervention testing using a randomized study design.
|
Treatment planning for volumetric modulated arc therapy.
|
PURPOSE
Volumetric modulated arc therapy (VMAT) is a specific type of intensity-modulated radiation therapy (IMRT) in which the gantry speed, multileaf collimator (MLC) leaf position, and dose rate vary continuously during delivery. A treatment planning system for VMAT is presented.
METHODS
Arc control points are created uniformly throughout one or more arcs. An iterative least-squares algorithm is used to generate a fluence profile at every control point. The control points are then grouped and all of the control points in a given group are used to approximate the fluence profiles. A direct-aperture optimization is then used to improve the solution, taking into account the allowed range of leaf motion of the MLC. Dose is calculated using a fast convolution algorithm and the motion between control points is approximated by 100 interpolated dose calculation points. The method has been applied to five cases, consisting of lung, rectum, prostate and seminal vesicles, prostate and pelvic lymph nodes, and head and neck. The resulting plans have been compared with segmental (step-and-shoot) IMRT and delivered and verified on an Elekta Synergy to ensure practicality.
RESULTS
For the lung, prostate and seminal vesicles, and rectum cases, VMAT provides a plan of similar quality to segmental IMRT but with faster delivery by up to a factor of 4. For the prostate and pelvic nodes and head-and-neck cases, the critical structure doses are reduced with VMAT, both of these cases having a longer delivery time than IMRT. The plans in general verify successfully, although the agreement between planned and measured doses is not very close for the more complex cases, particularly the head-and-neck case.
CONCLUSIONS
Depending upon the emphasis in the treatment planning, VMAT provides treatment plans which are higher in quality and/or faster to deliver than IMRT. The scheme described has been successfully introduced into clinical use.
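The methods above note that machine motion between VMAT control points is approximated by 100 interpolated dose calculation points. A sketch of that idea for a single MLC leaf; the linear interpolation and the specific positions below are illustrative assumptions, not the planning system's actual machine model:

```python
# Linearly interpolate a leaf position between two control points,
# producing the intermediate positions at which dose is accumulated.
def interpolate_positions(p_start, p_end, n_points=100):
    return [p_start + (p_end - p_start) * i / (n_points - 1)
            for i in range(n_points)]

# Hypothetical leaf travel from -2 cm to 3 cm between control points.
leaf = interpolate_positions(-2.0, 3.0, n_points=100)
print(len(leaf), leaf[0], leaf[-1])   # 100 -2.0 3.0
```

In a full implementation, gantry angle and dose rate would be interpolated in the same way, so that the delivered dynamic arc is well approximated by the discrete control-point plan.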
|
18-year experience in the management of men with a complaint of a small penis.
|
In many cultures, the erect penis has been a symbol of masculine qualities. Because of this symbolism, a penis that is less than average size can cause insecurity or embarrassment. This series reports the authors' 18-year experience in the management of 60 men with a complaint of a small penis. For 44 of these 60 men, counseling was sufficient; the other 16 had surgery, and of these, 9 were satisfied with the result. Despite limitations, the authors conclude that those men who already achieve a penis length of no less than 7.5 cm (2.95 in) in erection, have only limited benefit from penis-enhancing surgery. This particular patient category should therefore be dissuaded from surgery.
|
Putting people in the map: anthropogenic biomes of the world
|
© The Ecological Society of America www.frontiersinecology.org
Humans have long distinguished themselves from other species by shaping ecosystem form and process using tools and technologies, such as fire, that are beyond the capacity of other organisms (Smith 2007). This exceptional ability for ecosystem engineering has helped to sustain unprecedented human population growth over the past half century, to such an extent that humans now consume about one-third of all terrestrial net primary production (NPP; Vitousek et al. 1986; Imhoff et al. 2004) and move more earth and produce more reactive nitrogen than all other terrestrial processes combined (Galloway 2005; Wilkinson and McElroy 2007). Humans are also causing global extinctions (Novacek and Cleland 2001) and changes in climate that are comparable to any observed in the natural record (Ruddiman 2003; IPCC 2007). Clearly, Homo sapiens has emerged as a force of nature rivaling climatic and geologic forces in shaping the terrestrial biosphere and its processes. Biomes are the most basic units that ecologists use to describe global patterns of ecosystem form, process, and biodiversity. Historically, biomes have been identified and mapped based on general differences in vegetation type associated with regional variations in climate (Udvardy 1975; Matthews 1983; Prentice et al. 1992; Olson et al. 2001; Bailey 2004). Now that humans have restructured the terrestrial biosphere for agriculture, forestry, and other uses, global patterns of species composition and abundance, primary productivity, land-surface hydrology, and the biogeochemical cycles of carbon, nitrogen, and phosphorus have all been substantially altered (Matson et al. 1997; Vitousek et al. 1997; Foley et al. 2005). Indeed, recent studies indicate that human-dominated ecosystems now cover more of Earth’s land surface than do “wild” ecosystems (McCloskey and Spalding 1989; Vitousek et al. 1997; Sanderson et al. 2002; Mittermeier et al. 2003; Foley et al. 2005). It is therefore surprising that existing descriptions of biome systems either ignore human influence altogether or describe it using at most four anthropogenic ecosystem classes (urban/built-up, cropland, and one or two cropland/natural vegetation mosaic(s); classification systems include IGBP, Loveland et al. 2000; “Olson Biomes”, Olson et al. 2001; GLC 2000, Bartholome and Belward 2005; and GLOBCOVER, Defourny et al. 2006). Here, we present an alternate view of the terrestrial biosphere, based on an empirical analysis of global patterns of sustained direct human interaction with ecosystems, yielding a global map of “anthropogenic biomes”. We then examine the potential of anthropogenic biomes to serve as a new global framework for ecology.
|
Solving Interleaved and Blended Sequential Decision-Making Problems through Modular Neuroevolution
|
Many challenging sequential decision-making problems require agents to master multiple tasks, such as defense and offense in many games. Learning algorithms thus benefit from having separate policies for these tasks, and from knowing when each one is appropriate. How well the methods work depends on the nature of the tasks: Interleaved tasks are disjoint and have different semantics, whereas blended tasks have regions where semantics from different tasks overlap. While many methods work well in interleaved tasks, blended tasks are difficult for methods with strict, human-specified task divisions, such as Multitask Learning. In such problems, task divisions should be discovered automatically. To demonstrate the power of this approach, the MM-NEAT neuroevolution framework is applied in this paper to two variants of the challenging video game of Ms. Pac-Man. In the simplified interleaved version of the game, the results demonstrate when and why such machine-discovered task divisions are useful. In the standard blended version of the game, a surprising, highly effective machine-discovered task division surpasses human-specified divisions, achieving the best scores to date in this game. Modular neuroevolution is thus a promising technique for discovering multimodal behavior for challenging real-world tasks.
|
Improved Performance of Serially Connected Li-Ion Batteries With Active Cell Balancing in Electric Vehicles
|
This paper presents an active cell balancing method for lithium-ion battery stacks using a flyback dc/dc converter topology. The method is described in detail, and a simulation is performed to estimate the energy gain for ten serially connected cells during one discharging cycle. The simulation is validated with measurements on a balancing prototype with ten cells. It is then shown how voltage-based active balancing can be improved by using the capacity and the state of charge, rather than the voltage, as the balancing criterion. For both charging and discharging, performance improves when the state of charge and the capacity of the cells are available as balancing information. A battery stack with three single cells is modeled, and a realistic driving cycle is applied to compare the difference between the two methods in terms of usable energy. The simulations are also validated with measurements.
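As a minimal sketch of the SOC/capacity balancing criterion described above (function name and the cell values are illustrative, not taken from the paper), the cell with the least remaining charge is selected as the transfer target:

```python
def balancing_target(socs, capacities):
    """Pick the cell to receive charge: the one with the least remaining
    charge (SOC * capacity), per the SOC/capacity criterion rather than
    the cell voltage."""
    remaining = [s * c for s, c in zip(socs, capacities)]
    return remaining.index(min(remaining))

# three cells: SOC in [0, 1], capacity in Ah (illustrative values)
# remaining charge: 1.600, 1.575, 1.558 Ah
idx = balancing_target([0.80, 0.75, 0.82], [2.0, 2.1, 1.9])
```

A voltage-based criterion would pick the cell with the lowest terminal voltage instead, which can mis-rank cells whose capacities differ.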
|
The Relevance of Data Warehousing and Data Mining in the Field of Evidence-based Medicine to Support Healthcare Decision Making
|
Evidence-based medicine is a new direction in modern healthcare. Its task is to prevent, diagnose and medicate diseases using medical evidence. Medical data about a large patient population is analyzed to perform healthcare management and medical research. In order to obtain the best evidence for a given disease, external clinical expertise as well as internal clinical experience must be available to healthcare practitioners at the right time and in the right manner. External evidence-based knowledge cannot be applied directly to the patient without adjusting it to the patient’s health condition. We propose a data warehouse based approach as a suitable solution for the integration of external evidence-based data sources into the existing clinical information system, and data mining techniques for finding the appropriate therapy for a given patient and a given disease. Through the integration of data warehousing, OLAP and data mining techniques in the healthcare area, an easy-to-use decision support platform, which supports the decision making process of care givers and clinical managers, is built. We present three case studies, which show that a clinical data warehouse that facilitates evidence-based medicine is a reliable, powerful and user-friendly platform for strategic decision making, and has great relevance for the practice and acceptance of evidence-based medicine. Keywords—data mining, data warehousing, decision-support systems, evidence-based medicine.
|
THE DEVELOPMENT OF THE BALANCED SCORECARD AS A STRATEGIC MANAGEMENT TOOL
|
The Balanced Scorecard is a widely adopted performance management framework first described in the early 1990s. More recently it has been proposed as the basis for a ‘strategic management system’. This paper describes its evolution, recognising three distinct generations of Balanced Scorecard design. The paper relates the empirically driven developments in Balanced Scorecard thinking with literature concerning strategic management within organisations. It concludes that developments to date have been worthwhile, highlights potential areas for further refinement, and sets out some possible topics for future research into the field. The Balanced Scorecard and its development. The Balanced Scorecard was first introduced in the early 1990s through the work of Robert Kaplan and David Norton of the Harvard Business School. Since then, the concept has become well known and its various forms widely adopted across the world (Rigby, 2001). By combining financial measures and non-financial measures in a single report, the Balanced Scorecard aims to provide managers with richer and more relevant information about the activities they are managing than is provided by financial measures alone. To aid clarity and utility, Kaplan and Norton proposed that the number of measures on a Balanced Scorecard should be constrained, and that the measures be clustered into four groups (Kaplan and Norton, 1992, 1993). Beyond this, the original definition of the Balanced Scorecard was sparse. But from the outset it was clear that the selection of measures, both in terms of filtering (organisations typically had access to many more measures than were needed to populate the Balanced Scorecard) and clustering (deciding which measures should appear in which perspectives), would be a key activity.
Kaplan and Norton proposed that measure selection should focus on information relevant to the implementation of strategic plans, and that simple attitudinal questions be used to help determine the appropriate allocation of measures to perspectives (Kaplan and Norton, 1992). In essence the Balanced Scorecard has remained unchanged since these early papers, having at its core a limited number of measures clustered into groups, and an underlying strategic focus. But modern Balanced Scorecard designs also have a number of features that clearly differentiate them from earlier examples. This paper describes these changes as an evolution through three distinct ‘generations’ of Balanced Scorecard design. 1st Generation Balanced Scorecard. The Balanced Scorecard was initially described as a simple “4 box” approach to performance measurement (Kaplan and Norton, 1992). In addition to financial measures, managers were encouraged to look at measures drawn from three other “perspectives” of the business: Learning and Growth; Internal Business Process; and Customer, chosen to represent the major stakeholders in a business (Mooraj et al, 1999). The definition of what comprised a Balanced Scorecard was sparse and focused on the high-level structure of the device. Simple ‘causality’ between the four perspectives was illustrated but not used for any specific purpose. The focus of Kaplan and Norton’s original paper was on the selection and reporting of a limited number of measures in each of the four perspectives (Kaplan and Norton, 1992). The paper suggested the use of attitudinal questions relating to the vision and goals of the organisation to help in the selection of measures to be used, and also encouraged the consideration of ‘typical’ areas of interest in this process.
Kaplan and Norton’s original work makes no specific observations concerning how the Balanced Scorecard might improve the performance of organisations; the implication is that the provision of accessible, relevant measurement data will itself trigger improved organisational performance. However, they do imply that the source of these improvements is changes in behaviour: “It establishes goals but assumes that people will adopt whatever behaviours and take whatever actions are necessary to arrive at those goals”. In the light of this, the basis for selecting the goals represented by the Balanced Scorecard is of some importance. But in their first paper Kaplan and Norton say little about how a Balanced Scorecard could be developed in practice beyond a general assertion that design involved “putting vision and strategy at the centre of the measurement system” (1992). Later writing includes increasing amounts of prescription about development methods, concluding with a lengthy description of one such process in their first book on the subject, published in 1996. Figure 1 – 1st Generation Balanced Scorecard. Figure 1 shows a diagrammatic representation of Kaplan and Norton’s original Balanced Scorecard design. We will subsequently refer to this type of Balanced Scorecard design as a ‘1st Generation’ Balanced Scorecard. Key attributes of the design and suggested development process for this type of Balanced Scorecard are summarised in the table below. Practical Experiences with 1st Generation Balanced Scorecards. The authors’ professional experience suggests that 1st Generation Balanced Scorecards are still being developed, and that they probably still form the large majority of Balanced Scorecard designs introduced into organisations. This is reflected in the literature, where books and articles that use more advanced representations of the Balanced Scorecard have only recently begun to appear (Olve et al, 1999, Kaplan and Norton, 2000, Niven, 2002).
But despite its huge popularity as a concept, and apparently widespread adoption, relatively few detailed case studies concerning Balanced Scorecard implementation experiences appear to exist in the academic literature. Those few that do focus primarily on the architecture of the Balanced Scorecard design (e.g. Butler et al, 1997), and associated organisational experiences (e.g. Ahn, 2001). Commercial / practitioner writing on the Balanced Scorecard is more extensive (e.g. Schneiderman, 1999), but often more partisan (e.g. Lingle et al 1996). In general the literature endorses the utility of the approach (Epstein et al, 1997) but notes weaknesses in the initial design proposition, and recommends improvements (e.g. Eagleson et al, 2000, Kennerley et al, 2000). 2nd Generation Balanced Scorecard. The practical difficulties associated with the design of 1st Generation Balanced Scorecards are significant, in part because the definition of a Balanced Scorecard was initially vague, allowing for considerable interpretation. Two significant areas of concern were filtering (the process of choosing specific measures to report), and clustering (deciding how to group measures into ‘perspectives’). Discussions relating to clustering continue to be rehearsed in the literature (e.g. Butler et al, 1997, Kennerley et al, 2000), but discussions relating to filtering are less common, and usually appear as part of descriptions of methods of Balanced Scorecard design (e.g. Kaplan and Norton, 1996, Olve et al, 1999). Perhaps the most significant early change translated the attitudinal approach to measure selection proposed initially by Kaplan and Norton (e.g. “To succeed financially, how should we appear to our shareholders?”) into a process that yielded a few appropriate key measures of performance in each perspective. A solution was the introduction of the concept of ‘strategic objectives’ (Kaplan and Norton, 1993).
Initially these were represented as short sentences attached to the four perspectives, and were used to capture the essence of the organisation’s strategy material to each of the areas: measures were then selected that reflected achievement of these strategic objectives. Although subtle, this approach to measure selection was quite different from that initially proposed, since strategic objectives were developed directly from strategy statements based on a corporate vision or a strategic plan. Another key development concerned causality. Causality between the perspectives had been introduced in early ‘1st Generation’ Balanced Scorecard thinking (see Figure 1). ‘2nd Generation’ Balanced Scorecard saw the idea of causality developed further. Instead of simply highlighting causal links between perspectives, internal documents from one consulting firm’s work in 1993 show an early attempt to indicate linkages between the measures themselves. This improvement was also proposed later by others (Newing, 1995). Measure-based linkages provided a richer model of causality than before, but presented conceptual problems – for example, the use of measures encouraged attempts to ‘prove’ the causality between measures using various forms of analysis (indeed this is still the case – e.g. Brewer, 2002). Collectively the changes in design described here represent a materially different definition of what comprises a Balanced Scorecard compared to Kaplan and Norton’s original work; we will refer to Balanced Scorecards that incorporate these developments as ‘2nd Generation Balanced Scorecards’. The impact of these changes was characterised by Kaplan and Norton in 1996 as enabling the Balanced Scorecard to evolve from “an improved measurement system to a core management system” (Kaplan and Norton 1996).
Maintaining the focus that the Balanced Scorecard was intended to support the management of strategic implementation, Kaplan and Norton further described the use of this development of the Balanced Scorecard as the central element of “a strategic management system”. One consequence of this change in emphasis was to increase the pressure on the design process to accurately reflect the organisation’s strategic goals. Over time the idea of strategic linkage became an increasingly important element of Balanced Scorecard design methodology, and in the mid-1990s Balanced Scorecard documentation began to show graphical linkages between the strategic objectives themselves (rather than the measures), with causality linking across the perspectives toward key objectives…
|
Non-Local Recurrent Network for Image Restoration
|
Many classic methods have shown non-local self-similarity in natural images to be an effective prior for image restoration. However, it remains unclear and challenging to make use of this intrinsic property via deep networks. In this paper, we propose a non-local recurrent network (NLRN) as the first attempt to incorporate non-local operations into a recurrent neural network (RNN) for image restoration. The main contributions of this work are: (1) Unlike existing methods that measure self-similarity in an isolated manner, the proposed non-local module can be flexibly integrated into existing deep networks for end-to-end training to capture deep feature correlation between each location and its neighborhood. (2) We fully employ the RNN structure for its parameter efficiency and allow deep feature correlation to be propagated along adjacent recurrent states. This new design boosts robustness against inaccurate correlation estimation due to severely degraded images. (3) We show that it is essential to maintain a confined neighborhood for computing deep feature correlation given degraded images. This is in contrast to existing practice [43] that deploys the whole image. Extensive experiments on both image denoising and super-resolution tasks are conducted. Thanks to the recurrent non-local operations and correlation propagation, the proposed NLRN achieves superior results to state-of-the-art methods with many fewer parameters. The code is available at https://github.com/Ding-Liu/NLRN.
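The confined-neighborhood non-local operation can be sketched in NumPy as a similarity-weighted aggregation over a window (a simplified, single-feature-axis version for illustration; the actual NLRN module uses learned embeddings inside a recurrent network):

```python
import numpy as np

def nonlocal_mean(feat, q, window=3):
    """Aggregate features for position q as a similarity-weighted sum over
    a confined neighborhood, i.e. a non-local operation restricted to a
    window rather than the whole signal."""
    n = len(feat)
    lo, hi = max(0, q - window), min(n, q + window + 1)
    neigh = feat[lo:hi]                      # (k, d) neighborhood features
    sims = neigh @ feat[q]                   # dot-product similarity
    w = np.exp(sims - sims.max())
    w /= w.sum()                             # softmax normalization
    return w @ neigh

feat = np.eye(5)                             # five positions, one-hot features
agg = nonlocal_mean(feat, q=2)
```

Restricting the window, rather than attending over the whole image, is exactly the design choice the abstract argues for under heavy degradation.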
|
A mutation in the gene for type III procollagen (COL3A1) in a family with aortic aneurysms.
|
Experiments were carried out to test the hypothesis that familial aortic aneurysms, either thoracic or abdominal, are caused by mutations in the gene for type III procollagen (COL3A1) similar to mutations in the same gene that have been shown to cause rupture of aorta and other disastrous consequences in the rare genetic disorder known as Ehlers-Danlos syndrome type IV. A family was identified through a 37-yr-old female captain in the United States Air Force who was scrutinized only because many of her direct blood relatives had died of ruptured aortic aneurysms. The woman was heterozygous for a single-base mutation that converted the codon for glycine 619 of the alpha 1(III) chain of type III procollagen to a codon for arginine. Studies on cultured skin fibroblasts demonstrated the mutation caused synthesis of type III procollagen that had a decreased temperature for thermal unfolding of the protein. The same mutation was identified in DNA extracted from pathologic specimens from her mother who had died at the age of 34 and a maternal aunt who died at the age of 55 of aortic aneurysms. Examination of DNA from samples of saliva revealed that the woman's daughter, her son, a brother, and an aunt also had the mutation. The results demonstrated that mutations in the type III procollagen gene can cause familial aortic aneurysms and that DNA tests for such mutations can identify individuals at risk for aneurysms.
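The single-base glycine-to-arginine substitution reported here can be illustrated with a short script (the paper does not give the actual nucleotide triplet, so the codons below are generic standard-genetic-code examples, not the family's sequence):

```python
# standard-code codon sets (illustrative; not the mutation in this family)
GLYCINE = {"GGT", "GGC", "GGA", "GGG"}
ARGININE = {"CGT", "CGC", "CGA", "CGG", "AGA", "AGG"}

def single_base_gly_to_arg(codon):
    """Return the arginine codons reachable from a glycine codon by one
    substitution, showing how a single base change converts Gly to Arg."""
    out = []
    for i in range(3):
        for b in "ACGT":
            mut = codon[:i] + b + codon[i + 1:]
            if mut != codon and mut in ARGININE:
                out.append(mut)
    return out

muts = single_base_gly_to_arg("GGA")
```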
|
Solvent regulation of oxygen affinity in hemoglobin. Sensitivity of bovine hemoglobin to chloride ions.
|
Under physiological conditions of pH (7.4) and chloride concentration (0.15 M), the oxygen affinity of bovine hemoglobin is substantially lower than that of human hemoglobin. Also, the Bohr effect is much more pronounced in bovine hemoglobin. Numerical simulations indicate that both phenomena can be explained by a larger preferential binding of chloride ions to deoxyhemoglobin in the bovine system. Also, they show that the larger preferential binding may be produced by a decreased affinity of the anions for oxyhemoglobin, thereby stressing the potential relevance of the oxy conformation in regulating the functional properties of the protein. The conformation of the amino-terminal end of the beta subunits appears to regulate the interaction of hemoglobin with solvent components. The pronounced sensitivity of the oxygen affinity of bovine hemoglobin to chloride concentration and to pH suggests that in bovine species these are the modulators of oxygen transport in vivo.
|
Neural Network for Graphs: A Contextual Constructive Approach
|
This paper presents a new approach for learning in structured domains (SDs) using a constructive neural network for graphs (NN4G). The new model allows the extension of the input domain for supervised neural networks to a general class of graphs, including acyclic/cyclic, directed/undirected labeled graphs. In particular, the model can realize adaptive contextual transductions, learning the mapping from graphs for both classification and regression tasks. In contrast to previous neural networks for structures that had recursive dynamics, NN4G is based on a constructive feedforward architecture with state variables that uses neurons with no feedback connections. The neurons are applied to the input graphs by a general traversal process that relaxes the constraints of previous approaches derived from the causality assumption over hierarchical input data. Moreover, the incremental approach eliminates the need to introduce cyclic dependencies in the definition of the system state variables. In the traversal process, the NN4G units exploit (local) contextual information of the graph’s vertices. In spite of the simplicity of the approach, we show that, through the compositionality of the contextual information developed by the learning, the model can deal with contextual information that is incrementally extended according to the graph’s topology. The effectiveness and the generality of the new approach are investigated by analyzing its theoretical properties and providing experimental results.
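A hypothetical single NN4G-style unit might look like the following (scalar vertex labels and hand-set weights for illustration only; the actual model adds trained units constructively): the state of each vertex combines its own label with the states its neighbors received from previously frozen units, so no feedback connections are needed.

```python
import numpy as np

def nn4g_layer(labels, adj, prev_states, w_label, w_state):
    """One constructive NN4G-style unit: each vertex state combines the
    vertex label with the previous units' states of its neighbors."""
    n = len(labels)
    states = np.zeros(n)
    for v in range(n):
        ctx = sum(prev_states[u] for u in adj[v])   # neighbor context
        states[v] = np.tanh(w_label * labels[v] + w_state * ctx)
    return states

# triangle graph: adjacency as neighbor lists, first unit sees zero context
states = nn4g_layer([1.0, 0.0, 1.0], [[1, 2], [0, 2], [0, 1]],
                    np.zeros(3), w_label=0.5, w_state=0.3)
```

Because context comes only from earlier units' states, the same update works for cyclic graphs without introducing cyclic dependencies, which is the point made in the abstract.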
|
Character-level Convolutional Networks for Text Classification
|
This article offers an empirical exploration on the use of character-level convolutional networks (ConvNets) for text classification. We constructed several largescale datasets to show that character-level convolutional networks could achieve state-of-the-art or competitive results. Comparisons are offered against traditional models such as bag of words, n-grams and their TFIDF variants, and deep learning models such as word-based ConvNets and recurrent neural networks.
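The character quantization step that feeds such ConvNets can be sketched as one-hot encoding over a fixed alphabet (the alphabet and sequence length below are illustrative and smaller than those used in the article):

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"  # illustrative alphabet
IDX = {c: i for i, c in enumerate(ALPHABET)}

def quantize(text, length=16):
    """One-hot encode characters into a (len(ALPHABET), length) matrix,
    the raw-signal input representation for a character-level ConvNet."""
    m = np.zeros((len(ALPHABET), length))
    for pos, ch in enumerate(text.lower()[:length]):
        if ch in IDX:                   # out-of-alphabet chars stay all-zero
            m[IDX[ch], pos] = 1.0
    return m

x = quantize("abc")
```

The resulting matrix is then treated like a 1-D image, with convolutions sliding along the character axis.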
|
Evolutionary and Neo-Schumpeterian Approaches to Economics
|
1. The Neo-Schumpeterian and Evolutionary Approach to Economics - an Introduction L. Magnusson. Part I: Evolutionary Theory. 2. What is Evolutionary Economics? R. Langlois, M.J.Everett. 3. The Unit which Evolves: Linking Self-Reproduction and Self-Interest M. Hutter. 4. On the Nature of Economic Evolution: John R. Commons and the Metaphor of Artificial Selection Y. Ramstad. Part II: Neo-Schumpeterian Dynamics. 5. The Phenomenon of Economic Change: Neoclassical vs. Schumpeterian Approaches K. Dopfer. 6. The Theory of the Firm and the Theory of Economic Growth G. Eliasson. 7. Evolutionary Regimes and Industrial Dynamics G. Dosi, F. Malerba, L. Orsenigo. 8. The Role of Firm Difference in an Evolutionary Theory of Technical Advance R. Nelson. Part III: Critique and New Challenges. 9. The Integration of Theory and History: Methodology and Ideology in Schumpeter's Economics W. Lazonick. 10. Neo-Schumpeterians and Economic Theory A. Heertje. Rethinking Economics. From GNP-Growth to Ecological Sustainability P. Soderbaum. Innovations and Institutions: an Historical Approach L. Magnusson, G. Marklund. Index.
|
Tracking Sentiment in Mail: How Genders Differ on Emotional Axes
|
With the widespread use of email, we now have access to unprecedented amounts of text that we ourselves have written. In this paper, we show how sentiment analysis can be used in tandem with effective visualizations to quantify and track emotions in many types of mail. We create a large word–emotion association lexicon by crowdsourcing, and use it to compare emotions in love letters, hate mail, and suicide notes. We show that there are marked differences across genders in how they use emotion words in work-place email. For example, women use many words from the joy–sadness axis, whereas men prefer terms from the fear–trust axis. Finally, we show visualizations that can help people track emotions in their emails.
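The lexicon-based counting at the core of this kind of analysis can be sketched as follows (the four-entry lexicon is a tiny stand-in for the crowdsourced word-emotion association lexicon):

```python
from collections import Counter

# tiny illustrative word-emotion lexicon (the real one is crowdsourced)
LEXICON = {"happy": "joy", "grief": "sadness",
           "threat": "fear", "rely": "trust"}

def emotion_profile(text):
    """Count emotion-word occurrences per category, the basic step behind
    tracking emotions along axes such as joy-sadness and fear-trust."""
    counts = Counter()
    for word in text.lower().split():
        if word in LEXICON:
            counts[LEXICON[word]] += 1
    return counts

profile = emotion_profile("happy to rely on you despite the threat")
```

Profiles computed per sender or per gender can then be compared along opposing emotion axes, as the paper does for workplace email.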
|
A study of detecting and combating cybersickness with fuzzy control for the elderly within 3D virtual stores
|
Elderly individuals can access online 3D virtual stores from their homes to make purchases. However, most virtual environments (VEs) elicit physical responses to certain types of movements in the VEs. Some users exhibit symptoms that parallel those of classical motion sickness, called cybersickness, both during and after the VE experience. This study investigated the factors that contribute to cybersickness among the elderly when immersed in a 3D virtual store. The results of the first experiment show that simulator sickness questionnaire (SSQ) scores increased significantly with navigational rotating speed and duration of exposure. Based on these results, a warning system with fuzzy control for combating cybersickness was developed. The results of the second and third experiments show that the proposed system can efficiently determine the level of cybersickness based on fuzzy-set analysis of operating signals from scene rotating speed and exposure duration, and subsequently combat cybersickness. © 2014 Elsevier Ltd. All rights reserved.
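A minimal sketch of a fuzzy warning rule over the two operating signals is given below; the membership breakpoints and units are invented for illustration, whereas the paper's fuzzy sets are derived from its experiments.

```python
def ramp(x, lo, hi):
    """Linear membership ramp: 0 at lo, rising to 1 at hi."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def sickness_level(rot_speed, exposure_min):
    """Fuzzy-style cybersickness warning level from scene rotating speed
    (deg/s) and exposure time (min); breakpoints are illustrative."""
    fast = ramp(rot_speed, 20, 60)
    long_exp = ramp(exposure_min, 5, 20)
    return min(fast, long_exp)   # AND of the two antecedents (min t-norm)
```

A warning system would trigger interventions (e.g. slowing the scene rotation) once the level crosses a threshold.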
|
Camera Trajectory Estimation Using Inertial Sensor Measurements and Structure from Motion Results
|
This paper describes an approach to estimating the trajectory of a moving camera based on the measurements acquired with an inertial sensor and estimates obtained by applying a structure from motion algorithm to a small set of keyframes in the video sequence. The problem is formulated as an offline trajectory fitting task rather than an online integration problem. This approach avoids many of the issues usually associated with inertial estimation schemes. One of the main advantages of the proposed technique is that it can be applied in situations where approaches based on feature tracking would have significant difficulties. Results obtained by applying the procedure to extended sequences acquired with both conventional and omnidirectional cameras are presented.
|
Carbonatites in a subduction system: The Pleistocene alvikites from Mt. Vulture (southern Italy)
|
We report here, for the first time, on the new finding of extrusive calciocarbonatite (alvikite) rocks from the Pleistocene Mt. Vulture volcano (southern Italy). These volcanic rocks, which represent an outstanding occurrence in the wider scenario of the Italian potassic magmatism, form lavas, pyroclastic deposits, and feeder dikes exposed on the northern slope of the volcano. The petrography, mineralogy and whole-rock chemistry attest to the genuine carbonatitic nature of these rocks, which are characterized by high to very high contents of Sr, Ba, U, LREE, Nb, P, F, Th, high Nb/Ta and LREE/HREE ratios, and low contents of Ti, Zr, K, Rb, Na and Cs. The O–C isotope compositions are close to the “primary igneous carbonatite” field and, thus, are compatible with an ultimate mantle origin for these rocks. The Sr–Nd–Pb–B isotope compositions, measured both in the alvikites and in the silicate volcanic rocks, indicate a close genetic relationship between the alvikites and the associated melilitite/nephelinite rocks. Furthermore, these latter products are geochemically distinct from the main foiditic-phonolitic association of Mt. Vulture. We propose a petrogenetic/geodynamic interpretation which has important implications for understanding the relationships between carbonatites and orogenic activity. In particular, we propose that the studied alvikites were generated through liquid unmixing at crustal levels, starting from nephelinitic or melilititic parent liquids. These latter were produced in a hybrid mantle resulting from the interaction, through a vertical slab window, between a metasomatized mantle wedge, moving eastward from the Tyrrhenian/Campanian region, and the local Adriatic mantle. The occurrence of carbonatite rocks at Mt. Vulture, which lies on the leading edge of the Southern Apennines accretionary prism, is taken as evidence for the carbonatation of the mantle sources of this volcano.
We speculate that mantle carbonatation is related to the introduction of sedimentary carbon from the Adriatic lithosphere during Tertiary subduction. © 2007 Elsevier B.V. All rights reserved.
|
Clinical and genetic outcome determinants of Internet- and group-based cognitive behavior therapy for social anxiety disorder.
|
OBJECTIVE
No study has investigated clinical or genetic predictors and moderators of Internet-based cognitive behavior therapy (ICBT) compared with cognitive behavioral group therapy (CBGT) for social anxiety disorder (SAD). Identification of predictors and moderators is essential to the clinician in deciding which treatment to recommend for whom. We aimed to identify clinical and genetic (5-HTTLPR, COMTval158met, and BDNFval66met) predictors and moderators of ICBT and CBGT.
METHOD
We performed three types of analyses on data from a sample comprising participants (N = 126) who had undergone ICBT or CBGT in a randomized controlled trial. Outcomes were i) end state symptom severity, ii) SAD diagnosis, and iii) clinically significant improvement.
RESULTS
The most stable predictors of better treatment response were working full time, having children, less depressive symptoms, higher expectancy of treatment effectiveness, and adhering to treatment. None of the tested gene polymorphisms were associated with treatment outcome. Comorbid general anxiety and depression were moderators, meaning that lower levels were associated with a better treatment response in ICBT but not in CBGT.
CONCLUSION
We conclude that demographic factors, symptom burden, adherence, and expectations may play an important role as predictors of treatment outcome. The investigated gene polymorphisms do not appear to make a difference.
|
The challenge problem for automated detection of 101 semantic concepts in multimedia
|
We introduce the challenge problem for generic video indexing to gain insight into intermediate steps that affect the performance of multimedia analysis methods, while at the same time fostering repeatability of experiments. To arrive at a challenge problem, we provide a general scheme for the systematic examination of automated concept detection methods, by decomposing the generic video indexing problem into 2 unimodal analysis experiments, 2 multimodal analysis experiments, and 1 combined analysis experiment. For each experiment, we evaluate generic video indexing performance on 85 hours of international broadcast news data, from the TRECVID 2005/2006 benchmark, using a lexicon of 101 semantic concepts. By establishing a minimum performance on each experiment, the challenge problem allows for component-based optimization of the generic indexing issue, while simultaneously offering other researchers a reference for comparison during indexing methodology development. To stimulate further investigations into intermediate analysis steps that influence video indexing performance, the challenge offers the research community a manually annotated concept lexicon, pre-computed low-level multimedia features, trained classifier models, and five experiments together with baseline performance, which are all available at http://www.mediamill.nl/challenge/.
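Per-concept retrieval performance in TRECVID-style benchmarks is conventionally scored with (mean) average precision; a standard implementation sketch over a ranked list of shots is:

```python
def average_precision(ranked_relevance):
    """Average precision of a ranked result list (1 = relevant shot,
    0 = not relevant), the standard per-concept detection metric."""
    hits, total = 0, 0.0
    for i, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            total += hits / i       # precision at each relevant rank
    return total / hits if hits else 0.0
```

Averaging this score over all 101 concepts yields the mean average precision used to compare the challenge's experiments against the baselines.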
|
Electricity Smart Meters Interfacing the Households
|
The recent worldwide measures for energy savings call for greater awareness of household energy consumption, given the relevant contribution of domestic load to the national energy balance. On the other hand, electricity smart meters together with gas, heat, and water meters can be interconnected in a large network offering a potential value to implement energy savings and other energy-related services, as long as an efficient interface with the final user is implemented. Unfortunately, so far, the interface of such devices is mostly designed for and addressed to the utilities supervising the system, giving them relevant advantages, while the communication with the household is often underestimated. This paper addresses this topic by proposing the definition of a local interface for smart meters, by looking at current European Union and international regulations, at the technological solutions available on the market, and at those implemented in different countries, and, finally, by proposing specific architectures for a proper consumer-oriented implementation of a smart meter network.
|
Stereo DSO: Large-Scale Direct Sparse Visual Odometry with Stereo Cameras
|
We propose Stereo Direct Sparse Odometry (Stereo DSO) as a novel method for highly accurate real-time visual odometry estimation of large-scale environments from stereo cameras. It jointly optimizes for all the model parameters within the active window, including the intrinsic/extrinsic camera parameters of all keyframes and the depth values of all selected pixels. In particular, we propose a novel approach to integrate constraints from static stereo into the bundle adjustment pipeline of temporal multi-view stereo. Real-time optimization is realized by sampling pixels uniformly from image regions with sufficient intensity gradient. Fixed-baseline stereo resolves scale drift. It also reduces the sensitivities to large optical flow and to rolling shutter effect which are known shortcomings of direct image alignment methods. Quantitative evaluation demonstrates that the proposed Stereo DSO outperforms existing state-of-the-art visual odometry methods both in terms of tracking accuracy and robustness. Moreover, our method delivers a more precise metric 3D reconstruction than previous dense/semi-dense direct approaches while providing a higher reconstruction density than feature-based methods.
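The gradient-based pixel selection step can be sketched as follows (threshold and sampling scheme are simplified relative to the paper's strategy; the function name and values are illustrative): pixels are drawn uniformly from regions whose intensity gradient exceeds a threshold.

```python
import numpy as np

def sample_pixels(img, n, grad_thresh=10.0, seed=0):
    """Sample n pixel coordinates uniformly from image regions whose
    intensity gradient magnitude exceeds a threshold, as in direct
    sparse methods that track only informative pixels."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > grad_thresh)
    rng = np.random.default_rng(seed)
    pick = rng.choice(len(ys), size=min(n, len(ys)), replace=False)
    return list(zip(ys[pick], xs[pick]))

# a vertical step edge: only columns 4 and 5 carry gradient
img = np.zeros((10, 10))
img[:, 5:] = 100.0
pts = sample_pixels(img, n=5)
```

Restricting the optimization to such pixels is what keeps the joint bundle adjustment real-time.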
|
Vascular heterogeneity between native rat pancreatic islets is responsible for differences in survival and revascularisation post transplantation
|
Highly blood-perfused islets have been observed to be the most functional islets in the native pancreas. We hypothesised that differences in vascular support of islets in donor pancreases influence their susceptibility to cellular stress and capacity for vascular engraftment after transplantation. Highly blood-perfused islets in rats were identified by injection of microspheres into the ascending aorta before islet isolation. Cell death was evaluated after in vitro cytokine or hypoxia exposure, and 2 days post transplantation. One month post transplantation, islet engraftment, including vascular density, blood perfusion and oxygen tension (pO2) in the tissue, was evaluated. Microsphere-containing islets had a similar frequency of cell death during standard culture conditions but increased cell death after exposure to cytokines and hypoxia in comparison with other islets. Two days after transplantation the percentage of apoptotic or necrotic cells was also higher in grafts of such islets and 1 month post transplantation these grafts were composed of substantially more connective tissue. Grafts of highly blood-perfused islets in the native pancreas regained a higher vascular density, blood perfusion and pO2 in comparison with grafts of other islets. Native islets that are highly blood-perfused regained this feature after transplantation, indicating a superior capacity for revascularisation and post-transplant function. However, the same group of islets was more vulnerable to different kinds of cellular stress, which limited their early survival post transplantation. Preferential death of these most active islets may contribute to the high number of islets needed to provide cure with islet transplantation.
|
Design of a 300-mV 2.4-GHz Receiver Using Transformer-Coupled Techniques
|
This paper presents a 1.6-mW 2.4-GHz receiver that operates from a single 300-mV supply, allowing direct powering from various energy-harvesting sources. We extensively utilize transformer coupling between stages to reduce headroom requirements, and forward-bias bulk-source junctions to lower threshold voltages where appropriate. A single-ended 2.4-GHz RF input is amplified and converted to a differential signal before down-conversion to a low IF of 1 to 10 MHz. A chain of interleaved IF amplifiers and narrowband filters performs programmable channel selection. The chip is fabricated in a 65-nm CMOS process. The receiver achieves -91.5-dBm sensitivity for a BER of 10^-3.
|
Combinatorial Multi-armed Bandits for Real-Time Strategy Games
|
Games with large branching factors pose a significant challenge for game tree search algorithms. In this paper, we address this problem with a sampling strategy for Monte Carlo Tree Search (MCTS) algorithms called naïve sampling, based on a variant of the Multi-armed Bandit problem called Combinatorial Multi-armed Bandits (CMAB). We analyze the theoretical properties of several variants of naïve sampling, and empirically compare it against the other existing strategies in the literature for CMABs. We then evaluate these strategies in the context of real-time strategy (RTS) games, a genre of computer games characterized by their very large branching factors. Our results show that as the branching factor grows, naïve sampling outperforms the other sampling strategies.
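To make the idea concrete, here is a hedged sketch of naïve sampling for a CMAB (an illustration under the "naïve assumption" that the global reward decomposes additively over components; the class and parameter names are ours, not the paper's exact algorithm):

```python
import random

# Hedged sketch of naive sampling for a Combinatorial Multi-armed Bandit.
# The naive assumption: the global reward decomposes as a sum of
# per-component rewards, so exploration can sample each component
# independently instead of enumerating the combinatorial macro-arm space.

class NaiveSampler:
    def __init__(self, component_values, epsilon=0.3):
        self.component_values = component_values  # legal values per component
        self.epsilon = epsilon
        self.macro_stats = {}  # macro-arm -> (total_reward, count)

    def select(self):
        if not self.macro_stats or random.random() < self.epsilon:
            # Explore: sample every component independently (the naive step).
            return tuple(random.choice(v) for v in self.component_values)
        # Exploit: best macro-arm seen so far, by empirical mean reward.
        return max(self.macro_stats,
                   key=lambda a: self.macro_stats[a][0] / self.macro_stats[a][1])

    def update(self, arm, reward):
        t, c = self.macro_stats.get(arm, (0.0, 0))
        self.macro_stats[arm] = (t + reward, c + 1)

# Toy problem: reward really is the sum of the chosen component values.
random.seed(0)
bandit = NaiveSampler([[0, 1], [0, 1], [0, 1]])
for _ in range(500):
    arm = bandit.select()
    bandit.update(arm, sum(arm))
print(bandit.select())
```

With enough exploration the empirically best macro-arm here converges to (1, 1, 1); in an RTS setting each component would be one unit's action, so the macro-arm space grows multiplicatively while exploration cost does not.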
|
Biomarker-calibrated dietary energy and protein intake associations with diabetes risk among postmenopausal women from the Women's Health Initiative.
|
BACKGROUND
Self-reported dietary energy and protein intakes have been shown to be systematically and differentially underreported.
OBJECTIVE
We assessed and compared the association of diabetes among postmenopausal women with biomarker-calibrated and uncalibrated dietary energy and protein intakes from food-frequency questionnaires (FFQs).
DESIGN
The analyses were performed for 74,155 participants of various race-ethnicities from the Women's Health Initiative. Uncalibrated and calibrated energy and protein intakes from FFQs were assessed for associations with incident diabetes by using HR estimates based on Cox regression.
RESULTS
A 20% increment in uncalibrated energy consumption was associated with a hazard ratio (HR) for diabetes of 1.03 (95% CI: 1.01, 1.05); the HR was 2.41 (95% CI: 2.06, 2.82) with biomarker calibration and 1.30 (95% CI: 0.96, 1.76) after adjustment for BMI. A 20% increment in uncalibrated protein intake (g/d) resulted in an HR of 1.05 (95% CI: 1.03, 1.07), 1.82 (95% CI: 1.56, 2.12) with calibration, and 1.16 (95% CI: 1.05, 1.28) with adjustment for BMI. A 20% increment in uncalibrated protein density (% of energy from protein) resulted in an HR of 1.13 (95% CI: 1.09, 1.17), 1.01 (95% CI: 0.75, 1.37) with calibration, and 1.19 (95% CI: 1.07, 1.32) with adjustment for BMI.
CONCLUSIONS
Higher protein and total energy intakes (calibrated) appear to be associated with a substantially increased diabetes risk that may be mediated by an increase in body mass over time. Diet-disease associations without correction of self-reported measurement error should be viewed with caution. This trial is registered at clinicaltrials.gov as NCT00000611.
|
Life cycle welfare: evidence and conjecture
|
Abstract At a point in time subjective well-being is positively related to income; over the life course subjective well-being is constant despite substantial growth in income. This paradox is explained by new evidence on consumption aspirations. At a point in time aspirations vary fairly little by income level; over the life cycle, aspirations increase about in proportion to income. These shifts in aspirations also affect assessments of past and future well-being in such a way that the choices underlying behavior (based on what psychologists call “decision utility”) turn out not to have their expected welfare effects (experienced utility). Of the two influences shaping consumption aspirations – comparisons with others and with one’s past experience – the former appears more salient early in the life cycle and the latter, later on.
|
Motor skill learning in groups: Some proposals for applying implicit learning and self-controlled feedback
|
Contrary to researchers’ current focus on individual motor skill learning, in institutional settings such as physical education and sports, motor skill learning is often taught in groups. In these settings, there is not only the interaction between teacher and learner (analogous to research), but also the many interactions between the learners in the group. In this paper, we discuss the pitfalls of applying research findings without taking into account the different dynamics that the interactions between group members bring about. To this end, we especially discuss implicit motor learning and self-controlled feedback, as these have recently been hailed as being particularly effective for increasing motor skill and self-efficacy. Proposals are provided to adapt these methods to motor skill learning in groups. This is not only relevant for practitioners in physical education and sports, but also establishes an agenda for research.
|
Visualization-Based Active Learning for Video Annotation
|
Video annotation is an effective way to facilitate content-based analysis for videos. Automatic machine learning methods are commonly used to accomplish this task. Among these, active learning is one of the most effective methods, especially when the training data cost a great deal to obtain. One of the most challenging problems in active learning is the sample selection. Various sampling strategies can be used, such as uncertainty, density, and diversity, but it is difficult to strike a balance among them. In this paper, we provide a visualization-based batch mode sampling method to handle such a problem. An iso-contour-based scatterplot is used to provide intuitive clues for the representativeness and informativeness of samples and assist users in sample selection. A semisupervised metric learning method is incorporated to help generate an effective scatterplot reflecting the high-level semantic similarity for visual sample selection. Moreover, both quantitative and qualitative evaluations are provided to show that the visualization-based method can effectively enhance sample selection in active learning.
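A minimal sketch of how the three sampling criteria named above might be combined in a batch-mode selector (an illustrative greedy scorer with made-up weights and 1-D toy data, not the paper's visualization-assisted method):

```python
import math

# Hedged sketch of batch-mode sample scoring for active learning.
# Each unlabeled sample is scored by uncertainty (entropy of the model's
# predicted probability), density (representativeness: closeness to the
# rest of the pool) and diversity (distance to already-picked samples).

def entropy(p):
    return 0.0 if p in (0.0, 1.0) else -(p * math.log(p) + (1 - p) * math.log(1 - p))

def pick_batch(samples, probs, k, w_unc=1.0, w_den=1.0, w_div=1.0):
    dist = lambda a, b: abs(a - b)  # 1-D toy metric
    picked = []
    while len(picked) < k:
        best, best_score = None, -1.0
        for i, x in enumerate(samples):
            if i in picked:
                continue
            unc = entropy(probs[i])
            den = 1.0 / (1e-9 + sum(dist(x, y) for y in samples) / len(samples))
            div = min((dist(x, samples[j]) for j in picked), default=1.0)
            score = w_unc * unc + w_den * den + w_div * div
            if score > best_score:
                best, best_score = i, score
        picked.append(best)
    return picked

# Toy pool: the point with predicted probability 0.50 is most uncertain.
samples = [0.0, 0.4, 0.5, 1.0]
probs = [0.01, 0.45, 0.50, 0.99]
print(pick_batch(samples, probs, k=2))  # → [2, 1]
```

Striking the right balance among the three weights is exactly the difficulty the paper's interactive visualization is meant to address: it lets a user see representativeness and informativeness instead of tuning weights blindly.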
|
Preventing acute decrease in renal function induced by coronary angiography (PRECORD): a prospective randomized trial.
|
BACKGROUND
Infusion of saline attenuates the decrease in renal function induced by radiographic contrast agents among patients with chronic renal insufficiency.
AIM
The Preventing Renal alteration in Coronary Disease (PRECORD) trial was a randomized trial to assess the effect on renal function of saline infusion during and after coronary angiography in 201 patients without severe chronic renal insufficiency (serum creatinine < 140 micromol/L).
METHODS
All patients received standard oral hydration: 2000 mL of tap water within the 24 hours after coronary angiography. Patients were randomized before the procedure to intravenous hydration (1000 mL of 0.9% saline infusion) or no additional hydration. The infusion was started in the catheterization laboratory and continued for 24 hours. The primary endpoint was the change in calculated creatinine clearance between baseline and 24 hours after coronary angiography. The same ionic low osmolar radiographic contrast agent (ioxaglate) was used in all patients.
RESULTS
Both groups had similar baseline characteristics, including age, serum creatinine, volume of contrast and proportion of patients undergoing ad hoc coronary angioplasty. The overall change in serum creatinine clearance 24 hours after the procedure was -3.44 (0.68) mL/min. The change in serum creatinine clearance 24 hours after the procedure was -2.81 (1.07) mL/min in the infusion group vs -4.09 (0.91) mL/min in the control group (p=0.38).
CONCLUSION
Renal function is altered only slightly 24 hours after coronary angiography with standard oral hydration alone and is not affected by saline infusion started at the beginning of coronary angiography, even in patients with mild-to-moderate renal dysfunction.
|
Empathy and social functioning in late adulthood.
|
OBJECTIVES
Both cognitive and affective empathy are regarded as essential prerequisites for successful social functioning, and recent studies have suggested that cognitive, but not affective, empathy may be adversely affected as a consequence of normal adult aging. This decline in cognitive empathy is of concern, as older adults are particularly susceptible to the negative physical and mental health consequences of loneliness and social isolation.
METHOD
The present study compared younger (N = 80) and older (N = 49) adults on measures of cognitive empathy, affective empathy, and social functioning.
RESULTS
Whilst older adults' self-reported and performance-based cognitive empathy was significantly reduced relative to younger adults, there were no age-related differences in affective empathy. Older adults also reported involvement in significantly fewer social activities than younger adults, and cognitive empathy functioned as a partial mediator of this relationship.
CONCLUSION
These findings are consistent with theoretical models that regard cognitive empathy as an essential prerequisite for good interpersonal functioning. However, the cross-sectional nature of the study leaves open the question of causality for future studies.
|
Validation of the Osteoporosis Smoking Health Belief instrument.
|
Smoking has a deleterious effect on bone mineral density. Psychometric evaluation was conducted for 3 smoking cessation subscales of the Osteoporosis Smoking Health Belief (OSHB) instrument: barriers, benefits, and self-efficacy. The instrument was evaluated by 6 nurse researchers, administered to a pilot sample of 23 adult smokers aged 19-39 years, and then to a convenience sample of 59 adult smokers aged 19-84 years attending bingo at churches and community centers. Principal components factor analyses were conducted on the 18 items at both time points and accounted for 65.05% of the variance in the matrix at Time 1 and 71.19% at Time 2. The 3 statistical factors corresponded to the theoretically derived concepts. Cronbach's alphas for benefits of not smoking were .86 at Time 1 and .88 at Time 2; for barriers, .78 at Time 1 and .89 at Time 2; and for self-efficacy, .94 at Time 1 and .96 at Time 2. The test-retest correlations were .68 for benefits, .74 for barriers, and .79 for self-efficacy. Paired t tests showed no significant change over time. The OSHB meets relevant measurement criteria.
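For reference, the Cronbach's alpha reliability coefficient reported above is computed from an item-score matrix as follows (the data below are hypothetical, purely for illustration):

```python
# Cronbach's alpha, the internal-consistency coefficient, from an
# item-score matrix (rows = respondents, columns = items).

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(scores):
    k = len(scores[0])                      # number of items
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [  # hypothetical 5 respondents x 3 items (e.g. 5-point Likert)
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [1, 2, 1],
    [3, 3, 3],
]
print(round(cronbach_alpha(data), 2))  # → 0.98
```

Values of .70 and above, as reported for all three OSHB subscales, are conventionally taken to indicate acceptable internal consistency.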
|
Word Channel Based Multiscale Pedestrian Detection without Image Resizing and Using Only One Classifier
|
Most pedestrian detection approaches that achieve high accuracy and precision and that can be used for real-time applications are based on histograms of gradient orientations. Usually, multiscale detection is attained by resizing the image several times and recomputing the image features, or by using multiple classifiers for different scales. In this paper we present a pedestrian detection approach that uses the same classifier for all pedestrian scales, based on image features computed at a single scale. We go beyond the low-level pixel-wise gradient orientation bins and use higher-level visual words organized into Word Channels. Boosting is used to learn classification features from the integral Word Channels. The proposed approach is evaluated on multiple datasets and achieves outstanding results on the INRIA and Caltech-USA benchmarks. By using a GPU implementation we achieve a classification rate of over 10 million bounding boxes per second and a 16-FPS rate for multiscale detection in a 640×480 image.
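The integral-image machinery underlying such "integral channel" features can be sketched as follows (a generic illustration of the standard technique, not the authors' code; the Word Channel extraction itself is omitted):

```python
# Integral-image trick used by integral channel features: after one pass
# over a channel, the sum of any rectangular region is obtained with
# four lookups, letting a boosted classifier evaluate many candidate
# rectangle features cheaply at any position and scale.

def integral_image(ch):
    h, w = len(ch), len(ch[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]  # zero-padded first row/col
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (ch[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def box_sum(ii, x0, y0, x1, y1):
    """Sum of channel values over [x0, x1) x [y0, y1), via 4 lookups."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

channel = [[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]]
ii = integral_image(channel)
print(box_sum(ii, 1, 1, 3, 3))  # 5 + 6 + 8 + 9 = 28
```

In the paper's setting each channel would hold per-pixel visual-word responses rather than raw intensities, but the constant-time rectangle sum is the same.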
|
Insulin-like Growth Factor 1 and Pressure Load in Hypertensive Patients
|
[Abstract garbled in source; recoverable fragments: insulin-like growth factor 1 levels in hypertensive patients are related to pressure load, with a significantly higher rate-pressure product (RPP = systolic blood pressure (mm Hg) × heart rate). Am J Hypertens 1996;9:607-609.]
|
Fixed and random effects selection in mixed effects models.
|
We consider selecting both fixed and random effects in a general class of mixed effects models using maximum penalized likelihood (MPL) estimation along with the smoothly clipped absolute deviation (SCAD) and adaptive least absolute shrinkage and selection operator (ALASSO) penalty functions. The MPL estimates are shown to possess consistency and sparsity properties and asymptotic normality. A model selection criterion, called the IC(Q) statistic, is proposed for selecting the penalty parameters (Ibrahim, Zhu, and Tang, 2008, Journal of the American Statistical Association 103, 1648-1658). The variable selection procedure based on IC(Q) is shown to consistently select important fixed and random effects. The methodology is very general and can be applied to numerous situations involving random effects, including generalized linear mixed models. Simulation studies and a real data set from a Yale infant growth study are used to illustrate the proposed methodology.
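The SCAD penalty used above has a standard closed form (Fan and Li's formulation); the sketch below writes it out and checks continuity at the knots. The constant a = 3.7 is the conventional default, not a value taken from this paper.

```python
# SCAD (smoothly clipped absolute deviation) penalty, standard form.
# It is a quadratic spline: linear like the lasso near zero, tapering on
# the middle interval, then constant, which yields sparsity with less
# bias on large coefficients than the lasso.

def scad_penalty(theta, lam, a=3.7):
    t = abs(theta)
    if t <= lam:                                   # lasso-like region
        return lam * t
    if t <= a * lam:                               # tapering region
        return (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))
    return (a + 1) * lam ** 2 / 2                  # constant region

# Continuity at the two knots (t = lam and t = a*lam):
lam, a = 1.0, 3.7
print(round(scad_penalty(lam, lam, a), 6))      # → 1.0  (= lam**2)
print(round(scad_penalty(a * lam, lam, a), 6))  # → 2.35 (= (a+1)*lam**2/2)
```

In the MPL estimation described above, this penalty (or ALASSO) is applied to the fixed-effect coefficients and to parameters of the random-effects covariance, shrinking unimportant ones exactly to zero.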
|
Chronological age is not an independent predictor of clinical outcomes after radical nephroureterectomy
|
Higher chronological age has been suggested to confer worse prognosis in patients with upper tract urothelial carcinoma (UTUC). The aim of the current study was to test this hypothesis in a large multicenter external validation cohort of patients treated with radical nephroureterectomy (RNU) while controlling for patient performance status. We retrospectively reviewed the data from 1,169 patients treated with RNU for UTUC. Age at RNU was analyzed both as a continuous and categorical variable (<50 years, n = 66; 50–59.9 years, n = 185; 60–69.9 years, n = 367; 70–79.9 years, n = 419; ≥80 years, n = 132). Median follow-up was 37 months. Actuarial recurrence-free, cancer-specific, and all-cause survival estimates at 5 years after RNU were 69, 73, and 61%, respectively. Advanced age was associated with female gender, higher ECOG status, higher ASA score, and a lower probability of receiving adjuvant chemotherapy (all P values ≤ 0.02). In multivariable analyses, advanced age was associated with decreased recurrence-free (P = 0.021), cancer-specific (P = 0.002), and all-cause survival (P < 0.001) after controlling for the effects of gender, tumor location, number of lymph nodes removed, tumor grade, stage, architecture, necrosis, and lymphovascular invasion. After addition of ECOG status, age remained an independent predictor of only all-cause mortality (P < 0.001). We confirmed that advanced patient age at the time of RNU is associated with worse clinical outcomes after surgery. However, adjustment for ECOG performance status abrogated the associations with cancer-specific outcomes. Furthermore, a large proportion of elderly patients were cured with RNU. This suggests that chronological age alone is an inadequate criterion for predicting the response of older UTUC patients to RNU.
|
An exploration of factors affecting Hong Kong ICU nurses in providing oral care.
|
AIM AND OBJECTIVES
This paper aims to explore the factors that affect Hong Kong intensive care unit nurses in providing oral care.
BACKGROUND
The literature shows that evidence-based oral care prevents ventilator-associated pneumonia. Nevertheless, not all intensive care unit nurses provide such care. Although several studies have been undertaken to identify factors affecting the provision of oral care, none of these studies looked at the situation in Hong Kong.
DESIGN
An exploratory qualitative design was adopted, with audio-taped interviews.
METHODS
A convenience sample of 10 registered nurses with 3-14 years of intensive care unit working experience was recruited from the intensive care unit of one regional hospital in Hong Kong. Transcribed interviews were analysed by means of content analysis.
RESULTS
The participants' descriptions of their oral care practices covered oral health assessment, cleansing the oral cavity and care of the surrounding areas. Findings revealed the following significant factors that influenced intensive care unit nurses in providing oral care: their perceptions of the purpose of oral care; their fears about providing it; the priority of oral care; and inadequate support for oral care.
CONCLUSIONS
The findings indicate that nurses' oral care practices were not evidence based. Factors that affected the provision of oral care were consistent with those found in previous studies.
RELEVANCE TO CLINICAL PRACTICE
Study findings indicate that present oral care training should be revised. The findings also highlight the influence of ward culture on nurses' priorities in providing oral care. Appropriate materials, adequate staffing levels and the establishment of an evidence-based oral care protocol may facilitate the provision of oral care in the intensive care unit.
|
Comparison of Medial-to-lateral versus Traditional Lateral-to-medial Laparoscopic Dissection Sequences for Resection of Rectosigmoid Cancers: Randomized Controlled Clinical Trial
|
Abstract This study aimed to compare medial-to-lateral versus lateral-to-medial laparoscopic dissection sequences for resecting rectosigmoid cancers. We hypothesized that the medial-to-lateral approach was a more efficient procedure with potentially better oncologic results. Between January 1997 and June 1999, a total of 67 patients with rectosigmoid cancer treated by one surgeon using the laparoscopic approach were recruited for this prospective, randomized, double-blind clinical trial. Using the blocked randomization method, 36 patients were allocated to a medial-to-lateral (M) group and the other 31 to a lateral-to-medial (L) group; the groups were well matched in age, gender, symptoms, body mass index, American Society of Anesthesiology (ASA) class, tumor location, tumor distance above the anal verge, tumor gross morphology, TNM stage of the tumor, and accuracy of preoperative TNM staging (p > 0.05). All patients were followed up until June 2001. We found that the M group had a significantly shorter operating time and lower overall costs than the L group (p < 0.05). There was no significant difference between these two groups in terms of intraoperative complications, conversion rate, postoperative ileus, hospitalization, postoperative pain, postoperative complications, wound length, or disability (p > 0.05). The postoperative proinflammatory response, evaluated by the C-reactive protein level and the erythrocyte sedimentation rate, was significantly lower in the M group (p < 0.05). There was no significant difference between these two groups regarding postoperative immunosuppression, as evaluated by the alterations of total lymphocyte counts and the CD4+/CD8+ ratio (p > 0.05). The extent of dissection of these two dissection approaches was similar, as the harvested lymph nodes were equivalent (p > 0.05). 
During the whole follow-up period (median 32 months, range 24–54 months), the tumor recurrence rate was similar for these two groups of patients (5.6% in the M group vs. 6.5% in the L group; p > 0.05). These findings indicated that the medial-to-lateral approach was quicker, less expensive and possibly less invasive; moreover, it gave oncologic results similar to those achieved with the traditional lateral-to-medial dissection sequence. We thus concluded that the medial-to-lateral dissection sequence may currently be the most appropriate procedure for laparoscopic resection of rectosigmoid cancers.
|
Psychometric assessment of the Chinese version of the brief illness perception questionnaire in breast cancer survivors
|
OBJECTIVE
The eight-item Brief Illness Perception Questionnaire (B-IPQ) is designed to evaluate cognitive and emotional representations of illness. This study examined the validity and reliability of a traditional Chinese version of the B-IPQ in Hong Kong Chinese breast cancer survivors.
METHODS
358 Chinese breast cancer survivors who had recently ended their primary treatment completed this B-IPQ Chinese version. Confirmatory factor analysis (CFA) tested the factor structure. The internal consistency, construct, predictive and convergent validities of the scale were assessed.
RESULTS
CFA revealed that the original three-factor (cognitive and emotional representations plus illness comprehensibility) structure of the B-IPQ fitted our sample poorly. After deletion of one item measuring illness coherence, a seven-item version gave an optimal two-factor (cognitive and emotional representations) structure (B-IPQ-7). Cronbach's alphas for the two subscales were 0.653 and 0.821; for the overall seven-item scale, alpha was 0.783. Correlations of illness perception with physical symptom distress, anxiety and depression, together with known-group comparisons between treatment statuses, suggested acceptable construct validity. The association between baseline illness perception and psychological distress at 3-month follow-up supported predictive validity.
CONCLUSIONS
The B-IPQ-7 appears to be a moderately valid measure of illness perception in a cancer population, potentially useful for assessing illness representations in Chinese women with breast cancer.
|
Superinflation and Quintessence in the Presence of an Extra Field
|
We discuss the implication of the introduction of an extra field to the dynamics of a scalar field conformally coupled to gravitation in a homogeneous isotropic spatially flat universe. We show that for some reasonable parameter values the dynamical effects are similar to those of our previous model with a single scalar field. Nevertheless, for other parameter values new dynamical effects are obtained.
|
Access to rehabilitation facilities in an unselected hospital population affected by acute stroke
|
The aim of this study was to evaluate the selection criteria and characteristics of the patients who have access to rehabilitation facilities after having experienced an acute stroke. Between January 1993 and February 1994, 383 patients were recruited in 13 hospitals in Lombardy, and telephonically followed up four months after study entry. The data were collected by members of the Associazione Volontari Ospedalieri (Hospital Volunteers' Association). The 4-month mortality rate was 23%. The primary selection criterion for gaining access to rehabilitation facilities was the degree of disability; the secondary factor was age. Rehabilitation facilities were not available to very severely afflicted or self-sufficient patients, but were preferentially made available to young, partially-dependent patients. A rehabilitative intervention within the first month was made available to fewer than 50% of the patients for whom it was indicated. The absence of care for elderly patients and the delay in its availability for those who actually receive it underline the need for new organisational methods. The data presented here also show that voluntary associations can work as observers of the health service. A more complete study is required in order to understand the real dimensions of the problem and the clinical and social characteristics of the population involved.
|
The Mond Paradigm
|
I review briefly different aspects of the MOND paradigm, with emphasis on phenomenology, epitomized here by many MOND laws of galactic motion, analogous to Kepler's laws of planetary motion. I then comment on the possible roots of MOND in cosmology, possibly the deepest and most far reaching aspect of MOND. This is followed by a succinct account of existing underlying theories. I also reflect on the implications of MOND's successes for the dark matter (DM) paradigm: MOND predictions imply that baryons alone accurately determine the full field of each and every individual galactic object. This conflicts with the expectations in the DM paradigm because of the haphazard formation and evolution of galactic objects and the very different influences that baryons and DM are subject to during the evolution, as evidenced, e.g., by the very small baryon-to-DM fraction in galaxies (compared with the cosmic value). All this should disabuse DM advocates of the thought that DM will someday be able to reproduce MOND: it is inconceivable that the modicum of baryons left over in galaxies can be made to determine everything if a much heavier DM component is present.
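For concreteness, one standard MOND law alluded to above (quoted from the general MOND literature, not a new result of this review): in the deep-MOND regime the true acceleration a is tied to the Newtonian acceleration g_N produced by the baryons,

```latex
a \simeq \sqrt{a_0\, g_N} \qquad (g_N \ll a_0),
```

so that for a circular orbit, with a = V^2/r and g_N = G M_b / r^2,

```latex
\frac{V^2}{r} \simeq \frac{\sqrt{a_0\, G M_b}}{r}
\;\Longrightarrow\; V^4 \simeq a_0\, G M_b ,
```

independent of radius: asymptotically flat rotation curves and a baryonic mass-velocity relation determined by the baryons alone, which is the kind of prediction that is hard to recover in a haphazardly assembled DM halo.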
|
Online evolutionary collaborative filtering
|
Collaborative filtering algorithms attempt to predict a user's interests based on his past feedback. In real world applications, a user's feedback is often continuously collected over a long period of time. It is very common for a user's interests or an item's popularity to change over a long period of time. Therefore, the underlying recommendation algorithm should be able to adapt to such changes accordingly. However, most existing algorithms do not distinguish current and historical data when predicting the users' current interests. In this paper, we consider a new problem - online evolutionary collaborative filtering, which tracks user interests over time in order to make timely recommendations. We extended the widely used neighborhood based algorithms by incorporating temporal information and developed an incremental algorithm for updating neighborhood similarities with new data. Experiments on two real world datasets demonstrated both improved effectiveness and efficiency of the proposed approach.
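A hedged sketch of the incremental idea (illustrative, not the paper's exact algorithm): instead of recomputing item-item cosine similarities from scratch when new feedback arrives, maintain running co-rating sums and update only the pairs a new rating touches.

```python
from collections import defaultdict

# Incrementally maintained item-item similarity for neighborhood-based
# collaborative filtering. A new rating updates only the co-rating sums
# for item pairs the rating user has actually rated, so the cost of an
# update is proportional to that user's profile size, not the dataset.

class IncrementalItemSim:
    def __init__(self):
        self.dot = defaultdict(float)          # (i, j) -> sum of co-ratings
        self.sq = defaultdict(float)           # i -> sum of squared ratings
        self.user_ratings = defaultdict(dict)  # user -> {item: rating}

    def add_rating(self, user, item, r):
        for other, r2 in self.user_ratings[user].items():
            key = tuple(sorted((item, other)))
            self.dot[key] += r * r2            # only touched pairs change
        self.sq[item] += r * r
        self.user_ratings[user][item] = r

    def cosine(self, i, j):
        key = tuple(sorted((i, j)))
        denom = (self.sq[i] * self.sq[j]) ** 0.5
        return self.dot[key] / denom if denom else 0.0

sim = IncrementalItemSim()
sim.add_rating("u1", "A", 4); sim.add_rating("u1", "B", 4)
sim.add_rating("u2", "A", 2); sim.add_rating("u2", "B", 1)
print(round(sim.cosine("A", "B"), 3))  # → 0.976
```

The temporal weighting the paper adds on top of this (discounting historical ratings relative to current ones) would enter as a decay factor applied to the maintained sums; that factor is omitted here for brevity.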
|
VALIDITY OF HIGH-SCHOOL GRADES IN PREDICTING STUDENT SUCCESS BEYOND THE FRESHMAN YEAR: High-School Record vs. Standardized Tests as Indicators of Four-Year College Outcomes
|
High-school grades are often viewed as an unreliable criterion for college admissions, owing to differences in grading standards across high schools, while standardized tests are seen as methodologically rigorous, providing a more uniform and valid yardstick for assessing student ability and achievement. The present study challenges that conventional view. The study finds that high-school grade point average (HSGPA) is consistently the best predictor not only of freshman grades in college, the outcome indicator most often employed in predictive-validity studies, but of four-year college outcomes as well. A previous study, UC and the SAT (Geiser with Studley, 2003), demonstrated that HSGPA in college-preparatory courses was the best predictor of freshman grades for a sample of almost 80,000 students admitted to the University of California. Because freshman grades provide only a short-term indicator of college performance, the present study tracked four-year college outcomes, including cumulative college grades and graduation, for the same sample in order to examine the relative contribution of high-school record and standardized tests in predicting longer-term college performance. Key findings are: (1) HSGPA is consistently the strongest predictor of four-year college outcomes for all academic disciplines, campuses and freshman cohorts in the UC sample; (2) surprisingly, the predictive weight associated with HSGPA increases after the freshman year, accounting for a greater proportion of variance in cumulative fourth-year than first-year college grades; and (3) as an admissions criterion, HSGPA has less adverse impact than standardized tests on disadvantaged and underrepresented minority students. The paper concludes with a discussion of the implications of these findings for admissions policy and argues for greater emphasis on the high-school record, and a corresponding de-emphasis on standardized tests, in college admissions. 
* The study was supported by a grant from the Koret Foundation.

Introduction and Policy Context

This study examines the relative contribution of high-school grades and standardized admissions tests in predicting students' long-term performance in college, including cumulative grade-point average and college graduation. The relative emphasis on grades vs. tests as admissions criteria has become increasingly visible as a policy issue at selective colleges and universities, particularly in states such as Texas and California, where affirmative action has been challenged or eliminated. Compared to high-school grade-point average (HSGPA), scores on standardized admissions tests such as the SAT I are much more closely correlated with students' socioeconomic background characteristics. As shown in Table 1, for example, among our study sample of almost 80,000 University of California (UC) freshmen, SAT I verbal and math scores exhibit a strong, positive relationship with measures of socioeconomic status (SES) such as family income, parents' education and the academic ranking of a student's high school, whereas HSGPA is only weakly associated with such measures. As a result, standardized admissions tests tend to have greater adverse impact than HSGPA on underrepresented minority students, who come disproportionately from disadvantaged backgrounds. The extent of the difference can be seen by rank-ordering students on both standardized tests and high-school grades and comparing the distributions. Rank-ordering students by test scores produces much sharper racial/ethnic stratification than when the same students are ranked by HSGPA, as shown in Table 2. It should be borne in mind that the UC sample shown here represents a highly select group of students, drawn from the top 12.5% of California high-school graduates under the provisions of the state's Master Plan for Higher Education. Overall, under-represented minority students account for about 17 percent of that group, although their percentage varies considerably across different HSGPA and SAT levels within the sample. When students are ranked by HSGPA, underrepresented minorities account for 28 percent of students in the bottom …

Table 1. Correlation of Admissions Factors with SES

               Family Income   Parents' Education   School API Decile
SAT I verbal       0.32              0.39                 0.32
SAT I math         0.24              0.32                 0.39
HSGPA              0.04              0.06                 0.01

Source: UC Corporate Student System data on 79,785 first-time freshmen entering between Fall 1996 and Fall 1999.
|
Stripes: Bit-serial deep neural network computing
|
Motivated by the variance in the numerical precision requirements of Deep Neural Networks (DNNs) [1], [2], we present Stripes (STR), a hardware accelerator whose execution time scales almost proportionally with the length of the numerical representation used. STR relies on bit-serial compute units and on the parallelism that is naturally present within DNNs to improve performance and energy with no accuracy loss. In addition, STR provides a new degree of adaptivity enabling on-the-fly trade-offs among accuracy, performance, and energy. Experimental measurements over a set of DNNs for image classification show that STR improves performance over a state-of-the-art accelerator [3] from 1.30x to 4.51x and by 1.92x on average with no accuracy loss. STR is 57% more energy efficient than the baseline at a cost of 32% additional area. Additionally, by enabling configurable, per-layer and per-bit precision control, STR allows the user to trade accuracy for further speedup and energy efficiency.
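The bit-serial principle can be illustrated with a short sketch (ours, not the STR hardware): feeding activations one bit per cycle and shift-accumulating reproduces the full-precision dot product in p cycles, so compute time tracks the chosen bit width p rather than a fixed 16 bits.

```python
# Hedged software sketch of bit-serial multiply-accumulate. Each "cycle"
# consumes one bit of every activation; a lane contributes (bit of a) * w
# (an AND gate plus an adder in hardware), and partial sums are
# shift-accumulated. Total cycles = p, the activation bit width.

def bitserial_dot(acts, weights, p):
    """Dot product with p-bit unsigned activations, one bit per cycle."""
    acc = 0
    for bit in range(p):                      # p cycles total
        partial = sum(((a >> bit) & 1) * w    # one AND-gated lane per pair
                      for a, w in zip(acts, weights))
        acc += partial << bit                 # shift-accumulate
    return acc

acts, weights = [3, 5, 10], [2, 7, 1]
assert bitserial_dot(acts, weights, p=4) == sum(a * w for a, w in zip(acts, weights))
print(bitserial_dot(acts, weights, p=4))  # → 51
```

Lowering p for a layer whose activations tolerate fewer bits directly shortens the loop, which is the performance/accuracy trade-off the accelerator exposes per layer.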
|
Impacts of climate change on British Columbia's biodiversity: A literature review
|
This Extended Abstract condenses a literature review that summarized research on the current and potential impacts of climate change on biodiversity in British Columbia. The review, which is preceded by a brief summary of observed and predicted climate changes, brings together the relevant information contained in those publications for the benefit of natural resource managers. Contemporary increases in atmospheric carbon dioxide concentration, average annual temperatures, and sea surface temperatures have been documented, and climatologists predict these increases will continue through this century. Research suggests that whole ecosystems and biogeoclimatic zones will not respond as a unit; rather, individual components of ecosystems will respond. Species will respond to these climate changes either by adapting in place, migrating, or going extinct. Examples of species responses have already been recorded in British Columbia. Finally, the review summarizes research on how to mitigate climate change impacts on biodiversity. Mitigation will require implementing conservation principles, reducing non-climate stressors, providing latitudinal and elevational migration corridors, and instituting long-term monitoring to define causality between climate change and biotic responses. Perhaps the most important advice for natural resource and biodiversity managers is to implement, to the extent possible, good conservation practices.
|
Non-invasive therapy to reduce the body burden of aluminium in Alzheimer's disease.
|
There are unexplained links between human exposure to aluminium and the incidence, progression and aetiology of Alzheimer's disease. The null hypothesis which underlies any link is that there would be no Alzheimer's disease in the effective absence of a body burden of aluminium. To test this, the latter would have to be reduced to and retained at a level that was commensurate with an Alzheimer's disease-free population. In the absence of recent human interference in the biogeochemical cycle of aluminium, the reaction of silicic acid with aluminium has acted as a geochemical control of the biological availability of aluminium. This same mechanism might now be applied to both the removal of aluminium from the body and the reduced entry of aluminium into the body, while ensuring that essential metals, such as iron, are unaffected. Based upon the premise that urinary aluminium is the best non-invasive estimate of the body burden of aluminium, patients with Alzheimer's disease were asked to drink 1.5 L of a silicic acid-rich mineral water each day for five days and, by comparison of their urinary excretion of aluminium pre- and post this simple procedure, the influence upon their body burden of aluminium was determined. Drinking the mineral water increased significantly (P<0.001) their urinary excretion of silicic acid (34.3 +/- 15.2 to 55.7 +/- 14.2 micromol/mmol creatinine) and concomitantly reduced significantly (P=0.037) their urinary excretion of aluminium (86.0 +/- 24.3 to 62.2 +/- 23.2 nmol/mmol creatinine). The latter was achieved without any significant (P>0.05) influence upon the urinary excretion of iron (20.7 +/- 9.5 to 21.7 +/- 13.8 nmol/mmol creatinine). The reduction in urinary aluminium supported the future longer-term use of silicic acid as a non-invasive therapy for reducing the body burden of aluminium in Alzheimer's disease.
|
Effect of eight months of inhaled beclomethasone dipropionate and budesonide on carbohydrate metabolism in adults with asthma.
|
BACKGROUND
The safety of high dose inhaled steroids has been a subject of debate. The efficacy and safety of beclomethasone dipropionate and budesonide inhalations were evaluated by measuring their effects on pulmonary function, on the hypothalamic-pituitary-adrenocortical axis, and on carbohydrate metabolism in adults with unstable asthma.
METHODS
Fifteen adults with unstable asthma and 15 healthy controls were studied. Eight patients were treated with beclomethasone dipropionate in initially high (2 mg/day for five months) and subsequently lower (1 mg/day for three months) doses. Seven patients were treated with budesonide at doses of 1.6 mg/day for five months followed by 0.8 mg/day for three months. Blood glucose and serum insulin were measured in an oral glucose tolerance test and plasma cortisol in an adrenocorticotrophic hormone test. The antiasthmatic effect of treatment was evaluated by measuring peak morning expiratory flow rates and forced expiratory volume in one second (FEV1).
RESULTS
The FEV1 increased significantly after one month of treatment (medians 88% v 96%, p < 0.01), and nocturnal symptoms disappeared within two weeks of treatment in both groups. At one month, the high dose significantly decreased serum insulin concentrations as calculated from the areas under the incremental two-hour curves in the glucose tolerance test. The decrease was 59% for beclomethasone dipropionate (medians 76 v 31 mU/l/h, p < 0.005) and 42% for budesonide (medians 79 v 46 mU/l/h, p < 0.01). The median areas at five and eight months were intermediate for both drugs. No significant differences were found when the five and eight month values were compared either with the baseline or with one month values. The difference between the baseline values of both groups and the respective values in healthy controls was significant (medians 79 v 49 mU/l/h, p < 0.01). The one month values for the patients and control subjects were similar. Paralleling the changes for insulin, the area under the incremental two-hour blood glucose curve decreased significantly (medians 1.4 v 0.4 mmol/l/h, p < 0.05) during the first month of treatment. The five and eight month values were intermediate (medians 0.8 and 0.7 mmol/l/h, respectively). These changes were not significant compared with the baseline or the one month areas. Similar changes were seen in both treatment groups. Neither treatment had any significant effect on plasma cortisol in the one hour adrenocorticotrophic hormone test.
CONCLUSIONS
In patients stressed by uncontrolled asthma, the antiasthmatic effect of high dose beclomethasone dipropionate and budesonide was accompanied by a significant initial decrease in insulin resistance with a parallel improvement in glucose tolerance. During prolonged treatment a small increase in insulin sensitivity was found. The overall effect of beclomethasone dipropionate and budesonide inhalations on carbohydrate metabolism may be beneficial in patients with uncontrolled asthma.
|
Ethical challenges in nursing homes--staff's opinions and experiences with systematic ethics meetings with participation of residents' relatives.
|
BACKGROUND
Many ethical problems exist in nursing homes. These include, for example, decision-making in end-of-life care, use of restraints and a lack of resources.
AIMS
The aim of the present study was to investigate nursing home staff's opinions and experiences with ethical challenges and to find out which types of ethical challenges and dilemmas occur and are being discussed in nursing homes.
METHODS
The study used a two-tiered approach, using a questionnaire on ethical challenges and systematic ethics work, given to all employees of a Norwegian nursing home including nonmedical personnel, and a registration of systematic ethics discussions from an Austrian model of good clinical practice.
RESULTS
Ninety-one per cent of the nursing home staff described ethical problems as a burden. Ninety per cent experienced ethical problems in their daily work. The top three ethical challenges reported by the nursing home staff were as follows: lack of resources (79%), end-of-life issues (39%) and coercion (33%). To improve systematic ethics work, most employees suggested ethics education (86%) and time for ethics discussion (82%). Of 33 documented ethics meetings from Austria during a 1-year period, 29 were prospective resident ethics meetings where decisions for a resident had to be made. Agreement about a solution was reached in all 29 cases, and this consensus was put into practice in all cases. Residents did not participate in the meetings, while relatives participated in a majority of case discussions. In many cases, the main topic was end-of-life care and life-prolonging treatment.
CONCLUSIONS
Lack of resources, end-of-life issues and coercion were ethical challenges most often reported by nursing home staff. The staff would appreciate systematic ethics work to aid decision-making. Resident ethics meetings can help to reach consensus in decision-making for nursing home patients. In the future, residents' participation should be encouraged whenever possible.
|
Quality of life in swallowing of the elderly patients affected by stroke.
|
BACKGROUND
- The elderly population faces many difficulties as a result of the aging process. Conceptualizing and evaluating their quality of life is a challenge, and it is hard to characterize the impact on daily activities and on functional capacity. Stroke is one of the most disabling neurological diseases and has become a public health problem. An aggravating consequence is dysphagia, a disorder that compromises the progression of food from the mouth to the stomach, causing clinical complications for the individual.
OBJECTIVE
- To characterize the swallowing-related quality of life of elderly patients affected by stroke.
METHODS
- Cross-sectional study conducted at the University Hospital with 35 elderly stroke patients (19 women and 16 men, aged between 60 and 90 years) who self-reported a satisfactory overall clinical picture. The Quality of Life in Swallowing protocol was applied. The data were statistically analyzed by means of ANOVA tests, Spearman correlation and the t test, with a significance level of 5%.
RESULTS
- The mean age was 69.5 years. As for the scores obtained by the 35 participants in the 11 domains of the protocol, a change in score indicating severe to moderate impact on self-reported swallowing-related quality of life was observed (31.8% to 59.5%); the domain that interfered most was feeding time (31.8%).
CONCLUSION
- Elderly patients affected by stroke who present dysphagia have low scores in swallowing-related quality of life.
|
Graphical models via univariate exponential family distributions
|
Undirected graphical models, or Markov networks, are a popular class of statistical models, used in a wide variety of applications. Popular instances of this class include Gaussian graphical models and Ising models. In many settings, however, it might not be clear which subclass of graphical models to use, particularly for non-Gaussian and non-categorical data. In this paper, we consider a general sub-class of graphical models where the node-wise conditional distributions arise from exponential families. This allows us to derive multivariate graphical model distributions from univariate exponential family distributions, such as the Poisson, negative binomial, and exponential distributions. Our key contributions include a class of M-estimators to fit these graphical model distributions; and rigorous statistical analysis showing that these M-estimators recover the true graphical model structure exactly, with high probability. We provide examples of genomic and proteomic networks learned via instances of our class of graphical models derived from Poisson and exponential distributions.
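The node-wise fitting idea can be sketched in code. The following is a generic l1-penalised Poisson neighborhood regression solved by proximal gradient descent, not the authors' exact M-estimator, analysis conditions, or tuning; the simulated latent-factor data and all parameter values (`lam`, `lr`, iteration count) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def poisson_neighborhood(X, j, lam=0.05, lr=0.005, iters=4000):
    """l1-regularised Poisson regression of node j on all other nodes.

    Sparse support of the returned weights estimates the neighborhood
    of node j in the graphical model (ISTA-style proximal gradient).
    """
    y = X[:, j].astype(float)
    Z = np.delete(X, j, axis=1).astype(float)
    Z = (Z - Z.mean(0)) / (Z.std(0) + 1e-12)      # standardise predictors
    n = len(y)
    w = np.zeros(Z.shape[1])
    b = 0.0
    for _ in range(iters):
        mu = np.exp(np.clip(Z @ w + b, -5, 5))    # canonical (log) link
        w = soft_threshold(w - lr * Z.T @ (mu - y) / n, lr * lam)
        b -= lr * (mu - y).mean()
    return w

rng = np.random.default_rng(0)
shared = rng.poisson(3.0, 2000)                   # latent factor coupling nodes 0 and 1
X = np.column_stack([rng.poisson(1.0, 2000) + shared,
                     rng.poisson(1.0, 2000) + shared,
                     rng.poisson(3.0, 2000)])     # node 2 is independent
w = poisson_neighborhood(X, j=0)
print(abs(w[0]) > abs(w[1]))   # edge 0-1 gets clearly more weight than 0-2
```

Repeating this for every node and combining the supports (e.g. by an AND or OR rule) yields the estimated graph structure.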
|
Non-Local Recurrent Network for Image Restoration
|
Many classic methods have shown non-local self-similarity in natural images to be an effective prior for image restoration. However, it remains unclear and challenging to make use of this intrinsic property via deep networks. In this paper, we propose a non-local recurrent network (NLRN) as the first attempt to incorporate non-local operations into a recurrent neural network (RNN) for image restoration. The main contributions of this work are: (1) Unlike existing methods that measure self-similarity in an isolated manner, the proposed non-local module can be flexibly integrated into existing deep networks for end-to-end training to capture deep feature correlation between each location and its neighborhood. (2) We fully employ the RNN structure for its parameter efficiency and allow deep feature correlation to be propagated along adjacent recurrent states. This new design boosts robustness against inaccurate correlation estimation due to severely degraded images. (3) We show that it is essential to maintain a confined neighborhood for computing deep feature correlation given degraded images. This is in contrast to existing practice [43] that deploys the whole image. Extensive experiments on both image denoising and super-resolution tasks are conducted. Thanks to the recurrent non-local operations and correlation propagation, the proposed NLRN achieves superior results to state-of-the-art methods with many fewer parameters. The code is available at https://github.com/Ding-Liu/NLRN.
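The non-local operation with a confined neighborhood can be sketched as follows. This is a simplified 1-D, NumPy-only illustration: the embeddings are fixed random projections standing in for the learned, end-to-end-trained transforms of NLRN, the function name `non_local_1d` is ours, and the residual connection and recurrent propagation of the full network are omitted.

```python
import numpy as np

def non_local_1d(x, q=5, d=8, seed=0):
    """Toy non-local operation over a confined neighborhood.

    x: (T, C) feature map. For each position t, softmax attention weights
    are computed against positions within +/-q only (the paper's confined
    neighborhood), and transformed features are aggregated accordingly.
    """
    rng = np.random.default_rng(seed)
    T, C = x.shape
    theta = rng.standard_normal((C, d)) / np.sqrt(C)   # query embedding
    phi = rng.standard_normal((C, d)) / np.sqrt(C)     # key embedding
    g = rng.standard_normal((C, C)) / np.sqrt(C)       # value transform
    out = np.empty_like(x)
    for t in range(T):
        lo, hi = max(0, t - q), min(T, t + q + 1)      # confined neighborhood
        scores = (x[t] @ theta) @ (x[lo:hi] @ phi).T   # similarity scores
        w = np.exp(scores - scores.max())
        w /= w.sum()                                   # softmax weights
        out[t] = w @ (x[lo:hi] @ g)                    # weighted aggregation
    return out

x = np.random.default_rng(1).standard_normal((32, 16))
y = non_local_1d(x)
print(y.shape)  # same shape as the input feature map
```

Restricting `lo:hi` to a window rather than the whole sequence is the design choice the paper argues is essential for degraded images.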
|
Exploring Legal Patent Citations for Patent Valuation
|
Effective patent valuation is important for patent holders. Forward patent citations, widely used in assessing patent value, have been considered as reflecting knowledge flows, just like paper citations. However, patent citations also carry legal implication, which is important for patent valuation. We argue that patent citations can either be technological citations that indicate knowledge transfer or be legal citations that delimit the legal scope of citing patents. In this paper, we first develop citation-network based methods to infer patent quality measures at either the legal or technological dimension. Then we propose a probabilistic mixture approach to incorporate both the legal and technological dimensions in patent citations, and an iterative learning process that integrates a temporal decay function on legal citations, a probabilistic citation network based algorithm and a prediction model for patent valuation. We learn all the parameters together and use them for patent valuation. We demonstrate the effectiveness of our approach by using patent maintenance status as an indicator of patent value and discuss the insights we learned from this study.
|
Probiotic isolates from unconventional sources: a review
|
The use of probiotics for human and animal health is continuously increasing. The probiotics used in humans commonly come from dairy foods, whereas the sources of probiotics used in animals are often the animals' own digestive tracts. Increasingly, probiotics from sources other than milk products are being selected for use in people who are lactose intolerant. These sources are non-dairy fermented foods and beverages, non-dairy and non-fermented foods such as fresh fruits and vegetables, feces of breast-fed infants and human breast milk. The probiotics that are used in both humans and animals are selected in stages; after initial isolation on an appropriate culture medium, the probiotics must meet important qualifications, including being non-pathogenic, acid- and bile-tolerant strains that possess the ability to act against pathogens in the gastrointestinal tract and the safety-enhancing property of being unable to transfer any antibiotic resistance genes to other bacteria. The final stages of selection involve the accurate identification of the probiotic species.
|
AMOEBA: HIERARCHICAL CLUSTERING BASED ON SPATIAL PROXIMITY USING DELAUNAY DIAGRAM
|
Exploratory data analysis is increasingly necessary as larger spatial data sets are managed in electronic media. We propose an exploratory method that reveals a robust clustering hierarchy. Our approach uses the Delaunay diagram to incorporate spatial proximity. It does not require any prior knowledge about the data set, nor does it require parameters from the user. Multi-level clusters are successfully discovered by this new method in only O(n log n) time, where n is the size of the data set. The efficiency of our method allows us to construct and display a new type of tree graph that facilitates understanding of the complex hierarchy of clusters. We show that clustering methods that adopt a raster-like or vector-like representation of proximity are not appropriate for spatial clustering. We illustrate the robustness of our method with an experimental evaluation on synthetic data sets as well as real data sets.
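The core idea of Delaunay-based spatial clustering can be sketched briefly. AMOEBA's actual criterion is local and produces a hierarchy; the sketch below substitutes a single global edge-length cut (mean plus a multiple of the standard deviation, an illustrative assumption) and takes connected components of the remaining short edges as flat clusters.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_clusters(points, k=2.0):
    """Cluster 2-D points by pruning long Delaunay edges.

    Edges longer than mean + k*std of all edge lengths are removed;
    connected components of the remaining graph are the clusters.
    """
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:                  # collect unique edges
        for a in range(3):
            for b in range(a + 1, 3):
                edges.add(tuple(sorted((simplex[a], simplex[b]))))
    edges = list(edges)
    lengths = np.array([np.linalg.norm(points[a] - points[b]) for a, b in edges])
    cut = lengths.mean() + k * lengths.std()       # global length criterion

    parent = list(range(len(points)))              # union-find over short edges
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for (a, b), length in zip(edges, lengths):
        if length <= cut:
            parent[find(a)] = find(b)
    return np.array([find(i) for i in range(len(points))])

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(20, 0.3, (50, 2))])
labels = delaunay_clusters(pts)
print(len(set(labels.tolist())))   # two well-separated blobs yield two clusters
```

Because the Delaunay diagram has O(n) edges and is built in O(n log n) time, this kind of proximity graph underlies the complexity bound stated in the abstract.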
|
In vitro functional characterization of 37 CYP2C9 allelic isoforms found in Chinese Han population
|
Aim: Cytochrome P450 2C9 (CYP2C9) is a polymorphic enzyme that is responsible for the metabolism of approximately 15% of clinically important drugs. The aim of this study was to assess the catalytic characteristics of 37 CYP2C9 allelic isoforms found in the Chinese Han population on the metabolism of tolbutamide in vitro. Methods: The wild-type and 36 CYP2C9 variants were expressed in sf21 insect cells using a baculovirus-mediated expression system. Then the insect microsomes were prepared for assessing the metabolic characteristics of each variant toward the CYP2C9-specific drug substrate tolbutamide. Results: Of the 36 allelic variants tested, the intrinsic clearance values of 2 allelic isoforms (CYP2C9.36 and CYP2C9.51) were much higher than that of the wild-type CYP2C9.1 protein, 3 allelic isoforms (CYP2C9.11, CYP2C9.56 and N418T) exhibited intrinsic clearance values similar to the wild-type enzyme, whereas the other 31 variants showed significantly reduced intrinsic clearance values for tolbutamide, ranging from 0.08% to 66.88%. Conclusion: Our study provides the most comprehensive data concerning the enzymatic activity of the CYP2C9 variants that are present in the Chinese Han population, and our data suggest that carriers of most of these alleles should receive more attention when CYP2C9-mediated drugs are used clinically.
|
Support Vector Machine Ensemble with Bagging
|
Even though the support vector machine (SVM) has been proposed to provide good generalization performance, the classification result of the practically implemented SVM is often far from the theoretically expected level, because implementations are based on approximated algorithms due to the high complexity of time and space. To improve the limited classification performance of the real SVM, we propose to use SVM ensembles with bagging (bootstrap aggregating). Each individual SVM is trained independently, using randomly chosen training samples via a bootstrap technique. Then they are aggregated to make a collective decision in several ways, such as majority voting, LSE (least squares estimation)-based weighting, and double-layer hierarchical combining. Various simulation results for IRIS data classification and hand-written digit recognition show that the proposed SVM ensemble with bagging greatly outperforms a single SVM in terms of classification accuracy.
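A bagged SVM ensemble with majority voting can be sketched with scikit-learn. Only the voting aggregation is shown (the paper's LSE weighting and hierarchical combining are not); the kernel, ensemble size, and the use of the iris data are illustrative choices, and on such an easy data set the ensemble is not guaranteed to beat the single SVM.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Single SVM baseline.
single = SVC(kernel="rbf", gamma="scale")

# Bagging: each SVC is fit on a bootstrap resample of the training set,
# and predictions are aggregated by majority voting.
ensemble = BaggingClassifier(SVC(kernel="rbf", gamma="scale"),
                             n_estimators=10, random_state=0)

acc_single = cross_val_score(single, X, y, cv=5).mean()
acc_ensemble = cross_val_score(ensemble, X, y, cv=5).mean()
print(round(acc_single, 3), round(acc_ensemble, 3))
```

Bootstrap resampling gives each member SVM a slightly different decision boundary, and voting averages out their individual errors.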
|
Inventory models with lateral transshipments: A review
|
Lateral transshipments within an inventory system are stock movements between locations of the same echelon. These transshipments can be conducted periodically at predetermined points in time to proactively redistribute stock, or they can be used reactively as a method of meeting demand which cannot be satisfied from stock on hand. The elements of an inventory system considered, e.g. size, cost structures and service level definition, all influence the best method of transshipping. Models of many different systems have been considered. This paper provides a literature review which categorizes the research to date on lateral transshipments, so that these differences can be understood and gaps within the literature can be identified.
|
Latent Contextual Bandits and their Application to Personalized Recommendations for New Users
|
Personalized recommendations for new users, also known as the cold-start problem, can be formulated as a contextual bandit problem. Existing contextual bandit algorithms generally rely on features alone to capture user variability. Such methods are inefficient in learning new users’ interests. In this paper we propose Latent Contextual Bandits. We consider both the benefit of leveraging a set of learned latent user classes for new users, and how we can learn such latent classes from prior users. We show that our approach achieves a better regret bound than existing algorithms. We also demonstrate the benefit of our approach using a large real world dataset and a preliminary user study.
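The benefit of latent user classes for a cold-start user can be illustrated with a toy simulation. The per-class reward models below are assumed to have been learned already from prior users, the greedy posterior-weighted policy is a simplification of the paper's algorithm, and all reward probabilities are made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each latent user class has Bernoulli reward means over three arms,
# assumed learned from prior users (illustrative values only).
class_means = np.array([[0.9, 0.1, 0.1],    # class 0 prefers arm 0
                        [0.1, 0.9, 0.1],    # class 1 prefers arm 1
                        [0.1, 0.1, 0.9]])   # class 2 prefers arm 2
true_class = 1                              # the new user's hidden class
posterior = np.ones(3) / 3.0                # uniform prior over classes
total_reward = 0

for t in range(300):
    arm = int(np.argmax(posterior @ class_means))    # greedy under posterior
    reward = rng.random() < class_means[true_class, arm]
    total_reward += reward
    # Bayes update of the class posterior from the observed reward.
    lik = np.where(reward, class_means[:, arm], 1 - class_means[:, arm])
    posterior = posterior * lik
    posterior /= posterior.sum()

print(int(np.argmax(posterior)))   # posterior concentrates on the true class
```

Because the learner only has to identify one of a few classes rather than estimate a full preference vector from scratch, it locks onto the good arm after a handful of interactions, which is the intuition behind the improved regret bound.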
|
Minimal invasive management of scaphoid fractures: from fresh to nonunion.
|
The peculiar shape of the scaphoid hinders a precise evaluation of its fracture configuration, displacement and the accuracy of screw placement. Its tenuous vascular supply risks the complications of delayed union, nonunion and avascular necrosis. The scaphoid is the focus of the ligamentous attachments governing carpal kinematics, and preservation of its anatomy and vascularity is critical for normal wrist function. A new fracture classification clearly denoting every fracture type and guiding management is introduced. The minimally invasive management of different scaphoid fracture conditions, including acute non-displaced and displaced fractures, delayed presentation, and nonunion, is discussed. The role of arthroscopy is emphasized, and detailed surgical techniques are shared here.
|
Excellence in leadership : demands on the professional school principal
|
A professional school principal is the educational leader and manager of a school, and is therefore responsible for the work performance of all the people in the school (i.e. both staff and learners). People are the human resources of schools. They use material resources (such as finances, information equipment, and facilities) to produce a "product", namely, the educated learner. One of the principal's jobs (the so-called principalship) is to help the school achieve a high level of performance through the utilisation of all its human and material resources. This is done through effective, and ultimately excellence in, leadership. More simply stated: a principal's job is to get things done by working with and through other people. Studies of effective and excellent principals reveal that the major reason for principals' failure is an inability to deal with people. If the people perform well, the school performs well; if the people do not perform well, the school does not. In this sense, the leadership task of school principals is of the utmost importance and is probably the most important element of the principal's role and/or task. School principals are essential to the success of schools of all types and sizes. This philosophical review of the literature, which draws its conclusions from recent "best practices" with regard to excellence in school leadership and the so-called "new" principalship, is an attempt to raise and answer some questions concerning new demands on the professional principalship in a changing South Africa where educational reform is the norm rather than the exception.
|
A Novel Dual-Band Printed Diversity Antenna for Mobile Terminals
|
A novel dual-band printed diversity antenna is proposed and studied. The antenna, which consists of two back-to-back monopoles with symmetric configuration, is printed on a printed circuit board. The effects of some important parameters of the proposed antenna are studied in depth and the design methodology is given. A prototype of the proposed antenna operating at UMTS (1920-2170 MHz) and 2.4-GHz WLAN (2400-2484 MHz) bands is provided to demonstrate the usability of the methodology in dual-band diversity antenna for mobile terminals. In the above two bands, the isolations of the prototype are larger than 13 dB and 16 dB, respectively. The measured radiation patterns of the two monopoles in general cover complementary space regions. The diversity performance is also evaluated by calculating the envelope correlation coefficient, the mean effective gains of the antenna elements and the diversity gain. It is proved that the proposed antenna can provide spatial and pattern diversity to combat multipath fading.
|
Psychometric properties of two measures of psychological well-being in adult growth hormone deficiency
|
BACKGROUND
Psychometric properties of two measures of psychological well-being were evaluated for adults with growth hormone deficiency (GHD): the General Well-being Index (GWBI), the British version of the Psychological General Well-being Index, and the 12-item Well-being Questionnaire (W-BQ12).
METHODS
Reliability, structure and other aspects of validity were investigated in a cross-sectional study of 157 adults with treated or untreated GHD, and sensitivity to change in a randomised placebo-controlled study of three months' growth hormone (GH) withdrawal from 12 of 21 GH-treated adults.
RESULTS
Very high completion rates were evidence that both questionnaires were acceptable to respondents. Factor analyses did not indicate the existence of useful GWBI subscales, but confirmed the validity of calculating a GWBI Total score. However, very high internal consistency reliability (Cronbach's alpha = 0.96, N = 152), probably indicated some item redundancy in the 22-item GWBI. On the other hand, factor analyses confirmed the validity of the three W-BQ12 subscales of Negative Well-being, Energy, and Positive Well-being, each having excellent internal reliability (alphas of 0.86, 0.86 and 0.88, respectively, N from 152 to 154). There was no sign of item redundancy in the highly acceptable Cronbach's alpha of 0.93 (N = 148) for the whole W-BQ12 scale. Whilst neither questionnaire found significant differences between GH-treated and non-GH-treated patients, there were correlations (for GH-treated patients) with duration of GH treatment for GWBI Total (r = -0.36, p = 0.001, N = 85), W-BQ12 Total (r = 0.35, p = 0.001, N = 88) and for all W-BQ12 subscales: thus the longer the duration of GH treatment (ranging from 0.5 to 10 years), the better the well-being. Both questionnaires found that men had significantly better overall well-being than women. The W-BQ12 was more sensitive to change than the GWBI in the GH-Withdrawal study. A significant between-group difference in change in W-BQ12 Energy scores was found [t(18) = 3.25, p = 0.004, 2-tailed]: patients withdrawn from GH had reduced energy at end-point. The GWBI found no significant change.
CONCLUSION
The W-BQ12 is recommended in preference to the GWBI to measure well-being in adult GHD: it is considerably shorter, has three useful subscales, and has greater sensitivity to change.
|
Long-term changes of sexual function in men with obstructive sleep apnea after initiation of continuous positive airway pressure.
|
INTRODUCTION
Obstructive sleep apnea (OSA), particularly intermittent nocturnal hypoxemia, is associated with erectile dysfunction (ED).
AIM
We investigated in patients with OSA whether continuous positive airway pressure (CPAP) therapy has a long-term effect on sexual function, including ED, in the presence of other risk factors for ED.
METHODS
Within a long-term observational design, we reassessed 401 male patients who had been referred for polysomnography, with respect to erectile and overall sexual function. Mean ± standard deviation follow-up time was 36.5 ± 3.7 months. Patients with moderate to severe ED were stratified according to the regular use of CPAP.
MAIN OUTCOME MEASURE
Changes of sexual function were assessed by the 15-item International Index of Erectile Function (IIEF-15) questionnaire, including the domains erectile function (EF), intercourse satisfaction, orgasmic function (OF), sexual desire (SD), and overall satisfaction (OS).
RESULTS
Of the 401 patients, 91 returned a valid IIEF-15 questionnaire at follow-up. Their baseline characteristics were not different from those of the total study group. OSA (apnea-hypopnea index >5/hour) had been diagnosed in 91.2% of patients. In patients with moderate to severe ED (EF domain <17), CPAP users (N = 21) experienced an improvement in overall sexual function (IIEF-15 summary score; P = 0.014) compared with CPAP non-users (N = 18), as well as in the subdomains OF (P = 0.012), SD (P = 0.007), and OS (P = 0.033). Similar results were obtained in patients with poor overall sexual dysfunction (IIEF-15 summary score <44). In patients with moderate to severe ED and low mean nocturnal oxygen saturation (≤93%, median), also the EF subdomain improved in CPAP users vs. non-users (P = 0.047).
CONCLUSIONS
These data indicate that long-term CPAP treatment of OSA and the related intermittent hypoxia can improve or preserve sexual function in men with OSA and moderate to severe erectile or sexual dysfunction, suggesting a certain reversibility of OSA-induced sexual dysfunctions.
|
Semi-automated modular modeling of buildings for model predictive control
|
A promising alternative to standard control strategies for heating, ventilation, air conditioning and blinds positioning of buildings is Model Predictive Control (MPC). Key to MPC is having a sufficiently simple (preferably linear) model of the building's thermal dynamics.
In this paper we propose and test a general approach to derive MPC compatible models consisting of the following steps: First, we use standard geometry and construction data to derive in an automated way a physical first-principles based linear model of the building's thermal dynamics. This describes the evolution of room, wall, floor and ceiling temperatures on a per zone level as a function of external heat fluxes (e.g., solar gains, heating/cooling system heat fluxes etc.). Second, we model the external heat fluxes as linear functions of control inputs and predictable disturbances. Third, we tune a limited number of physically meaningful parameters. Finally, we use model reduction to derive a low-order model that is suitable for MPC.
The full-scale and low-order models were tuned with and compared to a validated EnergyPlus building simulation software model. The approach was successfully applied to the modeling of a representative Swiss office building. The proposed modular approach flexibly supports stepwise model refinements and integration of models for the building's technical subsystems.
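The first-principles modeling step can be illustrated with a minimal two-state RC sketch: room temperature and wall temperature exchange heat with each other and with ambient, a heater acts on the room, and the continuous-time model is discretised exactly for the MPC sampling interval. All resistance/capacitance values and the 15-minute interval are made-up examples, and a real building model would have many more states per zone.

```python
import numpy as np
from scipy.linalg import expm

Cr, Cw = 5e5, 5e6          # heat capacities of room air and wall [J/K]
Rrw, Rwa = 0.01, 0.05      # thermal resistances room-wall, wall-ambient [K/W]

# Linear thermal dynamics: x = [Tr, Tw], inputs u = [heater power, ambient Ta].
A = np.array([[-1/(Cr*Rrw),          1/(Cr*Rrw)],
              [ 1/(Cw*Rrw), -1/(Cw*Rrw) - 1/(Cw*Rwa)]])
B = np.array([[1/Cr, 0.0],
              [0.0,  1/(Cw*Rwa)]])

dt = 900.0                            # 15-minute MPC sampling interval
Ad = expm(A * dt)                     # exact zero-order-hold discretisation
Bd = np.linalg.solve(A, Ad - np.eye(2)) @ B

x = np.array([20.0, 15.0])            # initial [Tr, Tw] in deg C
for _ in range(2000):                 # free response with heater off
    x = Ad @ x + Bd @ np.array([0.0, 10.0])   # u = 0 W, Ta = 10 C
print(np.round(x, 2))                 # both states settle near Ta = 10
```

A discrete-time linear model of exactly this form (x_{k+1} = Ad x_k + Bd u_k, per zone and per construction element) is what makes the MPC optimisation tractable, and model reduction then shrinks the state dimension further.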
|
Methods for Collecting Milk from Mice
|
Mouse models offer unique opportunities to study mammary gland biology and lactation. Phenotypes within the mammary glands, especially those caused by genetic modification, often arise during lactation, and their study requires the collection of adequate volumes of milk. We describe two approaches for collecting milk from lactating mice. Both methods are inexpensive, are easy to use in the laboratory or classroom, are non-invasive, and yield adequate volumes of milk for subsequent analyses.
|
ENG: End-to-end Neural Geometry for Robust Depth and Pose Estimation using CNNs
|
Recovering structure and motion parameters given an image pair or a sequence of images is a well-studied problem in computer vision. This is often achieved by employing Structure from Motion (SfM) or Simultaneous Localization and Mapping (SLAM) algorithms, depending on the real-time requirements. Recently, with the advent of Convolutional Neural Networks (CNNs), researchers have explored the possibility of using machine learning techniques to reconstruct the 3D structure of a scene and jointly predict the camera pose. In this work, we present a framework that achieves state-of-the-art performance on single image depth prediction for both indoor and outdoor scenes. The depth prediction system is then extended to predict optical flow and ultimately the camera pose, and trained end-to-end. Our framework outperforms previous deep-learning based motion prediction approaches, and we also demonstrate that the state-of-the-art metric depths can be further improved using knowledge of the pose.
|
An analytical framework for optimizing variant discovery from personal genomes
|
The standardization and performance testing of analysis tools is a prerequisite to widespread adoption of genome-wide sequencing, particularly in the clinic. However, performance testing is currently complicated by the paucity of standards and comparison metrics, as well as by the heterogeneity in sequencing platforms, applications and protocols. Here we present the genome comparison and analytic testing (GCAT) platform to facilitate development of performance metrics and comparisons of analysis tools across these metrics. Performance is reported through interactive visualizations of benchmark and performance testing data, with support for data slicing and filtering. The platform is freely accessible at http://www.bioplanet.com/gcat.
|
Safety and efficacy of spinal vs general anaesthesia in bone marrow harvesting
|
Bone marrow harvesting (BMH) can be performed under either general (GA) or spinal anaesthesia (SPA). Whether SPA is advantageous in BMH and whether this technique is safe for procedures performed in the prone position is still controversial. To evaluate the safety and efficacy of both anaesthetic techniques in BMH, 37 allogeneic donors (nine female, 28 male; 34.3 ± 9 years; ASA class 1–2) received either spinal (group 1, n = 20) or general anaesthesia (group 2, n = 17) according to their personal preference. Under standardised harvesting conditions, haematology parameters, cell counts (MNC, CD34+), haemodynamic parameters, adverse reactions and patient satisfaction were recorded. No differences were seen between groups with respect to demographic data, harvesting time (55 ± 17 vs 60 ± 16 min) and bone marrow cell counts (MNC: 6.68 ± 2.1 vs 5.7 ± 1.7 × 10⁶/ml). The incidence of hypotension was higher in group 1 (45 vs 10.8%; P = 0.042). Postoperative analgesic requirement and emesis were increased in group 2 (P < 0.04) in comparison to group 1. In conclusion, the present study failed to show superiority of spinal over general anaesthesia with regard to the quality of the harvested bone marrow. However, the lower incidence of complaints after spinal anaesthesia appears to offer an advantage over GA in healthy allogeneic bone marrow donors.
|
Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure
|
We investigate how neural networks can learn and process languages with hierarchical, compositional semantics. To this end, we define the artificial task of processing nested arithmetic expressions, and study whether different types of neural networks can learn to compute their meaning. We find that recursive neural networks can implement a generalising solution to this problem, and we visualise this solution by breaking it up into three steps: project, sum and squash. As a next step, we investigate recurrent neural networks, and show that a gated recurrent unit, which processes its input incrementally, also performs very well on this task: the network learns to predict the outcome of the arithmetic expressions with high accuracy, although performance deteriorates somewhat with increasing length. To develop an understanding of what the recurrent network encodes, visualisation techniques alone do not suffice. Therefore, we develop an approach where we formulate and test multiple hypotheses on the information encoded and processed by the network. For each hypothesis, we derive predictions about features of the hidden state representations at each time step, and train ‘diagnostic classifiers’ to test those predictions. Our results indicate that the networks follow a strategy similar to our hypothesised ‘cumulative strategy’, which explains the high accuracy of the network on novel expressions, the generalisation to longer expressions than seen in training, and the mild deterioration with increasing length. This in turn shows that diagnostic classifiers can be a useful technique for opening up the black box of neural networks. We argue that diagnostic classification, unlike most visualisation techniques, does scale up from small networks in a toy domain, to larger and deeper recurrent networks dealing with real-life data, and may therefore contribute to a better understanding of the internal dynamics of current state-of-the-art models in natural language processing.
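The hypothesised ‘cumulative strategy’ can be sketched symbolically as a left-to-right pass that keeps a running result and a stack of sign modes. This is an illustrative reconstruction, not the authors' code; the token format and function name are assumptions:

```python
def cumulative_eval(tokens):
    """Incrementally evaluate a fully bracketed +/- expression left to
    right, keeping a running result and a stack of sign 'modes' -- a
    sketch of the hypothesised cumulative strategy."""
    result, sign, modes = 0, 1, []
    for tok in tokens:
        if tok == '(':
            modes.append(sign)           # remember the sign of this scope
        elif tok == ')':
            modes.pop()
            sign = modes[-1] if modes else 1
        elif tok == '+':
            sign = modes[-1]             # keep the enclosing scope's sign
        elif tok == '-':
            sign = -modes[-1]            # flip it for the next operand
        else:
            result += sign * int(tok)    # accumulate immediately

    return result

# e.g. cumulative_eval('( 5 - ( 2 + 3 ) )'.split()) == 0
```

Because the meaning is accumulated token by token rather than composed bottom-up, this strategy is compatible with the incremental processing of a recurrent network.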
|
Role of Preoperative Magnetic Resonance Imaging in the Surgical Management of Early-Stage Breast Cancer
|
To examine the role of preoperative magnetic resonance imaging (pMRI) on time to surgery and rates of reoperation and contralateral prophylactic mastectomy (CPM) using a population-based study of New Jersey breast cancer patients. The study included 289 African-American and 320 white women who participated in the Breast Cancer Treatment Disparity Study and underwent breast surgery for newly diagnosed early-stage breast cancer between 2005 and 2010. Patients were identified through rapid case ascertainment by the New Jersey State Cancer Registry. Association between pMRI and time to surgery was examined by using linear regression and, with reoperation and CPM, by using binomial regression. Half (49.9 %) of the study population received pMRI, with higher use for whites compared with African-Americans (62.5 vs. 37.5 %). After adjusting for potential confounders, patients with pMRI versus those without experienced significantly longer time to initial surgery [geometric mean = 38.7 days; 95 % confidence interval (CI) 34.8–43.0; vs. 26.5 days; 95 % CI 24.3–29.0], a significantly higher rate of CPM [relative risk (RR) = 1.82; 95 % CI 1.06–3.12], and a nonsignificantly lower rate of reoperation (RR = 0.76; 95 % CI 0.54–1.08). Preoperative MRI was associated with significantly increased time to surgery and a higher rate of CPM, but it did not affect the rate of reoperation. Physicians and patients should consider these findings when making surgical decisions on the basis of pMRI findings.
|
Automatic tongue image segmentation based on histogram projection and matting
|
This paper discusses how to use histogram projection and LBDM (Learning Based Digital Matting) to extract the tongue from a medical image, one of the most important steps in the diagnosis of traditional Chinese medicine. We first present an effective method to locate the tongue body, obtaining confident foreground and background areas in the form of a trimap. This trimap is then used as the input to the LBDM algorithm to perform the final segmentation. Experiments were carried out on 480 tongue images to evaluate the proposed scheme, and the results were compared with the corresponding ground truth. The experimental results and analysis demonstrate the feasibility and effectiveness of the proposed algorithm.
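The histogram projection step for locating the tongue body can be illustrated on a binary mask: summing the mask along rows and columns gives two 1-D profiles whose non-zero extents bound the region. A minimal sketch, assuming a pre-thresholded mask rather than the paper's actual colour-based criteria (the function name is hypothetical):

```python
import numpy as np

def projection_bbox(mask):
    """Bound the foreground of a binary mask via its row/column
    histogram projections (illustrative sketch, not the paper's pipeline)."""
    rows = mask.sum(axis=1)              # horizontal projection profile
    cols = mask.sum(axis=0)              # vertical projection profile
    ys = np.flatnonzero(rows > 0)
    xs = np.flatnonzero(cols > 0)
    if ys.size == 0 or xs.size == 0:
        return None                      # nothing detected
    return ys[0], ys[-1] + 1, xs[0], xs[-1] + 1   # top, bottom, left, right
```

The resulting box is what would seed the trimap's confident foreground region, with a band around it left as the unknown region for matting.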
|
Catching Dynamic Heterogeneous User Data for Identity Linkage Learning
|
Benefitting from the development of social platforms, more and more users register multiple accounts on different social networks. Linking user identities across multiple online social networks based on user behavior patterns is valuable for network supervision and information tracking. However, a user's online behavior in a social network is dynamic: the user profile may change for specific reasons such as user migration or job changes. Thus, catching the dynamics of evolving user data and collecting the latest user features are important and challenging issues in the area of user identity linkage. Inspired by deep learning models such as word2vec and DeepWalk, this paper proposes an integrated framework to catch dynamic user data by supplementing vacant features and updating outdated features in data sources. The framework first represents all textual and structural user data in low-dimensional latent spaces using word2vec and DeepWalk, then integrates different user features and predicts vacant data fields based on a late fusion approach and cosine similarity computation. We then explore and evaluate the application of our proposed method in a user identity mapping task. The results show that our framework can successfully catch dynamic user data and enhance the performance of identity linkage models by keeping data sources supplemented and up to date.
|
Model Guided Deep Learning Approach Towards Prediction of Physical System Behavior
|
Cyber-physical control systems involve a discrete computational algorithm that controls continuous physical systems. Often the control algorithm uses predictive models of the physical system in its decision-making process. However, physical system models suffer from several inaccuracies when employed in practice. Mitigating such inaccuracies is often difficult and has to be repeated for different instances of the physical system. In this paper, we propose a model-guided deep learning method for the extraction of accurate prediction models of physical systems in the presence of artifacts observed in real-life deployments. Given an initial, potentially suboptimal mathematical prediction model, our model-guided deep learning method iteratively improves the model through a data-driven training approach. We apply the proposed approach to the closed-loop blood glucose control system. Using this approach, we achieve an improvement over the predictive Bergman minimal model by a factor of around 100.
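For orientation, the Bergman minimal model referred to here is the standard glucose–insulin minimal model (shown in its common textbook form, not necessarily the exact variant used by the authors):

```latex
\frac{dG}{dt} = -\bigl(p_1 + X(t)\bigr)\,G(t) + p_1\,G_b, \qquad
\frac{dX}{dt} = -p_2\,X(t) + p_3\bigl(I(t) - I_b\bigr)
```

where $G$ is plasma glucose, $I$ plasma insulin, $X$ the remote insulin action, $G_b$ and $I_b$ the basal levels, and $p_1, p_2, p_3$ subject-specific parameters — precisely the kind of per-instance parameters whose misfit the data-driven training is meant to compensate.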
|
Sensorless drive of permanent magnet brushless DC motor with 180 degree commutation
|
The three-phase wye-connected permanent magnet brushless DC motor is conventionally driven by 120-degree commutation. Two phases conduct current while the third is always floating, producing no torque in each conduction interval. With 180-degree commutation, by contrast, all three phases conduct current in all sectors, which delivers more power from the inverter side to the motor side for the same power supply voltage. In this paper, a recently proposed sensorless algorithm with good performance in low-speed operation is highlighted. Based on dSPACE, a comparison of different dynamic conditions between 120- and 180-degree commutation is presented and analyzed comprehensively. Extensive experimental tests show excellent dynamic performance of 180-degree commutation, matching the simulation results from Simulink/Matlab. 180-degree commutation is verified to work properly, with the ability to deliver more power than conventional 120-degree commutation.
|
[Therapeutic management of bipolar disorder in France and Europe: a multinational longitudinal study (WAVE-bd)].
|
BACKGROUND
Bipolar disorder is a complex disease which requires multiple healthcare resources and complex medical care programs, including pharmacological and non-pharmacological treatment. While mood stabilizers remain the cornerstone of bipolar disorder treatment, the development of atypical antipsychotics and their use as mood stabilizers has significantly modified therapeutic care. At present, psychiatrists have a large variety of psychotropic drugs for bipolar disorder: mood stabilizers, atypical antipsychotics, antidepressants, anxiolytics… However, despite the publication of guidelines on pharmacological treatment with a high degree of consensus, everyday clinical practices remain heterogeneous. Moreover, there are few longitudinal studies describing the therapeutic management of bipolar disorder across all phases of the disease: most studies are carried out on a specific phase of the disease or treatment. And there is no study comparing French and European practices.
OBJECTIVES
In this paper, we aim to present the comparison of the management of pharmacological treatments of bipolar disorder between France and Europe, using the data of the observational Wide AmbispectiVE study of the clinical management and burden of bipolar disorder (WAVE-bd study).
METHODS
The WAVE-bd study is a multinational, multicentre and non-interventional cohort study of patients diagnosed with BD type I or type II, according to DSM IV-TR criteria, in any phase of the disorder, who have experienced at least one mood event during the 12 months before enrolment. In total, 2507 patients have been included across 8 countries of Europe (480 in France). Data collection was retrospective (from 3 to 12 months), but also prospective (from 9 to 15 months) for a total study length of 12 to 27 months. Main outcome measures were the healthcare resource use and pharmacological treatments.
RESULTS
Our results show differences in the therapeutic management of bipolar disorder between France and other European countries. Regarding healthcare resource use, French patients consult a psychiatrist or a psychologist more frequently, and a general practitioner or the emergency ward less frequently, than patients from other European countries. In the whole European population, including France, atypical antipsychotics are widely used. Only 25% of the patients receive lithium and more than 50% of the patients receive antidepressants, while their use in bipolar disorder remains controversial. Most of the patients receive polymedication. Considering all phases of the disease pooled, less lithium and fewer atypical antipsychotics are prescribed to French patients, whereas they receive more antidepressants and more benzodiazepines than patients from other European countries. On the other hand, prescription of anticonvulsants and electroconvulsive therapy is equal. Moreover, data analyses by polarity of the episodes globally confirm these trends. There are a few exceptions: mixed states, in which lithium is prescribed twice as often in France as in other countries; depressive states, in which antidepressants are prescribed even more often in other countries than in France; and less prescription of anticonvulsants in manic, mixed and euthymic phases in France.
CONCLUSION
The WAVE-bd study is the first observational study conducted on a large sample of bipolar I and II patients that compares therapeutic management between France and other European countries. The differences observed in therapeutic care across the different phases of the disease show that treatments differ depending on the countries studied, but also according to the preventive or curative phases, polarity of the bipolar disorder, comorbidities, impact of guidelines, and care organization. Although French patients have been treated by less lithium and less atypical antipsychotics than other European patients, they receive more antidepressants and more benzodiazepines. Finally, patients generally receive polymedication and the diversity in prescriptions shows how bipolar disorder is a complex disorder.
|
Linear Phase Low-Pass IIR Digital Differentiators
|
A novel approach to designing approximately linear phase infinite-impulse-response (IIR) digital filters in the passband region is introduced. The proposed approach yields digital IIR filters whose numerators represent linear phase finite-impulse-response (FIR) filters. As an example, low-pass IIR differentiators are introduced. The range and high-frequency suppression of the proposed low-pass differentiators are comparable to those obtained by higher-order FIR low-pass differentiators. In addition, the differentiators exhibit almost linear phases in the passband regions.
|
Modulation and Multiple Access for 5G Networks
|
Fifth generation (5G) wireless networks face various challenges in supporting large-scale heterogeneous traffic and users, so new modulation and multiple access (MA) schemes are being developed to meet the changing demands. As this research space is ever increasing, it becomes more important to analyze the various approaches; in this paper, we therefore present a comprehensive overview of the most promising modulation and MA schemes for 5G networks. Unlike other surveys of 5G networks, this paper focuses on multiplexing techniques, including modulation techniques in orthogonal MA (OMA) and various types of non-orthogonal MA (NOMA) techniques. Specifically, we first introduce the different types of modulation schemes that are potential candidates for OMA, and compare their performance in terms of spectral efficiency, out-of-band leakage, and bit-error rate. We then pay close attention to various NOMA candidates, including power-domain NOMA, code-domain NOMA, and NOMA multiplexing in multiple domains. From this exploration, we identify the opportunities and challenges that will have the most significant impact on modulation and MA designs for 5G networks.
|
Examining the Importance of Social Relationships and Social Contexts in the Lives of Children With High-Incidence Disabilities
|
Within the fields of psychology, sociology, and education, there has been a rapid expansion of research focused on understanding individual development within social contexts (Lerner & Simi, 1995; Moos, 2003; Rutter, 2000). Much of this work has been drawn from perspectives that place an emphasis on the dynamic interplay among the developing individual, family, and peer relationships; neighborhood and community contexts; and broader cultural forces. Attachment theory (Bowlby, 1969/1982), theories of ecological development (Bronfenbrenner, 1979; Bronfenbrenner & Morris, 1998), and developmental systems theory (Ford & Lerner, 1992; Lerner, 1998) all emphasize the interactive nature of individual development within contexts. Approaches that rely on such perspectives have the potential to deepen our understanding of the stressors, risks, and supports that can negatively and positively affect development across time (Sameroff, Bartko, Baldwin, Baldwin, & Seifer, 1998). However, despite growing awareness of the importance of social and contextual experiences in the social sciences in general, less is known about how social relationships and contexts influence the lives of children and youth with high-incidence disabilities. This is a vulnerable population whose members are more likely to experience peer rejection, depression, anxiety, behavioral and conduct problems, delinquency, poor academic adjustment, school dropout, and poorer long-term outcomes.
|
Platelet inhibition by abciximab bolus-only administration and oral ADP receptor antagonist loading in acute coronary syndrome patients: the blocking and bridging strategy.
|
INTRODUCTION
Current guidelines still recommend the bolus and infusion administration of glycoprotein IIbIIIa inhibitors in patients with high-risk acute coronary syndrome undergoing percutaneous coronary intervention. We sought to evaluate the extent of platelet inhibition by a blocking and bridging strategy with intracoronary abciximab bolus-only administration and oral loading of adenosine diphosphate receptor antagonists.
PATIENTS AND METHODS
Fifty-six consecutive high-risk acute coronary syndrome patients with bolus-only abciximab administration (0.25 mg/kg i.c.) and loading with 600 mg clopidogrel (55%) or 60 mg prasugrel (45%) were included in this study. Platelet aggregation induced by thrombin receptor-activating peptide and adenosine diphosphate was measured by multiple electrode aggregometry up to 7 days.
RESULTS
Thrombin receptor-activating peptide-induced platelet aggregation was significantly suppressed for a minimum of 48 h (45 ± 17 U) and returned to a normal range (>84 U) after 6 days (90 ± 26 U; p < 0.001). Co-medication with prasugrel significantly reduced adenosine diphosphate-induced (p = 0.002) and thrombin receptor-activating peptide-induced (p = 0.02) platelet aggregation compared with clopidogrel throughout the observation period. No stent thrombosis or repeat myocardial infarction occurred at 30-day follow-up.
CONCLUSIONS
Immediate blocking of platelet aggregation in high-risk acute coronary syndrome patients by intracoronary abciximab bolus-only administration and bridging to prolonged inhibition via oral blockade of ADP receptors effectively inhibited overall platelet reactivity for at least 48 h, questioning the value of continuous abciximab infusion. Co-medication with prasugrel vs. clopidogrel synergistically augmented platelet inhibition.
|
Technical efficiency in tilapia farming of Bangladesh: a stochastic frontier production approach
|
Genetically improved farmed tilapia is increasingly popular in Bangladesh and has high production potential. Its dominant production technology ranges from extensive to improved extensive, particularly in the rural areas. This study estimates the levels and determinants of farm-level technical efficiency of tilapia farmers of Bangladesh using a stochastic frontier production function with a model for technical inefficiency effects. Data from fifty tilapia farmers of Jessore district are used in the analysis. The mean technical efficiency level of the tilapia farmers is 78%, and thus the farmers operate 22% below the frontier production. The inefficiency effect is significant, and age, education, income, culture length, pond age, pond depth, water colour and pond tenure, as a group, are significant determinants of technical inefficiency. By operating at full technical efficiency, tilapia yield can be improved from the current 7.36 to 8.96 tons per hectare. The decision to add or not to add inputs is sometimes taken arbitrarily and not based on technology requirements. There is a lack of understanding of the technology practices. Fisheries extension efforts are required for proper understanding of the technology practices, and for further adoption and spread. For the promotion of tilapia production, quality feed and seed at an affordable price need to be ensured.
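The stochastic frontier specification underlying such estimates is typically of the standard Battese–Coelli form (shown here for orientation, not as the authors' exact model):

```latex
\ln y_i = \mathbf{x}_i'\boldsymbol{\beta} + v_i - u_i, \qquad
v_i \sim N(0,\sigma_v^2), \quad u_i \ge 0, \qquad
\mathrm{TE}_i = \exp(-u_i)
```

Here $v_i$ is symmetric statistical noise and $u_i$ a one-sided inefficiency term, so a mean technical efficiency of $\mathrm{TE} = 0.78$ corresponds directly to operating 22% below the frontier, as reported above.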
|
Automatic generation of high-performance modular multipliers for arbitrary mersenne primes on FPGAs
|
Modular multiplication is a fundamental and performance determining operation in various public-key cryptosystems. High-performance modular multipliers on FPGAs are commonly realized by several small-sized multipliers, an adder tree for summing up the digit-products, and a reduction circuit. While small-sized multipliers are available in pre-fabricated high-speed DSP slices, the adder tree and the reduction circuit are implemented in standard logic. The latter operations represent the performance bottleneck to high-performance implementations. Previous works attempted to minimize the critical path of the adder tree by rearranging digit-products on digit-level. We report improved performance by regrouping digit-products on bit-level, while incorporating the reduction for Mersenne primes. Our approach leads to very fast modular multipliers, whose latency and throughput characteristics outperform all previous results. We formalize our approach and provide algorithms to automatically generate high-performance modular multipliers for arbitrary Mersenne primes from any small-sized multipliers.
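The Mersenne reduction that such multipliers incorporate exploits the identity $2^p \equiv 1 \pmod{2^p - 1}$, so a wide product folds into $p$ bits using only shifts, masks, and additions. A software sketch of that fold (illustrative only; the paper generates hardware circuits, and the function name here is hypothetical):

```python
def mersenne_reduce(x, p):
    """Reduce x modulo M = 2**p - 1 by folding high bits onto low bits,
    using 2**p == 1 (mod M). Valid for any p; M is a Mersenne prime
    when p makes 2**p - 1 prime (e.g. p = 31)."""
    m = (1 << p) - 1
    while x > m:
        x = (x & m) + (x >> p)   # fold: low p bits + remaining high bits
    return 0 if x == m else x    # M itself is congruent to 0
```

In hardware, the same fold lets the digit-product adder tree absorb the reduction: high-order digit-products are simply rewired onto low-order bit positions before summation, which is the regrouping the paper performs at bit-level.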
|
Acute effects of low dose nicotine gum on platelet function in non-smoking hypertensive and normotensive men
|
Twenty non-smoking middle-aged men with mild untreated essential hypertension were compared to age-matched controls (n=22) in a double-blind placebo controlled study. Plasma and urinary concentrations of the platelet-specific protein β-thromboglobulin (β-TG), platelet count and mean platelet volume were measured before and after chewing 2 mg nicotine gum. The mean plasma nicotine concentration increased to 4.3 ng/ml in the hypertensive group and to 3.9 ng/ml in the normotensive group after 30 minutes of chewing the nicotine gum. Blood pressure and heart rate increased significantly, but there was no difference between the groups. Venous plasma catecholamine concentrations were unchanged. β-TG concentrations in plasma and urine were similar in the two groups, and plasma β-TG levels did not change after nicotine gum in either group. Urinary high molecular weight β-TG decreased after nicotine compared to placebo. Platelet count and volume increased significantly in the hypertensive group, but not in the normotensive group. The response in platelet count was significantly higher in the hypertensive group. Thus, small amounts of nicotine increase platelet counts more in hypertensive than in normotensive non-smoking men, without inducing the platelet release reaction.
|