A reference architecture for educational data mining
In this paper we present a reference architecture for the ETL stages of EDM and LA that works with different data formats and different extraction sites, ensures privacy, and makes it easier for new participants to enter the process without demanding more computing power. Considering scenarios with a multitude of virtual environments hosting educational activities, accessible through a common infrastructure, we devised a reference model in which data generated from interactions among users and between users and the environment itself are selected, organized, and stored in local “baskets”. Local baskets are then collected and grouped into a global basket. Organizational resources such as item modeling are used at both levels of basket construction. Building on this reference model and a client-server architectural style, a reference architecture was developed and has been used to carry out a project for an official foundation linked to the Brazilian Ministry of Education, involving educational data mining and data sharing across more than 100 higher education institutions and their respective virtual environments. In this architecture, a client-collector inside each virtual environment collects information from databases and event logs. This information, along with definitions obtained from item models, is used to build local baskets. A synchronization protocol keeps all item models synced across client-collectors and the server-collectors that generate global baskets. This approach has shown improvements in ETL such as parallel processing of items, savings in storage space and bandwidth, privacy assurance, better tenacity, and good scalability.
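As a rough illustration of this collection flow, the sketch below shows how a client-collector might aggregate modeled items into a local basket and how a server-collector could merge local baskets into a global one. The event fields, the item-model mapping, and the function names are assumptions made for the example; they are not part of the reference architecture itself.

```python
# Hypothetical sketch of the local/global "basket" flow described above.
# Field names, the item-model format, and the helpers are illustrative
# assumptions, not the architecture's actual interfaces.
import json
from collections import defaultdict

ITEM_MODEL = {"login": "engagement", "post_message": "participation"}  # assumed item model

def build_local_basket(event_log_lines, institution_id):
    """Select and organize raw events into a local basket keyed by modeled item."""
    basket = defaultdict(int)
    for line in event_log_lines:
        event = json.loads(line)
        item = ITEM_MODEL.get(event.get("action"))
        if item is not None:              # keep only modeled items
            basket[item] += 1             # aggregate counts; raw user data stays local
    return {"institution": institution_id, "items": dict(basket)}

def merge_into_global_basket(global_basket, local_basket):
    """Server-collector side: group local baskets into the global basket."""
    for item, count in local_basket["items"].items():
        global_basket[item] = global_basket.get(item, 0) + count
    return global_basket

if __name__ == "__main__":
    log = ['{"action": "login", "user": "u1"}', '{"action": "post_message", "user": "u1"}']
    local = build_local_basket(log, "inst-042")
    print(merge_into_global_basket({}, local))
```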
A Wideband Isotropic Radiated Planar Antenna Using Sequential Rotated L-Shaped Monopoles
In this communication, we propose a compact planar antenna with an isotropic radiation pattern over a wide operating band. The proposed antenna consists of four sequentially rotated L-shaped monopoles that are fed by a compact uniform sequential-phase (SP) feeding network with equal amplitude and an incremental 90° phase delay. Based on the rotated field method, full spatial coverage with a gain deviation of less than 6 dB is achieved over a wide operating band from 2.3 to 2.61 GHz, along with good impedance matching. A prototype of the proposed antenna has been built and tested. The measured results, including the reflection coefficient, gain, and radiation patterns, are analyzed and compared with the simulated results.
Cervical cancer patient information-seeking behaviors, information needs, and information sources in South Korea
The aim of this study was to explore the cancer information needs, utilization, and source preferences of South Korean women with cervical cancer. This was a multicenter descriptive study comprising 968 cervical cancer patients (stages 0–IVb; mean age, 55 years; response rate, 34.4% of those who agreed to participate) who had been treated from 1983 through 2004 at any of the six South Korean hospitals. The study data were obtained through a mail-in self-response questionnaire that asked about the patients’ cancer information needs, cancer-information-seeking behavior, information sources, and type of information needed. It also collected data about anxiety and depression. Of the 968 cervical cancer patients, 404 (41.7%) had sought cancer information. When patients felt a need for information, their information-seeking behavior increased (overall risk = 4.053, 95% confidence interval = 2.139–7.680). Television and/or radio were the most frequently cited sources, and narratives about cancer experiences were the most easily understood forms of cancer information. Younger patients more often preferred booklets and pamphlets, while older patients more often preferred television and radio. The information most needed at the time of diagnosis and treatment involved diagnosis, stage, and prognosis, while after treatment ended it involved self-care techniques. Cervical cancer patients’ need for cancer information varied with age and treatment phase. These findings should help guide the development of educational materials tailored to the needs of individual patients.
[The attempts and current status of cancer rehabilitation at Osaka Medical College Hospital].
We describe an initiative in cancer rehabilitation at Osaka Medical College Hospital. We also report trends in the clinical departments that ordered cancer rehabilitation and the number of days from hospitalization to consultation with the rehabilitation department for 1,028 patients who needed rehabilitation from January to June 2012. The number of rehabilitation orders for cancer patients has increased in comparison with the same period during 2009, and the percentage of cancer rehabilitation orders has also increased, both in total and in each clinical department consulted. In addition, clinical departments that introduced a rehabilitation schedule along with their treatments ordered cancer rehabilitation much earlier than departments without such a schedule. In the future, to start cancer rehabilitation at an earlier stage, we should raise awareness of the importance of cancer rehabilitation and promote the introduction of rehabilitation schedules alongside cancer treatments.
Stable isotope fractionation caused by glycyl radical enzymes during bacterial degradation of aromatic compounds.
Stable isotope fractionation was studied during the degradation of m-xylene, o-xylene, m-cresol, and p-cresol with two pure cultures of sulfate-reducing bacteria. Degradation of all four compounds is initiated by a fumarate addition reaction catalyzed by a glycyl radical enzyme, analogous to the well-studied benzylsuccinate synthase reaction in toluene degradation. The extent of stable carbon isotope fractionation caused by these radical-type reactions corresponded to enrichment factors (epsilon) between -1.5 and -3.9, which is of the same order of magnitude as data previously reported for anaerobic toluene degradation. Based on our results, an analysis of isotope fractionation should be applicable for the evaluation of in situ bioremediation of all contaminants that are degraded by glycyl radical enzyme mechanisms and contain fewer than 14 carbon atoms. In order to compare carbon isotope fractionation upon the degradation of various substrates whose numbers of carbon atoms differ, intrinsic enrichment factors (epsilon_intrinsic) were calculated. A comparison of epsilon_intrinsic at the single carbon atom of the molecule where the benzylsuccinate synthase reaction took place with the compound-specific epsilon showed that both varied on average to the same extent. Despite variations during the degradation of different substrates, the range of epsilon found for glycyl radical reactions was narrow enough to propose that rough estimates of in situ biodegradation can be made using an average epsilon when no fractionation factor is available for an individual compound.
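For context, such enrichment factors are usually derived from the Rayleigh equation, and the position-specific (intrinsic) value is commonly approximated by correcting the compound-specific value for the non-reacting carbon atoms. The relations below follow the standard conventions of compound-specific isotope analysis in generic notation; they are not quoted from this paper.

```latex
% Rayleigh-type evaluation of carbon isotope fractionation (standard convention):
% R_t and R_0 are the isotope ratios at time t and time 0, f is the remaining
% substrate fraction, and \varepsilon the compound-specific enrichment factor.
\[
  \ln\!\left(\frac{R_t}{R_0}\right) \;=\; \frac{\varepsilon}{1000}\,\ln f
\]
% If only one of the n carbon atoms reacts (the position attacked by the glycyl
% radical enzyme), the intrinsic enrichment factor is commonly approximated by
\[
  \varepsilon_{\mathrm{intrinsic}} \;\approx\; n \cdot \varepsilon_{\mathrm{compound}}
\]
```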
Building Software-Defined Radios in MATLAB Simulink - A Step Towards Cognitive Radios
Software Defined Radio (SDR) is a flexible architecture that can be configured to adapt to various wireless standards, waveforms, frequency bands, bandwidths, and modes of operation. This paper presents a detailed survey of existing hardware and software platforms for SDRs. It also presents a prototype system for designing and testing software defined radios in MATLAB/Simulink and briefly discusses the salient functions of the prototype system for Cognitive Radio (CR). A prototype system for a wireless personal area network is built and interfaced with a Universal Software Radio Peripheral-2 (USRP2) motherboard and an RFX2400 daughterboard from Ettus Research LLC. The philosophy behind the prototype is to perform all waveform-specific processing, such as channel coding, modulation, and filtering, on a host PC, and general-purpose high-speed operations, such as digital up- and down-conversion, decimation, and interpolation, inside the FPGA on the USRP2. MATLAB has a rich family of toolboxes that allow building software-defined and cognitive radios to explore various spectrum sensing, prediction, and management techniques.
The Microcredit Summit's Challenge: Working Toward Institutional Financial Self-Sufficiency While Maintaining a Commitment to Serving the Poorest Families
Institutional financial self-sufficiency (IFS) is necessary for a microfinance institution (MFI) to obtain the large amount of funds required to reach and benefit truly large numbers of the poor and poorest households. There is no necessary trade-off between serving large numbers of the poorest households and the attainment of IFS by an MFI, as demonstrated by the case studies in this paper. Cost-effective identification of the poor and the poorest women is essential to maximizing the effectiveness and efficiency of providing microfinance services to them. If the service is not exclusively for the poor and the poorest, it should be operated separately for them to minimize leakage to the nonpoor. The total cost of efficient microcredit to the poor, i.e., the appropriate interest rate, will vary between 35% and 51% of their average loans outstanding, depending on the conditions under which it is provided and on the quality of the loan portfolio. The poorest women in Asia, Africa, and Latin America are proving that they can and will pay the required cost of this opportunity to reduce their poverty and to provide a better future for their children. This is made possible by the impressive returns to their microenterprises, normally averaging more than 100%.

Introduction. Working toward institutional financial self-sufficiency (IFS) is essential for microfinance institutions (MFIs) to reach and benefit significant numbers of the poorest households—those living in the bottom 50% of the poverty group—with financial services for poverty reduction. IFS reflects an MFI’s “ability to operate at a level of profitability that allows sustained service delivery with minimum or no dependence on donor inputs” (Christen, Rhyne, Vogel, & McKean, 1995, p. vi), international agencies, or charitable organizations. We believe that only by pursuing commercially motivated, for-profit strategies will MFIs, particularly those working with the poorest, achieve our primary goal of reducing poverty among truly large numbers of the poor and poorest. The argument for IFS is well known: As MF[I]s begin to wean themselves away from their dependence on subsidies and start to adopt the practices of good banking they will be forced to further innovate and lower costs. Not only may this ultimately mean better service for poor borrowers, but more importantly, it is argued that as MF[I]s become profitable they will be able to increasing[ly] tap into the vast ocean of private capital funding. If this happens the microfinance sector as a whole will soon be greatly leveraging the limited pool of donor funds and massively increasing the scale of outreach in ways that it is hoped could begin to make a truly significant dent on world poverty. (Conning, 1998, p. 2)

IFS is defined as the ability of an MFI to cover all actual operating expenses, as well as adjustments for inflation and subsidies, with adjusted income generated through its financial services operations. Inflation adjustments are twofold: (1) to account for the negative impact, or cost of inflation, on the value of an MFI’s equity, and (2) to account for the positive impact of the revaluation of nonfinancial assets and liabilities for the effects of inflation. Similarly, there are two types of subsidies which must be adjusted for: (1) explicit subsidies, to properly account for direct donations received by an MFI to cover operating expenses, and (2) implicit subsidies, to account for loans received by an MFI at below-market rates and for in-kind donations such as rent-free facilities, staff paid by third parties, technical assistance, and the use of third-party infrastructure (e.g., communication facilities). In analyzing an MFI’s performance, such adjustments are necessary, since MFIs often operate in highly inflationary environments and receive significant support from third parties—such as government or donors—in the form of implicit subsidies. The adjustments take this support into account and allow an MFI to understand the potential commercial viability of its financial services operations. This is done by comparing adjusted operating income to adjusted operating expenses. If this ratio is greater than 1.0, we say an MFI has reached IFS. If IFS has not been achieved, the withdrawal of such “support” could ultimately result in the failure of an MFI, with potentially disastrous effects on the poor clients being served. So MFIs wanting to reach and benefit truly large numbers should consciously work toward IFS. This does not, of course, mean that IFS should be attained at the cost of the overriding goal of poverty reduction. That would defeat the purpose for which we are working—which is not profit as an end in itself, but poverty reduction. Rather it means that IFS should be pursued at a rate that is consistent with substantial poverty reduction. Attainment of both goals must be monitored so as to ensure that IFS does not displace the more important goal of poverty reduction.

Even with this qualification, many may disagree with the need to work towards IFS. Perhaps most would argue that nongovernmental organizations (NGOs) have important social objectives that cannot be executed in a financially sustainable manner. Requiring that an institution do so would result in goal displacement. Outreach and service to the poor and poorest are more important, some might well argue, than making profits. A major purpose of this paper is to try to convince those who want to reach and benefit truly large numbers, say at least 500,000, of the poorest households with microfinance that aiming for IFS will support, rather than displace, their efforts in poverty reduction. The most important reason is funding. Reducing poverty significantly, that is, reaching and benefiting truly large numbers of poor and poorest households, even the 500,000 mentioned above, requires vast amounts of funds. Assuming an average loan outstanding per client of only US$150, for example, the total annual loan fund requirement alone would be US$75 million. Add to that the equity requirements to cover operating losses in the early years of operations and large-scale expansion, and the figure rises further. Attainment of the Microcredit Summit goal of reaching 100 million of the poorest households is estimated to cost around US$21 billion. From where are such vast amounts of funds going to come?

Not from donors, whose funds for supporting microfinance are limited, and probably not from governments either, because of competing claims on their funds: though in countries where funds are made available by governments, MFIs should take advantage of them, provided they can do so without incurring crippling interference in their operations. Grants and soft loans have played, and continue to play, major roles in financing MFI start-ups. They are particularly useful at that early stage when equity is usually nonexistent and deficits are large. Guarantees and quasi-equity, which are themselves soft loans, can also be of critical importance when the MFI seeks to establish relations with banks. However, grants and soft loans are always limited in supply and time-consuming to secure. For these reasons they are likely to be insufficient for financing the scaling-up of MFIs to reach truly large numbers and IFS. In the likely event that grants and soft loans do not meet funding requirements for scaling-up, MFIs must search elsewhere. Only formal financial institutions are likely to be able to provide the vast financial resources required to reach large numbers of the poor and poorest with microfinance. If profit-oriented formal financial institutions are to be interested in entering business partnerships with MFIs, the latter will have to convince these institutions of the strength of the MFIs’ operational and financial management, in other words, that the MFIs operate as commercially minded, for-profit entities, just like the other clients of the financial institutions. In order to maximize the potential of this partnership, MFIs will have to build their equity, because it serves as a lever to obtain debt from formal financial institutions and savings deposits (where appropriate) from members. Currently, for MFIs, the most reliable long-term source of such equity is retained earnings. To build retained earnings, MFIs will have to make profits from their outreach to the poor and poorest by reaching truly large numbers. Making profits, in the medium to long term, means the attainment of a sufficient degree of IFS and a reasonable adjusted return on assets (AROA). There is no other way. So it is not a question of whether or not we need to pursue IFS so as to be able to reduce extreme poverty in a big way, but rather how best to go about it without losing sight of our overriding concern for poverty reduction. The rest of the paper focuses on this point.

Trade-off between Working with the Poorest and IFS? A few years ago, an influential book that included case studies of 12 MFIs in Asia, Africa, and Latin America argued that MFIs working with the poorest would experience a trade-off with IFS. Specifically, it concluded that “at a given point in time [MFIs] can either go for growth and put their resources into underpinning the success of established and rapidly growing institutions, or go for poverty impact . . . and put their resources into poverty-focused operations with a higher risk of failure and a lower expected return” (Hulme & Mosley, 199
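A small worked example of the two calculations used in the argument above, the IFS ratio and the loan fund requirement. The income and expense figures are invented purely for illustration; the US$150 average loan and the 500,000-client target come from the text.

```python
# Illustrative arithmetic for the IFS argument. Income/expense figures are
# hypothetical; the loan size and client count are taken from the text above.
def ifs_ratio(adjusted_operating_income, adjusted_operating_expenses):
    """IFS is reached when adjusted income covers adjusted expenses (ratio > 1.0)."""
    return adjusted_operating_income / adjusted_operating_expenses

clients = 500_000
avg_loan_outstanding = 150                       # US$ per client, from the text
loan_fund = clients * avg_loan_outstanding
print(f"Annual loan fund requirement: US${loan_fund:,}")       # US$75,000,000

print("IFS reached:", ifs_ratio(1_200_000, 1_000_000) > 1.0)   # hypothetical figures
```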
Change in glucose metabolism after long-term treatment with deflazacort and betamethasone
We compared the long-term effects of different corticosteroids on glucose metabolism by carrying out a 75 g oral glucose tolerance test in 27 subjects before and after the administration of deflazacort or betamethasone for two months in a random, balanced sequence. Fasting plasma glucose and insulin concentrations were significantly higher after betamethasone, whereas deflazacort increased only fasting plasma insulin. After oral glucose there were significant increases in blood glucose and insulin after betamethasone compared with deflazacort. These results suggest that the degree of glucose intolerance and insulin resistance depends on the steroid used and on the dose given, although long-term treatment with deflazacort has a smaller effect on glucose metabolism than betamethasone.
Non-invasive prenatal testing for trisomies 21, 18 and 13: clinical experience from 146,958 pregnancies.
OBJECTIVES To report the clinical performance of massively parallel sequencing-based non-invasive prenatal testing (NIPT) in detecting trisomies 21, 18 and 13 in over 140,000 clinical samples and to compare its performance in low-risk and high-risk pregnancies. METHODS Between 1 January 2012 and 31 August 2013, 147,314 NIPT requests to screen for fetal trisomies 21, 18 and 13 using low-coverage whole-genome sequencing of plasma cell-free DNA were received. The results were validated by karyotyping or follow-up of clinical outcomes. RESULTS NIPT was performed and results obtained in 146,958 samples, for which outcome data were available in 112,669 (76.7%). Repeat blood sampling was required in 3213 cases and 145 had test failure. Aneuploidy was confirmed in 720/781 cases positive for trisomy 21, 167/218 cases positive for trisomy 18 and 22/67 cases positive for trisomy 13 on NIPT. Nine false negatives were identified, including six cases of trisomy 21 and three of trisomy 18. The overall sensitivity of NIPT was 99.17%, 98.24% and 100% for trisomies 21, 18 and 13, respectively, and specificity was 99.95%, 99.95% and 99.96% for trisomies 21, 18 and 13, respectively. There was no significant difference in test performance between the 72,382 high-risk and 40,287 low-risk subjects (sensitivity, 99.21% vs. 98.97% (P = 0.82); specificity, 99.95% vs. 99.95% (P = 0.98)). The major factors contributing to false-positive and false-negative NIPT results were maternal copy number variant and fetal/placental mosaicism, but fetal fraction had no effect. CONCLUSIONS Using a stringent protocol, the good performance of NIPT shown by early validation studies can be maintained in large clinical samples. This technique can provide equally high sensitivity and specificity in screening for trisomy 21 in a low-risk, as compared to high-risk, population.
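The reported sensitivities can be reproduced directly from the confirmed-case counts in the abstract (true positives and false negatives). Specificity would additionally require the per-trisomy true-negative counts, which are not broken out here, so only sensitivity is recomputed in this sketch.

```python
# Recomputing the reported sensitivities from the counts given in the abstract.
# sensitivity = TP / (TP + FN)
counts = {
    "trisomy 21": {"tp": 720, "fn": 6},
    "trisomy 18": {"tp": 167, "fn": 3},
    "trisomy 13": {"tp": 22,  "fn": 0},
}
for name, c in counts.items():
    sens = c["tp"] / (c["tp"] + c["fn"])
    print(f"{name}: sensitivity = {sens:.2%}")
# -> 99.17%, 98.24%, 100.00%, matching the values reported above.
```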
Combined continuous ethinyl estradiol/norethindrone acetate does not improve forearm blood flow in postmenopausal women at risk for cardiovascular events: a pilot study.
OBJECTIVE This study sought to determine whether combined continuous ethinyl estradiol and norethindrone acetate, a postmenopausal hormone therapy (HT) combination designed to have fewer side effects than cyclical therapies and therapies using medroxyprogesterone acetate (MPA), could improve vascular endothelial function in postmenopausal women with risk factors for cardiovascular disease (CVD). METHODS Eighteen postmenopausal women (mean age 62 +/- 11 years) participated in a randomized, placebo-controlled, crossover design trial of 10 microg estradiol/1 mg norethindrone acetate given once daily for 3 months, with a 1-month washout period between placebo and active treatment phases. Vascular reactivity was assessed at each phase of the study using high-frequency brachial artery ultrasound in response to flow-mediated hyperemia, cold pressor testing, and sublingual nitroglycerin. Markers of cardiovascular risk, including cholesterol levels, inflammatory markers, fibrinolytic markers, and solubilized adhesion molecules, were also measured at each phase. RESULTS We found no significant difference in vascular reactivity measurements during active treatment with ethinyl estradiol/norethindrone acetate vs. placebo. C-reactive protein (CRP) levels increased significantly during active treatment, and high-density lipoprotein (HDL) levels decreased significantly. Vascular cell adhesion molecule-1 (VCAM-1) levels declined during active treatment. Plasminogen activator inhibitor-1 (PAI-1) levels were inversely correlated with flow-mediated hyperemic vascular reactivity, independent of active treatment or placebo phases. CONCLUSIONS In this older postmenopausal population with at least one cardiovascular risk factor, treatment with combined continuous ethinyl estradiol and norethindrone acetate failed to improve vascular endothelial function. The agent's proinflammatory effect or subclinical atherosclerosis in this population may have contributed to this finding.
Toward clean and crackless transfer of graphene.
We present the results of a thorough study of wet chemical methods for transferring chemical vapor deposition grown graphene from the metal growth substrate to a device-compatible substrate. On the basis of these results, we have developed a "modified RCA clean" transfer method that has much better control of both contamination and crack formation and does not degrade the quality of the transferred graphene. Using this transfer method, high device yields, up to 97%, with a narrow device performance metrics distribution were achieved. This demonstration addresses an important step toward large-scale graphene-based electronic device applications.
VISTA : A Generic Toolkit for Visualizing Agent Behavior
In order for agent-based technologies to play an effective role in simulation and training applications involving synthetic forces, they must generate realistic and believable behaviors that are virtually indistinguishable from what a human actor might do under the same circumstances. However, systems capable of such complex behavior necessarily include complex internal representations, and their interactions with their environments are equally complicated. These complexities make it difficult to examine and evaluate an agent's internal reasoning processes without extensive knowledge of the technical details of the agent's implementation. The difficulty of conveying an agent's internal reasoning processes to non-technical users, and consequently of demonstrating the accuracy of the resulting behaviors, is a large hurdle in the acceptance of agent-based systems. To address this challenge, we have developed the Visualization Toolkit for Agents (VISTA), which can be used to build visualization tools that provide insight into an agent's internal reasoning processes. Such tools allow agent developers, subject-matter experts, and training supervisors to verify the correctness of an agent's behavior without delving into the technical details of its implementation. VISTA is a generic infrastructure that makes as few commitments as possible to a particular problem domain or agent architecture, and it aims to make it easy for agent developers to construct visualization tools for their particular agent technology. In this paper we describe VISTA and illustrate its usefulness by presenting the Situation Awareness Panel (SAP), a particular instantiation of the VISTA framework that we used to examine the behavior of Soar agents operating in the tactical air combat domain.
A Low Latency, Loss Tolerant Architecture and Protocol for Wide Area Group Communication
Group communication systems are proven tools upon which to build fault-tolerant systems. As the demands for fault-tolerance increase and more applications require reliable distributed computing over wide area networks, wide area group communication systems are becoming very useful. However, building a wide area group communication system is a challenge. This paper presents the design of the transport protocols of the Spread wide area group communication system. We focus on two aspects of the system: first, the value of using overlay networks for application-level group communication services; second, the requirements and design of effective low-latency link protocols used to construct wide area group communication. We support our claims with the results of live experiments.
Population pharmacokinetics of hydroxyurea in cancer patients
The pharmacokinetics of hydroxyurea (HU) were investigated in cancer patients after intravenous infusion or oral administration. On the basis of the minimal value of the objective function (MVOF) and prior knowledge of the disposition of HU in animals and man, the data were best described by a one-compartment pharmacokinetic model with parallel Michaelis-Menten metabolism and first-order renal excretion. The computer program NONMEM (nonlinear mixed effects model) was used to perform the nonlinear regression and provide estimates of the population parameters. For the combined intravenous and oral data set, these parameters were estimated to be: maximal elimination rate (Vmax), 0.097 mmol h⁻¹ l⁻¹; Michaelis constant for HU elimination (KM), 0.323 mmol/l; renal clearance (ClR), 90.8 ml/min; volume of distribution (Vd), 0.186 × (body weight) + 25.4 l; absorption rate constant (Ka), 2.92 h⁻¹; and availability to the systemic circulation (F), 0.792. The principal findings of the investigation are that HU undergoes nonlinear elimination in cancer patients and that HU is reasonably well absorbed following oral administration.
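A sketch of the described model, one compartment with first-order absorption, parallel Michaelis-Menten metabolism and first-order renal excretion, using the population estimates quoted above. The 70 kg body weight and the 2 g oral dose are assumptions made for illustration, and the volume term is read as 0.186 × body weight + 25.4 liters.

```python
# One-compartment model with first-order absorption, parallel Michaelis-Menten
# metabolism and first-order renal excretion, using the reported population
# estimates. Body weight (70 kg) and the oral dose are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

Vmax = 0.097                # mmol h^-1 l^-1
Km   = 0.323                # mmol/l
ClR  = 90.8 * 60 / 1000     # renal clearance, ml/min -> l/h
Ka   = 2.92                 # h^-1
F    = 0.792                # oral availability
BW   = 70.0                 # kg (assumed)
Vd   = 0.186 * BW + 25.4    # l

dose_mmol = 2000 / 76.05    # assumed 2 g oral dose of HU (MW ~76 g/mol)

def rhs(t, y):
    a, c = y                # a: amount in gut (mmol), c: plasma concentration (mmol/l)
    dadt = -Ka * a
    dcdt = (Ka * F * a) / Vd - Vmax * c / (Km + c) - (ClR / Vd) * c
    return [dadt, dcdt]

sol = solve_ivp(rhs, (0, 24), [dose_mmol, 0.0], dense_output=True, max_step=0.1)
t = np.linspace(0, 24, 7)
print(np.round(sol.sol(t)[1], 3))   # plasma concentration (mmol/l) over 24 h
```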
A long-term outcome study of selective mutism in childhood.
OBJECTIVE Controlled study of the long-term outcome of selective mutism (SM) in childhood. METHOD A sample of 33 young adults with SM in childhood and two age- and gender-matched comparison groups were studied. The latter comprised 26 young adults with anxiety disorders in childhood (ANX) and 30 young adults with no psychiatric disorders during childhood. The three groups were compared with regard to psychiatric disorder in young adulthood by use of the Composite International Diagnostic Interview (CIDI). In addition, the effect of various predictors on outcome of SM was studied. RESULTS The symptoms of SM improved considerably in the entire SM sample. However, both SM and ANX had significantly higher rates for phobic disorder and any psychiatric disorder than controls at outcome. Taciturnity in the family and, by trend, immigrant status and a severity indicator of SM had an impact on psychopathology and symptomatic outcome in young adulthood. CONCLUSION This first controlled long-term outcome study of SM provides evidence of symptomatic improvement of SM in young adulthood. However, a high rate of phobic disorder at outcome points to the fact that SM may be regarded as an anxiety disorder variant.
Crossing Cultural Barriers in Research Interviewing
This article critically examines a qualitative research interview in which cultural barriers between a white non-Muslim female interviewer and an African American Muslim interviewee, both from the USA, became evident and were overcome within the same interview. This interview and two follow-up interviews are presented as a 'telling case' about crossing cultural barriers. The analysis focuses on seven phases of the interview (cultural barriers, warming up, crossing the racial barrier, connecting as social workers, connecting as women, connecting as students, and crossing the tape recorder barrier). The discussion outlines the pre-interview and during-interview barriers and facilitating conditions, and the related implications for cross-cultural qualitative research interviewing.
Eavesdropping Attacks on High-Frequency RFID Tokens
RFID systems often use near-field magnetic coupling to implement communication channels. The advertised operational range of these channels is less than 10 cm and therefore several implemented systems assume that the communication channel is location limited and therefore relatively secure. Nevertheless, there have been repeated questions raised about the vulnerability of these near-field systems against eavesdropping and skimming attacks. In this paper I revisit the topic of RFID eavesdropping attacks, surveying previous work and explaining why the feasibility of practical attacks is still a relevant and novel research topic. I present a brief overview of the radio characteristics for popular HF RFID standards and present some practical results for eavesdropping experiments against tokens adhering to the ISO 14443 and ISO 15693 standards. Finally, I discuss how an attacker could construct a low-cost eavesdropping device using easy to obtain parts and reference designs.
From Physiological Signals to Emotions: Implementing and Comparing Selected Methods for Feature Extraction and Classification
Little attention has been paid so far to physiological signals for emotion recognition compared to audio-visual emotion channels such as facial expressions or speech. In this paper, we discuss the most important stages of a fully implemented emotion recognition system, including data analysis and classification. For collecting physiological signals in different affective states, we used a music induction method that elicits natural emotional reactions from the subject. Four-channel biosensors are used to obtain electromyogram, electrocardiogram, skin conductivity, and respiration changes. After calculating a sufficient amount of features from the raw signals, several feature selection/reduction methods are tested to extract a new feature set consisting of the most significant features for improving classification performance. Three well-known classifiers, linear discriminant function, k-nearest neighbour, and multilayer perceptron, are then used to perform supervised classification.
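As a rough illustration of the final stages described above (feature selection followed by the three classifiers), here is a scikit-learn sketch on synthetic data standing in for the biosensor features. It is not the authors' implementation; the feature count, selection method, and classifier settings are assumptions for the example.

```python
# Sketch of the selection + classification stage on synthetic stand-in features;
# the real system computes its features from EMG, ECG, skin conductivity and
# respiration signals.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))          # 60 candidate features per trial (synthetic)
y = rng.integers(0, 4, size=200)        # four affective states

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=20), clf)
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {score:.2f}")
```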
Fresh organically grown ginger (Zingiber officinale): composition and effects on LPS-induced PGE2 production.
Gas chromatography in conjunction with mass spectrometry, a technique previously employed to analyze non-volatile pungent components of ginger extracts modified to trimethylsilyl derivatives, was applied successfully for the first time to analyze unmodified partially purified fractions from the dichloromethane extracts of organically grown samples of fresh Chinese white and Japanese yellow varieties of ginger, Zingiber officinale Roscoe (Zingiberaceae). This analysis resulted in the detection of 20 hitherto unknown natural products and 31 compounds previously reported as ginger constituents. These include paradols, dihydroparadols, gingerols, acetyl derivatives of gingerols, shogaols, 3-dihydroshogaols, gingerdiols, mono- and diacetyl derivatives of gingerdiols, 1-dehydrogingerdiones, diarylheptanoids, and methyl ether derivatives of some of these compounds. The thermal degradation of gingerols to gingerone, shogaols, and related compounds was demonstrated. The major constituent in the two varieties was [6]-gingerol, a chemical marker for Z. officinale. Mass spectral fragmentation patterns for all the compounds are described and interpreted. Anti-inflammatory activities of silica gel chromatography fractions were tested using an in vitro PGE2 assay. Most of the fractions containing gingerols and/or gingerol derivatives showed excellent inhibition of LPS-induced PGE2 production.
A New Convolutional Neural Network-Based Data-Driven Fault Diagnosis Method
Fault diagnosis is vital in manufacturing systems, since early detection of emerging problems can save invaluable time and cost. With the development of smart manufacturing, data-driven fault diagnosis has become a hot topic. However, traditional data-driven fault diagnosis methods rely on features extracted by experts. The feature extraction process is laborious and greatly impacts the final result. Deep learning (DL) provides an effective way to extract features from raw data automatically. The convolutional neural network (CNN) is an effective DL method. In this study, a new CNN based on LeNet-5 is proposed for fault diagnosis. Through a conversion method that turns signals into two-dimensional (2-D) images, the proposed method can extract features from the converted 2-D images and eliminate the effect of handcrafted features. The proposed method, which is tested on three well-known datasets, including a motor bearing dataset, a self-priming centrifugal pump dataset, and an axial piston hydraulic pump dataset, achieves prediction accuracies of 99.79%, 99.481%, and 100%, respectively. The results have been compared with other DL and traditional methods, including the adaptive deep CNN, sparse filter, deep belief network, and support vector machine. The comparisons show that the proposed CNN-based data-driven fault diagnosis method achieves significant improvements.
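A minimal sketch of the two ideas in the abstract: reshaping a 1-D signal segment into a 2-D image and feeding it to a small LeNet-style CNN. The 32×32 segment length, layer sizes, and class count are illustrative choices, not the paper's exact configuration.

```python
# Minimal sketch: reshape a 1-D vibration segment into a 2-D "image" and classify
# it with a small LeNet-style CNN. Sizes are illustrative, not the paper's setup.
import torch
import torch.nn as nn

def signal_to_image(segment, size=32):
    """Convert a 1-D signal segment of length size*size into a size x size image."""
    seg = torch.as_tensor(segment, dtype=torch.float32)
    seg = (seg - seg.min()) / (seg.max() - seg.min() + 1e-8)   # normalize to [0, 1]
    return seg.view(1, size, size)                              # (channels, H, W)

class LeNetStyleCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 6 * 6, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(), nn.Linear(84, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    raw = torch.randn(1024)                      # stand-in for a sensor segment
    img = signal_to_image(raw).unsqueeze(0)      # add batch dimension
    print(LeNetStyleCNN()(img).shape)            # torch.Size([1, 10])
```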
Human-Inspired Neurorobotic System for Classifying Surface Textures by Touch
Giving robots the ability to classify surface textures requires appropriate sensors and algorithms. Inspired by the biology of human tactile perception, we implement a neurorobotic texture classifier with a recurrent spiking neural network, using a novel semisupervised approach for classifying dynamic stimuli. Input to the network is supplied by accelerometers mounted on a robotic arm. The sensor data are encoded by a heterogeneous population of neurons, modeled to match the spiking activity of mechanoreceptor cells. This activity is convolved by a hidden layer using bandpass filters to extract nonlinear frequency information from the spike trains. The resulting high-dimensional feature representation is then continuously classified using a neurally implemented support vector machine. We demonstrate that our system classifies 18 metal surface textures scanned in two opposite directions at a constant velocity. We also demonstrate that our approach significantly improves upon a baseline model that does not use the described feature extraction. This method can be performed in real-time using neuromorphic hardware, and can be extended to other applications that process dynamic stimuli online.
Acute and chronic alcohol dose: population differences in behavior and neurochemistry of zebrafish.
The zebrafish has been in the forefront of developmental genetics for decades and has also been gaining attention in neurobehavioral genetics. It has been proposed to model alcohol-induced changes in human brain function and behavior. Here, adult zebrafish populations, AB and SF (short-fin wild type), were exposed to chronic treatment (several days in 0.00% or 0.50% alcohol v/v) and a subsequent acute treatment (1 h in 0.00%, 0.25%, 0.50% or 1.00% alcohol). Behavioral responses of zebrafish to computer-animated images, including a zebrafish shoal and a predator, were quantified using videotracking. Neurochemical changes in the dopaminergic and serotoninergic systems in the brain of the fish were measured using high-precision liquid chromatography with electrochemical detection. The results showed genetic differences in numerous aspects of alcohol-induced changes, including, for the first time, the behavioral effects of withdrawal from alcohol and neurochemical responses to alcohol. For example, withdrawal from alcohol abolished shoaling and increased dopamine and 3,4-dihydroxyphenylacetic acid in AB but not in SF fish. The findings show that, first, acute and chronic alcohol induced changes are quantifiable with automated behavioral paradigms; second, robust neurochemical changes are also detectable; and third, genetic factors influence both alcohol-induced behavioral and neurotransmitter level changes. Although the causal relationship underlying the alcohol-induced changes in behavior and neurochemistry is speculative at this point, the results suggest that zebrafish will be a useful tool for the analysis of the biological mechanisms of alcohol-induced functional changes in the adult brain.
Learning Hierarchical Structures On-The-Fly with a Recurrent-Recursive Model for Sequences
We propose a hierarchical model for sequential data that learns a tree on-the-fly, i.e. while reading the sequence. In the model, a recurrent network adapts its structure and reuses recurrent weights in a recursive manner. This creates adaptive skip-connections that ease the learning of long-term dependencies. The tree structure can either be inferred without supervision through reinforcement learning, or learned in a supervised manner. We provide preliminary experiments on a novel Math Expression Evaluation (MEE) task, which is explicitly crafted to have a hierarchical tree structure that can be used to study the effectiveness of our model. Additionally, we test our model on well-known propositional logic and language modelling tasks. Experimental results show the potential of our approach.
EANN: Event Adversarial Neural Networks for Multi-Modal Fake News Detection
As news reading on social media becomes more and more popular, fake news has become a major issue for the public and governments. Fake news can take advantage of multimedia content to mislead readers and gain dissemination, which can cause negative effects or even manipulate public events. One of the unique challenges of fake news detection on social media is identifying fake news on newly emerged events. Unfortunately, most of the existing approaches can hardly handle this challenge, since they tend to learn event-specific features that cannot be transferred to unseen events. In order to address this issue, we propose an end-to-end framework named Event Adversarial Neural Network (EANN), which can derive event-invariant features and thus benefit the detection of fake news on newly arrived events. It consists of three main components: the multi-modal feature extractor, the fake news detector, and the event discriminator. The multi-modal feature extractor is responsible for extracting the textual and visual features from posts. It cooperates with the fake news detector to learn a discriminable representation for the detection of fake news. The role of the event discriminator is to remove event-specific features and keep the features shared among events. Extensive experiments are conducted on multimedia datasets collected from Weibo and Twitter. The experimental results show that our proposed EANN model can outperform state-of-the-art methods and learn transferable feature representations.
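The event discriminator is trained adversarially against the feature extractor so that features carrying event identity are penalized. One common way to implement this kind of objective is a gradient reversal layer; the abstract does not spell out the exact mechanism, so the PyTorch sketch below is an assumption used purely to illustrate the idea, with toy layer sizes.

```python
# Sketch of adversarial event-invariance via a gradient reversal layer (GRL);
# a common implementation of this type of objective, assumed here for illustration.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back to the feature extractor,
        # so features that help the event discriminator are penalized.
        return -ctx.lambd * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(300, 64), nn.ReLU())   # toy multi-modal features
fake_news_head    = nn.Linear(64, 2)    # fake vs. real
event_head        = nn.Linear(64, 10)   # 10 hypothetical events

x = torch.randn(8, 300)
features = feature_extractor(x)
news_logits  = fake_news_head(features)
event_logits = event_head(GradReverse.apply(features, 1.0))
print(news_logits.shape, event_logits.shape)   # torch.Size([8, 2]) torch.Size([8, 10])
```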
A Branch and Bound Algorithm for Computing k-Nearest Neighbors
Computation of the k-nearest neighbors generally requires a large number of expensive distance computations. The method of branch and bound is implemented in the present algorithm to facilitate rapid calculation of the k-nearest neighbors, by eliminating the necessity of calculating many distances. Experimental results demonstrate the efficiency of the algorithm. Typically, an average of only 61 distance computations was required to find the nearest neighbor of a test sample among 1000 design samples.
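A simplified sketch of the branch-and-bound idea: design samples are grouped, each group keeps its mean and radius, and whole groups are pruned with the triangle inequality. This uses a single level of groups and k = 1 for brevity, so it illustrates the pruning principle rather than reimplementing the original hierarchical algorithm.

```python
# Simplified branch-and-bound nearest-neighbor search: whole groups of design
# samples are skipped when the triangle inequality shows they cannot contain
# a closer neighbor than the current best.
import numpy as np

rng = np.random.default_rng(0)
design = rng.normal(size=(1000, 8))

# Partition the design set into groups (simple slicing here; k-means is typical).
groups = np.array_split(np.arange(len(design)), 20)
centers = np.array([design[g].mean(axis=0) for g in groups])
radii = np.array([np.linalg.norm(design[g] - centers[i], axis=1).max()
                  for i, g in enumerate(groups)])

def nearest_neighbor(query):
    best_d, best_i, n_dist = np.inf, -1, 0
    center_d = np.linalg.norm(centers - query, axis=1)
    for gi in np.argsort(center_d):                  # most promising groups first
        if center_d[gi] - radii[gi] >= best_d:       # branch-and-bound pruning rule
            continue
        for i in groups[gi]:
            d = np.linalg.norm(design[i] - query)
            n_dist += 1
            if d < best_d:
                best_d, best_i = d, i
    return best_i, best_d, n_dist

idx, dist, n = nearest_neighbor(rng.normal(size=8))
print(f"nearest sample {idx} at distance {dist:.3f} using {n} distance computations")
```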
Acid Challenge to the Human Esophageal Mucosa: Effects on Epithelial Architecture in Health and Disease
The histological changes that occur in the squamous epithelium in response to acute acid challenge were examined in healthy controls and proton pump inhibitor-treated gastroesophageal reflux disease (GERD) patients, and related to the state of untreated erosive GERD, in a saline-controlled, randomized perfusion study. In the basal state, a stepwise significant increase in the thickness of the basal cell layer, papillary length, and dilatation of intercellular spaces (DIS) was seen when the three groups were compared. Acid perfusion induced a slight increase in the height of the basal cell layer, mainly in healthy volunteers; this layer appears to be reactive to acute acid challenge as well as to acid suppressive therapy. DIS increases promptly in response to acute acid exposure in the healthy epithelium, but no changes were seen in the lengths of the papillae or in DIS in the GERD patients. A protective effect of luminal nitric oxide on DIS development is suggested.
Best of both worlds: Human-machine collaboration for object annotation
The long-standing goal of localizing every object in an image remains elusive. Manually annotating objects is quite expensive despite crowd engineering innovations. Current state-of-the-art automatic object detectors can accurately detect at most a few objects per image. This paper brings together the latest advancements in object detection and in crowd engineering into a principled framework for accurately and efficiently localizing objects in images. The input to the system is an image to annotate and a set of annotation constraints: desired precision, utility and/or human cost of the labeling. The output is a set of object annotations, informed by human feedback and computer vision. Our model seamlessly integrates multiple computer vision models with multiple sources of human input in a Markov Decision Process. We empirically validate the effectiveness of our human-in-the-loop labeling approach on the ILSVRC2014 object detection dataset.
Effects of family history of alcohol dependence on the subjective response to alcohol using the intravenous alcohol clamp.
BACKGROUND Alcohol use disorders are well recognized to be common and debilitating, and the risk of developing them is influenced by family history (FH). The subjective response to alcohol may be familially determined and related to the risk of developing alcoholism. The aim of this study was to evaluate differences between family history positive (FHP) and family history negative (FHN) individuals in their response to alcohol within the domains of subjective, coordination, and cognitive effects, using an intravenous (IV) clamping method of alcohol administration. METHODS Two groups of healthy subjects, those who were FHP (n = 65) versus those who were FHN (n = 115), between the ages of 21 and 30, participated in 3 test days. Subjects were scheduled to receive placebo, low-dose ethanol (EtOH) (target breath alcohol concentration [BrAC] = 40 mg%), and high-dose EtOH (target BrAC = 100 mg%) on 3 separate test days at least 3 days apart, in a randomized order under double-blind conditions. Outcome measures included subjective effects, measures of coordination, and cognitive function. RESULTS Both low- and high-dose alcohol led to dose-related stimulant and sedative subjective effects as measured by the Biphasic Alcohol Effects Scale and by subjective measures of "high" and "drowsy" on a visual analog scale. However, there were no effects of FH. Similar dose-related effects were observed on cognitive and coordination-related outcomes, but these were not moderated by FH. CONCLUSIONS Results from this study showed that healthy individuals responded to an IV alcohol challenge in a dose-related manner; however, there were no significant differences in subjective response, or in EtOH-induced impairment of coordination or cognition, between individuals with a positive FH of alcoholism and those with a negative FH. The results suggest that FH may not be a specific enough marker of risk, particularly in individuals who are beyond the age at which alcohol use disorders often develop.
Adults' sedentary behavior determinants and interventions.
Research is now required on factors influencing adults' sedentary behaviors, and effective approaches to behavioral-change intervention must be identified. The strategies for influencing sedentary behavior will need to be informed by evidence on the most important modifiable behavioral determinants. However, much of the available evidence relevant to understanding the determinants of sedentary behaviors is from cross-sectional studies, which are limited in that they identify only behavioral "correlates." As is the case for physical activity, a behavior- and context-specific approach is needed to understand the multiple determinants operating in the different settings within which these behaviors are most prevalent. To this end, an ecologic model of sedentary behaviors is described, highlighting the behavior settings construct. The behaviors and contexts of primary concern are TV viewing and other screen-focused behaviors in domestic environments, prolonged sitting in the workplace, and time spent sitting in automobiles. Research is needed to clarify the multiple levels of determinants of prolonged sitting time, which are likely to operate in distinct ways in these different contexts. Controlled trials on the feasibility and efficacy of interventions to reduce and break up sedentary behaviors among adults in domestic, workplace, and transportation environments are particularly required. It would be informative for the field to have evidence on the outcomes of "natural experiments," such as the introduction of nonseated working options in occupational environments or new transportation infrastructure in communities.
Applying Genre-Based Ontologies to Enterprise Architecture
This paper elaborates the approach of using ontologies as a conceptual base for enterprise architecture (EA) descriptions. The method focuses on recognising and modelling business critical information concepts, their content, and semantics used to operate the business. Communication genres and open and semi-structured information need interviews are used as a domain analysis method. Ontologies aim to explicate the results of domain analysis and to provide a common reference model for Business Information Architecture (BIA) descriptions. The results are generalised to model further aspects of EA.
Many Groups, One People: The Meaning and Significance of Multicultural Education in Modern America
Abstract Multicultural education is still one of the most controversial and misunderstood concepts, because it means different things to different peoples. This paper is intended to explore the meaning and significance of multicultural education in a global context on the basis of recent cross-cultural research in anthropology.
Optimizing scientific application loops on stream processors
This paper describes a graph coloring compiler framework to allocate on-chip SRF (Stream Register File) storage for optimizing scientific applications on stream processors. Our framework consists of first applying enabling optimizations, such as loop unrolling, to expose stream reuse and opportunities for maximizing parallelism, i.e., overlapping kernel execution and memory transfers. Then the three SRF management tasks are solved in a unified manner via graph coloring: (1) placing streams in the SRF, (2) exploiting stream reuse, and (3) maximizing parallelism. We evaluate the performance of our compiler framework by running nine representative scientific computing kernels on our FT64 stream processor. Our preliminary results show that compiler management achieves an average speedup of 2.3x compared to First-Fit allocation. In comparison with the performance results obtained from running these benchmarks on Itanium 2, an average speedup of 2.1x is observed.
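As a hedged illustration of the coloring step only, the toy sketch below greedily colors a stream-interference graph so that streams with overlapping live ranges do not share an SRF region. The stream names, the graph, and the ordering heuristic are invented for the example; the actual framework models live ranges, SRF capacity, and parallelism constraints in far more detail.

```python
# Toy greedy coloring of a stream-interference graph: streams whose live ranges
# overlap (edges) must not share an SRF region (color). Graph and names are
# illustrative only.
interference = {
    "a_in":    {"b_in", "partial"},
    "b_in":    {"a_in", "partial"},
    "partial": {"a_in", "b_in", "result"},
    "result":  {"partial"},
}

def greedy_color(graph):
    order = sorted(graph, key=lambda s: len(graph[s]), reverse=True)  # most-constrained first
    color = {}
    for stream in order:
        used = {color[n] for n in graph[stream] if n in color}
        color[stream] = next(c for c in range(len(graph)) if c not in used)
    return color

print(greedy_color(interference))   # e.g. {'partial': 0, 'a_in': 1, 'b_in': 2, 'result': 1}
```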
First- and Second-Order Optimality Conditions for a Class of Optimal Control Problems with Quasilinear Elliptic Equations
A class of optimal control problems for quasilinear elliptic equations is considered, where the coefficients of the elliptic differential operator depend on the state function. First- and second-order optimality conditions are discussed for an associated control-constrained optimal control problem. In particular, the Pontryagin maximum principle and second-order sufficient optimality conditions are derived. One of the main difficulties is the non-monotone character of the state equation.
Dialogue management systems: a survey and overview
Image Based Characterization of Formal and Informal Neighborhoods in an Urban Landscape
The high rate of global urbanization has resulted in a rapid increase in informal settlements, which can be defined as unplanned, unauthorized, and/or unstructured housing. Techniques for efficiently mapping these settlement boundaries can benefit various decision making bodies. From a remote sensing perspective, informal settlements share unique spatial characteristics that distinguish them from other types of structures (e.g., industrial, commercial, and formal residential). These spatial characteristics are often captured in high spatial resolution satellite imagery. We analyzed the role of spatial, structural, and contextual features (e.g., GLCM, Histogram of Oriented Gradients, Line Support Regions, Lacunarity) for urban neighborhood mapping, and computed several low-level image features at multiple scales to characterize local neighborhoods. The decision parameters to classify formal-, informal-, and non-settlement classes were learned under Decision Trees and a supervised classification framework. Experiments were conducted on high-resolution satellite imagery from the CitySphere collection, and four different cities (i.e., Caracas, Kabul, Kandahar, and La Paz) with varying spatial characteristics were represented. Overall accuracy ranged from 85% in La Paz, Bolivia, to 92% in Kandahar, Afghanistan. While the disparities between formal and informal neighborhoods varied greatly, many of the image statistics tested proved robust.
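A sketch of one path through such a pipeline (HOG texture features plus a decision tree), using scikit-image and scikit-learn on synthetic image patches. The feature choice, patch size, and parameters are illustrative assumptions, not the study's configuration, which also used GLCM, Line Support Regions, and Lacunarity on high-resolution satellite tiles.

```python
# Illustrative neighborhood-patch classifier: HOG features + a decision tree.
# Patches are synthetic stand-ins for satellite image tiles.
import numpy as np
from skimage.feature import hog
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
labels = {0: "formal", 1: "informal", 2: "non-settlement"}

patches = rng.random(size=(90, 64, 64))          # 90 synthetic 64x64 patches
y = np.repeat([0, 1, 2], 30)

X = np.array([hog(p, pixels_per_cell=(16, 16), cells_per_block=(2, 2)) for p in patches])

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
print("CV accuracy on synthetic data:",
      cross_val_score(clf, X, y, cv=3).mean())   # near chance here; real imagery differs
```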
A Dataset and Architecture for Visual Reasoning with a Working Memory
A vexing problem in artificial intelligence is reasoning about events that occur in complex, changing visual stimuli such as in video analysis or game play. Inspired by a rich tradition of visual reasoning and memory in cognitive psychology and neuroscience, we developed an artificial, configurable visual question and answer dataset (COG) to parallel experiments in humans and animals. COG is much simpler than the general problem of video analysis, yet it addresses many of the problems relating to visual and logical reasoning and memory – problems that remain challenging for modern deep learning architectures. We additionally propose a deep learning architecture that performs competitively on other diagnostic VQA datasets (i.e. CLEVR) as well as easy settings of the COG dataset. However, several settings of COG result in datasets that are progressively more challenging to learn. After training, the network can zero-shot generalize to many new tasks. Preliminary analyses of the network architectures trained on COG demonstrate that the network accomplishes the task in a manner interpretable to humans.
ArticulatedFusion: Real-time Reconstruction of Motion, Geometry and Segmentation Using a Single Depth Camera
This paper proposes a real-time dynamic scene reconstruction method capable of reproducing motion, geometry, and segmentation simultaneously given a live depth stream from a single RGB-D camera. Our approach fuses geometry frame by frame and uses a segmentation-enhanced node graph structure to drive the deformation of geometry in the registration step. A two-level node motion optimization is proposed. The optimization space of node motions and the range of physically plausible deformations are largely reduced by taking advantage of the articulated motion prior, which is solved by an efficient node graph segmentation method. Compared to previous fusion-based dynamic scene reconstruction methods, our experiments show robust and improved reconstruction results for tangential and occluded motions.
ExprGAN: Facial Expression Editing With Controllable Expression Intensity
Facial expression editing is a challenging task as it needs a high-level semantic understanding of the input face image. In conventional methods, either paired training data is required or the synthetic face’s resolution is low. Moreover, only the categories of facial expression can be changed. To address these limitations, we propose an Expression Generative Adversarial Network (ExprGAN) for photo-realistic facial expression editing with controllable expression intensity. An expression controller module is specially designed to learn an expressive and compact expression code in addition to the encoder-decoder network. This novel architecture enables the expression intensity to be continuously adjusted from low to high. We further show that our ExprGAN can be applied for other tasks, such as expression transfer, image retrieval, and data augmentation for training improved face expression recognition models. To tackle the small size of the training database, an effective incremental learning scheme is proposed. Quantitative and qualitative evaluations on the widely used Oulu-CASIA dataset demonstrate the effectiveness of ExprGAN.
Vertical Si-Nanowire $n$-Type Tunneling FETs With Low Subthreshold Swing ($\leq \hbox{50}\ \hbox{mV/decade}$ ) at Room Temperature
This letter presents a Si nanowire-based tunneling field-effect transistor (TFET) using a CMOS-compatible vertical gate-all-around structure. By minimizing the thermal budget with low-temperature dopant-segregated silicidation for the source-side dopant activation, excellent TFET characteristics were obtained. We have demonstrated, for the first time, the lowest ever reported subthreshold swing (SS) of 30 mV/decade at room temperature. In addition, we report a very convincing SS of 50 mV/decade for close to three decades of drain current. Moreover, our TFET device exhibits excellent characteristics without ambipolar behavior and with a high Ion/Ioff ratio (10⁵), as well as a low drain-induced barrier lowering of 70 mV/V.
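For context, the subthreshold swing quoted above is the gate-voltage change needed for one decade of drain-current change, and conventional MOSFETs are limited to roughly 60 mV/decade at room temperature, which is why sub-60 values are the headline TFET result. The relations below are standard device-physics definitions, not taken from the letter.

```latex
% Subthreshold swing definition and the thermionic limit of a conventional MOSFET
% (standard relations, included for context):
\[
  SS \;=\; \left(\frac{\partial \log_{10} I_D}{\partial V_{GS}}\right)^{-1}
\]
\[
  SS_{\mathrm{MOSFET}} \;\geq\; \frac{kT}{q}\,\ln 10 \;\approx\; 60\ \mathrm{mV/decade}
  \quad (T = 300\ \mathrm{K})
\]
% Band-to-band tunneling in a TFET is not bound by this thermionic limit,
% which permits the 30--50 mV/decade values reported above.
```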
Genome-Wide Regression and Prediction with the BGLR Statistical Package
Many modern genomic data analyses require implementing regressions where the number of parameters (p, e.g., the number of marker effects) exceeds sample size (n). Implementing these large-p-with-small-n regressions poses several statistical and computational challenges, some of which can be confronted using Bayesian methods. This approach allows integrating various parametric and nonparametric shrinkage and variable selection procedures in a unified and consistent manner. The BGLR R-package implements a large collection of Bayesian regression models, including parametric variable selection and shrinkage methods and semiparametric procedures (Bayesian reproducing kernel Hilbert spaces regressions, RKHS). The software was originally developed for genomic applications; however, the methods implemented are useful for many nongenomic applications as well. The response can be continuous (censored or not) or categorical (either binary or ordinal). The algorithm is based on a Gibbs sampler with scalar updates and the implementation takes advantage of efficient compiled C and Fortran routines. In this article we describe the methods implemented in BGLR, present examples of the use of the package, and discuss practical issues emerging in real-data analysis.
Trace Ratio Criterion for Feature Selection
Fisher score and Laplacian score are two popular feature selection algorithms, both of which belong to the general graph-based feature selection framework. In this framework, a feature subset is selected based on the corresponding score (subset-level score), which is calculated in a trace ratio form. Since the number of all possible feature subsets is huge, it is often prohibitively expensive in computational cost to search in a brute-force manner for the feature subset with the maximum subset-level score. Instead of calculating the scores of all the feature subsets, traditional methods calculate the score for each feature, and then select the leading features based on the rank of these feature-level scores. However, selecting the feature subset based on the feature-level score cannot guarantee the optimum of the subset-level score. In this paper, we directly optimize the subset-level score, and propose a novel algorithm to efficiently find the globally optimal feature subset such that the subset-level score is maximized. Extensive experiments demonstrate the effectiveness of our proposed algorithm in comparison with the traditional methods for feature selection. Introduction Many classification tasks often need to deal with high-dimensional data. Data with a large number of features result in higher computational cost, and irrelevant and redundant features may also deteriorate the classification performance. Feature selection is one of the most important approaches for dealing with high-dimensional data (Guyon & Elisseeff 2003). According to the strategy of utilizing class label information, feature selection algorithms can be roughly divided into three categories, namely unsupervised feature selection (Dy & Brodley 2004), semi-supervised feature selection (Zhao & Liu 2007a), and supervised feature selection (Robnik-Sikonja & Kononenko 2003). These feature selection algorithms can also be categorized into wrappers and filters (Kohavi & John 1997; Das 2001). Wrappers are classifier-specific: the feature subset is selected directly based on the performance of a specific classifier. Filters are classifier-independent: the feature subset is selected based on a well-defined criterion. Usually, wrappers obtain better results than filters because wrappers are directly related to the algorithmic performance of a specific classifier. However, wrappers are computationally more expensive than filters and lack good generalization capability over classifiers. Fisher score (Bishop 1995) and Laplacian score (He, Cai, & Niyogi 2005) are two popular filter-type methods for feature selection, and both belong to the general graph-based feature selection framework. In this framework, the feature subset is selected based on the score of the entire feature subset, and the score is calculated in a trace ratio form. The trace ratio form has previously been used successfully as a general criterion for feature extraction (Nie, Xiang, & Zhang 2007; Wang et al. 2007). However, when the trace ratio criterion is applied to feature selection, since the number of possible subsets of features is huge, it is often prohibitively expensive in computational cost to search in a brute-force manner for the feature subset with the maximum subset-level score.
Therefore, instead of calculating the subset-level score for all the feature subsets, traditional methods calculate the score of each feature (feature-level score), and then select the leading features based on the rank of these feature-level scores. The subset of features selected based on the feature-level score is suboptimal, and cannot guarantee the optimum of the subset-level score. In this paper, we directly optimize the subset-level score, and propose a novel iterative algorithm to efficiently find the globally optimal feature subset such that the subset-level score is maximized. Experimental results on UCI datasets and two face datasets demonstrate the effectiveness of the proposed algorithm in comparison with the traditional methods for feature selection. Feature Selection ⊂ Subspace Learning Suppose the original high-dimensional data $x \in \mathbb{R}^d$, that is, the number of features (dimensions) of the data is $d$. The task of subspace learning is to find the optimal projection matrix $W \in \mathbb{R}^{d \times m}$ (usually $m \ll d$) under an appropriate criterion; the $d$-dimensional data $x$ is then transformed to the $m$-dimensional data $y$ by $y = W^T x$ (1), where $W$ is a column-full-rank projection matrix.
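The iterative subset-level optimization described above can be illustrated with a small sketch. For feature selection the projection reduces to picking m features, so the trace ratio becomes a ratio of sums of per-feature quantities; the per-feature scores a and b below (e.g., diagonals of between-class and within-class scatter) and the toy data are illustrative assumptions, not a verbatim reproduction of the paper's algorithm.

# Sketch of iterative trace-ratio feature selection: score features by
# a_i - lambda * b_i, keep the top m, update lambda, and repeat until the
# ratio sum(a[S]) / sum(b[S]) stops improving.
import numpy as np

def trace_ratio_selection(a, b, m, iters=50, tol=1e-10):
    """Pick m features maximizing sum(a[S]) / sum(b[S])."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    selected = np.argsort(-a / b)[:m]            # any initial subset works
    lam = a[selected].sum() / b[selected].sum()
    for _ in range(iters):
        scores = a - lam * b                     # feature-level score under current lambda
        selected = np.argsort(-scores)[:m]       # top-m features
        new_lam = a[selected].sum() / b[selected].sum()
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return np.sort(selected), lam

# toy example: 10 features, keep 4
rng = np.random.default_rng(0)
a = rng.uniform(0.1, 1.0, size=10)    # "relevance" numerators
b = rng.uniform(0.1, 1.0, size=10)    # normalization denominators
subset, score = trace_ratio_selection(a, b, m=4)
print(subset, score)

The ratio is non-decreasing across iterations, which is why this kind of procedure can reach the subset-level optimum rather than settling for the ranking given by individual feature-level scores.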
A Survey-Based Analysis of IT Adoption and 3PLs' Performance
Purpose – In today's competitive scenario, effective supply chain management is increasingly dependent on third party logistics (3PL) companies' capabilities and performance. The dissemination of information technology (IT) has contributed to change the supply chain role of 3PL companies, and IT is considered an important element influencing the performance of modern logistics companies. Therefore, the purpose of this paper is to explore the relationship between IT and 3PLs' performance, assuming that logistics capabilities play a mediating role in this relationship. Design/methodology/approach – Empirical evidence based on a questionnaire survey conducted on a sample of logistics service companies operating in the Italian market was used to test a conceptual resource-based view (RBV) framework linking IT adoption, logistics capabilities and firm performance. Factor analysis and ordinary least squares (OLS) regression analysis have been used to test the hypotheses. The focus of the paper is multidisciplinary in nature; management of information systems, strategy, logistics and supply chain management approaches have been combined in the analysis. Findings – The results indicate strong relationships among data gathering technologies, transactional capabilities and firm performance, in terms of both efficiency and effectiveness. Moreover, a positive correlation between enterprise information technologies and 3PL financial performance has been found. Originality/value – The paper successfully uses the concept of logistics capabilities as a mediating factor between IT adoption and firm performance. Objective measures have been proposed for IT adoption and logistics capabilities. Direct and indirect relationships among variables have been successfully tested.
Wireless Sensor Networks for Environmental Noise Monitoring
While environmental issues keep gaining attention from the public and policy makers, several experiments have demonstrated the feasibility of wireless sensor networks for a large variety of environmental monitoring applications. Focusing on the assessment of environmental noise pollution in urban areas, we provide qualitative considerations and preliminary experimental results that motivate and encourage the use of wireless sensor networks in this context.
Advanced Topics in Workflow Management: Issues, Requirements, and Solutions
This paper surveys and investigates the strengths and weaknesses of a number of recent approaches to advanced workflow modelling. Rather than inventing just another workflow language, we briefly describe recent workflow languages, and we analyse them with respect to their support for advanced workflow topics. Object Coordination Nets, Workflow Graphs, WorkFlow Nets, and an approach based on Workflow Evolution are described as dedicated workflow modelling approaches. In addition, the Unified Modelling Language as the de facto standard in object-oriented modelling is also investigated. These approaches are discussed with respect to coverage of workflow perspectives and support for flexibility and analysis issues in workflow management, which are today seen as two major areas for advanced workflow support. Given the different goals and backgrounds of the approaches mentioned, it is not surprising that each approach has its specific strengths and weaknesses. We clearly identify these strengths and weaknesses, and we conclude with ideas for combining their best features.
ESTIMATION QUESTION TYPE ANALYZER FOR MULTI CLOSE DOMAIN INDONESIAN QAS
We propose an automated estimation scheme to analyze question classification in Indonesian multi closed-domain question answering systems. The goal is to provide a good question classification system even when only the available (limited) language resources are used. Our strategy is to build patterns and rules that extract important words and to use the results as features for automated learning-based classification. The scenarios designed for automated learning estimation are: (i) question analysis, which represents the key information needed to answer the user's question using target focus and target identification; and (ii) question type classification, which constructs a taxonomy of questions coded into the system to determine the expected answer type through question-processing patterns and rules. The proposed method is evaluated using datasets collected from various Indonesian websites. Test results show that the classification process using the proposed method is very effective. Introduction Question classification is an important phase in most question answering systems. There are three approaches to question classification [1]: rule-based, language modeling, and machine learning based. Rule-based question classification uses a set of standard heuristic rules based on a taxonomy; such rules are created manually by experts, detect question keywords, and use WordNet to map target categories [2]. Rule-based approaches do not transfer well to other domains or languages because it is difficult to create a new rule framework, and they perform well on the data sets for which they were designed while degrading on new data sets [3]. Rule-based approaches are accurate in predicting certain categories of questions; however, they do not scale to a large number of questions and syntactic structures. Rule-based question classification studies focus on using syntactic rules and relations between words [5][6], and on relation rules that determine a layered taxonomy by identifying coarse classes [7][8]. Machine learning focuses on developing computer programs that can teach themselves to grow and change, and it provides potential solutions across these domains and more [4]. There has been much research on question classification with machine learning approaches: Purwarianti et al. [9] propose a shallow parser to extract important words and use the results as features for SVM-based question classification; Skowron [10] uses the SVM algorithm with composite features of word categories and question focus based on a syntactic-semantic structure; Zhang [11] and Mishra [13] compare various machine learning methods for question classification, such as Nearest Neighbors, Naive Bayes, Decision Tree, Sparse Network of Winnows (SNoW), and Support Vector Machine (SVM), using bag-of-words and n-gram features. Question classification has been analyzed from the viewpoints of many previous researchers in order to improve class assignment. Although the machine learning approach performs better, the rule-based approach has its own challenge: how to improve system features automatically so as to produce the complex information needed for correct classification.
Based on this, the paper proposes a question classification scheme using a combined approach of template patterns and inter-word relation rules. The generation of the inter-word relation rules uses a learning concept based on knowledge of the variation across closed domains. The proposed contribution is a new scheme for Indonesian multi closed domains that can be used to generate question template patterns and Expected Answer Type (EAT) variations. The rest of the paper is organized as follows: Section 2 describes related work on rule-based question classification; Section 3 presents the proposed automatic learning estimation method for question analysis and question classification; Section 4 discusses the learning issues involved in QC and presents our learning approach. Related Work Question classification has been studied intensively, but it remains an active topic of research. This is based on several considerations, such as how well a system can define the expected answer type [4][6], how well a system can automatically filter out stop-words during question classification [12][14], and how well a system can handle variations in question form while maintaining high accuracy [14]. Rule-based classification in particular has its own challenge: how to improve system features automatically to generate the complex information needed for correct classification. Rule-based question classification studies typically use word extraction functions, syntactic patterns, and relations built between words. For example, Te et al. [13] focus on word extraction and on establishing inter-word relation rules for Tibetan; their method gathers information by comparing keywords, a strategy that reduces the search space and improves the efficiency of word search, and it is able to initialize a knowledge base with relation rules for the Tibetan question structure. Sarrouti et al. [1] proposed an effective and efficient method for biomedical question type classification: they classified biomedical questions into three broad categories, defined syntactic patterns for each category, and proposed an algorithm that uses these question patterns to assign a question to its category. The proposed method was evaluated on benchmark datasets of biomedical questions, and the experimental results show that it classifies biomedical questions effectively with high accuracy. Riloff [6] developed a rule-based system, Quarc, that can read a short story and find the sentence in the story that best answers a given question. Quarc uses heuristic rules that look for lexical and semantic clues in the question and the story; each rule awards a certain number of points to a sentence, and after all rules have been applied, the sentence with the highest score is returned as the answer.
Haris & Omar [5] describe a rule-based approach to analyzing and classifying written examination questions for computer programming subjects through natural language processing. In general, Bloom's Taxonomy, or the Taxonomy of Educational Objectives (TEO), acts as the main guideline in assessing a student's cognitive level; however, academicians must design appropriate questions and categorize them to the cognitive levels of the TEO manually. Biswas et al. [6] proposed a compact and effective method for question classification. Rather than using the two-layered taxonomy of 6 coarse-grained and 50 fine-grained categories developed by Li and Roth [7], they classified questions into three broad categories, studied the syntactic structure of the questions, and suggested syntactic patterns and an expected answer type for each category of question. Using these question patterns, they also suggested an algorithm for classifying a question into its category. Proposed Work Here we present our framework for question processing, with an automatic learning estimation method for question analysis, question classification, and automatic extraction learning. The framework of the proposed method is shown in Figure 1; it comprises parsing, extraction and relation-word learning, applied to datasets collected from various Indonesian domains.
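As a toy illustration of the pattern-and-rule idea described above, the following sketch maps common Indonesian question words to an expected answer type with regular expressions. The patterns and the EAT labels are illustrative only; they are not the authors' template patterns or taxonomy.

# Toy rule-based expected-answer-type (EAT) detection for Indonesian questions.
import re

EAT_RULES = [
    (r"\bsiapa\b", "PERSON"),                        # "who"
    (r"\bkapan\b", "DATE_TIME"),                     # "when"
    (r"\b(di mana|dimana|ke mana)\b", "LOCATION"),   # "where"
    (r"\bberapa\b", "QUANTITY"),                     # "how many / how much"
    (r"\b(mengapa|kenapa)\b", "REASON"),             # "why"
    (r"\bbagaimana\b", "METHOD"),                    # "how"
    (r"\bapa\b", "DEFINITION"),                      # "what" (fallback)
]

def classify_question(question: str) -> str:
    q = question.lower()
    for pattern, eat in EAT_RULES:
        if re.search(pattern, q):
            return eat
    return "OTHER"

print(classify_question("Siapa penemu telepon?"))                 # PERSON
print(classify_question("Berapa jumlah provinsi di Indonesia?"))  # QUANTITY

In the approach the abstract describes, the rule output would not be the final answer type by itself but one of the features fed into the learning-based estimation of the question class.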
Compact active antenna for mobile devices supporting 4G LTE
The advent of 4G LTE has ushered in a growing demand for embedded antennas that can cover a wide range of frequency bands from 698 MHz to 2.69 GHz. A novel active antenna design is presented in this paper that is capable of covering a wide range of LTE bands while being constrained to a 1.8 cm^3 volume. The antenna structure utilizes the Ethertronics EtherChip 2.0 to add tunability. The paper details the motivation behind developing the antenna, further discusses the fabrication of the active antenna architecture on an evaluation board, and presents the measured results.
Two-level multiscale enrichment methodology for modeling of heterogeneous plates
A new two-level multiscale enrichment methodology for analysis of heterogeneous plates is presented. The enrichments are applied in the displacement and strain levels: the displacement field of a Reissner-Mindlin plate is enriched using the multiscale enrichment functions based on the partition of unity principle; the strain field is enriched using the mathematical homogenization theory. The proposed methodology is implemented for linear and failure analysis of brittle heterogeneous plates. The eigendeformation-based model reduction approach is employed to efficiently evaluate the nonlinear processes in case of failure. The capabilities of the proposed methodology are verified against direct three-dimensional finite element models with full resolution of the microstructure.
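The displacement-level enrichment mentioned above rests on the partition of unity principle. As a generic illustration (not the paper's exact formulation), a partition-of-unity enriched displacement field has the form

$u^h(\mathbf{x}) = \sum_{i} N_i(\mathbf{x})\,\mathbf{u}_i + \sum_{i} N_i(\mathbf{x}) \sum_{\alpha} \psi_\alpha(\mathbf{x})\,\mathbf{a}_{i\alpha}$,

where the $N_i$ are the standard shape functions (which satisfy $\sum_i N_i(\mathbf{x}) = 1$, the partition of unity), the $\psi_\alpha$ are enrichment functions carrying fine-scale (here, microstructural) information, and the $\mathbf{a}_{i\alpha}$ are additional nodal unknowns. The strain-level enrichment via mathematical homogenization then acts on top of this enriched kinematic description.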
Microblogging during two natural hazards events: what twitter may contribute to situational awareness
We analyze microblog posts generated during two recent, concurrent emergency events in North America via Twitter, a popular microblogging service. We focus on communications broadcast by people who were "on the ground" during the Oklahoma Grassfires of April 2009 and the Red River Floods that occurred in March and April 2009, and identify information that may contribute to enhancing situational awareness (SA). This work aims to inform next steps for extracting useful, relevant information during emergencies using information extraction (IE) techniques.
Anxiety and Depression Increase in a Stepwise Manner in Parallel With Multiple FGIDs and Symptom Severity and Frequency
Objectives: Anxiety and depression occur frequently in patients with functional gastrointestinal disorders (FGIDs), but their precise prevalence is unknown. We addressed this issue in a large cohort of adult patients and determined the underlying factors. Methods: In total, 4,217 new outpatients attending 2 hospitals in Hamilton, Ontario, Canada completed questionnaires evaluating FGIDs and anxiety and depression (Hospital Anxiety and Depression scale). Chart review was performed in a random sample of 2,400 patients. Results: Seventy-six percent of patients fulfilled Rome III criteria for FGIDs, but only 57% were diagnosed with FGIDs after excluding organic diseases, and the latter group was considered for the analysis. Compared with patients not meeting the criteria, prevalence of anxiety (odds ratio (OR) 2.66, 95% confidence interval (CI): 1.62–4.37) or depression (OR 2.04, 95% CI: 1.03–4.02) was increased in patients with FGIDs. The risk was comparable to patients with organic disease (anxiety: OR 2.12, 95% CI: 1.24–3.61; depression: OR 2.48, 95% CI: 1.21–5.09). The lowest prevalence was observed in asymptomatic patients (OR 1.37; 95% CI 0.58–3.23 and 0.51; 95% CI 0.10–2.48; for both conditions, respectively). The prevalence of anxiety and depression increased in a stepwise manner with the number of co-existing FGIDs and frequency and/or severity of gastrointestinal (GI) symptoms. Psychiatric comorbidity was more common in females with FGIDs compared with males (anxiety OR 1.73; 95% CI 1.35–2.28; depression OR 1.52; 95% CI 1.04–2.21). Anxiety and depression were formally diagnosed by the consulting physician in only 22% and 9% of patients, respectively. Conclusions: Psychiatric comorbidity is common in patients referred to a secondary care center but is often unrecognized. The prevalence of both anxiety and depression is influenced by gender, presence of organic diseases, and FGIDs, and it increases with the number of coexistent FGIDs and frequency and severity of GI symptoms.
Randomized controlled double-blind trial of optimal dose methylphenidate in children and adolescents with severe attention deficit hyperactivity disorder and intellectual disability.
BACKGROUND Attention deficit hyperactivity disorder is increased in children with intellectual disability. Previous research has suggested stimulants are less effective than in typically developing children but no studies have titrated medication for individual optimal dosing or tested the effects for longer than 4 weeks. METHOD One hundred and twenty two drug-free children aged 7-15 with hyperkinetic disorder and IQ 30-69 were recruited to a double-blind, placebo-controlled trial that randomized participants using minimization by probability, stratified by referral source and IQ level in a one to one ratio. Methylphenidate was compared with placebo. Dose titration comprised at least 1 week each of low (0.5 mg/kg/day), medium (1.0 mg/kg/day) and high dose (1.5 mg/kg/day). Parent and teacher Attention deficit hyperactivity disorder (ADHD) index of the Conners Rating Scale-Short Version at 16 weeks provided the primary outcome measures. Clinical response was determined with the Clinical Global Impressions scale (CGI-I). Adverse effects were evaluated by a parent-rated questionnaire, weight, pulse and blood pressure. Analyses were by intention to treat. TRIAL REGISTRATION ISRCTN 68384912. RESULTS Methylphenidate was superior to placebo with effect sizes of 0.39 [95% confidence intervals (CIs) 0.09, 0.70] and 0.52 (95% CIs 0.23, 0.82) for the parent and teacher Conners ADHD index. Four (7%) children on placebo versus 24 (40%) of those on methylphenidate were judged improved or much improved on the CGI. IQ and autistic symptoms did not affect treatment efficacy. Active medication was associated with sleep difficulty, loss of appetite and weight loss but there were no significant differences in pulse or blood pressure. CONCLUSIONS Optimal dosing of methylphenidate is practical and effective in some children with hyperkinetic disorder and intellectual disability. Adverse effects typical of methylphenidate were seen and medication use may require close monitoring in this vulnerable group.
A Lorentz force magnetometer based on a piezoelectric-on-silicon radial-contour mode disk
We report a unique MEMS magnetometer based on a disk-shaped radial contour mode thin-film piezoelectric-on-silicon (TPoS) CMOS-compatible resonator. This is the first device of its kind that targets operation under atmospheric pressure conditions, as opposed to existing Lorentz force MEMS magnetometers that depend on vacuum. We exploit the chosen vibration mode to enhance coupling and deliver a field sensitivity of 10.92 mV/T while operating at a resonant frequency of 6.27 MHz, despite a sub-optimal mechanical quality (Q) factor of 697 under ambient conditions in air.
How protective is cervical cancer screening against cervical cancer mortality in developing countries? The Colombian case
BACKGROUND Cervical cancer is one of the top causes of cancer morbidity and mortality in Colombia despite the existence of a national preventive program. Screening coverage with cervical cytology does not explain the lack of success of the program in reducing incidence and mortality rates by cervical cancer. To address this problem an ecological analysis, at department level, was carried out in Colombia to assess the relationship between cervical screening characteristics and cervical cancer mortality rates. METHODS Mortality rates by cervical cancer were estimated at the department level for the period 2000-2005. Levels of mortality rates were compared to cervical screening coverage and other characteristics of the program. A Poisson regression was used to estimate the effect of different dimensions of program performance on mortality by cervical cancer. RESULTS Screening coverage ranged from 28.7% to 65.6% by department but increases on this variable were not related to decreases in mortality rates. A significant reduction in mortality was found in departments where a higher proportion of women looked for medical advice when abnormal findings were reported in Pap smears. Geographic areas where a higher proportion of women lack health insurance had higher rates of mortality by cervical cancer. CONCLUSIONS These results suggest that coverage is not adequate to prevent mortality due to cervical cancer if women with abnormal results are not provided with adequate follow up and treatment. The role of different dimensions of health care such as insurance coverage, quality of care, and barriers for accessing health care needs to be evaluated and addressed in future studies.
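The analysis above is a department-level (ecological) Poisson regression. As a minimal sketch of that kind of model, the following uses population as the exposure via a log offset; the variable names and the synthetic data are illustrative placeholders, not the study's dataset or exact specification.

# Department-level Poisson regression of cervical cancer deaths on
# screening-program covariates, with population as exposure (offset).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_depts = 30
df = pd.DataFrame({
    "deaths": rng.poisson(40, n_depts),                       # deaths per department
    "population": rng.integers(200_000, 2_000_000, n_depts),  # women at risk
    "screening_coverage": rng.uniform(0.28, 0.66, n_depts),
    "followup_after_abnormal_pap": rng.uniform(0.3, 0.9, n_depts),
    "uninsured_fraction": rng.uniform(0.05, 0.5, n_depts),
})

X = sm.add_constant(df[["screening_coverage",
                        "followup_after_abnormal_pap",
                        "uninsured_fraction"]])
model = sm.GLM(df["deaths"], X,
               family=sm.families.Poisson(),
               offset=np.log(df["population"]))   # log-exposure offset
result = model.fit()
print(result.summary())   # coefficients are log rate ratios

Exponentiated coefficients can then be read as mortality rate ratios per unit change in each program characteristic, which is the kind of effect the study reports.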
A study on exception detection and handling using aspect-oriented programming
Aspect-Oriented Programming (AOP) is intended to ease situations that involve many kinds of code tangling. This paper reports on a study to investigate AOP's ability to ease tangling related to exception detection and handling. We took an existing framework written in Java™, the JWAM framework, and partially reengineered its exception detection and handling aspects using AspectJ™, an aspect-oriented programming extension to Java. We found that AspectJ supported implementations that drastically reduced the portion of the code related to exception detection and handling. In one scenario, we were able to reduce that code by a factor of 4. We also found that, with respect to the original implementation in plain Java, AspectJ provided better support for different configurations of exceptional behaviors, more tolerance for changes in the specifications of exceptional behaviors, better support for incremental development, better reuse, automatic enforcement of contracts in applications that use the framework, and cleaner program texts. We also found some weaknesses of AspectJ that should be addressed in the future.
Constructivism in Environmental Education: Beyond Conceptual Change Theory
Introduction Constructivism, as a set of theories about how learners learn, has been an important discourse in the educational research literature for a number of years. Interestingly, it has been far more visible in science education research than in environmental education research. This article considers conceptual change theory within constructivism as a contested concept, outlines differing expressions of constructivism in science education and environmental education, and argues for approaches to environmental education that adopt socially constructivist perspectives with respect to the character of the subject matter content as well as to learners' apprehension of such content. In considering implications for research, this perspective is juxtaposed with a recent United States Education Act, which prescribes a far more objectivist approach to educational research and which serves as a reminder that research itself is a powerful factor in shaping how we construct the nature of subject matter, learning and the implications of these for teaching practice. Constructivism, as a set of theories about how learners learn, has been an important discourse in the educational research literature for a number of years. Many researchers have sought to explore explicitly the concept of constructivism and its implications for pedagogy, curriculum and professional development, or to adopt a constructivist framework in the analysis of educational situations. Historically, there has been a strong "conceptual change" perspective in science education research. However reviews of the environmental education research literature reveal a relative dearth of empirical research that overtly engages the issues of constructivism in the field of environmental education (see, for example, Robertson, 1994). This article considers conceptual change theory within constructivism as a contested concept, outlines differing expressions of constructivism in science education and environmental education, and argues for approaches to environmental education that adopt socially constructivist perspectives with respect to the character of subject matter content as well as to learners' apprehension of such content. In considering implications for research, this perspective is juxtaposed with a recent United States Education Act, which prescribes a far more objectivist approach to educational research. The relative lack of an overtly constructivist perspective in environmental education is the more surprising given the particularly high profile it has achieved in science education.
Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism
An Ethics Evaluation Tool for Automating Ethical Decision-Making in Robots and Self-Driving Cars
As we march down the road of automation in robotics and artificial intelligence, we will need to automate an increasing amount of ethical decision-making in order for our devices to operate independently from us. But automating ethical decision-making raises novel questions for engineers and designers, who will have to make decisions about how to accomplish that task. For example, some ethical decision-making involves hard moral cases, which in turn requires user input if we are to respect established norms surrounding autonomy and informed consent. The author considers this and other ethical considerations that accompany the automation of ethical decision-making. He proposes some general ethical requirements that should be taken into account in the design room, and sketches a design tool that can be integrated into the design process to help engineers, designers, ethicists, and policymakers decide how best to automate certain forms of ethical decision-making.
Individuum, Society, Humankind: The Triadic Logic of the Species according to Hajime Tanabe
In this collection on the Kyoto School of Philosophy, the author presents Tanabe's religious philosophy and also, for the first time, his philosophy of nature and ontology. The work treats not only individuum, society, and humankind, but also the logical structure of Tanabe's thinking, and aspects such as nature, beauty, matter, contemplation, practice, politics, religion, science, history, and eternity. It is a highly original work, all the more so as the reader becomes acquainted with Ozaki's own creative synthetic view of the main problems of the Christian-Buddhist theological and philosophical encounter.
Optimal power flow and energy-sharing among multi-agent smart buildings in the smart grid
Buildings account for about 40% of total energy consumption, and efficient building energy control can considerably reduce energy costs. A smart grid takes advantage of bi-directional energy and information flow between the utility grid and the energy user. Smart buildings can charge or discharge energy across multiple buildings (multi-agent systems) through smart meters and battery storage. However, there is very little research on how to share energy among such multi-agent systems and on optimal power flow among smart buildings in the smart grid. In this paper, the authors use an advanced optimization method to determine optimal power flow and energy-sharing among smart buildings. It is expected that this method can improve optimal power flow and energy-sharing stability among smart buildings and enhance the energy balance needed to reach stability among many smart buildings in the smart grid.
Integration of ePortfolios in Learning Management Systems
The LMS plays a decisive role in most eLearning environments. Although they integrate many useful tools for managing eLearning activities, they must also be effectively integrated with other specialized systems typically found in an educational environment such as Repositories of Learning Objects or ePortfolio Systems. Both types of systems evolved separately but in recent years the trend is to combine them, allowing the LMS to benefit from using the ePortfolio assessment features. This paper details the most common strategies for integrating an ePortfolio system into an LMS: the data, the API and the tool integration strategies. It presents a comparative study of strategies based on the technical skills, degree of coupling, security features, batch integration, development effort, status and standardization. This study is validated through the integration of two of the most representative systems on each category respectively Mahara and Moodle.
Melanin-based skin spots reflect stress responsiveness in salmonid fish
Within animal populations, genetic, epigenetic and environmental factors interact to shape individual neuroendocrine and behavioural profiles, conferring variable vulnerability to stress and disease. It remains debated how alternative behavioural syndromes and stress coping styles evolve and are maintained by natural selection. Here we show that individual variation in stress responsiveness is reflected in the visual appearance of two species of teleost fish; rainbow trout (Oncorhynchus mykiss) and Atlantic salmon (Salmo salar). Salmon and trout skin vary from nearly immaculate to densely spotted, with black spots formed by eumelanin-producing chromatophores. In rainbow trout, selection for divergent hypothalamus-pituitary-interrenal responsiveness has led to a change in dermal pigmentation patterns, with low cortisol-responsive fish being consistently more spotted. In an aquaculture population of Atlantic salmon individuals with more spots showed a reduced physiological and behavioural response to stress. Taken together, these data demonstrate a heritable behavioural-physiological and morphological trait correlation that may be specific to alternative coping styles. This observation may illuminate the evolution of contrasting coping styles and behavioural syndromes, as occurrence of phenotypes in different environments and their response to selective pressures can be precisely and easily recorded.
Simple, Accurate, and Robust Projector-Camera Calibration
Structured-light systems are simple and effective tools to acquire 3D models. Built with off-the-shelf components, a data projector and a camera, they are easy to deploy and compare in precision with expensive laser scanners. But such a high precision is only possible if camera and projector are both accurately calibrated. Robust calibration methods are well established for cameras but, while cameras and projectors can both be described with the same mathematical model, it is not clear how to adapt these methods to projectors. In consequence, many of the proposed projector calibration techniques make use of a simplified model, neglecting lens distortion, resulting in loss of precision. In this paper, we present a novel method to estimate the image coordinates of 3D points in the projector image plane. The method relies on an uncalibrated camera and makes use of local homographies to reach sub-pixel precision. As a result, any camera model can be used to describe the projector, including the extended pinhole model with radial and tangential distortion coefficients, or even those with more complex lens distortion models.
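The following OpenCV sketch illustrates the local-homography idea described above: for each checkerboard corner detected by the camera, fit a homography from nearby camera-to-projector correspondences (obtained by structured-light decoding, assumed given here), map the corner into the projector image plane, then calibrate the projector with the standard camera model. The function and variable names are my own illustration, not the authors' code.

import numpy as np
import cv2

def corner_in_projector(corner_cam, cam_pts, proj_pts):
    """Map one sub-pixel camera corner into the projector image plane using a
    local homography fitted on neighbouring decoded correspondences."""
    H, _ = cv2.findHomography(cam_pts, proj_pts, cv2.RANSAC)
    p = cv2.perspectiveTransform(corner_cam.reshape(1, 1, 2).astype(np.float32), H)
    return p.reshape(2)

def calibrate_projector(object_points, projector_points, proj_size):
    """Standard pinhole-plus-distortion calibration, reusing the camera model
    for the projector once its 'image points' are known."""
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, projector_points, proj_size, None, None)
    return rms, K, dist

# tiny synthetic check: correspondences related by a known similarity
cam_pts = np.float32([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
proj_pts = cam_pts * 2.0 + 10.0
corner = np.float32([0.25, 0.75])
print(corner_in_projector(corner, cam_pts, proj_pts))   # approx. [10.5, 11.5]

Because each corner is mapped with its own local homography, lens distortion only needs to be locally negligible, which is what allows the full distortion model to be recovered in the subsequent calibration step.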
Detection of a shared colon epithelial epitope on Barrett epithelium by a novel monoclonal antibody.
In Barrett epithelium, the typical esophageal stratified squamous epithelium is replaced by metaplastic columnar epithelial cells, usually in the distal esophagus as a complication of severe reflux esophagitis [1, 2]. Adenocarcinoma develops in 8% to 15% of cases [3-5]. The origin and precise nature of this epithelium that contains various cell types [3-7] are unknown. We previously described a unique murine monoclonal antibody, 7E12H12 (IgM isotype), that reacts specifically with normal colonic epithelial cells but not with 13 other epithelial organs, including the small intestinal enterocytes and the gastric and esophageal mucosa [8]. Using the immunoperoxidase assay [8], we examined the immunoreactivity of the 7E12H12 monoclonal antibody against normal and abnormal esophageal mucosa, especially Barrett epithelium and associated adenocarcinoma of the esophagus. Methods Tissue Samples Retrospective Tissue Materials from the Esophagus and Gastroesophageal Junction Fifty-three biopsy specimens taken during endoscopy at different levels of esophagus from 44 persons, and 12 surgical resection specimens for carcinoma of the esophagus arising in Barrett epithelium (7 patients) and squamous cell carcinoma of the esophagus (5 patients) were obtained from the archival material in the Department of Pathology. Prospective Tissue Specimens from the Esophagus, Stomach, and Upper Small Intestine Fifty-one biopsy specimens were obtained from 12 consecutive persons who were evaluated by upper gastrointestinal endoscopy for persistent symptoms of esophageal reflux disorders and acid peptic syndromes. All patients were evaluated by a single gastroenterologist, and biopsy specimens were taken systematically from various sites of the esophagus, stomach, duodenum, and jejunum (Table 1). Among 12 specimens with variable degrees of esophagitis, 6 specimens were taken from the gastroesophageal junction, and the remaining 6 were taken from distal esophagus just proximal to the squamocolumnar junction. Tissue specimens were fixed in formalin, and paraffin blocks were prepared for routine histologic study. Serial sections from each block were processed to study the immunoreactivity against the 7E12H12 monoclonal antibody by the immunoperoxidase method. The pathologist and the investigators who did the immunocytochemical studies were not informed about the history and clinical diagnosis of the patients, and they evaluated their findings independently. Table 1. Histologic Analysis of the Tissue and the Results of Immunoperoxidase Experiments with the 7E12H12 Monoclonal Antibody (IgM Isotype) Immunoperoxidase Method The method of production and characterization of the 7E12H12 monoclonal antibody has been previously reported [8]. An unrelated mouse monoclonal antibody of IgM isotype (MOPC-104E) was used as a control. Normal colonic mucosal biopsy specimens were included in each experiment as a positive control against the 7E12H12 monoclonal antibody. The immunohistochemical analysis was done as previously described [8] with some modifications as described below [9]. The tissues were sectioned (5 microns), mounted on poly-L-lysine-coated slides, deparaffinized by heating at 56 C for 1 hour, immersed in xylene, rehydrated in 100%, 95%, and 70% alcohol, and finally in phosphate-buffered saline (pH, 7.2). Free aldehydes were reduced with 0.05% sodium borohydride in phosphate-buffered saline (pH, 7.2) for 30 minutes at 4 C. 
Sections were then sequentially incubated with normal swine serum, 7E12H12 monoclonal antibody or control murine IgM monoclonal antibody, biotinylated swine antimouse IgM (Dakopatts; Carpinteria, California), hydrogen peroxide solution (3%), and streptavidin-peroxidase (Dakopatts), respectively. Tissue sections were washed in phosphate-buffered saline and treated with 3,3'-diaminobenzidine hydrochloride (50 g/150 mL of 0.5 mol/L TRIS-buffer; pH, 7.2) for 30 minutes. The sections were washed, counterstained in hematoxylin or toluidine blue for 1 minute, dehydrated in graded ethanol solutions and then in xylene, and mounted for microscopic examination. The presence of clear brown staining of the tissue was graded as positive and its absence as negative. Results The location and histologic diagnosis of the 116 specimens collected retrospectively and prospectively from 53 patients are shown in Table 1. The mean age of these patients was 57 years for the retrospective group and 51 years for the prospective group (range, 14 to 85 years). Of 22 biopsy specimens, 21 (95%) with established diagnoses of specialized Barrett epithelium reacted with the 7E12H12 monoclonal antibody (Figure 1 C, Table 1). These included 19 of 20 retrospective specimens and 2 of 2 prospective specimens. Among these 21 positive patients, three biopsy specimens were taken from the esophagus at 20 to 25 cm, four from 25 to 30 cm, and 14 from 30 cm or below from the incisor teeth, as defined by the endoscopist. The one biopsy specimen that did not react with 7E12H12 monoclonal antibody was from the distal esophagus lower than 30 cm. Figure 1 shows the immunoreactivity of the 7E12H12 monoclonal antibody against the specialized type of Barrett epithelium and colonic mucosal epithelium by the immunoperoxidase assay. The specialized columnar epithelial cells of Barrett epithelium reacted strongly with 7E12H12 monoclonal antibody (Figure 1 C). The reactivity was more intense in the periphery of the cells (probably the membrane area compared with the cytoplasm). The 7E12H12 monoclonal antibody also reacted with some of the goblet cells, including their contents and basolateral regions of the cells. Each of the 11 specimens from the normal gastroesophageal junction (squamocolumnar junction) was negative when reacted with the 7E12H12 monoclonal antibody (Figure 1 D and E). The availability of six operative specimens (from patients with cancer) from this area allowed us to obtain multiple tissue samples to evaluate both structure and immunoperoxidase staining. All of the 16 esophageal tissue specimens with normal squamous epithelium were negative when reacted with 7E12H12 monoclonal antibody (Figure 1 C, D, and E). Twelve specimens (six from the distal esophagus and six from the squamocolumnar junction) obtained from patients with endoscopic and histologic diagnosis of active esophagitis did not react with 7E12H12 monoclonal antibody. Each of the 12 cases of adenocarcinoma arising in Barrett epithelium reacted with 7E12H12 monoclonal antibody and the staining was intense, mostly cytoplasmic (Figure 1 G and H). However, 12 of the 13 esophageal squamous cell carcinomas did not react with 7E12H12 monoclonal antibody (Figure 1 I). Only one specimen showed some focal and patchy staining in the tumor. Normal colonic biopsy specimens that were examined in parallel during each experiment as positive controls consistently reacted with the monoclonal antibody (Figure 1 F).
None of the 21 specimens from sites of the stomach (cardia, fundus, body, and antrum), duodenum, and jejunum reacted with the 7E12H12 monoclonal antibody. Discussion Our study is the first report of a unique epitope shared between normal colonic mucosa and the distinct specialized type of Barrett epithelium. Most cases of Barrett epithelium have a histologic similarity with incomplete intestinal metaplasia or small-bowel-like histologic findings and occasional gastric or fundic type of epithelium. However, 7E12H12 failed to react with small-bowel or gastric mucosa, confirming previous reports by us [8] and others [10], using both immunoperoxidase and immunofluorescence assays. These data suggest a histogenetic relation between specialized Barrett epithelium and colonic-type epithelium. The origin of specialized Barrett epithelial cells in the distal esophagus has been debated because no specific marker distinguishes Barrett epithelium. Metaplastic columnar epithelial cells of Barrett epithelium develop at higher frequency in the distal esophagus as a complication of reflux esophagitis [1, 2]. The cases that show the intestinal type histologic pattern tend to progress to adenocarcinoma at relatively higher frequency than other types of Barrett epithelia [11]. This type of Barrett epithelium has a colon epithelial phenotype. Several independent groups of investigators have reported a higher frequency of colonic neoplasia in patients with Barrett epithelium than in controls [12-14]. The reason for this clinical association is unknown. However, it is intriguing that a common epitope detected by the 7E12H12 monoclonal antibody is shared in these two tissues. Data from both the retrospective and prospective specimens show that the monoclonal antibody 7E12H12 reacts frequently (21 of 22) with the specialized type of benign Barrett epithelium but not with any other tissue systematically sampled from different sites of the normal esophagus (including the gastroesophageal junction); the cardia, fundus, body, and antrum of the stomach; the duodenum and jejunum. Further, the 7E12H12 monoclonal antibody reacted with all 12 cases of adenocarcinoma derived from Barrett epithelium; however, it did not react with the esophageal squamous cell carcinomas. In addition to its potential as a new tool to study the histogenesis of Barrett epithelium, the 7E12H12 monoclonal antibody may be valuable in the diagnosis of Barrett epithelium. Endoscopic identification of the transition zone can be difficult, especially when the gastric mucosa extends into the distal esophagus and the structure of the gastric mucosa blends with the esophageal mucosa, which occurs in active esophagitis. Although some aids for endoscopic identification of the gastroesophageal junction have been described [15, 16], the endoscopist may find that the area of diaphragmatic compression, the gastroesophageal junction, and the gastroesophageal muscular junction are not identical. Because the immunoreactive epitope recognized by the 7E12H12
Effect of a Nontechnical Skills Intervention on First-Year Student Registered Nurse Anesthetists' Skills During Crisis Simulation.
Simulation-based education provides a safe place for student registered nurse anesthetists to practice non-technical skills before entering the clinical arena. An anesthetist's lack of nontechnical skills contributes to adverse patient outcomes. The purpose of this study was to determine whether an educational intervention on nontechnical skills could improve the performance of nontechnical skills during anesthesia crisis simulation with a group of first-year student registered nurse anesthetists. Thirty-two first-year students volunteered for this quasi-experimental study. Each subject was videotaped and rated as he or she performed 6 simulated crisis scenarios: 3 scenarios before the intervention and 3 after the intervention. Findings revealed that the nontechnical skills mean posttest score was greater than pretest scores: t (df = 31) = 1.99, P = .028. The mean gain in scores for standardized nontechnical skills were significantly greater than those for standardized technical skills: t (df = 30) = 1.81, P = .04. In conclusion, a 3-hour educational intervention on nontechnical skills resulted in significant improvement. Nontechnical skills therefore are not acquired through experience, but rather through instruction. An educational intervention using the Anaesthetists' Non-Technical Skills system is a valuable tool in the measurement of nontechnical skills assessment of first-year student registered nurse anesthetists.
Towards a Computational Lexicon for Moroccan Darija: Words, Idioms, and Constructions
We explore the challenges of building a computational lexicon for Moroccan Darija (MD), an Arabic dialect spoken by over 32 million people worldwide that only recently has begun appearing frequently in written form. We raise the question of what belongs in such a lexicon and start by describing our work building traditional word-level lexicon entries with their English translations. We then discuss challenges in translating idiomatic MD phrases and the creation of multi-word expression (MWE) lexicon entries whose meanings could not be fully derived from the individual words. Finally, we describe our preliminary exploration of constructions for inclusion in an MD constructicon, initially eliciting translations of established English constructions, and then shifting to document, when spontaneously offered, variant renderings of native MD counterparts.
Autoimmune disorders: nail signs and therapeutic approaches.
Systemic sclerosis (scleroderma, SSc) is an autoimmune disease that targets small and medium-sized arteries and arterioles in the involved tissues, resulting in a fibrotic vasculopathy and tissue fibrosis. Several prominent nail and periungual changes are apparent in scleroderma. Examination of the nail fold capillaries can reveal the nature and extent of microvascular pathology in patients with collagen vascular disease and Raynaud's phenomenon. Among the complications stemming from Raynaud's phenomenon can be painful ischemic digital ulcers. This can be managed, and potentially prevented, through pharmacologic and nonpharmacologic means. Whereas oral calcium channel blockers remain the most convenient therapy, oral endothelin receptor antagonists and intravenous prostaglandins may be important therapeutic advances for ischemic digital vascular lesions.
Non-functional requirements for COTS software components
Commercially available software components come with built-in functionality that often offers the end-user more than they need. The fact that the end-user has little or no influence on a component's functionality has promoted non-functional requirements, which are receiving more attention than ever before. In this paper, we identify some of the problems encountered when non-functional requirements for COTS software components need to be defined.
Mechanisms of endocytosis.
Endocytic mechanisms control the lipid and protein composition of the plasma membrane, thereby regulating how cells interact with their environments. Here, we review what is known about mammalian endocytic mechanisms, with focus on the cellular proteins that control these events. We discuss the well-studied clathrin-mediated endocytic mechanisms and dissect endocytic pathways that proceed independently of clathrin. These clathrin-independent pathways include the CLIC/GEEC endocytic pathway, arf6-dependent endocytosis, flotillin-dependent endocytosis, macropinocytosis, circular dorsal ruffles, phagocytosis, and trans-endocytosis. We also critically review the role of caveolae and caveolin-1 in endocytosis. We highlight the roles of lipids, membrane curvature-modulating proteins, small G proteins, actin, and dynamin in endocytic pathways. We discuss the functional relevance of distinct endocytic pathways and emphasize the importance of studying these pathways to understand human disease processes.
Meta-Reinforcement Learning of Structured Exploration Strategies
Exploration is a fundamental challenge in reinforcement learning (RL). Many current exploration methods for deep RL use task-agnostic objectives, such as information gain or bonuses based on state visitation. However, many practical applications of RL involve learning more than a single task, and prior tasks can be used to inform how exploration should be performed in new tasks. In this work, we study how prior tasks can inform an agent about how to explore effectively in new situations. We introduce a novel gradient-based fast adaptation algorithm – model agnostic exploration with structured noise (MAESN) – to learn exploration strategies from prior experience. The prior experience is used both to initialize a policy and to acquire a latent exploration space that can inject structured stochasticity into a policy, producing exploration strategies that are informed by prior knowledge and are more effective than random action-space noise. We show that MAESN is more effective at learning exploration strategies when compared to prior meta-RL methods, RL without learned exploration strategies, and task-agnostic exploration methods. We evaluate our method on a variety of simulated tasks: locomotion with a wheeled robot, locomotion with a quadrupedal walker, and object manipulation.
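The central idea above, exploration noise that is structured through a per-task latent variable rather than injected independently at every step, can be sketched in a few lines of PyTorch. This is a conceptual toy, not the MAESN meta-training procedure; the network sizes, the placeholder objective, and the KL weight are assumptions for illustration.

# A latent z is sampled once per episode from a learned Gaussian and
# conditions every action in that episode, giving temporally coherent
# exploration; a KL term keeps the latent distribution near a unit prior.
import torch
import torch.nn as nn
import torch.distributions as D

class LatentConditionedPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, latent_dim=4):
        super().__init__()
        # learned per-task variational parameters of the exploration latent
        self.z_mean = nn.Parameter(torch.zeros(latent_dim))
        self.z_logstd = nn.Parameter(torch.zeros(latent_dim))
        self.net = nn.Sequential(nn.Linear(obs_dim + latent_dim, 64), nn.Tanh(),
                                 nn.Linear(64, act_dim))

    def sample_latent(self):
        return D.Normal(self.z_mean, self.z_logstd.exp()).rsample()

    def forward(self, obs, z):
        return self.net(torch.cat([obs, z.expand(obs.shape[0], -1)], dim=1))

    def kl_to_prior(self):
        q = D.Normal(self.z_mean, self.z_logstd.exp())
        p = D.Normal(torch.zeros_like(self.z_mean), torch.ones_like(self.z_mean))
        return D.kl_divergence(q, p).sum()

policy = LatentConditionedPolicy(obs_dim=8, act_dim=2)
z = policy.sample_latent()          # one draw per episode -> structured noise
obs = torch.randn(16, 8)            # a batch of observations from that episode
actions = policy(obs, z)
loss = actions.pow(2).mean() + 0.1 * policy.kl_to_prior()   # placeholder objective + KL
loss.backward()

In an actual meta-RL setting the placeholder objective would be replaced by a policy-gradient loss, and the latent distribution parameters would be adapted quickly to each new task.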
ASIA impairment scale conversion in traumatic SCI: is it related with the ability to walk? A descriptive comparison with functional ambulation outcome measures in 273 patients
Study design:Prospective multicenter longitudinal cohort study.Objectives:To determine the relationship between improvements of the American Spinal Injury Association/International Spinal Cord Society (ASIA/ISCoS) neurological standard scale (AIS) outcome measure and improvements of functional ambulatory outcome measures in patients with traumatic spinal cord injury (SCI).Setting:European multicenter study of human SCI (EM-SCI).Methods:In 273 eligible patients with traumatic SCI, acute (0–15 days) and chronic phase (6 or 12 months) AIS grades, timed up and go (TUG) test and 10-m walk test (10MWT) outcome measurements were analyzed. Subanalysis of those patients who did have AIS conversion was performed to assess its relation with functional ambulatory outcomes.Results:Studied population consisted of 161 acute phase AIS grade A patients; 37 grade B; 43 grade C and 32 acute phase AIS grade D patients. Forty-two patients (26%) converted from AIS grade A, 27 (73%) from grade B, 32 (75%) from grade C and five patients (16%) from AIS grade D. The frequencies of AIS conversions and functional ambulation recovery outcomes were significantly different (P<0.001) in patients with motor complete SCI. The ratio of patients with both recovery of ambulatory function and AIS conversion (n=101) differed significantly (P<0.001) between the acute phase AIS grade scores; AIS grade A (6/40 patients, 15%), B (9/27 patients, 33%), C (23/29 patients, 79%) and D (5/5 patients 100%).Conclusions:The AIS conversion outcome measure is poorly related to the ability to walk in traumatic SCI patients. Therefore, the authors recommend the use of functional ambulation recovery outcome measures in prognosticating the recovery of walking capacity and performance of patients with SCI.
A 6-bit 0.81-mW 700-MS/s SAR ADC With Sparkle-Code Correction, Resolution Enhancement, and Background Window Width Calibration
This paper presents a 6-bit high-speed successive approximation register analog-to-digital converter (ADC) with sparkle-code correction. By quantizing the comparator decision time (CDT), the sparkle codes are identified and corrected, reducing the error rate from 10^-4 to below 10^-9. Furthermore, CDT quantization enables 1-bit increase in the ADC resolution by setting the detection boundary to be ±0.25 LSB. Thus, only five comparison cycles are needed to reach 6 bits, leading to an increased ADC speed. A novel dither-based background calibration technique is devised to accurately control the CDT detection window size and ensure process, temperature, and voltage robustness. A prototype ADC in 40-nm CMOS achieves 35.3-dB signal-to-noise/distortion ratio (SNDR) and consumes 0.81 mW while sampling at 700 MS/s.
An effective hybridized classifier for breast cancer diagnosis
After lung cancer, breast cancer is known to be the greatest cause of death among females [20]. Medical practitioners are giving increasing importance to the improving effectiveness of machine learning approaches for breast cancer diagnosis. This paper proposes an effective hybridized classifier for breast cancer diagnosis. The classifier is built by combining an unsupervised artificial neural network (ANN) method, self-organizing maps (SOM), with a supervised classifier, stochastic gradient descent (SGD). A comparative analysis is also performed between the proposed approach and three supervised state-of-the-art machine learning techniques: decision trees (DT), random forests (RF), and support vector machines (SVM). Initially, the SGD method is used in isolation for the classification task, and then it performs the classification after being hybridized with the unsupervised ANN technique on the Wisconsin Breast Cancer Database (WBCD) [10]. The comparison is based upon classification accuracy, computed by generating a confusion matrix. To verify the consistency of the accuracy values, the classification task was repeated with the Internet Advertisements Dataset [11]. The results of the classification experiments using the hybridization of SOM with SGD are clearly superior to those of SGD in isolation. All accuracy values were computed using ten-fold cross validation on both datasets to further verify the classifier's performance.
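One rough way to realize the SOM-plus-SGD hybridization is to train a SOM on the inputs, append each sample's best-matching-unit (BMU) coordinates as extra features, and train an SGD classifier on the augmented data. The sketch below assumes the third-party minisom package; the map size, training length, and the exact way the SOM output is fed to SGD are illustrative choices, not necessarily the paper's configuration.

import numpy as np
from minisom import MiniSom
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# unsupervised stage: 6x6 SOM (note: fitting on all data before CV is a
# simplification; a stricter evaluation would fit the SOM inside each fold)
som = MiniSom(6, 6, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 5000)
bmu = np.array([som.winner(x) for x in X], dtype=float)   # (n_samples, 2) BMU coords

X_hybrid = np.hstack([X, bmu])   # original features + SOM-derived features

sgd_alone = SGDClassifier(max_iter=2000, random_state=0)
sgd_hybrid = SGDClassifier(max_iter=2000, random_state=0)
print("SGD alone :", cross_val_score(sgd_alone, X, y, cv=10).mean())
print("SOM + SGD :", cross_val_score(sgd_hybrid, X_hybrid, y, cv=10).mean())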
PV generation enhancement with a virtual inertia emulator to provide inertial response to the grid
With high-penetration levels of renewable generating sources being integrated into the existing electric power grid, conventional generators are being replaced and grid inertial response is deteriorating. This technical challenge is more severe with photovoltaic (PV) generation than with wind generation because PV generation systems cannot provide inertial response unless special countermeasures are adopted. To enhance the inertial response, this paper proposes to synthesize a virtual inertia emulator (VIE) by using a battery energy storage system (BESS) and a three-phase grid-tied inverter to simulate a traditional generator such that the inertial response can be appropriately enhanced without sacrificing energy efficiency. Control systems for the VIE are presented in this paper, along with simulation results from PSCAD and MATLAB to validate the effectiveness of the proposed scheme.
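A common way to picture such a virtual inertia emulator is through a swing-equation model in which the inverter injects power proportional to the rate of change of frequency, drawing that power from the BESS. The sketch below simulates this idea with assumed inertia and damping constants and a step load change; it is a conceptual illustration under those assumptions, not the control system presented in the paper.

```python
# Sketch: swing-equation model of a virtual inertia emulator (assumed parameters).
# The grid-tied inverter mimics a synchronous machine: 2H d(df)/dt = P_imbalance - D*df,
# and the BESS supplies the inertial power 2H * d(df)/dt during frequency excursions.
import numpy as np

H, D = 4.0, 20.0          # emulated inertia constant [s] and load damping [pu/pu] (assumptions)
f_nom, dt = 60.0, 1e-3    # nominal frequency [Hz], integration step [s]

df = 0.0                  # per-unit frequency deviation
freqs, p_bess = [], []
for ti in np.arange(0.0, 5.0, dt):
    p_imbalance = -0.1 if ti >= 0.5 else 0.0   # 0.1 pu load step at t = 0.5 s
    d_df = (p_imbalance - D * df) / (2.0 * H)  # swing equation
    p_bess.append(-2.0 * H * d_df)             # inertial power the BESS must deliver
    df += d_df * dt
    freqs.append(f_nom * (1.0 + df))

print(f"frequency nadir: {min(freqs):.3f} Hz, peak BESS power: {max(p_bess):.3f} pu")
```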
CS 229, Autumn 2010 Practice Midterm
1. The midterm will have about 5-6 long questions, and about 8-10 short questions. Space will be provided on the actual midterm for you to write your answers. 2. The midterm is meant to be educational, and as such some questions could be quite challenging. Use your time wisely to answer as much as you can! 3. For additional practice, please see CS 229 extra problem sets available at 1. [13 points] Generalized Linear Models Recall that generalized linear models assume that the response variable y (conditioned on x) is distributed according to a member of the exponential family: $p(y; \eta) = b(y)\exp(\eta T(y) - a(\eta))$, where $\eta = \theta^T x$. For this problem, we will assume $\eta \in \mathbb{R}$. (a) [10 points] Given a training set $\{(x^{(i)}, y^{(i)})\}_{i=1}^{m}$, the log-likelihood is given by $\ell(\theta) = \sum_{i=1}^{m} \log p(y^{(i)} \mid x^{(i)}; \theta)$. Give a set of conditions on $b(y)$, $T(y)$, and $a(\eta)$ which ensure that the log-likelihood is a concave function of $\theta$ (and thus has a unique maximum). Your conditions must be reasonable, and should be as weak as possible. (E.g., the answer "any $b(y)$, $T(y)$, and $a(\eta)$ so that $\ell(\theta)$ is concave" is not reasonable. Similarly, overly narrow conditions, including ones that apply only to specific GLMs, are also not reasonable.) (b) [3 points] When the response variable is distributed according to a Normal distribution (with unit variance), we have $b(y) = \frac{1}{\sqrt{2\pi}} e^{-y^2/2}$, $T(y) = y$, and $a(\eta) = \frac{\eta^2}{2}$. Verify that the condition(s) you gave in part (a) hold for this setting.
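One way to approach part (a), offered here only as a hedged sketch rather than an official solution, is to differentiate the log-likelihood twice with respect to $\theta$ and ask when the Hessian is negative semidefinite:

```latex
% Sketch of the Hessian computation (assuming T and a are twice differentiable).
\ell(\theta) = \sum_{i=1}^{m}\Big[\log b\big(y^{(i)}\big) + \theta^{T}x^{(i)}\,T\big(y^{(i)}\big) - a\big(\theta^{T}x^{(i)}\big)\Big],
\qquad
\nabla^{2}_{\theta}\,\ell(\theta) = -\sum_{i=1}^{m} a''\big(\theta^{T}x^{(i)}\big)\, x^{(i)}\,{x^{(i)}}^{T}.
```

Since the $\theta^{T}x^{(i)}\,T(y^{(i)})$ term is linear in $\theta$, it drops out of the Hessian, so under this sketch concavity holds whenever $a''(\eta) \ge 0$, i.e. $a$ is convex, with no condition needed on $b$ or $T$. For part (b), $a(\eta) = \eta^2/2$ gives $a''(\eta) = 1 \ge 0$.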
Baltica. A synopsis of Vendian-Permian palaeomagnetic data and their palaeotectonic implications
Torsvik, T.H., Smethurst, M.A., Van der Voo, R., Trench, A., Abrahamsen, N. and Halvorsen, E., 1992. Baltica. A synopsis of Vendian-Permian palaeomagnetic data and their palaeotectonic implications. Earth-Sci. Rev., 33: 133-152. In light of recent additions to the Palaeozoic palaeomagnetic data-base, particularly for the Ordovician era, a revised apparent polar wander (APW) path for Baltica has been constructed following a rigorous synthesis of all Late Precambrian-Permian data. The APW path is characterized by two prominent loops: firstly, a Late Precambrian-Cambrian loop probably relating to a rifting event and, secondly, a younger loop relating to a Mid-Silurian (Scandian) collision event. These features imply major changes in plate-tectonic configuration. Baltica probably represented an individual continental unit in Early Palaeozoic times and was positioned in high southerly latitudes in an "inverted" geographic orientation. In such a reconstruction Baltica was separated from the northern margin of Gondwana by the Tornquist Sea and from Laurentia by the Iapetus Ocean. The Tornquist Zone is thus interpreted as a passive or dextral transform margin during the early Palaeozoic. While undergoing counter-clockwise rotations (up to 1.6°/Ma), Baltica drifted northward through most of the Palaeozoic, except for a short period of southerly movement in Late Silurian-Early Devonian times after collision with Laurentia. Rapid movements in latitude (up to 9 cm/yr) are noted in Late Precambrian/early Palaeozoic times, and a significant decrease in velocities throughout Palaeozoic time probably reflects the progressive amalgamation of a larger continent by Early Devonian (Euramerica) and Permian (Pangea) times. The Tornquist Sea had a principal component of palaeo-east-west orientation. Hence it is difficult to be precise about the timing of when micro-continents such as Eastern Avalonia and the European Massifs ultimately collided along the southwestern margin of Baltica. These micro-continents are considered to have been peripheral to Gondwana (in high southerly latitudes) during the Early Ordovician. Eastern Avalonia clearly had rifted off Gondwana by Llanvirn-Llandeilo times and may have collided with Baltica during Late Ordovician times, although the presently available Silurian palaeomagnetic data from Eastern Avalonia may suggest collision in Late Silurian times. Across the Iapetus-facing margin of Baltica, Laurentia was situated in equatorial to southerly latitudes during most of the Lower Palaeozoic. These continents collided in Mid-Silurian times, i.e. a first collision between southwestern Norway and Greenland/Scotland which gave rise to the early Scandian Orogeny (425 Ma) in southwestern Norway, possibly followed by a later, but less dramatic, Scandian event in northern Norway at around 410 Ma. Since Baltica was geographically inverted in early Palaeozoic times, the collisional margin could not have been a margin that once rifted off Laurentia, as assumed in a number of plate-tectonic models. INTRODUCTION The Palaeozoic palaeocontinent of Baltica is bounded to the west by the Iapetus suture, to the north by the Trollfjord-Komagelv Fault Zone (Fig. 1), to the east by the Ural mountains, to the south by the Variscan-Hercynian suture, and to the southwest by the Tornquist Zone (Pegrum, 1984).
As such, Baltica includes parts or all of Norway, Sweden, Finland, Denmark, Poland, Russia, Estonia, Latvia, Lithuania and Ukraine. During Palaeozoic time, a number of orogenic events occurred along the margins of Baltica, ultimately leading to the incorporation of Baltica into the supercontinent Pangea. In this paper, we quantify the Palaeozoic drift history of Baltica using a rigorous synthesis of presently available palaeomagnetic data. In so doing, we attempt to further constrain the geotectonic history of the continent's margins. A newly-determined apparent polar wander (APW) path for Baltica will be described in detail. The generation of that APW path followed thorough re-evaluation of all published poles for Baltica made during the European Geotraverse Palaeomagnetic workshop in Luleå, Sweden (Pesonen and Van der Voo, 1991).
A Countermeasure against Spoofing and DoS Attacks based on Message Sequence and Temporary ID in CAN
The development of Information and Communications Technologies (ICT) has affected various fields, including the automotive industry. As a result, vehicle network protocols such as Controller Area Network (CAN), Local Interconnect Network (LIN), and FlexRay have been introduced. Although CAN is the most widely used vehicle network protocol, its security issues are not properly addressed. In this paper, we propose a security gateway, an improved version of existing CAN gateways, to protect CAN from spoofing and DoS attacks. We analyze the sequence of messages based on the driver's behavior to resist spoofing attacks, and utilize a temporary ID and the SipHash algorithm to resist DoS attacks. For verification of the proposed method, OMNeT++ is used. The suggested method shows a high detection rate and a low increase in traffic. Analysis of the frame drop rate during a DoS attack also shows that the suggested method can defend against DoS attacks.
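To make the temporary-ID idea concrete, here is a hedged sketch in which each CAN ID is replaced by a short keyed hash of the real ID and a rolling counter shared by the gateway and the ECUs. The paper uses SipHash; the sketch substitutes HMAC-SHA256 from the Python standard library purely because its API is well known, so the keyed-hash choice, key, and counter scheme are all assumptions.

```python
# Sketch: rolling temporary CAN IDs from a keyed hash (HMAC-SHA256 as a stand-in for SipHash).
import hmac
import hashlib

SHARED_KEY = b"16-byte-demo-key"   # provisioned to gateway and ECUs (assumption)

def temporary_id(real_id: int, counter: int, key: bytes = SHARED_KEY) -> int:
    """Map a real 11-bit CAN ID and a rolling counter to an 11-bit temporary ID."""
    msg = real_id.to_bytes(2, "big") + counter.to_bytes(4, "big")
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:2], "big") & 0x7FF   # keep 11 bits for a standard CAN frame

# Both sides advance the counter per message (or per time window), so a flooding
# attacker who replays an old temporary ID no longer matches the expected value.
counter = 0
for real_id in (0x101, 0x1A0, 0x101):
    print(f"real 0x{real_id:03X} -> temp 0x{temporary_id(real_id, counter):03X}")
    counter += 1
```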
Using Multiple Barometers to Detect the Floor Location of Smart Phones with Built-in Barometric Sensors for Indoor Positioning
Following the popularity of smart phones and the development of mobile Internet, the demands for accurate indoor positioning have grown rapidly in recent years. Previous indoor positioning methods focused on plane locations on a floor and did not provide accurate floor positioning. In this paper, we propose a method that uses multiple barometers as references for the floor positioning of smart phones with built-in barometric sensors. Some related studies used barometric formula to investigate the altitude of mobile devices and compared the altitude with the height of the floors in a building to obtain the floor number. These studies assume that the accurate height of each floor is known, which is not always the case. They also did not consider the difference in the barometric-pressure pattern at different floors, which may lead to errors in the altitude computation. Our method does not require knowledge of the accurate heights of buildings and stories. It is robust and less sensitive to factors such as temperature and humidity and considers the difference in the barometric-pressure change trends at different floors. We performed a series of experiments to validate the effectiveness of this method. The results are encouraging.
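As a rough illustration of the differential idea (an assumption-laden sketch, not the paper's algorithm): place one reference barometer per floor, correct the phone's sensor bias once at a known floor, and assign the phone to the floor whose live reference pressure is closest to the corrected reading. Because weather-driven pressure changes affect the phone and the references alike, they largely cancel without any building-height model.

```python
# Sketch: floor selection by comparing a phone's (bias-corrected) pressure with live
# per-floor reference barometers.

def calibrate_bias(phone_hpa: float, known_floor: int, reference_hpa: dict) -> float:
    """One-time calibration while the phone is on a known floor."""
    return phone_hpa - reference_hpa[known_floor]

def estimate_floor(phone_hpa: float, bias: float, reference_hpa: dict) -> int:
    """Pick the floor whose reference pressure is closest to the corrected phone reading."""
    return min(reference_hpa, key=lambda f: abs((phone_hpa - bias) - reference_hpa[f]))

references = {1: 1013.25, 2: 1012.80, 3: 1012.35, 4: 1011.90}  # illustrative (~0.45 hPa per floor)
bias = calibrate_bias(phone_hpa=1013.60, known_floor=1, reference_hpa=references)
print(estimate_floor(phone_hpa=1012.71, bias=bias, reference_hpa=references))  # -> 3
```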
A Compositional Distributional Model of Meaning
We propose a mathematical framework for a unification of the distributional theory of meaning in terms of vector space models, and a compositional theory for grammatical types, namely Lambek’s pregroup semantics. A key observation is that the monoidal category of (finite dimensional) vector spaces, linear maps and the tensor product, as well as any pregroup, are examples of compact closed categories. Since, by definition, a pregroup is a compact closed category with trivial morphisms, its compositional content is reflected within the compositional structure of any non-degenerate compact closed category. The (slightly refined) category of vector spaces enables us to compute the meaning of a compound well-typed sentence from the meaning of its constituents, by ‘lifting’ the type reduction mechanisms of pregroup semantics to the whole category. These sentence meanings live in a single space, independent of the grammatical structure of the sentence. Hence we can use the inner-product to compare meanings of arbitrary sentences. A variation of this procedure which involves constraining the scalars of the vector spaces to the semiring of Booleans results in the well-known Montague semantics.
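In vector space terms, the type reductions amount to tensor contractions. The small numpy sketch below is purely illustrative (random vectors and a random verb tensor, not data from any corpus): a transitive sentence's meaning is computed by contracting the subject and object vectors against the verb's order-3 tensor, so every sentence lands in the same sentence space and can be compared by inner product.

```python
# Sketch: transitive-sentence composition as tensor contraction (DisCoCat-style, toy sizes).
import numpy as np

rng = np.random.default_rng(0)
N, S = 4, 3                        # noun-space and sentence-space dimensions

subj = rng.normal(size=N)          # meaning vector of the subject noun
obj = rng.normal(size=N)           # meaning vector of the object noun
verb = rng.normal(size=(N, S, N))  # transitive verb lives in N (x) S (x) N

# The pregroup reduction n . (n^r s n^l) . n -> s becomes a double contraction:
sentence = np.einsum("i,isj,j->s", subj, verb, obj)

# All sentences land in the same S-dimensional space, so they can be compared directly:
other = np.einsum("i,isj,j->s", obj, verb, subj)   # e.g. the sentence with roles swapped
cos = sentence @ other / (np.linalg.norm(sentence) * np.linalg.norm(other))
print(sentence, other, f"cosine similarity: {cos:.3f}")
```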
A Single-Stage Single-Switch LED Driver Based on the Integrated SEPIC Circuit and Class-E Converter
A novel high-power-factor single-stage single-switch light-emitting diode (LED) driver for street lighting systems is proposed in this paper. By integrating the single-ended primary-inductor converter (SEPIC) power factor correction circuit and a Class-E resonant dc/dc converter, the proposed converter exhibits extreme simplicity and high reliability, as there is only one active power switch. The LED driver achieves nearly unity power factor by operating the SEPIC circuit in discontinuous conduction mode. With careful parameter design of the Class-E resonant converter, the proposed converter achieves soft-switching characteristics, which significantly reduce the switching losses and greatly improve the system efficiency. The operational principle, analytical results, and design considerations at 100 kHz are presented, and a 100-W laboratory prototype is built to verify the theoretical analysis, achieving an efficiency as high as 91.2% at full load under 110 VAC input.
Alcohol intake and risk of Parkinson's disease: a meta-analysis of observational studies.
BACKGROUND The association of alcohol intake with risk of Parkinson's disease remains unclear. METHODS Pertinent studies were identified in PubMed and EMBASE. The fixed-effect or random-effect model was selected based on heterogeneity. The dose-response relationship was assessed by restricted cubic splines. RESULTS We included 32 articles, involving 677,550 subjects (9994 cases). The smoking-adjusted risk of Parkinson's disease for the highest versus lowest level of alcohol intake was relative risk (RR) 0.78 (95% confidence interval [CI], 0.67-0.92) overall, 0.86 (95% CI, 0.75-0.995) in prospective studies, and 0.74 (95% CI, 0.58-0.96) in matched case-control studies. A significant association was found with beer (0.59; 95% CI, 0.39-0.90) but not with wine and liquor, and for males (0.65; 95% CI, 0.47-0.90) after a sensitivity analysis but not for females. The risk of Parkinson's disease decreased by 5% (0.95; 95% CI, 0.89-1.02) for every 1 drink/day increment in alcohol intake in a linear (P for nonlinearity = 0.85) dose-response manner. CONCLUSIONS Alcohol intake, especially beer, might be inversely associated with risk of Parkinson's disease.
Continuous subcutaneous insulin infusion: an approach to achieving normoglycaemia.
A study was performed to examine the feasibility of achieving long periods of near-normoglycaemia in patients with diabetes mellitus by giving a continuous subcutaneous infusion of insulin solution from a miniature, battery-driven, syringe pump. Twelve insulin-dependent diabetics had their insulin pumped through a subcutaneously implanted, fine nylon cannula; the basal infusion rate was electronically stepped up eightfold before meals. The blood glucose profile of these patients was closely monitored during the 24 hours of the subcutaneous infusion and compared with the profile on a control day, when the patients were managed with their usual subcutaneous insulin. Diet and exercise were standardised on both days. In five out of 14 studies the subcutaneous insulin infusion significantly lowered the mean blood glucose concentration without producing hypoglycaemic symptoms; in another six patients the mean blood glucose concentration was maintained. As assessed by the M value the level of control was statistically improved in six out of 14 studies by the infusion method and maintained in six other patients. To assess the effects of blood glucose control on diabetic microvascular disease it will be necessary to achieve long-term normoglycaemia in selected diabetics. The results of this preliminary study suggest that a continuous subcutaneous insulin infusion may be a means of maintaining physiological glucose concentrations in diabetics. Though several problems remain, for example in determining the rate of infusion, longer-term studies with the miniature infusion pumps are now needed.
Towards Parallel Spatial Query Processing for Big Spatial Data
In recent years, spatial applications have become more and more important in both scientific research and industry. Spatial query processing is the fundamental functioning component to support spatial applications. However, the state-of-the-art techniques of spatial query processing are facing significant challenges as the data expand and user accesses increase. In this paper we propose and implement a novel scheme (named VegaGiStore) to provide efficient spatial query processing over big spatial data and numerous concurrent user queries. Firstly, a geography-aware approach is proposed to organize spatial data in terms of geographic proximity, and this approach can achieve high aggregate I/O throughput. Secondly, in order to improve data retrieval efficiency, we design a two-tier distributed spatial index for efficient pruning of the search space. Thirdly, we propose an "indexing + MapReduce'' data processing architecture to improve the computation capability of spatial query. Performance evaluations of the real-deployed VegaGiStore system confirm its effectiveness.
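One common way to realize such geography-aware organization (offered as a generic sketch, not VegaGiStore's actual storage layout) is to sort records by a space-filling-curve key, for example a Morton/Z-order code that interleaves the bits of the quantized longitude and latitude, so that spatially close objects end up close together on disk and in the same partition.

```python
# Sketch: Morton (Z-order) keys for grouping spatially close records together.

def morton_key(lon: float, lat: float, bits: int = 16) -> int:
    """Interleave the bits of quantized longitude/latitude into one sortable key."""
    x = int((lon + 180.0) / 360.0 * ((1 << bits) - 1))
    y = int((lat + 90.0) / 180.0 * ((1 << bits) - 1))
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return key

points = [("museum", -73.9855, 40.7580), ("cafe", -73.9851, 40.7577), ("harbour", 151.2153, -33.8568)]
for name, lon, lat in sorted(points, key=lambda p: morton_key(p[1], p[2])):
    print(f"{morton_key(lon, lat):010d}  {name}")   # nearby points get adjacent keys
```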
Long-term safety and tolerability of rotigotine transdermal system in patients with early-stage idiopathic Parkinson's disease: a prospective, open-label extension study.
PURPOSE This prospective, open-label extension (SP702; NCT00594165) of a 6-month double-blind, randomized study investigated the long-term safety and tolerability of the rotigotine transdermal system in early Parkinson's disease (PD). METHODS Patients with early-stage idiopathic PD received transdermal rotigotine for up to 6 years at optimal dose (up to 16 mg/24h). Adjunctive levodopa was allowed. Primary outcomes included adverse events (AEs) and extent of rotigotine exposure. Other outcomes included time to levodopa, incidence of dyskinesias, and efficacy using the Unified Parkinson's Disease Rating Scale (UPDRS) II+III total score. RESULTS Of 217 patients entering the open-label study, 47% were still in the study upon closure; 24% withdrew because of AEs and 6% because of lack of efficacy. The median exposure to rotigotine was 1910 days (≈ 5 years, 3 months; range 1-2188 days). The most common AEs were somnolence (23% per patient-year), falls (17%), peripheral edema (14%), nausea (12%), and application site reactions (ASRs; 12%). 3% withdrew because of ASRs. 26% of patients did not initiate levodopa; of those who did, fewer than half started levodopa in the first year. Dyskinesias were reported by 25% of patients; the majority (83%) reported their first episode after initiating levodopa. Mean UPDRS II+III total scores remained below the double-blind baseline for up to 2 years of open-label treatment. CONCLUSION This is the longest interventional study of rotigotine conducted to date. Transdermal rotigotine was generally well tolerated for up to 6 years; the AEs reported were similar to those observed in shorter studies and led to discontinuation in only 24% of patients.
Microsporidia: emerging advances in understanding the basic biology of these unique organisms.
Microsporidia are long-known parasites of a wide variety of invertebrate and vertebrate hosts. The emergence of these obligate intracellular organisms as important opportunistic pathogens during the AIDS pandemic and the discovery of new species in humans renewed interest in this unique group of organisms. This review summarises recent advances in the field of molecular biology of microsporidia which (i) contributed to the understanding of the natural origin of human-infecting microsporidia, (ii) revealed unique genetic features of their dramatically reduced genome and (iii) resulted in the correction of their phylogenetic placement among eukaryotes from primitive protozoans to highly evolved organisms related to fungi. Microsporidia might serve as new intracellular model organisms in the future given that gene transfer systems will be developed.
The Semantics and Pragmatics of Presupposition
In this paper, we offer a novel analysis of presuppositions, paying particular attention to the interaction between the knowledge resources that are required to interpret them. The analysis has two main features. First, we capture an analogy between presuppositions, anaphora and scope ambiguity (cf. van der Sandt 1992), by utilizing semantic underspecification (cf. Reyle 1993). Second, resolving this underspecification requires reasoning about how the presupposition is rhetorically connected to the discourse context. This has several consequences. First, since pragmatic information plays a role in computing the rhetorical relation, it also constrains the interpretation of presuppositions. Our account therefore provides a formal framework for analysing problematic data, which require pragmatic reasoning. Second, binding presuppositions to the context via rhetorical links replaces accommodating them, in the sense of adding them to the context (cf. Lewis 1979). The treatment of presupposition is thus generalized and integrated into the discourse update procedure. We formalize this approach in SDRT (Asher 1993; Lascarides & Asher 1993), and demonstrate that it provides a rich framework for interpreting presuppositions, where semantic and pragmatic constraints are integrated. 1 INTRODUCTION The interpretation of a presupposition typically depends on the context in which it is made. Consider, for instance, sentences (1) vs. (2), adapted from van der Sandt (1992); the presupposition triggered by Jack's son (that Jack has a son) is implied by (1), but not by (2). (1) If baldness is hereditary, then Jack's son is bald. (2) If Jack has a son, then Jack's son is bald. The challenge for a formal semantic theory of presuppositions is to capture contextual effects such as these in an adequate manner. In particular, such a theory must account for why the presupposition in (1) projects from an embedded context, while the presupposition in (2) does not. This is a special case of the Projection Problem: if a compound sentence S is made up of constituent sentences S1, ..., Sn, each with presuppositions P1, ..., Pn, then what are the presuppositions of S? Many recent accounts of presupposition that offer solutions to the Projection Problem have exploited the dynamics in dynamic semantics (e.g. Beaver 1996; Geurts 1996; Heim 1982; van der Sandt 1992). In these frameworks, assertional meaning is a relation between an input context (or information state) and an output context. Presuppositions impose tests on the input context, which researchers have analysed in two ways: either the context must satisfy the presuppositions of the clause being interpreted (e.g. Beaver 1996; Heim 1982) or the presuppositions are anaphoric (e.g. van der Sandt 1992) and so must be bound to elements in the context. But clauses carrying presuppositions can be felicitous even when the context fails these tests (e.g. (1)). A special-purpose procedure known as accommodation is used to account for this (cf. Lewis 1979): if the context fails the presupposition test, then the presupposition is accommodated or added to it, provided various constraints are met (e.g. the result must be satisfiable). This combination of test and accommodation determines the projection of a presupposition. For example, in (1), the antecedent produces a context which fails the test imposed by the presupposition in the consequent (either satisfaction or binding). So it is accommodated.
Since it can be added to the context outside the scope of the conditional, it can project out from its embedding. In contrast, the antecedent in (2) ensures that the input context passes the presupposition test. So the presupposition is not accommodated, the input context is not changed, and the presupposition is not projected out from the conditional. Despite these successes, this approach has trouble with some simple predictions. Compare the following two dialogues (3abc) and (3abd): (3) a. A: Did you hear about John? b. B: No, what? c. A: He had an accident. A car hit him. d. A: He had an accident. ??The car hit him. The classic approach we just outlined would predict no difference between these two discourses and would find them both acceptable. But (3abd) is unacceptable. As it stands it lacks discourse coherence, while (3abc) does not; the presupposition of the car cannot be accommodated in (3abd). We will argue that the proper treatment of presuppositions in discourse, like a proper treatment of assertions, requires a notion of discourse coherence and must take into account the rhetorical function of both presupposed and asserted information. We will provide a formal account of presuppositions, which integrates constraints from compositional semantics and pragmatics in the required manner. We will start by examining van der Sandt's theory of presupposition satisfaction, since he offers the most detailed proposal concerning accommodation. We will highlight some difficulties, and offer a new proposal which attempts to overcome them. We will adopt van der Sandt's view that presuppositions are anaphoric, but give it some new twists. First, like other anaphoric expressions (e.g. anaphoric pronouns), presuppositions have an underspecified semantic content. Interpreting them in context involves resolving the underspecification. The second distinctive feature is the way we resolve underspecification. We assume a formal model of discourse semantics known as SDRT (e.g. Asher 1993; Lascarides & Asher 1993), where semantic underspecification in a proposition is resolved by reasoning about the way that proposition rhetorically connects to the discourse context. Thus, interpreting presuppositions becomes a part of discourse update in SDRT. This has three important consequences. The first concerns pragmatics. SDRT provides an explicit formal account of how semantic and pragmatic information interact when computing a rhetorical link between a proposition and its discourse context. This interaction will define the interpretation of presuppositions, and thus provide a richer source of constraints on presuppositions than standard accounts. This account of presuppositions will exploit pragmatic information over and above the clausal implicatures of the kind used in Gazdar's (1979) theory of presuppositions. We'll argue in section 2 that going beyond these implicatures is necessary to account for some of the data. The second consequence of interpreting presuppositions in SDRT concerns accommodation. In all previous dynamic theories of presupposition, accommodation amounts to adding, but not relating, the presupposed content to some accessible part of the context. This mechanism is peculiar to presuppositions; it does not feature in accounts of any other phenomena, including other anaphoric phenomena. In contrast, we model presuppositions entirely in terms of the SDRT discourse update procedure.
We replace the notion that presuppositions are added to the discourse context with the notion that they are rhetorically linked to it. Given that the theory of rhetorical structure in SDRT is used to model a wide range of linguistic phenomena when applied to assertions, it would be odd if presupposed information were to be entirely insensitive to rhetorical function. We will show that presupposed information is sensitive to rhetorical function and that the notion of accommodation should be replaced with a more constrained notion of discourse update. The third consequence concerns the compositional treatment of presupposition. Our approach affords what one could call a compositional treatment of presuppositions. The discourse semantics of SDRT is compositional upon discourse structure: the meaning of a discourse is a function of the meaning of its parts and how they are related to each other. In SDRT presuppositions, like assertions, generate underspecified but interpretable logical forms. The procedure for constructing the semantic representation of discourse takes these underspecified logical forms, resolves some of the underspecifications and relates them together by means of discourse relations representing their rhetorical function in the discourse. So presuppositions have a content that contributes to the content of the discourse as a whole. Indeed, presuppositions have no less a compositional treatment than assertions. Our discourse-based approach affords a wider perspective on presuppositions. Present dynamic accounts of presupposition have concentrated on phenomena like the Projection Problem. For us the Projection Problem amounts to an important special case, which applies to single-sentence discourses, of the more general 'discourse' problem: how do presuppositions triggered by elements of a multi-sentence discourse affect its structure and content? We aim to tackle this question here. And we claim that a rich notion of discourse structure, which utilizes rhetorical relations, is needed. While we believe that our discourse-based theory of presupposition is novel, we hasten to add that many authors on presupposition, like Beaver (1996) and van der Sandt (1992), would agree with us that the treatment of presupposition must be integrated with a richer notion of discourse structure and discourse update than is available in standard dynamic semantics (e.g. Kamp & Reyle's DRT, Dynamic Predicate Logic or Update Semantics), because they believe that pragmatic information constrains the interpretation of presuppositions. We wish to extend their theories with this requisite notion of discourse structure. 2 VAN DER SANDT'S DYNAMIC ACCOUNT AND ITS PROBLEMS Van der Sandt (1992) views presuppositions as anaphors with semantic content. He develops this view within the framework of DRT (Kamp & Reyle 1993), in order to exploit its constraints on anaphoric antecedents. A presupposition can bind t
Handwriting segmentation of unconstrained Oriya text
Segmentation of handwritten text into lines, words and characters is one of the important steps in a handwriting recognition system. For the segmentation of unconstrained Oriya handwritten text into individual characters, a water-reservoir-concept-based scheme is proposed in this paper. Here, at first, the text image is segmented into lines, then lines are segmented into individual words, and words are segmented into individual characters. For line segmentation the document is divided into vertical stripes. Analyzing the heights of the water reservoirs obtained from different components of the document, the width of a stripe is calculated. Stripe-wise horizontal histograms are then computed and the relationship of the peak-valley points of the histograms is used for line segmentation. Based on vertical projection profiles and structural features of Oriya characters, text lines are segmented into words. For character segmentation, at first, isolated and connected (touching) characters in a word are detected. Using structural, topological and water-reservoir-concept-based features, touching characters of the word are then segmented.
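The stripe-wise projection-profile step can be illustrated with a short numpy sketch (a generic simplification; the paper's actual scheme additionally uses reservoir heights to set the stripe width and handles touching characters):

```python
# Sketch: text-line segmentation from a horizontal projection profile within one vertical stripe.
import numpy as np

def line_boundaries(binary_stripe: np.ndarray, valley_thresh: int = 0) -> list:
    """binary_stripe: 2-D array with 1 = ink, 0 = background. Returns (top, bottom) row ranges."""
    profile = binary_stripe.sum(axis=1)          # ink pixels per row (horizontal histogram)
    in_line, lines, start = False, [], 0
    for row, count in enumerate(profile):
        if count > valley_thresh and not in_line:
            in_line, start = True, row           # entering a peak region (a text line)
        elif count <= valley_thresh and in_line:
            in_line = False
            lines.append((start, row))           # valley reached: close the line
    if in_line:
        lines.append((start, len(profile)))
    return lines

# Tiny synthetic stripe: two "lines" of ink separated by a blank valley.
stripe = np.zeros((12, 20), dtype=int)
stripe[1:4, 2:18] = 1
stripe[7:10, 3:17] = 1
print(line_boundaries(stripe))   # -> [(1, 4), (7, 10)]
```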
Goal-Directed Decision Making with Spiking Neurons
UNLABELLED Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. SIGNIFICANCE STATEMENT Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level.
Context-Aware Single Image Rain Removal
Rain removal from a single image is one of the challenging image denoising problems. In this paper, we present a learning-based framework for single image rain removal, which focuses on the learning of context information from an input image, and thus the rain patterns present in it can be automatically identified and removed. We approach the single image rain removal problem as the integration of image decomposition and self-learning processes. More precisely, our method first performs context-constrained image segmentation on the input image, and we learn dictionaries for the high-frequency components in different context categories via sparse coding for reconstruction purposes. For image regions with rain streaks, dictionaries of distinct context categories will share common atoms which correspond to the rain patterns. By utilizing PCA and SVM classifiers on the learned dictionaries, our framework aims at automatically identifying the common rain patterns present in them, and thus we can remove rain streaks as particular high-frequency components from the input image. Different from prior works on rain removal from images/videos which require image priors or training image data from multiple frames, our proposed self-learning approach only requires the input image itself, which would save much pre-training effort. Experimental results demonstrate the subjective and objective visual quality improvement with our proposed method.
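A rough sketch of the dictionary-learning step follows, using scikit-learn and a Gaussian blur to split off the high-frequency layer; the patch size, sparsity level, and use of random placeholder data are all assumptions for illustration, not the paper's settings.

```python
# Sketch: learn a sparse dictionary over high-frequency patches of an image region.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
image = rng.random((64, 64))                       # stand-in for one context region of the input

low_freq = gaussian_filter(image, sigma=2.0)       # image decomposition: LF + HF layers
high_freq = image - low_freq

patches = extract_patches_2d(high_freq, (8, 8), max_patches=500, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)                 # zero-mean patches, as is usual for sparse coding

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0).fit(X)
print("dictionary shape:", dico.components_.shape)  # learned atoms; rain streaks tend to appear
                                                    # as oriented atoms shared across regions
```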
Acute respiratory health effects of air pollution on children with asthma in US inner cities.
BACKGROUND Children with asthma in inner-city communities may be particularly vulnerable to adverse effects of air pollution because of their airways disease and exposure to relatively high levels of motor vehicle emissions. OBJECTIVE To investigate the association between fluctuations in outdoor air pollution and asthma morbidity among inner-city children with asthma. METHODS We analyzed data from 861 children with persistent asthma in 7 US urban communities who performed 2-week periods of twice-daily pulmonary function testing every 6 months for 2 years. Asthma symptom data were collected every 2 months. Daily pollution measurements were obtained from the Aerometric Information Retrieval System. The relationship of lung function and symptoms to fluctuations in pollutant concentrations was examined by using mixed models. RESULTS Almost all pollutant concentrations measured were below the National Ambient Air Quality Standards. In single-pollutant models, higher 5-day average concentrations of NO2, sulfur dioxide, and particles smaller than 2.5 μm were associated with significantly lower pulmonary function. Higher pollutant levels were independently associated with reduced lung function in a 3-pollutant model. Higher concentrations of NO2 and particles smaller than 2.5 μm were associated with asthma-related missed school days, and higher NO2 concentrations were associated with asthma symptoms. CONCLUSION Among inner-city children with asthma, short-term increases in air pollutant concentrations below the National Ambient Air Quality Standards were associated with adverse respiratory health effects. The associations with NO2 suggest that motor vehicle emissions may be causing excess morbidity in this population.
Surface from Scattered Points A Brief Survey of Recent Developments
The paper delivers a brief overview of recent developments in the field of surface reconstruction from scattered point data. The focus is on computational geometry methods, implicit surface interpolation techniques, and shape learning approaches.
A Survey On Video Forgery Detection
Digital forgeries, though not visibly identifiable to human perception, may alter or meddle with the underlying natural statistics of digital content. Tampering involves fiddling with video content in order to cause damage or make unauthorized alterations/modifications. Tampering detection in video is cumbersome compared to images, considering the properties of video. The impact of tampering needs to be studied, and the applied technique/method is used to establish factual information for legal proceedings in the judiciary. In this paper we give an overview of the prior literature and the challenges involved in video forgery detection, focusing on passive approaches.
Shifted Jacobi tau method for solving the space fractional diffusion equation
In this paper, approximation techniques based on the shifted Jacobi together with spectral tau technique are presented to solve a class of initial-boundary value problems for the fractional diffusion equations with variable coefficients on a finite domain. The fractional derivatives are described in the Caputo sense. The technique is derived by expanding the required approximate solution as the elements of shifted Jacobi polynomials. Using the operational matrix of the fractional derivative, the problem can be reduced to a set of linear algebraic equations. Numerical examples are included to demonstrate the validity and applicability of the technique and a comparison is made with the existing results to show that the proposed method is easy to implement and produce accurate results.
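In outline (a hedged sketch of the standard tau construction, with notation chosen here rather than taken from the paper), the solution is expanded in a double series of shifted Jacobi polynomials, the Caputo derivative of the basis is re-expressed through an operational matrix, and matching coefficients yields a linear algebraic system:

```latex
% Sketch of the spectral tau setup (notation is illustrative).
u_N(x,t) \approx \sum_{i=0}^{N}\sum_{j=0}^{M} c_{ij}\, P^{(\alpha,\beta)}_{L,i}(x)\, P^{(\alpha,\beta)}_{T,j}(t),
\qquad
{}^{C}\!D^{\nu}_{x}\,\mathbf{P}(x) \approx \mathbf{D}^{(\nu)}\,\mathbf{P}(x).
```

Here $\mathbf{P}(x)$ collects the shifted Jacobi basis on $[0,L]$, ${}^{C}\!D^{\nu}_{x}$ is the Caputo derivative of order $\nu$, and $\mathbf{D}^{(\nu)}$ is its operational matrix; inserting $u_N$ into the diffusion equation and projecting onto the basis gives a linear system $\mathbf{A}\mathbf{c} = \mathbf{b}$ for the coefficient vector $\mathbf{c}$.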
Supervised learning from incomplete data via an EM approach
Real-world learning tasks may involve high-dimensional data sets with arbitrary patterns of missing data. In this paper we present a framework based on maximum likelihood density estimation for learning from such data sets. We use mixture models for the density estimates and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster et al., 1977) in deriving a learning algorithm - EM is used both for the estimation of mixture components and for coping with missing data. The resulting algorithm is applicable to a wide range of supervised as well as unsupervised learning problems. Results from a classification benchmark - the iris data set - are presented.
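A minimal numpy sketch of the second use of EM (coping with missing data) for a diagonal-covariance Gaussian mixture follows; the updates use the standard expected-sufficient-statistics derivation and are not the paper's exact algorithm, and the initialization and toy data are assumptions.

```python
# Sketch: EM for a diagonal-covariance Gaussian mixture with missing entries (NaNs).
# Responsibilities use only observed dimensions; the M-step uses the expected
# sufficient statistics of the missing entries under the current parameters.
import numpy as np

def em_gmm_missing(X, K=2, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    N, D = X.shape
    obs = ~np.isnan(X)
    X0 = np.where(obs, X, 0.0)                    # zeros only appear where masked out below
    mu = X0[rng.choice(N, K, replace=False)] + rng.normal(scale=0.1, size=(K, D))  # crude init
    var = np.ones((K, D))
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: log responsibilities from observed dimensions only.
        log_r = np.zeros((N, K))
        for k in range(K):
            ll = -0.5 * (np.log(2 * np.pi * var[k]) + (X0 - mu[k]) ** 2 / var[k])
            log_r[:, k] = np.log(pi[k]) + np.where(obs, ll, 0.0).sum(axis=1)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: impute missing entries with their conditional expectations under component k.
        for k in range(K):
            x_hat = np.where(obs, X0, mu[k])                  # E[x | z=k] for missing dims
            nk = r[:, k].sum()
            mu_new = (r[:, [k]] * x_hat).sum(axis=0) / nk
            second = (x_hat - mu_new) ** 2 + np.where(obs, 0.0, var[k])
            var[k] = (r[:, [k]] * second).sum(axis=0) / nk + 1e-6
            mu[k] = mu_new
            pi[k] = nk / N
    return pi, mu, var

# Toy data: two clusters, roughly 30% of entries removed at random.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(4, 1, (100, 3))])
X[rng.random(X.shape) < 0.3] = np.nan
pi, mu, var = em_gmm_missing(X, K=2)
print(np.round(mu, 2))   # component means should land near 0 and 4 in each dimension
```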
Consolidated Tree Classifier Learning in a Car Insurance Fraud Detection Domain with Class Imbalance
This paper presents an analysis of the behaviour of Consolidated Trees, CT (classification trees induced from multiple subsamples but without loss of explaining capacity). We analyse how CT trees behave when used to solve a fraud detection problem in a car insurance company. This domain has two important characteristics: the explanation given for the classification made is critical to help in investigating the received reports or claims, and, besides, this is a typical example of a class imbalance problem due to its skewed class distribution. In the results presented in the paper, CT and C4.5 trees have been compared from the accuracy and structural stability (explaining capacity) points of view and, for both algorithms, the best class distribution has been searched. Due to the different costs associated with different error types (costs of investigating suspicious reports, etc.), a wider analysis of the error has also been done: precision/recall, ROC curve, etc.
Intervening before the onset of Type 1 diabetes: baseline data from the European Nicotinamide Diabetes Intervention Trial (ENDIT)
To set up a clinical trial to establish whether nicotinamide can prevent or delay clinical onset of Type 1 diabetes. The European Nicotinamide Diabetes Intervention Trial is a randomised, double-blind, placebo-controlled intervention trial undertaken in 18 European countries, Canada and the USA. Entry criteria were a first-degree family history of Type 1 diabetes, age 3–40 years, confirmed islet cell antibody (ICA) levels greater than or equal to 20 JDF units, and a non-diabetic OGTT; the study group was further characterised by intravenous glucose tolerance testing, measurement of antibodies to GAD, IA-2 and insulin and HLA class II genotyping. ICA screening was carried out in approximately 30,000 first-degree relatives. A total of 1004 individuals fulfilled ICA criteria for eligibility, and 552 (288 male) were randomised to treatment. Of these, 331 were aged less than 20 years (87% siblings and 13% offspring of the proband with diabetes) and 221 were 20 years of age or more (76% parents, 21% siblings and 3% offspring). Oral glucose tolerance was normal in 500 and impaired in 52 (9.4%), and first phase insulin response in the IVGTT was below the 10th centile in 34%. Additional islet autoantibodies were identified in 354 trial entrants. Diabetes-associated HLA class II haplotypes were found in 84% of the younger age group and 80% of the older group. The protective haplotype HLA-DQA1*0102-DQB1*0602 was found in 10% overall. ENDIT has shown that a trial of an intervention designed to halt or delay progression to Type 1 diabetes can be carried out on a multinational collaborative basis, as and when potentially safe and effective forms of intervention become available. Primary screening with biochemically defined autoantibodies will substantially reduce the number of lower risk individuals to be included in future intervention trials
Seismic risk mapping for Germany
The aim of this study is to assess and map the seismic risk for Germany, restricted to the expected losses of damage to residential buildings. There are several earthquake prone regions in the country which have produced Mw magnitudes above 6 and up to 6.7 corresponding to observed ground shaking intensity up to VIII–IX (EMS-98). Combined with the fact that some of the earthquake prone areas are densely populated and highly industrialized and where therefore the hazard coincides with high concentration of exposed assets, the damaging implications from earthquakes must be taken seriously. In this study a methodology is presented and pursued to calculate the seismic risk from (1) intensity based probabilistic seismic hazard, (2) vulnerability composition models, which are based on the distribution of residential buildings of various structural types in representative communities and (3) the distribution of assets in terms of replacement costs for residential buildings. The estimates of the risk are treated as primary economic losses due to structural damage to residential buildings. The obtained results are presented as maps of the damage and risk distributions. For a probability level of 90% non-exceedence in 50 years (corresponding to a mean return period of 475 years) the mean damage ratio is up to 20% and the risk up to hundreds of millions of euro in the most endangered communities. The developed models have been calibrated with observed data from several damaging earthquakes in Germany and the nearby area in the past 30 years.
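Schematically, the per-community risk combines the three ingredients above. The sketch below shows the bookkeeping with made-up intensity probabilities, mean-damage-ratio values, and replacement costs; all numbers are placeholders, not the study's data.

```python
# Sketch: expected loss = sum over intensities of P(intensity) * mean damage ratio * exposed value.

# Annual occurrence probabilities per EMS-98 intensity bin for one community (illustrative).
p_intensity = {"VI": 0.010, "VII": 0.004, "VIII": 0.001}

# Mean damage ratio of the community's building stock per intensity, as produced by a
# vulnerability composition model (placeholder values).
mean_damage_ratio = {"VI": 0.01, "VII": 0.05, "VIII": 0.20}

replacement_cost_eur = 2.5e9   # residential building stock of the community (placeholder)

expected_annual_loss = sum(
    p * mean_damage_ratio[i] * replacement_cost_eur for i, p in p_intensity.items()
)
print(f"expected annual loss: {expected_annual_loss / 1e6:.2f} M EUR")
```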
Lagrangian approach to the dynamics of dark matter-wave solitons
We analyze the dynamics of dark matter-wave solitons on a Thomas-Fermi cloud described by the Gross-Pitaevskii equation with radial symmetry. One-dimensional, ring, and spherical dark solitons are considered, and the evolution of their amplitudes, velocities, and centers is investigated by means of a Lagrangian approach. In the case of large-amplitude oscillations, higher-order corrections to the corresponding equations of motion for the soliton characteristics are shown to be important in order to accurately describe its dynamics. The numerical results are found to be in very good agreement with the analytical predictions.
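For reference (standard background rather than material specific to this paper), the dimensionless Gross-Pitaevskii equation with a radially symmetric trap and the Thomas-Fermi background density on which the dark solitons evolve can be written as:

```latex
% Dimensionless GPE with a harmonic trap, and the Thomas-Fermi background density.
i\,\partial_t \psi = -\tfrac{1}{2}\nabla^2 \psi + V(r)\,\psi + |\psi|^2 \psi,
\qquad V(r)=\tfrac{1}{2}\Omega^2 r^2,
\qquad n_{\mathrm{TF}}(r) = \max\!\big(\mu - V(r),\,0\big).
```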
Semiconductor losses in voltage source and current source IGBT converters based on analytical derivation
A crucial criterion for the dimensioning of three-phase PWM converters is the cooling of the power semiconductors and thus the determination of the power dissipated in the semiconductors at certain operating points and its maximum. Methods for the calculation and simulation of semiconductor losses in the most common voltage source and current source three-phase PWM converters are well known. Here, a complete analytical calculation of the power semiconductor losses for both converter types is presented; most parts are already known, while some parts are, as far as the authors know, developed here. Conduction losses as well as switching losses are included in the calculation using a simplified model based on power semiconductor data sheet information. This approach should benefit the prediction and further investigation of power semiconductor losses for both kinds of converters. Results of the calculation are shown. Dependencies of the semiconductor power losses on the type of converter, the operating point and the pulse width modulation are pointed out, showing the general behaviour of power losses for both converter types.
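A simplified datasheet-based loss estimate of the kind the paper formalizes might look as follows; these are generic textbook formulas with placeholder device parameters, whereas the paper's analytical expressions additionally account for the modulation scheme and operating point in closed form.

```python
# Sketch: conduction + switching losses of one IGBT from datasheet-style parameters.
import math

# Placeholder device data (typical datasheet quantities, not from any specific part).
V_ce0, r_ce = 0.9, 1.6e-3        # on-state threshold voltage [V] and slope resistance [ohm]
E_on, E_off = 2.5e-3, 3.0e-3     # switching energies [J] at reference conditions
V_ref, I_ref = 600.0, 450.0      # reference voltage [V] and current [A] for E_on / E_off

def conduction_loss(i_avg: float, i_rms: float) -> float:
    """P_cond = V_ce0 * I_avg + r_ce * I_rms^2 (piecewise-linear device model)."""
    return V_ce0 * i_avg + r_ce * i_rms ** 2

def switching_loss(f_sw: float, i_switched: float, v_dc: float) -> float:
    """Linear scaling of datasheet switching energies with current and dc-link voltage."""
    return f_sw * (E_on + E_off) * (i_switched / I_ref) * (v_dc / V_ref)

# Example operating point of a voltage-source-converter IGBT (sinusoidal PWM, placeholder values).
I_peak, m, cos_phi = 300.0, 0.9, 0.95
i_avg = I_peak * (1 / (2 * math.pi) + m * cos_phi / 8)            # textbook VSI averaging result
i_rms = I_peak * math.sqrt(1 / 8 + m * cos_phi / (3 * math.pi))
print(f"P_cond = {conduction_loss(i_avg, i_rms):.1f} W, "
      f"P_sw = {switching_loss(10e3, I_peak / math.pi, 600.0):.1f} W")
```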
Deontological and utilitarian inclinations in moral decision making: a process dissociation approach.
Dual-process theories of moral judgment suggest that responses to moral dilemmas are guided by two moral principles: the principle of deontology states that the morality of an action depends on the intrinsic nature of the action (e.g., harming others is wrong regardless of its consequences); the principle of utilitarianism implies that the morality of an action is determined by its consequences (e.g., harming others is acceptable if it increases the well-being of a greater number of people). Despite the proposed independence of the moral inclinations reflecting these principles, previous work has relied on operationalizations in which stronger inclinations of one kind imply weaker inclinations of the other kind. The current research applied Jacoby's (1991) process dissociation procedure to independently quantify the strength of deontological and utilitarian inclinations within individuals. Study 1 confirmed the usefulness of process dissociation for capturing individual differences in deontological and utilitarian inclinations, revealing positive correlations of both inclinations to moral identity. Moreover, deontological inclinations were uniquely related to empathic concern, perspective-taking, and religiosity, whereas utilitarian inclinations were uniquely related to need for cognition. Study 2 demonstrated that cognitive load selectively reduced utilitarian inclinations, with deontological inclinations being unaffected. In Study 3, a manipulation designed to enhance empathy increased deontological inclinations, with utilitarian inclinations being unaffected. These findings provide evidence for the independent contributions of deontological and utilitarian inclinations to moral judgments, resolving many theoretical ambiguities implied by previous research.
Mean value coordinates
Equation (1.1) expresses v0 as a convex combination of the neighbouring points v1, . . . , vk. In the simplest case k = 3, the weights λ1, λ2, λ3 are uniquely determined by (1.1) and (1.2) alone; they are the barycentric coordinates of v0 with respect to the triangle [v1, v2, v3], and they are positive. This motivates calling any set of non-negative weights satisfying (1.1–1.2) for general k, a set of coordinates for v0 with respect to v1, . . . , vk. There has long been an interest in generalizing barycentric coordinates to k-sided polygons with a view to possible multisided extensions of Bézier surfaces; see for example [8]. In this setting, one would normally be free to choose v1, . . . , vk to form a convex polygon but would need to allow v0 to be any point inside the polygon or on the polygon, i.e. on an edge or equal to a vertex. More recently, the need for such coordinates arose in methods for parameterization [2] and morphing [5], [6] of triangulations. Here the points v0, v1, . . . , vk will be vertices of a (planar) triangulation and so the point v0 will never lie on an edge of the polygon formed by v1, . . . , vk. If we require no particular properties of the coordinates, the problem is easily solved. Because v0 lies in the convex hull of v1, . . . , vk, there must exist at least one triangle T = [vi1 , vi2 , vi3 ] which contains v0, and so we can take λi1 , λi2 , λi3 to be the three barycentric coordinates of v0 with respect to T , and make the remaining coordinates zero. However, these coordinates depend randomly on the choice of triangle. An improvement is to take an average of such coordinates over certain covering triangles, as proposed in [2]. The resulting coordinates depend continuously on v0, v1, . . . , vk, yet still not smoothly. The
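Mean value coordinates themselves, the construction this introduction leads up to, admit a compact implementation. The sketch below follows the standard tangent-of-half-angle formula and is offered as a generic illustration rather than code from the paper; the test polygon and point are assumptions.

```python
# Sketch: mean value coordinates of a point v0 with respect to polygon vertices v1..vk.
import numpy as np

def mean_value_coordinates(v0, vertices):
    """vertices: (k, 2) array of polygon vertices in order; v0 strictly inside the polygon."""
    v0 = np.asarray(v0, dtype=float)
    d = np.asarray(vertices, dtype=float) - v0          # spokes v_i - v_0
    r = np.linalg.norm(d, axis=1)                       # ||v_i - v_0||
    k = len(d)

    def signed_angle(a, b):
        # angle between consecutive spokes, measured with sign via the 2-D cross product
        return np.arctan2(a[0] * b[1] - a[1] * b[0], a @ b)

    alphas = np.array([signed_angle(d[i], d[(i + 1) % k]) for i in range(k)])
    w = np.array([
        (np.tan(alphas[i - 1] / 2.0) + np.tan(alphas[i] / 2.0)) / r[i]   # w_i of the MVC formula
        for i in range(k)
    ])
    return w / w.sum()

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
lam = mean_value_coordinates([0.25, 0.25], square)
print(lam, lam @ square)    # weights sum to 1 and reproduce the point (0.25, 0.25)
```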
An Introduction to MCMC for Machine Learning
The purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing an introduction to the remaining papers of this special issue. Lastly, it discusses new and interesting research horizons.
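As a minimal companion to the building blocks reviewed there, here is a random-walk Metropolis sampler targeting a simple one-dimensional density; it is a generic textbook sketch, not code from the paper, and the target density and tuning constants are assumptions.

```python
# Sketch: random-walk Metropolis-Hastings sampling from an unnormalized target density.
import numpy as np

def log_target(x: float) -> float:
    """Unnormalized log-density: a two-component Gaussian mixture."""
    return np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)

def metropolis(n_samples: int = 20000, step: float = 1.0, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    x, samples = 0.0, np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.normal()                 # symmetric random-walk proposal
        if np.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal                                   # accept with prob min(1, pi(x')/pi(x))
        samples[i] = x
    return samples

draws = metropolis()
print(f"mean = {draws.mean():.2f}, 2.5%/97.5% quantiles = {np.quantile(draws, [0.025, 0.975])}")
```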