title | abstract
---|---
Securing virtual machine orchestration with blockchains | Blockchain technology is gaining momentum because of its possible applications beyond cryptocurrency. Indeed, a blockchain, as a decentralized system based on a distributed digital ledger, can be used to securely manage any kind of asset, yielding a system that is independent of any authorization entity. In this paper, we briefly present blockchain technology and our work in progress, the VMOA blockchain, which secures virtual machine orchestration operations for cloud computing and network functions virtualization systems. Using tutorial examples, we describe our design choices and draw implementation plans. |
Lung biopsy using harmonic scalpel: a randomised single institute study. | OBJECTIVE
Applicability of harmonic scalpel in lung biopsy was investigated in a randomised single institute study.
METHODS
Safety of the method, morbidity, drainage duration and in-hospital stay were compared in two randomised groups of patients in which either an ultrasonic harmonic scalpel (n = 20) or an endostapler (n = 20) was used for pulmonary biopsies during VATS.
RESULTS
A significant advantage of 16 min in average operation time was found in favour of the harmonic scalpel (30.75 vs. 46.9 min). There were no differences in average drainage duration (40.2 vs. 30.6 h) or pleural fluid volume (258 vs. 232 ml). Minor complication rates (3 vs. 3) were identical, and in-hospital stays (7.6 vs. 7.2 days) were also similar.
CONCLUSIONS
Overall, the vibration transmission method was shown not to be inferior to the standard endostapling technique. This safe new method offers an alternative technique for peripheral lung biopsy. |
Diffusion-weighted imaging and cognition in the leukoaraiosis and disability in the elderly study. | BACKGROUND AND PURPOSE
The mechanisms by which leukoaraiosis impacts clinical and cognitive functions are not yet fully understood. We hypothesized that ultrastructural abnormalities of the normal-appearing brain tissue (NABT), assessed by diffusion-weighted imaging, play a major and independent role.
METHODS
In addition to a comprehensive clinical, neuropsychologic, and imaging work-up, diffusion-weighted imaging was performed in 340 participants of the multicenter leukoaraiosis and disability study examining the impact of white matter hyperintensities (WMH) on 65- to 85-year-old individuals without previous disability. WMH severity was rated according to the Fazekas score. Multivariate regression analysis served to assess correlations of histogram metrics of the apparent diffusion coefficient (ADC) of whole-brain tissue, NABT, and of the mean ADC of WMH with cognitive functions.
RESULTS
Increasing WMH scores were associated with a higher frequency of hypertension, a greater WMH volume, more brain atrophy, worse overall cognitive performance, and changes in ADC. We found strong associations between the peak height of the ADC histogram of whole-brain tissue and NABT with memory performance, executive dysfunction, and speed, which remained after adjustment for WMH lesion volume and brain atrophy and were consistent among centers. No such association was seen with the mean ADC of WMH.
CONCLUSIONS
Ultrastructural abnormalities of NABT increase with WMH severity and have a strong and independent effect on cognitive functions, whereas diffusion-weighted imaging metrics within WMH have no direct impact. This should be considered when defining outcome measures for trials that attempt to ameliorate the consequences of WMH progression. |
Value Creation by Automation and Robotics in Construction: An Introduction | In the decades to come, building production will concentrate in the metropolitan centres of the world due to the migration of the world's population to the major cities. Improving the construction process in densely populated inner cities will be the task of the future. This task centres on performance management, construction engineering and construction management. New developments being discussed in this field are new design strategies, human-machine technologies, employee safety, progress monitoring, distributed production information and Personal Digital Assistants (PDAs). |
Development of student teachers' understanding of conditions for life, cycles of matter and energy flow in an aquatic ecosystem - a case study | |
A pilot study of the use of EEG-based synchronized Transcranial Magnetic Stimulation (sTMS) for treatment of Major Depression | BACKGROUND
Repetitive Transcranial Magnetic Stimulation (rTMS) is an effective treatment for Major Depressive Disorder (MDD), and is based upon delivery of focal high-energy pulses of electromagnetic stimulation. We postulated that delivery of rTMS at the subject's individual alpha frequency (synchronized TMS, or sTMS) would achieve efficacy with lower energy of stimulation. We developed a device that rotates neodymium cylindrical magnets at three locations along the midline above the subject's scalp to impart low-energy, sinusoidal-waveform magnetic brain stimulation over a broad area, and performed this efficacy study.
METHOD
Fifty-two subjects with MDD were enrolled in a randomized, sham-controlled, double-blind treatment study (Trial Registration: NCT01683019). Forty-six subjects were included in the final analysis. Most subjects received concurrent antidepressant medications that remained unchanged during the study. Subjects were randomized to three treatment groups: 1) active sTMS with a fixed frequency at the subject's alpha frequency; 2) active sTMS with a random stimulus frequency that varied between 8 Hz and 13 Hz; and 3) sham sTMS. Twenty half-hour sTMS sessions were administered 5 days per week for 4 weeks.
RESULTS
Subjects with either fixed or random frequency active sTMS had statistically significantly greater percentage reduction in depression severity compared to sham (48.5% vs. 19.3%, respectively; p = 0.001). No significant difference was found between fixed and random groups (p = 0.30). No significant side effects were reported.
CONCLUSIONS
These results suggest that sTMS may be an effective treatment for MDD. |
Channel Agnostic End-to-End Learning Based Communication Systems with Conditional GAN | In this article, we use deep neural networks (DNNs) to develop an end-to-end wireless communication system, in which DNNs are employed for all signal-related functionalities, including encoding, decoding, modulation, and equalization. However, an accurate instantaneous channel transfer function, i.e., the channel state information (CSI), is necessary to compute the gradients for training the DNN representing the end-to-end system. In many communication systems, the channel transfer function is hard to obtain in advance and varies with time and location. In this article, this constraint is relaxed by developing a channel-agnostic end-to-end system that does not rely on any prior information about the channel. We use a conditional generative adversarial net (GAN) to represent the channel effects, where the encoded signal of the transmitter serves as the conditioning information. In addition, in order to obtain accurate channel state information for signal detection at the receiver, the received signal corresponding to the pilot data is added as a part of the conditioning information. From the simulation results, the proposed method is effective on additive white Gaussian noise (AWGN) and Rayleigh fading channels, which opens a new door for building data-driven communication systems. |
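A minimal PyTorch sketch of the conditioning idea in the abstract above: a generator produces a fake received signal conditioned on the transmitter's encoded symbols and on received pilots, and a discriminator judges (received, conditioning) pairs. All layer sizes, dimensions, and the stand-in "real channel" are illustrative assumptions, not the authors' architecture.

```python
# Conditional-GAN channel model sketch (illustrative only; sizes are assumed).
import torch
import torch.nn as nn

class ChannelGenerator(nn.Module):
    def __init__(self, sym_dim=16, pilot_dim=16, noise_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sym_dim + pilot_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, sym_dim),     # fake "received" signal
        )

    def forward(self, encoded, pilots, z):
        return self.net(torch.cat([encoded, pilots, z], dim=-1))

class ChannelDiscriminator(nn.Module):
    def __init__(self, sym_dim=16, pilot_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sym_dim + sym_dim + pilot_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),           # real/fake logit
        )

    def forward(self, received, encoded, pilots):
        return self.net(torch.cat([received, encoded, pilots], dim=-1))

# One illustrative pair of GAN losses on a batch (optimizer steps omitted):
G, D = ChannelGenerator(), ChannelDiscriminator()
bce = nn.BCEWithLogitsLoss()
encoded = torch.randn(32, 16)                    # transmitter output (stand-in)
pilots = torch.randn(32, 16)                     # received pilot block (stand-in)
real_rx = encoded + 0.1 * torch.randn(32, 16)    # stand-in for a real channel
fake_rx = G(encoded, pilots, torch.randn(32, 8))
d_loss = bce(D(real_rx, encoded, pilots), torch.ones(32, 1)) + \
         bce(D(fake_rx.detach(), encoded, pilots), torch.zeros(32, 1))
g_loss = bce(D(fake_rx, encoded, pilots), torch.ones(32, 1))
```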
The one-time pad revisited | The one-time pad, the mother of all encryption schemes, is well known to be information-theoretically secure, in contrast to most encryption schemes used in practice, which are at most computationally secure. In this paper, we focus on another, completely different aspect in which the one-time pad is superior to normal encryption, and which surfaces only when the receiver (not only the eavesdropper) is considered potentially dishonest, as can be the case in a larger protocol context in which encryption is used as a sub-protocol. For example, such a dishonest receiver (who is, say, coerced by the eavesdropper) can in normal encryption verifiably leak the message to the eavesdropper by revealing the secret key. While this leakage feature can provably not be avoided completely, it is more limited if the one-time pad is used. We use the constructive cryptography framework to make these statements precise. |
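For concreteness, a few lines of Python showing the one-time pad itself and the equivocation property the paper's argument turns on: an OTP ciphertext can be "opened" to any same-length message by presenting a suitably chosen fake key, so a revealed key is not verifiable evidence of what was actually sent.

```python
# One-time pad: XOR with a uniformly random key used exactly once.
import os

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    assert len(key) == len(message), "key must be as long as the message"
    return bytes(m ^ k for m, k in zip(message, key))

otp_decrypt = otp_encrypt  # XOR is its own inverse

msg = b"attack at dawn"
key = os.urandom(len(msg))            # fresh uniform key, never reused
ct = otp_encrypt(msg, key)
assert otp_decrypt(ct, key) == msg

# Equivocation: the receiver can "open" ct to any same-length message by
# presenting fake_key = ct XOR fake_msg, so a revealed key proves nothing.
fake = b"retreat at ten"
fake_key = bytes(c ^ f for c, f in zip(ct, fake))
assert otp_decrypt(ct, fake_key) == fake
```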
A wideband CMOS LNA design approach | A methodology for designing wideband CMOS LNAs is proposed. This is an extension of the existing narrowband power-constrained simultaneous noise-input matching (PCSNIM) LNA design technique. To demonstrate the application of the proposed method, two wideband single-ended cascode LNAs are designed in a 0.18 μm CMOS technology to operate in the frequency bands of 1.5-2.5 GHz and 3.2-4.8 GHz, respectively. Based on simulation results, these LNAs achieve: maximum forward gain of 15.4 dB (17.9 dB) with 3 dB bandwidth of 1.6 GHz (2.7 GHz); maximum in-band noise figure of 0.9 dB (1.6 dB); worst-case 1 dB compression point of -13 dBm (-15 dBm); IIP3 of better than -2.5 dBm (-4.5 dBm) over the entire bandwidth. The LNAs draw 7.5 mA (8.8 mA) from a 1.5 V supply. |
Neural Models for Key Phrase Detection and Question Generation | We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-to-sequence question-generation model with a copy mechanism. Empirically, our key-phrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This two-stage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings. |
Classroom Misbehavior in the Eyes of Students: A Qualitative Study | Using individual interviews, this study investigated perceptions of classroom misbehaviors among secondary school students in Hong Kong (N = 18). Nineteen categories of classroom misbehaviors were identified, with talking out of turn, disrespecting teacher, and doing something in private being most frequently mentioned. Findings revealed that students tended to perceive misbehaviors as those actions inappropriate in the classroom settings and even disrupting teachers' teaching and other students' learning. Among various misbehaviors, talking out of turn and disrespecting teacher were seen as the most disruptive and unacceptable. These misbehaviors were unacceptable because they disturbed teaching and learning, and violated the values of respect, conformity, and obedience in the teacher-student relationship within the classroom. The frequency and intensity of misbehaviors would escalate if students found it fun, no punishment for such misbehaviors, or teachers were not authoritative enough in controlling the situations. Implications for further research and classroom management are discussed. |
Blood Flow Restriction Training: Implementation into Clinical Practice | To improve muscular strength and hypertrophy the American College of Sports Medicine recommends moderate to high load resistance training. However, use of moderate to high loads are often not feasible in clinical populations. Therefore, the emergence of low load (LL) blood flow restriction (BFR) training as a rehabilitation tool for clinical populations is becoming popular. Although the majority of research on LL-BFR training has examined healthy populations, clinical applications are emerging. Overall, it appears BFR training is a safe and effective tool for rehabilitation. However, additional research is needed prior to widespread application. |
French literature, thought and culture in the nineteenth century : a material world : essays in honour of D.G. Charlton | List of Illustrations - Preface - Notes on the Contributors - A Tribute to Donald Geoffrey Charlton and a Bibliography of his Writings C.W.Thompson - Romanticism and the Material World: Mind, Nature and Analogy C.Crossley - Materialism in Nineteenth-Century France M.Kelly - Managing Democratic Change: the Politics of Materialism in the Work of Alexis de Tocqueville D.Hanley - Producing, Retailing, Consuming: France 1830-70 R.Magraw - Things, Distinction and Decay in Nineteenth-Century French Literature B.Rigby - From Neo-Classical to Romantic Aesthetics: The Status of the Material World in Nineteenth-Century French Drama W.D.Howarth - Towards the Materiality of the Sign: Aesthetics and Poetics in Nineteenth-Century France D.Scott - No Object too Humble? Still Life Painting in French Art Criticism during the Second Empire J.Kearns - Unstable Objects: acropole and barbarie in Rimbaud's Villes I M.Treharne - Immaterial Views? Science, Intransigence and the Female Spectator of Modern French Art in 1879 A.Callen - Object Choices: Taste and Fetichism in Flaubert's L'Education sentimentale D.Knight - Bibliography - Index |
IM2GPS: estimating geographic information from a single image | Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally - on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earth's surface. We quantitatively evaluate our approach in several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). We show that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban/rural classification. |
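A toy sketch of the data-driven scene-matching idea, assuming a database of precomputed image descriptors with GPS tags (random stand-ins here): the query's location estimate is the weighted distribution of its nearest neighbours' GPS coordinates. The feature type, neighbour count, and weighting scheme are assumptions, not the paper's exact recipe.

```python
# Nearest-neighbour geolocation sketch: the location "distribution" is the
# set of neighbour GPS tags with distance-decayed weights.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
db_feats = rng.normal(size=(10_000, 512))        # e.g. scene descriptors
db_gps = rng.uniform([-90, -180], [90, 180], size=(10_000, 2))

index = NearestNeighbors(n_neighbors=120).fit(db_feats)

def geolocate(query_feat):
    dist, idx = index.kneighbors(query_feat[None, :])
    w = np.exp(-dist[0] / dist[0].mean())        # closer matches vote harder
    w /= w.sum()
    return db_gps[idx[0]], w                     # weighted location samples

coords, w = geolocate(rng.normal(size=512))
mean_est = (coords * w[:, None]).sum(axis=0)     # crude single-point summary
```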
Flood Mapping and Flood Dynamics of the Mekong Delta: ENVISAT-ASAR-WSM Based Time Series Analyses | Satellite remote sensing is a valuable tool for monitoring flooding. Microwave sensors are especially appropriate instruments, as they allow the differentiation of inundated from non-inundated areas, regardless of levels of solar illumination or frequency of cloud cover in regions experiencing substantial rainy seasons. In the current study we present the longest synthetic aperture radar-based time series of flood and inundation information derived for the Mekong Delta that has been analyzed for this region so far. We employed a total of 60 Envisat ASAR Wide Swath Mode data sets at a spatial resolution of 150 meters acquired during the years 2007-2011 to facilitate a thorough understanding of the flood regime in the Mekong Delta. The Mekong Delta in southern Vietnam comprises 13 provinces and is home to 18 million inhabitants. Extreme dry seasons from late December to May and wet seasons from June to December characterize people's rural life. In this study, we show which areas of the delta are frequently affected by floods and which regions remain dry all year round. Furthermore, we present which areas are flooded at which frequency and elucidate the patterns of flood progression over the course of the rainy season. In this context, we also examine the impact of dykes on floodwater emergence and assess the relationship between retrieved flood occurrence patterns and land use. In addition, the advantages and shortcomings of ENVISAT ASAR-WSM based flood mapping are discussed. The results contribute to a comprehensive understanding of Mekong Delta flood dynamics in an environment where the flow regime is influenced by the Mekong River, overland water-flow, anthropogenic floodwater control, as well as the tides. |
Top-Down Tree Structured Text Generation | Text generation is a fundamental building block in natural language processing tasks. Existing sequential models perform autoregression directly over the text sequence and have difficulty generating long sentences with complex structures. This paper advocates a simple approach that treats sentence generation as a tree-generation task. By explicitly modelling syntactic structures in a constituent syntactic tree and performing top-down, breadth-first tree generation, our model fixes dependencies appropriately and performs implicit global planning. This is in contrast to the transition-based depth-first generation process, which has difficulty dealing with incomplete texts when parsing and does not incorporate future contexts in planning. Our preliminary results on two generation tasks and one parsing task demonstrate that this is an effective strategy. |
Online Adaptation of Convolutional Neural Networks for Video Object Segmentation | We tackle the task of semi-supervised video object segmentation, i.e. segmenting the pixels belonging to an object in a video using the ground truth pixel mask for the first frame. We build on the recently introduced one-shot video object segmentation (OSVOS) approach which uses a pretrained network and fine-tunes it on the first frame. While achieving impressive performance, at test time OSVOS uses the fine-tuned network in unchanged form and is not able to adapt to large changes in object appearance. To overcome this limitation, we propose Online Adaptive Video Object Segmentation (OnAVOS) which updates the network online using training examples selected based on the confidence of the network and the spatial configuration. Additionally, we add a pretraining step based on objectness, which is learned on PASCAL. Our experiments show that both extensions are highly effective and improve the state of the art on DAVIS to an intersection-over-union score of 85.7%. |
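A sketch of the online-adaptation loop in PyTorch, under the assumption of a generic per-pixel segmentation network `net` producing binary foreground logits; the thresholds and step count are illustrative, and the real OnAVOS selection rule also uses the spatial configuration, which is omitted here.

```python
# Online adaptation sketch: confident foreground pixels become new positives,
# confident background pixels become negatives, uncertain pixels are ignored,
# then a few gradient steps are taken on the current frame.
import torch
import torch.nn.functional as F

def online_adapt(net, optimizer, frame, pos_thresh=0.97, neg_thresh=0.05,
                 steps=3):
    net.eval()
    with torch.no_grad():
        prob = torch.sigmoid(net(frame))          # (1, 1, H, W) foreground prob
    pos = prob > pos_thresh                       # confident foreground
    neg = prob < neg_thresh                       # confident background
    mask = (pos | neg).float()                    # train only on selected pixels
    target = pos.float()
    net.train()
    for _ in range(steps):
        optimizer.zero_grad()
        logits = net(frame)
        loss = F.binary_cross_entropy_with_logits(logits, target, weight=mask)
        loss.backward()
        optimizer.step()
```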
Face recognition using PCA and SVM | Automatic recognition of people has received much attention during recent years due to its many applications in different fields such as law enforcement, security applications, and video indexing. Face recognition is an important and very challenging technique for automatic people recognition. To date, there is no technique that provides a robust solution to all situations and all the different applications that face recognition may encounter. In general, the performance of a face recognition system is determined by how exactly the feature vectors are extracted and how accurately they are classified; it is therefore necessary to look closely at the feature extractor and the classifier. In this paper, Principal Component Analysis (PCA) plays the key role in feature extraction, and SVMs are used to tackle the face recognition problem. Support Vector Machines (SVMs) have recently been proposed as a new classifier for pattern recognition. We illustrate the potential of SVMs on the Cambridge ORL Face database, which consists of 400 images of 40 individuals, containing quite a high degree of variability in expression, pose, and facial details. The SVMs used include the Linear (LSVM), Polynomial (PSVM), and Radial Basis Function (RBFSVM) SVMs. We provide experimental evidence showing that the Polynomial and Radial Basis Function (RBF) SVMs perform better than the Linear SVM on the ORL Face Dataset when both are used with one-against-all classification. We also compare the SVM-based recognition with the standard eigenface approach using the Multi-Layer Perceptron (MLP) classification criterion. |
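The pipeline in this abstract maps almost directly onto scikit-learn, whose bundled Olivetti faces are the ORL database mentioned above. A sketch with assumed hyperparameters (component count, default kernel settings):

```python
# PCA features + one-vs-rest SVMs with linear/polynomial/RBF kernels on the
# Olivetti (ORL) faces: 400 images, 40 subjects.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_olivetti_faces()
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target,
    random_state=0)

for kernel in ("linear", "poly", "rbf"):
    clf = make_pipeline(PCA(n_components=60, whiten=True),
                        OneVsRestClassifier(SVC(kernel=kernel)))
    clf.fit(X_tr, y_tr)
    print(kernel, clf.score(X_te, y_te))
```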
Building high-performance smartphones via non-volatile memory: The swap approach | Smartphones are becoming increasingly high-performance, with advances in mobile processors and larger main memories to support feature-rich applications. However, the storage subsystem has always been a prohibitive factor that slows the pace of reaching even higher performance while maintaining a good user experience. Although today's smartphones are equipped with larger-than-ever main memories, they consume more energy and still run out of memory. The slow NAND-flash-based storage vetoes the possibility of swapping---an important technique to extend main memory---and leaves a system that constantly terminates user applications under memory pressure.
In this paper, we revisit swapping for smartphones with fast, byte-addressable, non-volatile memory (NVM) technologies. Instead of using flash, we build the swap area with NVM to allow high performance without sacrificing user experience. Based on NVM's high performance and byte-addressability, we show that a copy-on-write swap-in scheme can achieve even better performance by avoiding unnecessary memory copy operations. To avoid fast wear-out of certain NVMs, we also propose Heap-Wear, a wear-leveling algorithm that distributes writes in NVM more evenly. Evaluation results based on the Google Nexus 5 smartphone show that our solution can effectively enhance smartphone performance and achieve better wear-leveling of NVM. |
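A toy Python illustration of the wear-levelling principle, not the paper's Heap-Wear algorithm itself: keep NVM blocks in a min-heap keyed by accumulated writes and always direct the next swap write to the least-worn block.

```python
# Heap-based wear-levelling sketch: write counts across blocks stay balanced
# because the least-written block is always chosen next.
import heapq

class WearLevelAllocator:
    def __init__(self, num_blocks):
        # heap entries: (writes_so_far, block_id)
        self.heap = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.heap)

    def allocate_for_write(self):
        writes, block = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (writes + 1, block))
        return block

alloc = WearLevelAllocator(num_blocks=1024)
pages = [alloc.allocate_for_write() for _ in range(4096)]
# After many writes, per-block write counts differ by at most one.
```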
The dynamic analysis of WannaCry ransomware | The global ransomware cyberattack crippled the national hospital system across the United Kingdom and caused waves of appointments and operations to be cancelled. Similar attack methods have swept across the world. This trend of high-profile cyberattacks highlights the value of rapid defence through malware information sharing platforms. A complete malware analysis process is a time-consuming campaign. The dynamic analysis of WannaCry ransomware explores behavioural indicators and extracts important IOCs (Indicators of Compromise). Utilizing the Yara tool to create customized patterns is useful for a malware information sharing mechanism. Such a mechanism also helps reduce the time and human resources spent on detecting or finding similar malware families. We aim to generate effective cyber threat intelligence by formulating the collected IOCs into structured formations. The positive effects show in immediate defensive responses to security breaches, while integrated information security protection is consolidated. |
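As an illustration of the Yara-based sharing step, a sketch using the yara-python bindings. The kill-switch domain below is a widely published WannaCry IOC; the rule itself and the sample file name are our own illustrative assumptions, not artifacts from the paper.

```python
# Encode an IOC as a Yara rule and scan a sample with yara-python.
import yara

RULE = r'''
rule wannacry_killswitch_domain
{
    strings:
        $domain = "www.iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com"
    condition:
        $domain
}
'''

rules = yara.compile(source=RULE)
matches = rules.match("suspicious_sample.bin")  # placeholder path to a sample
for m in matches:
    print(m.rule, m.strings)
```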
Performance of Mergers and Acquisitions under Corporate Governance Perspective - Based on the Analysis of Chinese Real Estate Listed Companies | This paper investigates the impact of mergers and acquisitions (M&A) on corporate performance. It selects 36 M&A cases of China's listed real estate companies in the Shanghai and Shenzhen Stock Exchanges from 2008 to 2009. Regarding corporate value in 2011 as the measure of long-term performance, we explore how a check-and-balance ownership structure, board size, and institutional investors affect that performance. The paper finds that a check-and-balance ownership structure has a positive impact on M&A performance. In addition, the empirical analysis reveals that board size has a significant negative effect on performance. The results also indicate that CEO-chairman duality has a significant impact on long-term performance, and that institutional investors have a positive effect on M&A performance. |
What Determines Government Spending Multipliers? | This paper studies how the effects of government spending vary with the economic environment. Using a panel of OECD countries, we identify fiscal shocks as residuals from an estimated spending rule and trace their macroeconomic impact under different conditions regarding the exchange rate regime, public indebtedness, and health of the financial system. The unconditional responses to a positive spending shock broadly confirm earlier findings. However, conditional responses differ systematically across exchange rate regimes, as real appreciation and external deficits occur mainly under currency pegs. We also find output and consumption multipliers to be unusually high during times of financial crisis. JEL Classification Numbers: E62, E63, F41 |
Field trials to evaluate the efficacy of emamectin benzoate in the control of sea lice, Lepeophtheirus salmonis (Krøyer) and Caligus elongatus Nordmann, infestations in Atlantic salmon Salmo salar L. | Three field trials were conducted to evaluate the efficacy of emamectin benzoate as a treatment for sea lice, Lepeophtheirus salmonis (Krøyer) and Caligus elongatus (Nordmann), infestations on Atlantic salmon (Salmo salar L.). Trials were carried out at sea temperatures of 13.0-15.5°C and 7.2-8.5°C. Salmon naturally infested with sea lice, with mean weights of 438, 513 and 2662 g, respectively, were held in experimental pens on commercial sites. At day -1 or -2, 20 or 30 fish were sampled from each pen to determine pre-treatment numbers of lice. Emamectin benzoate was administered in-feed at a dose of 50 μg kg⁻¹ biomass day⁻¹ for 7 consecutive days. Sea lice were counted again on days 7, 14 and 21, and comparisons made with untreated control fish. Treatment with emamectin benzoate was effective against chalimus and motile stages of sea lice. In all three trials, treated groups were surrounded by pens of heavily infested fish and L. salmonis numbers increased over time on control fish by 87-284%, whereas over the same period, L. salmonis were reduced on treated fish by 68-98%. In the low temperature trial, reductions were slower but numbers were still 90% lower than on control fish at day 21. At the end of the third trial, both control pens were treated with hydrogen peroxide owing to heavy lice burdens. However, L. salmonis numbers rapidly increased again and at day 55, fish treated only with emamectin benzoate still had 80% fewer lice than control fish. In the two summer trials, large numbers of C. elongatus were rapidly reduced by treatment with 82-84% efficacy by day 21. Despite the potential for continuous re-infestation, oral treatment with emamectin benzoate presented an effective means of controlling all parasitic stages of L. salmonis and C. elongatus on farmed salmon, and in one trial, numbers remained lower on treated fish for at least 55 days. |
The interaction interface for the distributed computing system's nodes | This article considers current problems in the development of parallel computing systems in Russia. The study describes the engineering of an interface for organizing a high-performance system with a programmable structure and for partitioning it into specific subsystems focused on applied tasks. The solution is based on the concept of homogeneous computing systems (HCS), relying on the model of a collective of calculators; the HCS concept was formulated in the Institute of Mathematics of the USSR under the leadership of E.V. Evreinov in 1962 [1]. The approach is particularly relevant given the growing popularity of information networks combining numerous personal computers (PCs). It is clearly necessary to develop software tools that integrate disparate calculators into the system and identify the required number of subsystems for solving specific problems. Interference with the hardware and software of the calculators must be avoided during the software development process. The programmable structure of the computer system is provided at the stage of unifying calculators into a software subsystem interface called an agent of the DCS PS. We report on the development of the interface organization of distributed computing systems with programmable structure (DCS PS) and the allocation of subsystems for specific applications. The software described in the present document was tested in an experimental deployment of the DCS PS at the computer engineering department of NSTU. |
A Hybrid Generative/Discriminative Approach to Semi-Supervised Classifier Design | Semi-supervised classifier design that simultaneously utilizes both labeled and unlabeled samples is a major research issue in machine learning. Existing semi-supervised learning methods belong to either generative or discriminative approaches. This paper focuses on probabilistic semi-supervised classifier design and presents a hybrid approach to take advantage of both the generative and discriminative approaches. Our formulation considers a generative model trained on labeled samples and a newly introduced bias correction model. Both models belong to the same model family. The proposed hybrid model is constructed by combining the generative and bias correction models based on the maximum entropy principle. The parameters of the bias correction model are estimated by using training data, and combination weights are estimated so that labeled samples are correctly classified. We use naive Bayes models as the generative models to apply the hybrid approach to text classification problems. In our experimental results on three text data sets, we confirmed that the proposed method significantly outperformed pure generative and discriminative methods when the classification performances of both methods were comparable. |
Business Intelligence | Business intelligence systems combine operational data with analytical tools to present complex and competitive information to planners and decision makers. The objective is to improve the timeliness and quality of inputs to the decision process. Business Intelligence is used to understand the capabilities available in the firm; the state of the art, trends, and future directions in the markets, the technologies, and the regulatory environment in which the firm competes; and the actions of competitors and the implications of these actions. The emergence of the data warehouse as a repository, advances in data cleansing, increased capabilities of hardware and software, and the emergence of the web architecture all combine to create a richer business intelligence environment than was available previously. Although business intelligence systems are widely used in industry, research about them is limited. This paper, in addition to being a tutorial, proposes a BI framework and potential research topics. The framework highlights the importance of unstructured data and discusses the need to develop BI tools for its acquisition, integration, cleanup, search, analysis, and delivery. In addition, this paper explores a matrix for BI data types (structured vs. unstructured) and data sources (internal and external) to guide research. |
News Blogs and Citizen Journalism: New Directions for e-Journalism | Even for a casual observer of the journalistic industry, it is becoming difficult to escape the conclusion that journalism is entering a time of crisis. At the same time that revenues and readerships for traditional publications, from newspapers to broadcast news, are declining, journalistic content is being overtaken by a flotilla of alternative options ranging from the news satire of The Daily Show in the United States to the citizen journalism of South Korea's OhmyNews and a myriad of other news blogs and citizen journalism websites. Worse still, such new competitors with the products of the journalism industry frequently take professional journalists themselves to task where their standards appear to have slipped, and are beginning to match the news industry's incumbents in terms of insight and informational value: recent studies have shown, for example, that avid Daily Show viewers are as well informed about the U.S. political process as, if not better informed than, those who continue to follow mainstream print or television news (see e.g. Fox et al., 2007). The show's host Jon Stewart – who has consistently maintained his self-description as a comedian, not a journalist – even took the fight directly to the mainstream with his appearance on CNN's belligerent talk show Crossfire, repeatedly making the point that the show's polarised and polarising 'left vs. right' format was "hurting" politics in America (the show disappeared from CNN's line-up a few months after Stewart's appearance; Stewart, 2004). Similarly, news bloggers and citizen journalists have shown persistence and determination both in uncovering political and other scandals, and in highlighting the shortcomings of professional journalism as it investigates and reports on such scandals. This gradual decline of industrial journalism as the dominant force in the public sphere can be linked directly with a broader shift from industrial to post-industrial paradigms. As the industrial age makes way for the information age, its hierarchical and centralised structures for the organisation of production, distribution, and market economies are transforming towards a networked, heterarchical paradigm. |
Care and Justice: The Perspective of the Passions | Part IV develops a topic only hinted at in the previous chapters (see Part III, Chap. 7): the relationship between care and justice. An inescapable topic, not just because the reflection on care came about as a critique of the liberal justice theories (Rawls), but also because recently the problem of social justice has forcefully emerged in some social movements in the global scenario (from the 'Arab Spring' to the indignados). The women care theorists (from Gilligan to Kittay) propose integrating the paradigm of justice, based on the parameters of an abstract individualism and the subjects' rationality, independence and equality, with the paradigm of care, based on the values of concreteness, affectivity, interdependence and relationality. This proposal to integrate the two, agreeable though it is in general, risks re-proposing and legitimizing a purely formal idea of justice: namely, neglecting the problem of the motivations at the basis of the demand for justice. The thesis is proposed that justice also presupposes sentiments and passions (such as compassion, as Nussbaum suggests, and indignation), which are not exclusive to care or the ethics of care. To stress this aspect is to start from the concrete complaints of individuals and groups against injustice; that is, it is to renounce the perfect model of justice and start from injustice, or rather from the reality of injustice (Sen, Renault). If we reflect on the passions, we can first of all recognize the different nature of the affective motivations at the basis of the demand for justice and distinguish between legitimate complaints and illegitimate claims (as appears clear in the exemplary distinction between indignation and envy). In the second place, we can better understand the motivations presiding over care (such as love), so as to avert the vision of care as pure altruism and assistance. Care of the world presupposes integrating the equivalent logic of justice with the asymmetric logic of care. |
Brain Tumor Detection and Classification Using Deep Learning Classifier on MRI Images | Magnetic Resonance Imaging (MRI) has become an effective tool for clinical research in recent years and has found applications such as brain tumor detection. In this study, tumor classification using multiple kernel-based probabilistic clustering and a deep learning classifier is proposed. The proposed technique consists of three modules: a segmentation module, a feature extraction module and a classification module. Initially, the MRI image is pre-processed to make it fit for segmentation, and de-noising is carried out using a median filter. Then, the pre-processed image is segmented using Multiple Kernel based Probabilistic Clustering (MKPC). Subsequently, features are extracted for every segment based on shape, texture and intensity. After feature extraction, important features are selected using Linear Discriminant Analysis (LDA) for classification purposes. Finally, a deep learning classifier is employed for classification into tumor or non-tumor. The proposed technique is evaluated using sensitivity, specificity and accuracy, and the results are compared with an existing technique that uses a Feed-Forward Back Propagation Network (FFBN). The proposed technique achieved an average sensitivity, specificity and accuracy of 0.88, 0.80 and 0.83, respectively, with highest values of about 1, 0.85 and 0.94. The improved results show the efficiency of the proposed technique. |
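A skeleton of the three-module pipeline with common stand-ins, since the paper's exact components are not reproduced here: k-means replaces the MKPC segmentation, simple per-segment statistics replace the shape/texture/intensity features, and a small MLP stands in for the deep learning classifier. Data, sizes, and hyperparameters are all illustrative.

```python
# Segmentation -> per-segment features -> LDA -> classifier (stand-ins only).
import numpy as np
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def segment(mri_slice, n_segments=4):
    denoised = median_filter(mri_slice, size=3)          # de-noising step
    labels = KMeans(n_clusters=n_segments, n_init=10).fit_predict(
        denoised.reshape(-1, 1))                         # MKPC stand-in
    return denoised, labels.reshape(mri_slice.shape)

def segment_features(img, seg):
    feats = []
    for s in np.unique(seg):
        region = img[seg == s]
        feats += [region.mean(), region.std(), region.size / img.size]
    return np.array(feats)                               # intensity/size proxies

# Hypothetical training data: one feature vector per image, tumor yes/no.
rng = np.random.default_rng(0)
X = np.stack([segment_features(*segment(rng.random((64, 64))))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
clf.fit(X, y)
```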
Short versus conventional term glucocorticoid therapy in acute exacerbation of chronic obstructive pulmonary disease - the "REDUCE" trial. | BACKGROUND
International guidelines advocate a 10 to 14-day course of systemic glucocorticoid therapy in the management of COPD exacerbations. The optimal duration of therapy is unknown and glucocorticoids have serious adverse effects. The aim of this trial is to demonstrate non-inferiority of a five-day compared to a 14-day course of systemic glucocorticoids with respect to COPD outcome, thereby significantly reducing steroid exposure and side effects in patients with COPD exacerbations.
METHODS
This is a randomised, placebo-controlled, non-inferiority multicentre trial. Patients with acute COPD exacerbation are randomised to receive 40 mg of prednisone-equivalent daily for 14 days (conventional arm) or glucocorticoid treatment for 5 days, followed by placebo for another 9 days (intervention arm). Follow-up is 180 days. The primary endpoint is time to next exacerbation. Secondary endpoints include cumulative glucocorticoid dose, time to open-label glucocorticoid therapy, glucocorticoid-associated side effects and complications, duration of hospital stay, death, change in FEV1, need for assisted ventilation, clinical outcome assessed by standardised questionnaires, and suppression of the hypothalamic-pituitary-adrenal axis.
RESULTS
Mean age (± SD) of patients who finished the study was 70 ± 11 years. 12% had mild or moderate disease, whereas severe and very severe stages were found in 30 and 58%, respectively. At the time of inclusion, 20% of patients were under treatment with systemic glucocorticoids.
CONCLUSIONS
If the strategy of significantly reducing cumulative exposure to glucocorticoids while taking advantage of their beneficial short-term effects proves to be successful, it will warrant a change in common glucocorticoid prescription practice, thereby improving the management of COPD. |
Mortality indicators in community-acquired pneumonia requiring intensive care in Turkey. | BACKGROUND
Severe community-acquired pneumonia (SCAP) is a fatal disease. This study was conducted to describe an outcome analysis of the intensive care units (ICUs) of Turkey.
METHODS
This study evaluated SCAP cases hospitalized in the ICUs of 19 different hospitals between October 2008 and January 2011. The cases of 413 patients admitted to the ICUs were retrospectively analyzed.
RESULTS
Overall, 413 patients were included in the study and 129 (31.2%) died. Bilateral pulmonary involvement (odds ratio (OR) 2.5, 95% confidence interval (CI) 1.1-5.7) and CAP PIRO score (OR 2, 95% CI 1.3-2.9) were independent risk factors for higher in-ICU mortality, while arterial hypertension (OR 0.3, 95% CI 0.1-0.9) and the application of non-invasive ventilation (OR 0.2, 95% CI 0.1-0.5) decreased mortality. No culture of any kind was obtained for 90 (22%) patients during the entire course of hospitalization. Blood, bronchoalveolar lavage, and non-bronchoscopic lavage cultures yielded enteric Gram-negatives (n=12), followed by Staphylococcus aureus (n=10), pneumococci (n=6), and Pseudomonas aeruginosa (n=6).
CONCLUSIONS
SCAP requiring ICU admission is associated with considerable mortality for ICU patients. Increased awareness appears essential for the microbiological diagnosis of this disease. |
Inside Waymo's self-driving car: My favorite transistors | Waymo's self-driving cars contain a broad set of technologies that enable our cars to sense the vehicle surroundings, perceive and understand what is happening in the vehicle vicinity, and determine the safe and efficient actions that the vehicle should take. Many of these technologies are rooted in advanced semiconductor technologies, e.g. faster transistors that enable more compute or low noise designs that enable the faintest sensor signals to be perceived. This paper summarizes a few areas where semiconductor technologies have proven to be fundamentally enabling to self-driving capabilities. The paper also lays out some of the challenges facing advanced semiconductors in the automotive context, as well as some of the opportunities for future innovation. |
Constructive Analysis of Intensional Phenomena in Natural Language | Chierchia [2, 3, 4] pointed out the inadequacy of Montague's approach in the analysis of certain natural language constructions, such as nominalization and propositional attitude reports. The problem seems to be related to the strong typing of Montague's Intensional Logic and its interpretation of propositions as sets of possible worlds. Turner [9], following Bealer's intuitions [1], offers an interesting solution by building an intensional logic in which the intensions of propositions and propositional functions are treated as individuals of a different kind; Turner's framework is called property theory. For a classic approach to intensionality see [5, 6]. In this paper, we show that it is possible to recast Turner's proposals in constructive type theory, providing a simple analysis of intensionality and an elegant solution to the puzzle sentences. |
Knowledge Sharing: Influences of Trust, Commitment and Cost | Purpose – This paper’s aim is to examine the influence of perceived cost of sharing knowledge and affective trust in colleagues on the relationship between affective commitment and knowledge sharing. Design/methodology/approach – The methodology used was a survey of 496 employees from 15 organizations across ten industries. Findings – Affective trust in colleagues moderates the relationship between affective commitment and knowledge sharing and the relationship between cost of knowledge sharing and knowledge sharing. Research limitations/implications – Future researchers should operationalize the perceived cost of knowledge sharing construct to include other potential group barriers; for instance, politics and organizational barriers, management commitment and lack of trust. Practical implications – The findings of this study suggest that employees who value social relationships and social resources tend to view knowledge as a collectively owned commodity. As such, their knowledge sharing behavior reflects the model of reciprocal social exchanges. Social implications – The results of this study indicate that an organizational culture that encourages affect-based trust between colleagues will facilitate knowledge sharing. Originality/value – The paper bridges the gap between the literature on knowledge sharing, perceived cost of knowledge sharing, affective organizational commitment and trust in a single model. |
Effects of homocysteine-lowering with folic acid plus vitamin B12 vs placebo on mortality and major morbidity in myocardial infarction survivors: a randomized trial. | CONTEXT
Blood homocysteine levels are positively associated with cardiovascular disease, but it is uncertain whether the association is causal.
OBJECTIVE
To assess the effects of reducing homocysteine levels with folic acid and vitamin B(12) on vascular and nonvascular outcomes.
DESIGN, SETTING, AND PATIENTS
Double-blind randomized controlled trial of 12,064 survivors of myocardial infarction in secondary care hospitals in the United Kingdom between 1998 and 2008.
INTERVENTIONS
2 mg folic acid plus 1 mg vitamin B(12) daily vs matching placebo.
MAIN OUTCOME MEASURES
First major vascular event, defined as major coronary event (coronary death, myocardial infarction, or coronary revascularization), fatal or nonfatal stroke, or noncoronary revascularization.
RESULTS
Allocation to the study vitamins reduced homocysteine by a mean of 3.8 micromol/L (28%). During 6.7 years of follow-up, major vascular events occurred in 1537 of 6033 participants (25.5%) allocated folic acid plus vitamin B(12) vs 1493 of 6031 participants (24.8%) allocated placebo (risk ratio [RR], 1.04; 95% confidence interval [CI], 0.97-1.12; P = .28). There were no apparent effects on major coronary events (vitamins, 1229 [20.4%], vs placebo, 1185 [19.6%]; RR, 1.05; 95% CI, 0.97-1.13), stroke (vitamins, 269 [4.5%], vs placebo, 265 [4.4%]; RR, 1.02; 95% CI, 0.86-1.21), or noncoronary revascularizations (vitamins, 178 [3.0%], vs placebo, 152 [2.5%]; RR, 1.18; 95% CI, 0.95-1.46). Nor were there significant differences in the numbers of deaths attributed to vascular causes (vitamins, 578 [9.6%], vs placebo, 559 [9.3%]) or nonvascular causes (vitamins, 405 [6.7%], vs placebo, 392 [6.5%]) or in the incidence of any cancer (vitamins, 678 [11.2%], vs placebo, 639 [10.6%]).
CONCLUSION
Substantial long-term reductions in blood homocysteine levels with folic acid and vitamin B(12) supplementation did not have beneficial effects on vascular outcomes but were also not associated with adverse effects on cancer incidence.
TRIAL REGISTRATION
isrctn.org Identifier: ISRCTN74348595. |
Automated Scenario Generation - Coupling Planning Techniques with Smart Objects | Serious games allow for adaptive and personalised forms of training; the nature and timing of learning activities can be tailored to the trainee’s needs and interests. Autonomous game-based training requires for the automatic selection of appropriate exercises for an individual trainee. This paper presents a framework for an automated scenario generation system. The underlying notion is that a learning experience is defined by the objects and agents that inhabit the training environment. Our system uses automated planning to assess the behaviour required to achieve the (personalised) training objective. It then generates a scenario by selecting semantically annotated (or ‘smart’) objects and by assigning goals to the virtual characters. The resulting situations trigger the trainee to execute the desired behaviour. To test the framework, a prototype has been developed to train the First Aid treatment of burns. Experienced instructors evaluated scenarios written by three types of authors: the prototype, first-aid experts, and laymen. The prototype produced scenarios that were at least as good as laymen scenarios. First-aid experts seemed the best scenario writers, although differences were not significant. It is concluded that combining automated planning, smart objects, and virtual agent behaviour, is a promising approach to automated scenario generation. |
Business process management: a review and evaluation | |
Layered 4D Representation and Voting for Grouping from Motion | We address the problem of perceptual grouping from motion cues by formulating it as a motion layers inference from a sparse and noisy point set in a 4D space. Our approach is based on a layered 4D representation of data, and a voting scheme for token communication, within a tensor voting computational framework. Given two sparse sets of point tokens, the image position and potential velocity of each token are encoded into a 4D tensor. By enforcing the smoothness of motion through a voting process, the correct velocity is selected for each input point as the most salient token. An additional dense voting step allows for the inference of a dense representation in terms of pixel velocities, motion regions, and boundaries. Using a 4D space for this tensor voting approach is essential since it allows for a spatial separation of the points according to both their velocities and image coordinates. Unlike most other methods that optimize certain objective functions, our approach is noniterative and, therefore, does not suffer from local optima or poor convergence problems. We demonstrate our method with synthetic and real images, by analyzing several difficult cases—opaque and transparent motion, rigid and nonrigid motion, curves and surfaces in motion. |
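A heavily simplified scalar stand-in for the 4D voting step (real tensor voting propagates full tensor votes, not scalars): each token's candidate (x, y, vx, vy) gathers support that decays with 4D distance to all other candidates, and the most-supported velocity wins, so spatially close tokens with coherent motion reinforce one another.

```python
# Toy velocity selection by Gaussian-decayed "votes" in (x, y, vx, vy) space.
import numpy as np

def select_velocities(candidates, sigma=2.0):
    """candidates: list of arrays, one per token; each array has k_i rows of
    (x, y, vx, vy) for that token's k_i candidate matches."""
    all_pts = np.vstack(candidates)
    best = []
    for cand in candidates:
        # support for each candidate = sum of distance-decayed votes from
        # every candidate of every token (self-votes contribute uniformly)
        d2 = ((cand[:, None, :] - all_pts[None, :, :]) ** 2).sum(-1)
        saliency = np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)
        best.append(cand[np.argmax(saliency)])
    return np.array(best)

# Two tokens moving coherently, each with one spurious candidate match:
tok1 = np.array([[0, 0, 1, 0], [0, 0, -3, 2]], float)
tok2 = np.array([[1, 0, 1, 0], [1, 0, 4, -1]], float)
print(select_velocities([tok1, tok2]))   # picks the coherent (vx, vy) = (1, 0)
```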
Cloud forensics: Technical challenges, solutions and comparative analysis | Cloud computing is arguably one of the most significant advances in information technology (IT) services today. Several cloud service providers (CSPs) have offered services that have produced various transformative changes in computing activities and presented numerous promising technological and economic opportunities. However, many cloud customers remain reluctant to move their IT needs to the cloud, mainly due to their concerns about cloud security and the threat of the unknown. The CSPs indirectly escalate these concerns by not letting customers see what is behind the virtual wall of their clouds, which, among other things, hinders digital investigations. In addition, jurisdiction, data duplication and multi-tenancy in cloud platforms add to the challenge of locating, identifying and separating the suspected or compromised targets for digital forensics. Unfortunately, the existing approaches to evidence collection and recovery in a non-cloud (traditional) system are not practical as they rely on unrestricted access to the relevant system and user data; something that is not available in the cloud due to its decentralized data processing. In this paper we systematically survey the forensic challenges in cloud computing and analyze their most recent solutions and developments. In particular, unlike the existing surveys on the topic, we describe the issues in cloud computing using the phases of traditional digital forensics as the base. For each phase of the digital forensic process, we have included a list of challenges and an analysis of their possible solutions. Our description helps identify the differences between the problems and solutions for non-cloud and cloud digital forensics. Further, the presentation is expected to help investigators better understand the problems in the cloud environment. More importantly, the paper also includes the most recent developments in cloud forensics produced by researchers, the National Institute of Standards and Technology and Amazon. |
Leveraging Billions of Faces to Overcome Performance Barriers in Unconstrained Face Recognition | We apply the face recognition technology developed in-house at face.com to a well-accepted benchmark and show that without any tuning we are able to considerably surpass state of the art results. Much of the improvement is concentrated in the high-valued performance point of zero false positive matches, where the obtained recall rate almost doubles the best reported result to date. We discuss the various components and innovations of our system that enable this significant performance gap. These components include extensive utilization of an accurate 3D reconstructed shape model dealing with challenges arising from pose and illumination. In addition, discriminative models based on billions of faces are used in order to overcome aging and facial expression as well as low light and overexposure. Finally, we identify a challenging set of identification queries that might provide useful focus for future research. The LFW benchmark [6] has become the de-facto standard testbed for unconstrained face recognition, with over 100 citations in the face recognition literature since its debut 3 years ago. Extensive work [15, 14, 13, 5, 7, 4, 10, 3, 8, 9, 11, 16] has been invested in improving the recognition score, which has increased considerably since the first non-trivial result of 72% accuracy. We apply face.com's r2011b face recognition engine to the LFW benchmark without any dataset-specific pre-tuning (face.com has a public API service [1], which currently employs a previous version of the engine). The obtained mean accuracy is 91.3% ± 0.3, achieved on the test set (view 2) under the unrestricted LFW protocol. Figure 1 (a) presents the ROC curve obtained in comparison to previous results. Remarkably, much of the obtained improvement is achieved in the conservative performance range, i.e., at low False Acceptance Rates (FAR). |
RAnoLazIne for the treatment of diastolic heart failure in patients with preserved ejection fraction: the RALI-DHF proof-of-concept study. | OBJECTIVES
This study investigated whether inhibiting late Na(+) current by using ranolazine improved diastolic function in patients with heart failure with preserved ejection fraction (HFpEF).
BACKGROUND
HFpEF accounts for >50% of all HF patients, but no specific treatment exists.
METHODS
The RALI-DHF (RAnoLazIne for the Treatment of Diastolic Heart Failure) study was a prospective, randomized, double-blind, placebo-controlled small proof-of-concept study. Inclusion criteria were EF ≥45%, a mitral E-wave velocity/mitral annular velocity ratio (E/E') >15 or N-terminal pro-B-type natriuretic peptide (NT-proBNP) concentration >220 pg/ml, a left ventricular end-diastolic pressure (LVEDP) ≥18 mm Hg, and time-constant of relaxation (tau) ≥50 ms. Patients were randomized to ranolazine (n = 12) or placebo (n = 8). Treatment consisted of intravenous infusion for 24 h, followed by oral treatment for 13 days.
RESULTS
After 30 min of infusion, LVEDP (p = 0.04) and pulmonary capillary wedge pressure (p = 0.04) decreased in the ranolazine group but not in the placebo group. Mean pulmonary artery pressure showed a trend toward a decrease in the ranolazine group that was significant under pacing conditions at 120 beats/min (p = 0.02), but not for the placebo group. These changes occurred without changes in left ventricular end-systolic pressure or systemic or pulmonary resistance but in the presence of a small but significant decrease in cardiac output (p = 0.04). Relaxation parameters (e.g., tau, rate of decline of left ventricular pressure per minute [dP/dtmin]) were unaltered. Echocardiographically, the E/E' ratio did not significantly change after 22 h. After 14 days of treatment, no significant changes were observed in echocardiographic or cardiopulmonary exercise test parameters. There were no significant effects on NT-pro-BNP levels.
CONCLUSIONS
Results of this proof-of-concept study revealed that ranolazine improved measures of hemodynamics but that there was no improvement in relaxation parameters. (Ranolazine in Diastolic Heart Failure [RALI-DHF]; NCT01163734). |
The segmented and annotated IAPR TC-12 benchmark | Automatic image annotation (AIA), a highly popular topic in the field of information retrieval research, has experienced significant progress within the last decade. Yet the lack of a standardized evaluation platform tailored to the needs of AIA has hindered effective evaluation of its methods, especially for region-based AIA. Therefore, in this paper we introduce the segmented and annotated IAPR TC-12 benchmark: an extended resource for the evaluation of AIA methods as well as the analysis of their impact on multimedia information retrieval. We describe the methodology adopted for the manual segmentation and annotation of images, and present statistics for the extended collection. The extended collection is publicly available and can be used to evaluate a variety of tasks in addition to image annotation. We also propose a soft measure for the evaluation of annotation performance and identify future research areas in which this extended test collection is likely to make a contribution. |
An empirical study on developer-related factors characterizing fix-inducing commits | This paper analyzes developer-related factors that could influence the likelihood of a commit inducing a fix. Specifically, we focus on factors that could potentially hinder developers’ ability to correctly understand the code components involved in the change to be committed: (i) the coherence of the commit (i.e., how much it is focused on a specific topic), (ii) the experience level of the developer on the files involved in the commit, and (iii) the interfering changes performed by other developers on the files involved in past commits. The results of our study indicate that “fix-inducing” commits (i.e., commits that induced a fix) are significantly less coherent than “clean” commits (i.e., commits that did not induce a fix). Surprisingly, “fix-inducing” commits are performed by more experienced developers, yet, those are the developers performing more complex changes in the system. Finally, “fix-inducing” commits have a higher number of past interfering changes as compared to “clean” commits. Our empirical study sheds light on previously unexplored factors and presents significant results that can be used to improve approaches for defect prediction.
A palliative approach for heart failure end-of-life care | PURPOSE OF REVIEW
The current review discusses the integration of guideline and evidence-based palliative care into heart failure end-of-life (EOL) care.
RECENT FINDINGS
North American and European heart failure societies recommend the integration of palliative care into heart failure programs. Advance care planning, shared decision-making, routine measurement of symptoms and quality of life and specialist palliative care at heart failure EOL are identified as key components to an effective heart failure palliative care program. There is limited evidence to support the effectiveness of the individual elements. However, results from the palliative care in heart failure trial suggest an integrated heart failure palliative care program can significantly improve quality of life for heart failure patients at EOL.
SUMMARY
Integration of a palliative approach to heart failure EOL care helps to ensure patients receive the care that is congruent with their values, wishes and preferences. Specialist palliative care referrals are limited to those who are truly at heart failure EOL. |
Parallel Loop Self-Scheduling for Heterogeneous Cluster Systems with Multi-core Computers | Multicore computers have been widely included in cluster systems. They are shared-memory architectures. However, previous research on parallel loop self-scheduling did not consider this feature of multicore computers. It is more suitable for shared-memory multiprocessors to adopt OpenMP for parallel programming. Therefore, in this paper, we propose to adopt the hybrid programming model MPI+OpenMP to design loop self-scheduling schemes for cluster systems with multicore computers. Initially, each computer runs only one MPI process no matter how many cores it has. An MPI process will fork OpenMP threads depending on the number of cores in the computer. Each idle slave MPI process will request tasks from the master process, and the tasks dispatched to a process will be executed in parallel by OpenMP threads. According to the experimental results, our method outperforms the previous work by 18.66% or 29.76%, depending on the problem size. Moreover, the performance improvement is very stable regardless of which traditional scheme our method is based on.
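As a rough illustration of the master–worker self-scheduling pattern described above (not the authors' code, which uses MPI+OpenMP in a compiled language), here is a minimal Python sketch using mpi4py: the master hands out loop chunks on demand, and each worker processes its chunk with a thread pool standing in for the OpenMP threads. The chunk size, the `work` function, and the tag values are illustrative assumptions.

```python
# Minimal master-worker loop self-scheduling sketch (run under mpirun).
# A thread pool stands in for the OpenMP threads inside each MPI process.
from concurrent.futures import ThreadPoolExecutor
import os

from mpi4py import MPI

def work(i):
    # Placeholder loop body; the real workload is application-specific.
    return i * i

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
N, CHUNK = 10_000, 250            # illustrative problem and chunk size

if rank == 0:                     # master: dispatch chunks on request
    next_start, done = 0, 0
    while done < size - 1:
        status = MPI.Status()
        comm.recv(source=MPI.ANY_SOURCE, tag=1, status=status)
        if next_start < N:
            comm.send((next_start, min(next_start + CHUNK, N)),
                      dest=status.Get_source(), tag=2)
            next_start += CHUNK
        else:                     # no work left: tell this worker to stop
            comm.send(None, dest=status.Get_source(), tag=2)
            done += 1
else:                             # worker: request, then process in threads
    pool = ThreadPoolExecutor(max_workers=os.cpu_count())
    while True:
        comm.send(rank, dest=0, tag=1)        # ask for a chunk
        task = comm.recv(source=0, tag=2)
        if task is None:
            break
        lo, hi = task
        results = list(pool.map(work, range(lo, hi)))  # use/reduce as needed
```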
New Zealanders on the Population Geography of the Western Island | New Zealand origin academics have played a key role in the academic study of Australia's population in the post-war period. The paper argues that New Zealanders have contributed not only to the furthering of knowledge of the processes of change in the Australian population but have been important in the teaching of population geography in Australian universities, made inputs into policy relating to population and been influential in the development of the Australian Population Association. Major contributions have been made by New Zealanders not only in the traditionally strong areas of population geography such as internal and international migration but also in the areas of fertility, mortality and ageing. |
Wayfinding Choremes | How can we represent spatial information in maps in a cognitively adequate way? The present article outlines a cognitive conceptual approach that proposes primitive conceptual elements from which maps can be constructed. Based on work in geography that starts with abstract models of geographic phenomena, namely the modelisation chorematique of R. Brunet (1980, 1987), we coin the term wayfinding choremes for the primitive conceptual elements of route directions. Sketch map drawings were analyzed as they obey the same medial constraints as maps but are constructed in a way that provides insights into human conceptions. A distinction between structural and functional aspects of wayfinding presents a useful method to gain further knowledge about human conceptualizations and leads to a practicable cognitive conceptual approach to map construction.
Weakly Supervised Object Boundaries | State-of-the-art learning based boundary detection methods require extensive training data. Since labelling object boundaries is one of the most expensive types of annotations, there is a need to relax the requirement to carefully annotate images, both to make the training more affordable and to extend the amount of training data. In this paper we propose a technique to generate weakly supervised annotations and show that bounding box annotations alone suffice to reach high-quality object boundaries without using any object-specific boundary annotations. With the proposed weak supervision techniques we achieve the top performance on the object boundary detection task, outperforming by a large margin the current fully supervised state-of-the-art methods.
Functional tooth regenerative therapy: tooth tissue regeneration and whole-tooth replacement | Oral and general health is compromised by irreversible dental problems, including dental caries, periodontal disease and tooth injury. Regenerative therapy for tooth tissue repair and whole-tooth replacement is currently considered a novel therapeutic concept with the potential for the full recovery of tooth function. Several types of stem cells and cell-activating cytokines have been identified in oral tissues. These cells are thought to be candidate cell sources for tooth tissue regenerative therapies because they have the ability to differentiate into tooth tissues in vitro and in vivo. Whole-tooth replacement therapy is regarded as an important model for the development of an organ regenerative concept. A novel three-dimensional cell-manipulation method, designated the organ germ method, has been developed to recapitulate organogenesis. This method involves compartmentalisation of epithelial and mesenchymal cells at a high cell density to mimic multicellular assembly conditions and epithelial–mesenchymal interactions. A bioengineered tooth germ can generate a structurally correct tooth in vitro and erupt successfully with the correct tooth structure when transplanted into the oral cavity. We have ectopically generated a bioengineered tooth unit composed of a mature tooth, periodontal ligament and alveolar bone, and that tooth unit was successfully engrafted into an adult jawbone through bone integration. Such bioengineered teeth were able to perform normal physiological tooth functions, such as developing a masticatory potential in response to mechanical stress and a perceptive potential for noxious stimuli. In this review, we describe recent findings and technologies underpinning tooth regenerative therapy. |
Limb anomalies in the CHARGE association. | We report a male infant with iris coloboma, choanal atresia, postnatal retardation of growth and psychomotor development, genital anomaly, ear anomaly, and anal atresia. In addition, there was cutaneous syndactyly and nail hypoplasia of the second and third fingers on the right and hypoplasia of the left second finger nail. Comparable observations have rarely been reported and possibly represent genetic heterogeneity. |
Effects of dietary nitrate, caffeine, and their combination on 20-km cycling time trial performance. | The aim of this study was to examine the acute supplementation effects of dietary nitrate, caffeine, and their combination on 20-km cycling time trial performance. Using a randomized, counterbalanced, double-blind Latin-square design, 14 competitive female cyclists (age: 31 ± 7 years; height: 1.69 ± 0.07 m; body mass: 61.6 ± 6.0 kg) completed four 20-km time trials on a racing bicycle fitted to a turbo trainer. Approximately 2.5 hours before each trial, subjects consumed a 70-ml dose of concentrated beetroot juice containing either 0.45 g of dietary nitrate or with the nitrate content removed (placebo). One hour before each trial, subjects consumed a capsule containing either 5 mg·kg⁻¹ of caffeine or maltodextrin (placebo). There was a significant effect of supplementation on power output (p = 0.001), with post hoc tests revealing higher power outputs in caffeine (205 ± 21 W) vs. nitrate (194 ± 22 W) and placebo (194 ± 25 W) trials only. Caffeine-induced improvements in power output corresponded with significantly higher measures of heart rate (caffeine: 166 ± 12 b·min⁻¹ vs. placebo: 159 ± 15 b·min⁻¹; p = 0.02), blood lactate (caffeine: 6.54 ± 2.40 mmol·L⁻¹ vs. placebo: 4.50 ± 2.11 mmol·L⁻¹; p < 0.001), and respiratory exchange ratio (caffeine: 0.95 ± 0.04 vs. placebo: 0.91 ± 0.05; p = 0.03). There were no effects (p ≥ 0.05) of supplementation on cycling cadence, rating of perceived exertion, or integrated electromyographic activity. The results of this study support the well-established beneficial effects of caffeine supplementation on endurance performance. In contrast, acute supplementation with dietary nitrate seems to have no effect on endurance performance and adds nothing to the benefits afforded by caffeine supplementation.
An Improved Dijkstra Algorithm for Firefighting | This paper briefly presents the mathematical work toward validating a new Dijkstra algorithm, derived from the original Dijkstra algorithm [1], as an improved variant. The result of this improved Dijkstra algorithm was used to find the shortest path for a firefighting unit to reach the exact location of a fire. The idea rests on the influence of turns on a path: of two otherwise equal paths, the one with more turns takes more time to traverse, and the one with fewer turns takes less. To apply this scenario practically, we take a small real area in south Khartoum. The results give strong justification that proves and verifies our methodology, with a clear contribution to improved Dijkstra algorithms for firefighting such as Geo-Dijkstra. Furthermore, an evaluation of the above-mentioned algorithms has been carried out, showing very promising and realistic results.
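A turn-penalized shortest path can be modeled by searching over (node, incoming-edge) states so that each change of direction adds a cost. The sketch below is a generic Python illustration of that idea, not the paper's algorithm; the toy road grid, the penalty value, and the `turn_cost` helper are assumptions for demonstration.

```python
# Turn-aware Dijkstra: search states are (node, previous node) so a
# direction change at an intersection can be charged an extra cost.
import heapq
import itertools
import math

# Illustrative road grid (hypothetical): node -> coordinates, weighted edges.
coords = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (1, 1), "E": (2, 1)}
graph = {"A": {"B": 1}, "B": {"C": 1, "D": 1}, "C": {"E": 1},
         "D": {"E": 1}, "E": {}}
TURN_PENALTY = 0.5     # assumed extra cost per direction change

def bearing(u, v):
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.atan2(y2 - y1, x2 - x1)

def turn_cost(prev, u, v):
    if prev is None:
        return 0.0
    # Charge the penalty whenever the heading changes at node u.
    return TURN_PENALTY if abs(bearing(prev, u) - bearing(u, v)) > 1e-9 else 0.0

def turn_aware_dijkstra(src, dst):
    tie = itertools.count()                    # heap tiebreaker
    pq = [(0.0, next(tie), src, None)]         # (cost, _, node, previous node)
    settled = set()
    while pq:
        cost, _, u, prev = heapq.heappop(pq)
        if u == dst:
            return cost
        if (u, prev) in settled:
            continue
        settled.add((u, prev))
        for v, w in graph[u].items():
            heapq.heappush(pq, (cost + w + turn_cost(prev, u, v),
                                next(tie), v, u))
    return float("inf")

print(turn_aware_dijkstra("A", "E"))   # the route with fewer turns wins (3.5)
```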
Effect of Number of Layers on Performance of Fractional-Slot Concentrated-Windings Interior Permanent Magnet Machines | Interior PM machines equipped with fractional-slot concentrated-windings are good candidates for high-speed traction applications. This is mainly due to the higher power density and efficiency that can be achieved. The main challenge with this type of machines is the high rotor losses at high speeds/frequencies. This paper will thoroughly investigate the effect of number of winding layers on the performance of this type of machines. It will be shown that by going to higher number of layers, there can be significant improvement in efficiency especially at high speeds mainly due to the reduction of the winding factor/magnitude of the most dominant stator mmf subharmonic component. It will also be shown that there is significant improvement in torque density. Even though there is reduction in the winding factor of the stator synchronous torque-producing mmf component, this is more than offset by increase in machine saliency and reluctance torque. The paper will provide general guidelines regarding the optimum slot/pole/phase combinations based on torque density and efficiency. Sample designs of various slot/pole combinations are used to quantify the benefit of going to higher number of layers in terms of torque density, efficiency, and torque ripple. |
Hyperdimensional Data Analysis Using Parallel Coordinates | This paper presents the basic results for using the parallel coordinate representation as a high-dimensional data analysis tool. Several alternatives are reviewed. The basic algorithm for parallel coordinates is laid out and a discussion of its properties as a projective transformation is shown. Several of the duality results are discussed along with their interpretations as data analysis tools. A discussion of permutations of the parallel coordinate axes is given, along with some examples. Some extensions of the parallel coordinate idea are given. The paper closes with a discussion of implementation, and some of our experiences are relayed.
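In a parallel coordinate plot, each n-dimensional point becomes a polyline crossing n vertical axes at heights given by its (normalized) coordinate values. A minimal matplotlib sketch of that mapping, with made-up data, might look like this:

```python
# Each row of X becomes a polyline across one vertical axis per column.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                      # 30 points in 5 dimensions

# Normalize each column to [0, 1] so all axes share a common scale.
Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

axes_x = np.arange(Xn.shape[1])                   # one vertical axis per dim
for row in Xn:
    plt.plot(axes_x, row, color="steelblue", alpha=0.4)
for x in axes_x:
    plt.axvline(x, color="black", linewidth=0.8)  # draw the parallel axes
plt.xticks(axes_x, [f"x{i+1}" for i in axes_x])
plt.title("Parallel coordinates: points as polylines")
plt.show()
```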
What makes undergraduate students enroll into an elective course?: The case of Islamic accounting | Purpose – The main purpose of this paper is to investigate the acceptance level of an Islamic accounting course by undergraduate students at the Universiti Malaysia Sabah (UMS). The study used the theory of reasoned action (TRA) to analyze the findings. Design/methodology/approach – The primary data for the study are collected using self-administered questionnaires. Altogether, the sample comprised 135 respondents. Data are analyzed using the Statistical Package for the Social Sciences (SPSS) 13.0 and Analysis of Moment Structures (AMOS) 7.0 to determine the acceptance level and model fit. Findings – Attitude (ATT), subjective norm (SN), and amount of information on Islamic accounting (AIIA) are found to affect the intention of students to enroll in the Islamic accounting course. Other proposed hypotheses are also supported. Research limitations/implications – The study has three limitations. The first is the narrow focus on one university in Malaysia as a case study. The second concerns the limited set of relevant measures used in the model that may potentially support the acceptance. The third is the lack of an adequate sample of non-Muslim students. Nevertheless, these limitations motivate future research in the area of Islamic accounting. Practical implications – Despite its limitations, this study is still of importance in providing insights on a particular issue. The findings of this study shed some light on the students’ acceptance level of an Islamic course. This course is unique as it is different in orientation compared to other existing courses on offer. This paper also provides an invaluable insight, especially in the case of UMS, into considering the Islamic accounting course as a core course in the future instead of only an elective course. The university’s management should consider the importance of students’ ATT, SN, and AIIA prior to offering the course. Originality/value – This paper examines undergraduate students’ acceptance level of an Islamic accounting course using TRA and highlights the factors affecting students’ acceptance of an Islamic accounting course in a Malaysian higher learning institution.
Ramadan Fasting and Weight-Lifting Training on Vascular Volumes and Hematological Profiles in Young Male Weight-Lifters | During the holy month of Ramadan, the quality of food and eating patterns change, and drinking is stopped for at least 10 to 16 hours per day on the basis of the lunar calendar. The effects of exercise and fasting, alone or combined, on metabolic and hematologic responses are well established. The purpose of the present study was to examine the effects of Ramadan fasting, with or without weight-lifting training, on vascular volumes and selected hematological indices in young male weight-lifters. Blood samples were taken 24 h before and 24 h after the last day of the holy Ramadan fasting period and weight-lifting training for determination of selected red and white blood cell compositions. Vascular volumes (blood, red cell, and plasma volumes) were determined and calculated using both hemoglobin and hematocrit estimations before and after Ramadan fasting with or without weight-lifting training. The results indicate that red cell volume was significantly decreased and MCHC significantly increased in the fasting group (P<0.05). The percentages of hematocrit were significantly low in both the training and fasting groups. A higher and significant platelet count was found in the training group. The present data indicate that weight-lifting training during the holy Ramadan, compared with fasting alone, had no impact on red and white cell composition except for platelet count and hematocrit. The decrease in hematocrit might be due to the incomplete dehydration period, which is amplified by the cessation of drinking and the nutritional habits during the holy Ramadan.
Effect of Additional Cross-links on Sorption by Keratin | THE removal of cross-links between the peptide chains of keratin can lead to a considerable increase in the amount of water vapour and formic acid vapour absorbed at high relative pressures1, and similar increases in the uptake of water vapour by keratin occur after peptide bond cleavage2. This communication indicates how the introduction of additional cross-links may affect the subsequent sorption behaviour of keratin. |
An 800 MHz 2$\,\times\,$ 1 Compact MIMO Antenna System for LTE Handsets | A compact size 2 × 1 multiple-input-multiple-output (MIMO) antenna system operating in the 800 MHz band is proposed for long term evolution (LTE) handsets. The fabricated MIMO system has a typical isolation of 12 dB and a maximum gain of 2.2 dBi. Two isolation enhancement methods are investigated; one that looks into the depth of the ground split separating the two antennas, the other introduces extra splits within the arms of the GND plane. It is found that introducing extra splits within the arms of the GND plane of the proposed geometry enhances the isolation by at least 3 dB. The proposed MIMO antenna system is based on meander line antennas, and covers the band from 760 to 886 MHz. The size of the antenna system is 40 × 50 mm².
A vertical W-band surface-micromachined Yagi-Uda antenna | A vertical W-band surface-micromachined Yagi-Uda monopole array is proposed and implemented for the first time. The monopoles stand on a high-k soda-lime glass substrate and are fed by a coplanar waveguide. The geometry functions as a vertically placed Yagi-Uda antenna due to the image theory. The performance of the vertical Yagi-Uda is expected to be superior to conventional W-band printed antennas due to minimal substrate wave effects. A successful implementation of the vertical Yagi-Uda structure is demonstrated using high-aspect-ratio surface micromachining fabrication technologies. The test results of a 5-element structure demonstrate a 10-dB impedance bandwidth of 12% at 100 GHz. A directivity of 8.2 dBi is projected for this prototype.
Hysteroscopic resection of uterine submucous leiomyoma protruding through hymen in a 16-year-old adolescent. | BACKGROUND
Uterine leiomyomas are rarely seen in adolescent and to date nine leiomyoma cases have been reported under age 17. Eight of these have been treated surgically via laparotomic myomectomy.
CASE
A 16-year-old girl presented with a painless, lobulated necrotic mass protruding through the introitus. The mass, which originated from the posterior uterine wall, was resected using hysteroscopy. The final pathology report revealed a submucous uterine leiomyoma.
SUMMARY AND CONCLUSION
Submucous uterine leiomyomas may present as a vaginal mass in adolescents and can be safely treated using hysteroscopy. |
Neocognitron: A new algorithm for pattern recognition tolerant of deformations and shifts in position | Suggested by the structure of the visual nervous system, a new algorithm is proposed for pattern recognition. This algorithm can be realized with a multilayered network consisting of neuron-like cells. The network, "neocognitron", is self-organized by unsupervised learning, and acquires the ability to recognize stimulus patterns according to the differences in their shapes. Any patterns which we human beings judge to be alike are also judged to be of the same category by the neocognitron. The neocognitron recognizes stimulus patterns correctly without being affected by shifts in position or even by considerable distortions in shape of the stimulus patterns. Keywords: Visual pattern recognition; Unsupervised learning; Neural network model; Deformation-resistant; Self-organization; Visual nervous system; Position-invariant; Multilayered network; Simulation
All-at-once Optimization for Coupled Matrix and Tensor Factorizations | Joint analysis of data from multiple sources has the potential to improve our understanding of the underlying structures in complex data sets. For instance, in restaurant recommendation systems, recommendations can be based on rating histories of customers. In addition to rating histories, customers’ social networks (e.g., Facebook friendships) and restaurant categories information (e.g., Thai or Italian) can also be used to make better recommendations. The task of fusing data, however, is challenging since data sets can be incomplete and heterogeneous, i.e., data consist of both matrices, e.g., the person by person social network matrix or the restaurant by category matrix, and higher-order tensors, e.g., the “ratings” tensor of the form restaurant by meal by person. In this paper, we are particularly interested in fusing data sets with the goal of capturing their underlying latent structures. We formulate this problem as a coupled matrix and tensor factorization (CMTF) problem where heterogeneous data sets are modeled by fitting outer-product models to higher-order tensors and matrices in a coupled manner. Unlike traditional approaches solving this problem using alternating algorithms, we propose an all-at-once optimization approach called CMTF-OPT (CMTF-OPTimization), which is a gradient-based optimization approach for joint analysis of matrices and higher-order tensors. We also extend the algorithm to handle coupled incomplete data sets. Using numerical experiments, we demonstrate that the proposed all-at-once approach is more accurate than the alternating least squares approach. |
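To make the coupled objective concrete, below is a minimal NumPy sketch of gradient-based, all-at-once fitting for a third-order tensor X coupled with a matrix Y in their first mode. It illustrates the general CMTF idea under a plain squared-error objective with a fixed step size, not the CMTF-OPT implementation itself; the sizes, rank, and learning rate are illustrative assumptions, and a practical solver would use a line search or quasi-Newton updates as well as guard against local minima.

```python
# All-at-once gradient descent for a toy coupled matrix-tensor
# factorization: X ≈ [[A, B, C]] (CP model) and Y ≈ A @ D.T share A.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, M, R = 20, 15, 10, 8, 3

# Synthetic coupled data generated from a shared factor A0.
A0, B0, C0, D0 = (rng.normal(size=s) for s in [(I, R), (J, R), (K, R), (M, R)])
X = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
Y = A0 @ D0.T

A, B, C, D = (rng.normal(size=s) * 0.1 for s in [(I, R), (J, R), (K, R), (M, R)])
lr = 0.005                                        # illustrative step size
for it in range(3000):
    E = X - np.einsum("ir,jr,kr->ijk", A, B, C)   # tensor residual
    F = Y - A @ D.T                               # matrix residual
    # Gradients of 0.5*(||E||^2 + ||F||^2); A couples both terms.
    gA = -np.einsum("ijk,jr,kr->ir", E, B, C) - F @ D
    gB = -np.einsum("ijk,ir,kr->jr", E, A, C)
    gC = -np.einsum("ijk,ir,jr->kr", E, A, B)
    gD = -F.T @ A
    A, B, C, D = A - lr * gA, B - lr * gB, C - lr * gC, D - lr * gD

print(np.linalg.norm(E), np.linalg.norm(F))       # residuals should shrink
```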
Scratch: A Sneak Preview | A National Research Council (NRC) report defines fluency with information technologies as “the ability to reformulate knowledge, to express oneself creatively and appropriately, and to produce and generate information (rather than simply to comprehend it).” Fluency, according to the report, “goes beyond traditional notions of computer literacy...[It] requires a deeper, more essential understanding and mastery of information technology for information processing, communication, and problem solving than does computer literacy as traditionally defined.” Scratch is a networked, media-rich programming environment designed to enhance the development of technological fluency at after-school centers in economically-disadvantaged communities. Just as the LEGO MindStorms robotics kit added programmability to an activity deeply rooted in youth culture (building with LEGO bricks), Scratch adds programmability to the media-rich and network-based activities that are most popular among youth at after-school computer centers. Taking advantage of the extraordinary processing power of current computers, Scratch supports new programming paradigms and activities that were previously infeasible, making it better positioned to succeed than previous attempts to introduce programming to youth. In the past, most initiatives to improve technological fluency have focused on school classrooms. But there is a growing recognition that after-school centers and other informal learning settings can play an important role, especially in economically-disadvantaged communities, where schools typically have few technological resources and many young people are alienated from the formal education system. Our working hypothesis is that, as kids work on personally meaningful Scratch projects such as animated stories, games, and interactive art, they will develop technological fluency, mathematical and problem solving skills, and a justifiable self-confidence that will serve them well in the wider spheres of their lives. During the past decade, more than 2000 community technology centers (CTCs) opened in the United States, specifically to provide better access to technology in economically-disadvantaged communities. But most CTCs support only the most basic computer activities such as word processing, email, and Web browsing, so participants do not gain the type of fluency described in the NRC report. Similarly, many after-school centers (which, unlike CTCs, focus exclusively on youth) have begun to introduce computers, but they too tend to offer only introductory computer activities, sometimes augmented by educational games.
An approach to detecting duplicate bug reports using natural language and execution information | An open source project typically maintains an open bug repository so that bug reports from all over the world can be gathered. When a new bug report is submitted to the repository, a person, called a triager, examines whether it is a duplicate of an existing bug report. If it is, the triager marks it as DUPLICATE and the bug report is removed from consideration for further work. In the literature, there are approaches exploiting only natural language information to detect duplicate bug reports. In this paper we present a new approach that further involves execution information. In our approach, when a new bug report arrives, its natural language information and execution information are compared with those of the existing bug reports. Then, a small number of existing bug reports are suggested to the triager as the most similar bug reports to the new bug report. Finally, the triager examines the suggested bug reports to determine whether the new bug report duplicates an existing bug report. We calibrated our approach on a subset of the Eclipse bug repository and evaluated our approach on a subset of the Firefox bug repository. The experimental results show that our approach can detect 67%-93% of duplicate bug reports in the Firefox bug repository, compared to 43%-72% using natural language information alone. |
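The core matching step can be pictured as computing one similarity from a report's text and another from its execution trace, then ranking existing reports by a weighted combination. The sketch below illustrates this with TF-IDF cosine similarity over both channels; the field names, channel weights, and toy data are assumptions, not the paper's calibrated heuristics.

```python
# Rank candidate duplicates by combining natural-language similarity
# with execution-trace similarity (both via TF-IDF cosine similarity).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

existing = [
    {"text": "crash when saving large file",
     "trace": "save_file write_buffer flush"},
    {"text": "UI freezes on startup",
     "trace": "init_ui load_plugins event_loop"},
]
new = {"text": "application crashes while saving a big file",
       "trace": "save_file write_buffer flush"}

W_TEXT, W_TRACE = 0.6, 0.4          # assumed channel weights

def channel_sim(field):
    # Similarity of the new report to every existing report on one channel.
    docs = [r[field] for r in existing] + [new[field]]
    tfidf = TfidfVectorizer().fit_transform(docs)
    return cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()

score = W_TEXT * channel_sim("text") + W_TRACE * channel_sim("trace")
ranked = sorted(enumerate(score), key=lambda t: -t[1])
print(ranked)   # top-ranked existing reports are suggested to the triager
```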
Extended Theory on the Inductance Calculation of Planar Spiral Windings Including the Effect of Double-layer Electromagnetic Shield | Recent progress on wireless planar battery charging platforms highlights a requirement that the platform must be shielded underneath so that the electromagnetic (EM) flux will not leak through the bottom of the charging platform. The presence of the EM shield will inevitably alter the flux distribution and thus the inductance of the planar windings. In this paper, a theory for calculating the inductance of spiral windings is extended to determine the inductance of planar spiral windings shielded by a double-layer planar EM shield. With the generalized equations, the impedance of the planar spiral windings on a double-layer shielding substrate and the optimal thickness of shielding materials can be calculated accurately without using the time-consuming finite-element method. Therefore, the influence of the double-layer electromagnetic shield on the inductance of the planar spiral windings can be analyzed. Simulations and measurements have been carried out for several shielding plates with different permeability, conductivity and thickness. Both the simulations and the measurements of the winding inductance agree well with the extended theory.
Knowledge of First Aid Skills Among Students of a Medical College in Mangalore City of South India | BACKGROUND
The knowledge required for handling an emergency outside the hospital setting, at the site of an accident or emergency, may not be sufficient, as most medical schools do not include formal first aid training in the teaching curriculum.
AIM
The aim of this study is to assess the level of knowledge of medical students in providing first aid care.
SUBJECTS AND METHODS
This cross-sectional study was conducted during May 2011 among 152 medical students. Data was collected using a self-administered questionnaire. Based on the scores obtained in each condition requiring first aid, the overall knowledge was graded as good, moderate and poor.
RESULTS
Only 11.2% (17/152) of the total student participants had previous exposure to first aid training. Good knowledge about first aid was observed in 13.8% (21/152), moderate knowledge in 68.4% (104/152) and poor knowledge in 17.8% (27/152) of participants. Analysis of knowledge about first aid management in selected conditions found that 21% (32/152) had poor knowledge regarding first aid management for shock and for gastroesophageal reflux disease, and 20.4% (31/152) for epistaxis and foreign body in the eyes. All students felt that first aid skills need to be taught from the school level onwards and all of them were willing to enroll in any formal first aid training sessions.
CONCLUSION
The level of knowledge about first aid was not good among majority of the students. The study also identified the key areas in which first aid knowledge was lacking. There is thus a need for formal first aid training to be introduced in the medical curriculum. |
Transductive Event Classification through Heterogeneous Networks | Events can be defined as "something that occurs at a specific place and time associated with some specific actions". In general, events extracted from news articles and social networks are used to map information from the web to the various phenomena that occur in our physical world. One of the main steps in establishing this relationship is the use of machine learning algorithms for event classification, which has received great attention in the web document engineering field in recent years. Traditional machine learning algorithms are based on vector space model representations and supervised classification. However, events are composed of multiple representations such as textual data, temporal information, geographic location and other types of metadata. All these representations are poorly represented together in a vector space model. Moreover, supervised classification requires the labeling of a significant sample of events to construct a training set for the learning process, thereby hampering the practical application of event classification. In this paper, we propose a method called TECHN (Transductive Event Classification through Heterogeneous Networks), which considers event metadata as different objects in a heterogeneous network. Besides, the TECHN method has the ability to automatically learn which types of network objects (event metadata) are most efficient in the classification task. In addition, our TECHN method is based on a transductive classification that considers both labeled events and a vast amount of unlabeled events. The experimental results show that the TECHN method obtains promising results, especially when we consider different weights of importance for each type of event metadata and a small set of labeled events.
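Transductive classification on such a network is often realized by propagating the few known labels through a similarity structure until unlabeled events acquire stable label scores. The sketch below shows a generic iterative label-propagation scheme on a precomputed affinity matrix; the toy two-class setup, the affinity values, and the α parameter are illustrative assumptions, not the TECHN algorithm.

```python
# Generic transductive label propagation: F <- alpha * S @ F + (1-alpha) * Y.
import numpy as np

# Toy symmetric affinity between 6 events (could mix metadata channels).
W = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
d = W.sum(axis=1)
S = W / np.sqrt(np.outer(d, d))          # symmetric normalization

Y = np.zeros((6, 2))
Y[0, 0] = 1.0                            # one labeled event per class
Y[5, 1] = 1.0

alpha, F = 0.8, Y.copy()
for _ in range(100):
    F = alpha * S @ F + (1 - alpha) * Y  # spread labels over the graph

print(F.argmax(axis=1))                  # predicted class per event
```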
Achieving industrial relevance with academic excellence: lessons from the Oregon Master of Software engineering | Many educational institutions are developing graduate programs in software engineering targeted to working professionals. These educators face the dilemma of providing programs with both industrial relevance and academic excellence. This paper describes our experience and lessons learned in developing such a program, the Oregon Master of Software Engineering (OMSE). It describes a structured approach to curriculum design, curriculum design principles and methods that can be applied to develop a quality professional program. |
Full STEAM ahead: Exactly sparse gaussian process regression for batch continuous-time trajectory estimation on SE(3) | This paper shows how to carry out batch continuous-time trajectory estimation for bodies translating and rotating in three-dimensional (3D) space, using a very efficient form of Gaussian-process (GP) regression. The method is fast, singularity-free, uses a physically motivated prior (the mean is constant body-centric velocity), and permits trajectory queries at arbitrary times through GP interpolation. Landmark estimation can be folded in to allow for simultaneous trajectory estimation and mapping (STEAM), a variant of SLAM. |
Spatial Scalability Within the H.264/AVC Scalable Video Coding Extension | A scalable extension to the H.264/AVC video coding standard has been developed within the joint video team (JVT), a joint organization of the ITU-T video coding group (VCEG) and the ISO/IEC moving picture experts group (MPEG). The extension allows multiple resolutions of an image sequence to be contained in a single bit stream. In this paper, we introduce the spatially scalable extension within the resulting scalable video coding standard. The high-level design is described and individual coding tools are explained. Additionally, encoder issues are identified. Finally, the performance of the design is reported. |
Hierarchical Orderings of Textual Units | Text representation is a central task for any approach to automatic learning from texts. It requires a format which allows to interrelate texts even if they do not share content words, but deal with similar topics. Furthermore, measuring text similarities raises the question of how to organize the resulting clusters. This paper presents cohesion trees (CT) as a data structure for the perspective, hierarchical organization of text corpora. CTs operate on alternative text representation models taking lexical organization, quantitative text characteristics, and text structure into account. It is shown that CTs realize text linkages which are lexically more homogeneous than those produced by minimal spanning trees. |
Musical Sense-Making and the Concept of Affordance: An Ecosemiotic and Experiential Approach | This article is interdisciplinary in its claims. Evolving around the ecological concept of affordance, it brings together pragmatics and ecological psychology. Starting from the theoretical writings of Peirce, Dewey and James, the biosemiotic claims of von Uexküll, Gibson’s ecological approach to perception and some empirical evidence from recent neurobiological research, it elaborates on the concepts of experiential and enactive cognition as applied to music. In order to provide an operational description of this approach, it introduces some conceptual tools from the domain of cybernetics with a major focus on the concept of circularity, which links perception to action in a continuous process of sense-making and interaction with the environment. As such, it is closely related to some pragmatic, biosemiotic and ecosemiotic claims which can be subsumed under the general notion of functional significance. An attempt is made to apply this conceptual framework to the process of musical sense-making which involves the realisation of systemic cognition in the context of epistemic interactions that are grounded in our biology and possibilities for adaptive control. Central in this approach is the concept of coping with the environment, or, in musical terms, to perceive the sounding music in terms of what it affords for the consummation of musical behaviour. |
Meat and fat intake and pancreatic cancer risk in the Netherlands Cohort Study. | Meat contains numerous carcinogens, such as heterocyclic amines, polycyclic aromatic hydrocarbons, and N-nitroso compounds, which can be derived either from natural food or during the process of food preparation. These carcinogens may increase pancreatic cancer risk. Furthermore, studies in animals showed that polyunsaturated fatty acids, especially linoleic acid, increase pancreatic cancer risk. We examined prospectively the relation between pancreatic cancer risk and intake of fresh meat, processed meat, fish, eggs, total fat, and different types of fat. The Netherlands Cohort Study consisted of 120,852 men and women who completed a baseline questionnaire in 1986. After 13.3 years of follow-up, 350 pancreatic cancer cases (66% microscopically confirmed) were available for analysis. A validated 150-item food-frequency questionnaire was used to calculate intake of fresh meat, processed meat, fish, eggs, fat and different types of fat. No association was found when examining the association between intake of fresh meat, other types of meat, fish, eggs, dietary intake of total fat and different types of fat and risk of pancreatic cancer. It is important for future studies to investigate the relation between different meat-cooking methods and pancreatic cancer. |
Granitic Rocks and their Formation Depth in the Crust | Granitic rocks, as the major component of the continental crust, are particularly helpful to decipher the evolution of continental tectonics, which is quite similar to the way that basaltic rocks, being the major component of the oceanic crust, can be used to trace the tectonic settings in plate tectonics. Diversity of source rock composition and formation depth are two crucial factors controlling the compositions of the resultant granitic rocks. It is proposed in this study that the variation of Sr and Yb contents of granitic rocks is related to their formation depth, which can be further used as a new classification criterion for granitic rocks. Provided most granitic rocks are derived from the lowest part of the lower crust, the depth from which the granitic rocks formed may indicate the thickness of the crust; therefore, the variation of the formation depth can be used to trace the evolution of vertical movement of the continental crust, i.e., to identify ancient plateaus and mountain ranges which ever existed in the history of the earth. Major concerns about the relationship between the composition of granitic rocks and formation depth are also discussed in the paper.
Cooperative Resource Management in Cloud-Enabled Vehicular Networks | Cloud-enabled vehicular networks are a new paradigm to improve the quality of vehicular services, which has drawn considerable attention in industry and academia. In this paper, we consider the resource management and sharing problem for bandwidth and computing resources to support mobile applications in cloud-enabled vehicular networks. In such an environment, cloud service providers (SPs) can cooperate to form coalitions to share their idle resources with each other. We propose a coalition game model based on two-sided matching theory for cooperation among cloud SPs to share their idle resources. As a result, the resources can be better utilized, and the QoS for users can be improved. Numerical results indicate that our scheme can improve resource utilization and increase the QoS of the applications by 75% compared with the case without cooperation. Moreover, a higher service cost of cooperation has a negative effect on coalition formation, while higher cooperation willingness of cloud SPs and lower service costs support more service applications.
Simple day-case surgery for pilonidal sinus disease. | BACKGROUND
Pilonidal disease is a common and usually minor disease. Although wide excisional surgery has been common practice, there are more simple alternatives. This review focused on the aetiology and management of pilonidal disease.
METHODS
A comprehensive review of the literature on pilonidal disease was undertaken. MEDLINE searches for all articles listing pilonidal disease (1980-2010) were performed to determine the aetiology and results of surgical and non-surgical treatments. Single papers describing new techniques or minor modifications of established techniques were excluded. Further articles were traced through reference lists.
RESULTS
Patients with minimal symptoms and those having drainage of a single acute abscess can be treated expectantly. Non-surgical treatments may be of value but their long-term results are unknown. There is no rational basis or need for wide excision of the abscess and sinus. Simple removal of midline skin pits, the primary cause of pilonidal disease, with lateral drainage of the abscess and sinus is effective in most instances. Hirsute patients with extensive primary disease and deep natal clefts, or with recurrent disease and unhealed midline wounds, may also require flattening of the natal cleft with off-midline skin closure. These more conservative procedures are usually done as a day case, require minimal care in the community and are associated with a rapid return to work. They also avoid the occasional debilitating complications of surgical treatment.
CONCLUSION
Simple day-case surgery to eradicate midline skin pits without wide excision of the abscesses and sinus is rational, safe and effective for patients with pilonidal sinus disease. |
Detection of Daily Activities and Sports With Wearable Sensors in Controlled and Uncontrolled Conditions | Physical activity has a positive impact on people's well-being, and it may also decrease the occurrence of chronic diseases. Activity recognition with wearable sensors can provide feedback to the user about his/her lifestyle regarding physical activity and sports, and thus promote a more active lifestyle. So far, activity recognition has mostly been studied in supervised laboratory settings. The aim of this study was to examine how well the daily activities and sports performed by the subjects in unsupervised settings can be recognized compared to supervised settings. The activities were recognized by using a hybrid classifier combining a tree structure containing a priori knowledge and artificial neural networks, and also by using three reference classifiers. Activity data were collected for 68 h from 12 subjects, out of which the activity was supervised for 21 h and unsupervised for 47 h. Activities were recognized based on signal features from 3-D accelerometers on hip and wrist and GPS information. The activities included lying down, sitting and standing, walking, running, cycling with an exercise bike, rowing with a rowing machine, playing football, Nordic walking, and cycling with a regular bike. The total accuracy of the activity recognition using both supervised and unsupervised data was 89%, which was only one percentage point lower than the accuracy of activity recognition using only supervised data. However, the accuracy decreased by 17 percentage points when only supervised data were used for training and only unsupervised data for validation, which emphasizes the need for out-of-laboratory data in the development of activity-recognition systems. The results support a vision of recognizing a wider spectrum of more complex activities in real life settings.
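A typical first step in such a pipeline is turning raw 3-D accelerometer streams into fixed-length feature vectors before classification. The sketch below computes a few common time-domain features per window; the window length, the feature set, and the synthetic signal are illustrative assumptions rather than the study's exact configuration.

```python
# Windowed time-domain features from a 3-D accelerometer stream.
import numpy as np

def window_features(acc, win=128):
    """acc: (n_samples, 3) array; returns one feature row per window."""
    feats = []
    for start in range(0, len(acc) - win + 1, win):
        w = acc[start:start + win]
        mag = np.linalg.norm(w, axis=1)          # signal magnitude
        feats.append(np.concatenate([
            w.mean(axis=0),                      # mean per axis
            w.std(axis=0),                       # std per axis
            [mag.mean(), mag.std(), np.abs(np.diff(mag)).mean()],
        ]))
    return np.array(feats)

rng = np.random.default_rng(0)
acc = rng.normal(0, 1, size=(1024, 3))           # stand-in for sensor data
X = window_features(acc)                         # feed X to a classifier
print(X.shape)                                   # (8, 9): 8 windows, 9 features
```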
Results of balloon kyphoplasty in the treatment of osteoporotic vertebral compression fractures | The aim of this study was to evaluate the reduction of pain, improvement of sagittal alignment, complications and intermediate-term results of balloon kyphoplasty in the treatment of osteoporotic vertebral compression fractures (VCF). The study group consisted of 87 patients with 145 VCFs which were not responsive to non-operative treatment. All data were collected prospectively. Improvement of sagittal alignment (Cobb and kyphotic angles, anterior, middle and posterior height) was determined from CT scans. Pain was evaluated by means of a visual analogue scale (VAS). Postoperative CT scans revealed a significant reduction of the mean kyphotic angle of 5.7° (range 2-24°) and a significant reduction of pain from 7.8±2.4 to 2.0±1.5 on the VAS (improvement of pain in 95.5% of patients). An asymptomatic leakage of cement was observed in 28 out of 145 vertebrae (19.3%). The outcome of 35 patients with 51 VCFs was evaluated after a mean of 13 (range 12-16) months (CT and VAS), and there was a persisting reduction of pain and no loss of reduction. In this group of patients, new symptomatic fractures were evident in 4 and clinically asymptomatic fractures (only seen on CT) were detected in 5 out of 35 patients; 7 fractures were adjacent to and 2 fractures were remote from the initially treated level. In two patients an asymptomatic moderate loss of reduction was detected. These intermediate-term results indicate that kyphoplasty reduces pain and improves sagittal alignment in patients with VCF. However, in 26% of patients new fractures occurred, predominantly in adjacent levels, but approximately 50% of these fractures were clinically asymptomatic.
Millimeter-wave CMOS design | This paper describes the design and modeling of CMOS transistors, integrated passives, and circuit blocks at millimeter-wave (mm-wave) frequencies. The effects of parasitics on the high-frequency performance of 130-nm CMOS transistors are investigated, and a peak f/sub max/ of 135 GHz has been achieved with optimal device layout. The inductive quality factor (Q/sub L/) is proposed as a more representative metric for transmission lines, and for a standard CMOS back-end process, coplanar waveguide (CPW) lines are determined to possess a higher Q/sub L/ than microstrip lines. Techniques for accurate modeling of active and passive components at mm-wave frequencies are presented. The proposed methodology was used to design two wideband mm-wave CMOS amplifiers operating at 40 GHz and 60 GHz. The 40-GHz amplifier achieves a peak |S/sub 21/| = 19 dB, output P/sub 1dB/ = -0.9 dBm, IIP3 = -7.4 dBm, and consumes 24 mA from a 1.5-V supply. The 60-GHz amplifier achieves a peak |S/sub 21/| = 12 dB, output P/sub 1dB/ = +2.0 dBm, NF = 8.8 dB, and consumes 36 mA from a 1.5-V supply. The amplifiers were fabricated in a standard 130-nm 6-metal layer bulk-CMOS process, demonstrating that complex mm-wave circuits are possible in today's mainstream CMOS technologies. |
Mode-Seeking on Hypergraphs for Robust Geometric Model Fitting | In this paper, we propose a novel geometric model fitting method, called Mode-Seeking on Hypergraphs (MSH), to deal with multi-structure data even in the presence of severe outliers. The proposed method formulates geometric model fitting as a mode seeking problem on a hypergraph in which vertices represent model hypotheses and hyperedges denote data points. MSH intuitively detects model instances by a simple and effective mode seeking algorithm. In addition to the mode seeking algorithm, MSH includes a similarity measure between vertices on the hypergraph and a "weight-aware sampling" technique. The proposed method not only alleviates sensitivity to the data distribution, but also is scalable to large scale problems. Experimental results further demonstrate that the proposed method has significant superiority over the state-of-the-art fitting methods on both synthetic data and real images. |
Adult norms for the Rey-Osterrieth Complex Figure Test and for supplemental recognition and matching trials from the Extended Complex Figure Test. | The Rey-Osterrieth Complex Figure Test ("the Rey"; Osterrieth, 1944; Rey, 1941) has accumulated a considerable literature as a test of visual-spatial perception/construction and memory. The Extended Complex Figure Test (ECFT; Fastenau, 1996a, in press-a; Fastenau & Manning, 1992) supplements the Rey with Recognition and Matching trials that follow Copy, Immediate Recall, and Delayed Recall. The Rey and ECFT were administered to 211 healthy adults. Age ranged from 30 years to 85 years (M = 62.9, SD = 14.2), education ranged from 12 years to 25 years (M = 14.9, SD = 2.6), 55% were women, and over 95% were Caucasian. Age and education effects were evident on all trials (Multiple R ranged .23 to .50, p < .05), but education explained minimal variance (usually 2-3%) on copy and memory trials. Gender effects were negligible, if present. Age-appropriate norms are presented using Osterrieth's 36-point scoring, overlapping cells, and convenient tables for converting raw scores to scaled scores. |
A d–q Voltage Droop Control Method With Dynamically Phase-Shifted Phase-Locked Loop for Inverter Paralleling Without Any Communication Between Individual Inverters | This paper presents a modified droop control method for equal load sharing of parallel-connected inverters, without any communication between individual inverters. Droops in d- and q-axis voltages are given depending upon d- and q-axis currents, respectively. Each inverter works in the voltage control mode, where it controls the filter capacitor voltage. Voltage references of each inverter come from d- and q-axis voltage droops. These d- and q-axis voltage droops force the parallel-connected inverters to share equal current and hence equal active and reactive power. A dynamically phase-shifted phase-locked loop (PLL) technique is locally designed for generating the phase reference of each inverter. The phase angle between the filter capacitor voltage vector and the d-axis is dynamically adjusted with the change in q-axis inverter current to generate the phase reference of each inverter. The strategy has been verified with simulations and experiments, and results are presented in this paper.
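The droop laws themselves reduce to two simple linear relations: the d-axis voltage reference sags with d-axis current, and the q-axis reference droops with q-axis current. A schematic Python rendering under assumed coefficients is shown below; the gains, nominal voltage, and the linear form are illustrative, not the paper's tuned design.

```python
# Schematic d-q voltage droop: each inverter computes its own voltage
# references from locally measured d- and q-axis currents.
V_NOM = 1.0        # nominal d-axis voltage (per unit), assumed
M_D = 0.05         # d-axis droop gain (V per A), assumed
M_Q = 0.05         # q-axis droop gain (V per A), assumed

def droop_references(i_d, i_q):
    """Return (v_d_ref, v_q_ref) for the local voltage controller."""
    v_d_ref = V_NOM - M_D * i_d    # d-axis voltage sags with d-axis current
    v_q_ref = 0.0 - M_Q * i_q      # q-axis reference droops around zero
    return v_d_ref, v_q_ref

# Two inverters seeing unequal currents produce references that push
# the system toward equal sharing without any inter-inverter link.
print(droop_references(0.8, 0.1))
print(droop_references(1.2, -0.1))
```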
Dataset Coverage for Testing Machine Learning Computer Programs | Machine learning programs are non-testable, and thus testing with pseudo oracles is recommended. Although metamorphic testing is effective for testing with pseudo oracles, identifying metamorphic properties has been mostly ad hoc. This paper proposes a systematic method to derive a set of metamorphic properties for machine learning classifiers, support vector machines. The proposal includes a new notion of test coverage for the machine learning programs; this test coverage provides a clear guideline for conducting a series of metamorphic testing. |
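As a flavor of what such a metamorphic property looks like in practice, the sketch below tests one relation that holds mathematically for common SVM kernels: permuting the feature columns consistently in the training and test data must leave predictions unchanged, since linear and RBF kernels depend only on dot products and distances. This is a generic scikit-learn illustration, not the paper's property set or its coverage notion.

```python
# Metamorphic test: a consistent feature permutation must not change
# SVM predictions (kernels depend only on dot products / distances),
# up to floating-point summation-order effects.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
X_train, X_test = X[:150], X[150:]
y_train = y[:150]

perm = np.random.default_rng(0).permutation(X.shape[1])

base = SVC(kernel="rbf", random_state=0).fit(X_train, y_train)
follow = SVC(kernel="rbf", random_state=0).fit(X_train[:, perm], y_train)

source_out = base.predict(X_test)
follow_out = follow.predict(X_test[:, perm])    # same permutation applied

assert (source_out == follow_out).all(), "metamorphic relation violated"
print("feature-permutation relation holds")
```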
A Framework for Institutional Adoption and Implementation of Blended Learning in Higher Education. |
Comparison between metal and biodegradable suture anchors in the arthroscopic treatment of traumatic anterior shoulder instability: a prospective randomized study | The purpose of this study was to compare the clinical outcome of arthroscopic treatment of shoulder instability with metal and biodegradable suture anchors. Arthroscopic stabilization was performed in 78 patients with recurrent traumatic anterior shoulder instability. They were divided into 2 groups of 39 patients each, according to suture anchors used: metal anchors in group 1, and biodegradable anchors in group 2. Results were evaluated by use of the Disabilities of the Arm, Shoulder and Hand (DASH) self-administered questionnaire; Rowe score; Constant score normalized for age and gender, and recurrence of dislocation. On analyzing the results at a 2-year follow-up, we considered the following independent variables: age; gender; arm dominance; duration of symptoms, age at first dislocation, number of dislocations, type of work; type of sport; sports activity level; lesion of the anterior labrum and anterior-inferior gleno-humeral ligament; SLAP lesion, and number of suture anchors. Comparison between groups did not show significant differences for each variable considered. Overall, according to the results, median DASH scores were 4.5 points (range 0–27) in group 1 and 7 points (range 0–25) in group 2 (n.s.); median Rowe scores were 100 points (range 60–100) and 100 points (range 25–100), respectively (n.s.); and median Constant scores were 98 points (range 81–107) and 98 points (range 87–121), respectively (n.s.). Recurrence was observed in 1 patient (2.8%) in group 1 and in 2 patients (5.9%) in group 2. Overall recurrence rate was 4.3%. Univariate and multivariate analysis showed that age, duration of symptoms, number of dislocations, type of work, and type of sports significantly and independently influenced the outcomes. Differences between groups 1 and 2 were not significant. At a short-term follow-up, differences between arthroscopic shoulder stabilization with metal and biodegradable suture anchors were not statistically significant. Clinical relevance of the study is that there is no difference in the use of metal or biodegradable suture anchors for the arthroscopic treatment of shoulder instability. |
Time required for Judgements of Numerical Inequality | An educated adult can tell which of two digits is the larger with virtually no uncertainty. By what process is this accomplished? On the one hand, it is conceivable that such judgements are made in the same way as judgements of stimuli varying along physical continua. On the other hand, numerical judgements may be made at a different, less perceptual and more cognitive, level. For instance, the task may be one of memory access, each possible pair of numerals being stored with a corresponding inequality sign; or perhaps some sort of digital computation is performed, such as counting the space between the two numerical values.
Lipoprotein-associated phospholipase A₂ activity and mass in relation to vascular disease and nonvascular mortality. | OBJECTIVES
To assess whether associations of circulating lipoprotein-associated phospholipase A₂ (Lp-PLA₂) with vascular disease are independent of other risk factors.
METHODS
Lp-PLA₂ activity and mass, lipids and other characteristics were measured at baseline in 19,037 individuals at high risk of vascular disease in a randomized trial of simvastatin with 5-year average follow-up.
RESULTS
Lp-PLA₂ activity and mass were correlated with each other (r = 0.56), lipids and other vascular risk factors. The moderate association of Lp-PLA₂ activity with occlusive coronary events (n = 2531) in analyses adjusted for nonlipid factors (hazard ratio per 1 SD [HR] 1.11, 95% CI 1.06-1.15) became nonsignificant after further adjustment for apolipoproteins (HR 1.02, 0.97-1.06). Such adjustment also attenuated HRs with Lp-PLA₂ mass from 1.08 (1.03-1.12) to 1.05 (1.01-1.09). By contrast, the HR with apolipoprotein-B100 of 1.15 (1.10-1.19) was only slightly attenuated to 1.14 (1.09-1.19) after further adjustment for apolipoprotein A₁ and Lp-PLA₂. Age- and sex-adjusted HRs for other cardiac events (n = 1007) with either Lp-PLA₂ activity or mass were about 1.20, but these HRs were reduced after adjustment for nonlipid factors (activity: 1.11, 1.04-1.18; mass: 1.08, 1.02-1.15). Adjusted HRs for ischaemic stroke (n = 900) were weak and nonsignificant, and those for nonvascular mortality (n = 1040) were 1.01 (0.94-1.09) with activity and 1.12 (1.05-1.19) with mass. Simvastatin reduced Lp-PLA₂ levels by about one-quarter, but simvastatin's vascular protection did not vary with baseline Lp-PLA₂ concentration.
CONCLUSIONS
Associations of Lp-PLA₂ with occlusive coronary events depend considerably on lipid levels, whereas those with other cardiac events appear to reflect confounding from cardiovascular medication and prior vascular disease. |
How can students' diagnostic competence benefit most from practice with clinical cases? The effects of structured reflection on future diagnosis of the same and novel diseases. | PURPOSE
To develop diagnostic competence, students should practice with many examples of clinical problems to build rich mental representations of diseases. How best to enhance learning from such practice remains unknown. This study investigated the effects of structured reflection on cases compared with generating a single diagnosis or a differential diagnosis.
METHOD
In 2012, during the learning phase, 110 fourth-year medical students diagnosed four cases of two criterion diseases under three different experimental conditions: structured reflection, single-diagnosis, or differential-diagnosis. One week later, they diagnosed two novel exemplars of each criterion disease and four cases of new diseases that were not among the cases of the learning phase but were plausible alternative diagnoses.
RESULTS
Diagnostic performance did not differ among the groups in the learning phase. One week later, the reflection group obtained higher mean diagnostic accuracy scores (range: 0-1) than the other groups when diagnosing new exemplars of criterion diseases (reflection: 0.67; single-diagnosis: 0.36, P < .001; differential-diagnosis: 0.51, P = .014) and cases of new diseases (reflection: 0.44; single-diagnosis: 0.32, P = .010; differential-diagnosis: 0.33, P = .015). No difference was found between the single-diagnosis and the differential-diagnosis conditions.
CONCLUSIONS
Structured reflection while practicing with cases enhanced the learning of diagnosis, both for the diseases practiced and for their plausible alternatives, suggesting that reflection not only enriched the mental representations of the practiced diseases relative to more conventional approaches to clinical learning but also influenced the representations of adjacent but different diseases. Structured reflection seems a useful addition to existing clinical teaching methods. |
Adaptive Noise Smoothing Filter for Images with Signal-Dependent Noise | In this paper, we consider the restoration of images with signal-dependent noise. We propose a noise smoothing filter that adapts to local changes in image statistics based on a nonstationary mean, nonstationary variance (NMNV) image model. For images degraded by a class of uncorrelated, signal-dependent noise without blur, the adaptive noise smoothing filter becomes a point processor and is similar to Lee's local statistics algorithm [16]. The filter is able to adapt itself to the nonstationary local image statistics in the presence of different types of signal-dependent noise. For multiplicative noise, the adaptive noise smoothing filter is a systematic derivation of Lee's algorithm, with some extensions that allow different estimators for the local image variance. The advantage of the derivation is that it extends easily to various types of signal-dependent noise. Film-grain and Poisson signal-dependent restoration problems are also considered as examples. All the nonstationary image statistical parameters needed by the filter can be estimated from the noisy image, and no a priori information about the original image is required. |
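To make the multiplicative-noise case above concrete, here is a minimal Python sketch of a Lee-style local-statistics filter, the point processor the abstract generalizes. The window size, noise variance, and function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(noisy, window=7, noise_var=0.01):
    """Local-statistics filter for multiplicative noise y = x * n,
    assuming E[n] = 1 and Var[n] = noise_var (illustrative values).
    The local mean and variance stand in for the NMNV model's
    nonstationary mean and nonstationary variance."""
    local_mean = uniform_filter(noisy, size=window)
    local_sq_mean = uniform_filter(noisy ** 2, size=window)
    local_var = local_sq_mean - local_mean ** 2

    # Estimate the noise-free signal variance; clip negatives to zero.
    signal_var = np.maximum(
        (local_var - noise_var * local_mean ** 2) / (1.0 + noise_var), 0.0)

    # LMMSE-style gain: near 0 in flat regions (strong smoothing),
    # near 1 at edges where local variance is high (detail preserved).
    gain = signal_var / np.maximum(
        signal_var + noise_var * local_mean ** 2, 1e-12)
    return local_mean + gain * (noisy - local_mean)
```

Swapping in a different estimator for `signal_var` is exactly the kind of extension to the local image variance that the abstract mentions.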
Effect of impaired renal function on the pharmacokinetics of tomopenem (RO4908463/CS-023), a novel carbapenem. | The objective of this study was to assess the impact of impaired renal function on the pharmacokinetics of tomopenem (RO4908463/CS-023), a novel carbapenem antibiotic, and its major metabolite in humans. Thirty-two subjects were enrolled in an open-label, two-center study. Subjects were evenly assigned to one of four groups, based on creatinine clearance ranges of ≥80, 50 to 79, 30 to 49, and <30 ml/min. The drug was given as a single 1,500-mg constant-rate intravenous infusion over 60 min. There were no safety concerns with increasing renal dysfunction. Renal impairment had a significant impact on exposure of both tomopenem and its metabolite. Mean (± standard deviation) areas under the curve for tomopenem increased with decreasing renal function, from 191 ± 35.2 to 1,037 ± 238 µg·h/ml. The maximum concentration of drug in plasma (C(max)) increased with a maximum difference of 44% between the severe and normal groups. In contrast, the corresponding increase in C(max) of the metabolite was much higher, at 174%. Total body clearance was linearly correlated with creatinine clearance (R² = 0.97; P < 0.0001). Renal clearance for tomopenem decreased with increasing severity of disease, with mean values decreasing from 4.63 ± 0.89 to 0.59 ± 0.19 liters/h. The results of this study indicated a strong correlation between the creatinine clearance and total clearance of tomopenem. While renal impairment appeared to have a significant effect on the pharmacokinetics of tomopenem, an even greater effect was seen on the elimination of the inactive metabolite. |
SARL: A General-Purpose Agent-Oriented Programming Language | The development of complex software systems requires appropriate high-level features to better and more easily tackle the new requirements in terms of interactions, concurrency, and distribution. This requires a paradigm change in software engineering and in the corresponding programming languages. We are convinced that agent-oriented programming may support this change by focusing on a small corpus of commonly accepted concepts and a corresponding programming language in line with current developers' programming practices. This paper introduces SARL, a new general-purpose agent-oriented programming language undertaking this challenge. SARL comes with full support in the Eclipse IDE for compilation and debugging, and with a new version 2.0 of the Janus platform for execution purposes. The main perspective that guided the creation of SARL is the establishment of an open and easily extensible language. Our expectation is to provide the community with a common forum in terms of a first working test bed to study and compare various programming alternatives and the associated metamodels. |
Feature Extraction and Classification of Brain Tumor using MRI | An MRI scan of a brain tumor gives more detailed information about the brain than other scans. A brain tumor is an uncontrolled and unwanted multiplication and growth of cells in the body. Image processing of brain MRI is essential because accurate detection of the type of brain abnormality can reduce the risk of a fatal outcome. Segmentation plays a major role in brain tumor detection. This paper outlines an efficient image segmentation technique for brain tumor images affecting different ventricles. K-means segmentation is used for brain tumor detection and extraction. Texture features of the detected tumor are extracted using the Gray Level Co-occurrence Matrix (GLCM). After selecting the significant features, classification is performed to label the images as normal or abnormal (tumor detected) using a Support Vector Machine (SVM). |
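As a rough illustration of the pipeline described above (K-means segmentation, GLCM texture features, SVM classification), here is a hedged Python sketch using scikit-image and scikit-learn. The function names follow recent scikit-image releases, and the segmentation heuristic and feature choices are assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from skimage.feature import graycomatrix, graycoprops

def segment_tumor(image, k=3):
    """Crude K-means segmentation on intensities of an 8-bit MRI slice;
    assumes the brightest cluster corresponds to the suspected tumor."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(
        image.reshape(-1, 1)).reshape(image.shape)
    brightest = max(range(k), key=lambda c: image[labels == c].mean())
    return np.where(labels == brightest, image, 0).astype(np.uint8)

def glcm_features(region):
    """Texture descriptors from the gray-level co-occurrence matrix."""
    glcm = graycomatrix(region, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical training data: one feature vector per scan,
# y = 0 for normal, 1 for abnormal (tumor detected).
# X = np.vstack([glcm_features(segment_tumor(img)) for img in images])
# clf = SVC(kernel="rbf").fit(X, y)
```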
What's in a name?: an unsupervised approach to link users across communities | In this paper, we consider the problem of linking users across multiple online communities. Specifically, we focus on the alias-disambiguation step of this user linking task, which is meant to differentiate users with the same usernames. We start by quantitatively analyzing the importance of the alias-disambiguation step through a survey of 153 volunteers and an experimental analysis on a large dataset from About.me (75,472 users). The analysis shows that the alias-disambiguation solution can address a major part of the user linking problem in terms of the coverage of true pairwise decisions (46.8%). To the best of our knowledge, this is the first study of human behavior with regard to the usage of online usernames. We then cast the alias-disambiguation step as a pairwise classification problem and propose a novel unsupervised approach. The key idea of our approach is to automatically label training instances based on two observations: (a) rare usernames are likely owned by a single natural person, e.g., pennystar88 as a positive instance; (b) common usernames are likely owned by different natural persons, e.g., tank as a negative instance. We propose using the n-gram probabilities of usernames to estimate the rareness or commonness of usernames. These two observations are verified using the dataset of Yahoo! Answers. The empirical evaluations on 53 forums verify (a) the effectiveness of the classifiers with the automatically generated training data and (b) that the rareness and commonness of usernames can help user linking. We also analyze the cases where the classifiers fail. |
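The n-gram scoring idea above can be sketched in a few lines of Python. This is a hedged illustration, not the authors' code; the bigram order, add-one smoothing, and class name are assumptions.

```python
import math
from collections import Counter

def char_bigrams(name):
    padded = f"^{name.lower()}$"          # mark start and end of the name
    return [padded[i:i + 2] for i in range(len(padded) - 1)]

class UsernameRareness:
    """Scores usernames by average character-bigram log-probability;
    low scores suggest rare names (likely one natural person),
    high scores suggest common names (likely many people)."""

    def __init__(self, usernames):
        self.bigrams = Counter()
        self.contexts = Counter()
        for name in usernames:
            for g in char_bigrams(name):
                self.bigrams[g] += 1
                self.contexts[g[0]] += 1

    def score(self, name):
        vocab = len(self.bigrams) + 1     # add-one smoothing denominator
        grams = char_bigrams(name)
        logp = sum(math.log((self.bigrams[g] + 1) /
                            (self.contexts[g[0]] + vocab)) for g in grams)
        return logp / len(grams)          # normalize by name length

# model = UsernameRareness(corpus_of_usernames)
# model.score("pennystar88") should come out well below model.score("tank"),
# yielding positive and negative training pairs automatically.
```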
Potential Bacillus probiotics enhance bacterial numbers, water quality and growth during early development of white shrimp (Litopenaeus vannamei). | Epizootic outbreaks and the occurrence of multi-antibiotic-resistant pathogenic bacteria in aquaculture have prompted the development of effective probiotics for sustainable culture. This study examined the effectiveness of two forms of mixed Bacillus probiotics (probiotic A and probiotic B) and of the mode of probiotic administration on growth, bacterial numbers and water quality during rearing of white shrimp (Litopenaeus vannamei) in two separate experiments: (1) larval stages and (2) postlarval (PL) stages. The form of the Bacillus probiotic and the mode of administration did not affect growth or survival of larval to PL shrimp. The compositions of Bacillus species in probiotic A and probiotic B did not affect growth or survival of larvae. However, postlarvae treated with probiotic B exhibited higher (P<0.05) growth than those treated with probiotic A and controls, indicating that Bacillus probiotic composition affects the growth of PL shrimp. Total heterotrophic bacteria and Bacillus numbers in larval and PL shrimp and in culture water of the treated groups were higher (P<0.05) than in controls. Levels of pH, ammonia and nitrite for the treated shrimp were significantly decreased compared with the controls. Microencapsulated Bacillus probiotic was effective for rearing of PL L. vannamei. This investigation showed that administration of mixed Bacillus probiotics significantly improved growth and survival of PL shrimp, increased beneficial bacteria in shrimp and culture water, and enhanced water quality in terms of pH, ammonia and nitrite levels. |
An End to End Model for Automatic Music Generation: Combining Deep Raw and Symbolic Audio Networks | We develop an approach to combine two types of music generation models, namely symbolic and raw audio models. While symbolic models typically operate at the note level and are able to capture long-term dependencies, they lack the expressive richness and nuance of performed music. Raw audio models train directly on raw audio waveforms, and can be used to produce expressive music; however, these models typically lack structure and long-term dependencies. We describe a work-in-progress model that trains a raw audio model based on the recently-proposed WaveNet architecture, but that incorporates the notes of the composition as a secondary input to the network. When generating novel compositions, we utilize an LSTM network whose output feeds into the raw audio model, thus yielding an end-to-end model that generates raw audio outputs combining the best of both worlds. We describe initial results of our approach, which we believe to show considerable promise for structured music generation. |
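Since the abstract describes a work-in-progress architecture, the following PyTorch sketch only illustrates the general conditioning idea: a symbolic LSTM whose per-timestep output is injected into a causal dilated-convolution audio stack. The layer sizes, residual wiring, and 8-bit output are assumptions, not the authors' WaveNet configuration.

```python
import torch
import torch.nn as nn

class ConditionedAudioModel(nn.Module):
    """Toy sketch: causal dilated convolutions over raw audio, with
    note embeddings from a symbolic LSTM added as a secondary,
    per-timestep conditioning input."""
    def __init__(self, n_notes=128, cond_dim=32, channels=64, n_layers=6):
        super().__init__()
        self.note_lstm = nn.LSTM(n_notes, cond_dim, batch_first=True)
        self.input_conv = nn.Conv1d(1, channels, 1)
        self.layers = nn.ModuleList()
        self.cond_proj = nn.ModuleList()
        for i in range(n_layers):
            self.layers.append(nn.Conv1d(channels, channels, 2, dilation=2 ** i))
            self.cond_proj.append(nn.Conv1d(cond_dim, channels, 1))
        self.output = nn.Conv1d(channels, 256, 1)   # 8-bit mu-law logits

    def forward(self, audio, notes):
        # audio: (B, 1, T); notes: (B, T, n_notes) one-hot piano roll
        cond, _ = self.note_lstm(notes)              # (B, T, cond_dim)
        cond = cond.transpose(1, 2)                  # (B, cond_dim, T)
        h = self.input_conv(audio)
        for conv, proj in zip(self.layers, self.cond_proj):
            pad = conv.dilation[0] * (conv.kernel_size[0] - 1)
            x = nn.functional.pad(h, (pad, 0))       # left-pad => causal
            h = torch.tanh(conv(x) + proj(cond)) + h # conditioned residual
        return self.output(h)                        # (B, 256, T)
```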
Heritage and early history of the boundary element method | This article explores the rich heritage of the boundary element method (BEM) by examining its mathematical foundation, from potential theory, boundary value problems, Green's functions, and Green's identities to Fredholm integral equations. The 18th- to 20th-century mathematicians whose contributions were key to the theoretical development are honored with short biographies. The origin of the numerical implementation of boundary integral equations can be traced to the 1960s, when electronic computers had become available. The full emergence of the numerical technique known as the boundary element method occurred in the late 1970s. This article reviews the early history of the boundary element method up to the late 1970s. |
An Overview of Graph Databases | Graph databases are used for social networking and website link structure, since graphs are a natural way to store connections among users, as in social networking sites like Facebook, LinkedIn, and Twitter. Such sites have complicated networks of relationships, which are inefficient to represent with the relational model because of the large number of table joins required. Several well-known graph databases are available, such as Neo4j, flockdb, allegrograph, sones, trinity, hypergraphdb, infinitegraph, infogrid, orientdb, and DEX. In this paper, we present a comparison of current graph databases and observe that Neo4j performs best among them. |
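As a small illustration of why graph traversals avoid join-heavy SQL, here is a hedged Python sketch using the official Neo4j driver. The connection URI, credentials, and schema are placeholders, not part of the paper.

```python
from neo4j import GraphDatabase

# Placeholder connection details; adjust for a real deployment.
driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

def add_follow(tx, a, b):
    # One stored edge per relationship: traversals follow pointers
    # instead of joining tables on foreign keys.
    tx.run("MERGE (x:User {name: $a}) "
           "MERGE (y:User {name: $b}) "
           "MERGE (x)-[:FOLLOWS]->(y)", a=a, b=b)

def friends_of_friends(tx, name):
    result = tx.run("MATCH (u:User {name: $name})-[:FOLLOWS*2]->(fof) "
                    "RETURN DISTINCT fof.name AS name", name=name)
    return [record["name"] for record in result]

with driver.session() as session:
    session.execute_write(add_follow, "alice", "bob")
    session.execute_write(add_follow, "bob", "carol")
    print(session.execute_read(friends_of_friends, "alice"))  # ['carol']
```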
The effects of large-dose propofol on cerebrovascular pressure autoregulation in head-injured patients. | UNLABELLED
In healthy individuals, cerebrovascular pressure autoregulation is preserved or even improved when propofol is infused. We examined the effect of an increase in propofol plasma concentration on pressure autoregulation in 10 head-injured patients. Using target-controlled infusions, the static rate of autoregulation was determined at a moderate (2.3 ± 0.4 µg/mL) and a large (4.3 ± 0.04 µg/mL) plasma target concentration of propofol. Using norepinephrine to control cerebral perfusion pressure, transcranial Doppler measurements from the middle cerebral artery were made at a cerebral perfusion pressure of 70 and 85 mm Hg at each propofol concentration. Middle cerebral artery flow velocities at the large propofol concentration were significantly lower than at the moderate concentration, without any concurrent increase in arterio-jugular difference in oxygen content, a finding compatible with maintained flow-metabolism coupling. Despite this, the static rate of autoregulation decreased significantly, from 54% ± 36% to 28% ± 35% (P = 0.029). Our data suggest that after head injury, the cerebrovascular effects of propofol are different from those observed in healthy individuals. We propose that large doses of propofol should be used cautiously in head-injured patients, because there is the potential to increase the injured brain's vulnerability to secondary insults.
IMPLICATIONS
Propofol is used for sedation and control of intracranial pressure in head-injured patients. In contrast to previous data from healthy individuals, we show a deterioration of cerebrovascular pressure autoregulation with fast propofol infusion rates after head injury. Large propofol doses may increase the injured brain's vulnerability to secondary insults. |
Learning to Learn for Global Optimization of Black Box Functions | We present a learning to learn approach for training recurrent neural networks to perform black-box global optimization. In the meta-learning phase we use a large set of smooth target functions to learn a recurrent neural network (RNN) optimizer, which is either a long short-term memory network or a differentiable neural computer. After learning, the RNN can be applied to learn policies in reinforcement learning, as well as other black-box learning tasks, including continuous correlated bandits and experimental design. We compare this approach to Bayesian optimization, with emphasis on the issues of computation speed, horizon length, and exploration-exploitation trade-offs. |
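A toy version of the meta-learning loop can clarify the setup: an LSTM maps the previous query and observed value to the next query point, and its weights are trained so that the summed function values over a rollout are small. The PyTorch sketch below uses random quadratics as stand-ins for the paper's smooth target functions; all sizes and objective details are assumptions.

```python
import torch
import torch.nn as nn

class RNNOptimizer(nn.Module):
    """LSTM that proposes the next query point for a black-box function,
    given the previous query and its observed value."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.hidden = hidden
        self.cell = nn.LSTMCell(dim + 1, hidden)
        self.head = nn.Linear(hidden, dim)

    def rollout(self, f, dim=2, horizon=20):
        h = torch.zeros(1, self.hidden)
        c = torch.zeros(1, self.hidden)
        x = torch.zeros(1, dim)          # initial query at the origin
        y = f(x)
        total = y.clone()
        for _ in range(horizon):
            h, c = self.cell(torch.cat([x, y], dim=-1), (h, c))
            x = self.head(h)             # next query point
            y = f(x)                     # black-box evaluation
            total = total + y            # meta-loss: sum of observed values
        return total

opt_net = RNNOptimizer()
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)
for step in range(1000):
    # Fresh random quadratic each episode (a smooth target function).
    A, b = torch.randn(2, 2), torch.randn(2)
    f = lambda x: ((x @ A.T - b) ** 2).sum(dim=-1, keepdim=True)
    loss = opt_net.rollout(f).sum()
    meta_opt.zero_grad(); loss.backward(); meta_opt.step()
```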
Poor numeracy skills are associated with glycaemic control in Type 1 diabetes. | AIMS
To assess the numeracy and literacy skills of individuals with Type 1 diabetes and determine if there is a relationship with achieved glycaemic control independent of their duration of diabetes, diabetes education, demographic and socio-economic factors.
METHODS
One hundred and twelve patients completed the study (mean current age 43.8 ± 12.5 years, 47% male, mean duration of diabetes 22.0 ± 13.2 years) out of 650 randomly selected patients from the Bournemouth Diabetes and Endocrine Centre's diabetes register. The Skills for Life Initial Assessments were used to measure numeracy and literacy. These indicate skills levels up to level 2, equivalent to the national General Certificate of Secondary Education grades A*-C. HbA(1c) was also measured. Pearson's correlation was used to measure the correlation of numeracy and literacy scores with HbA(1c). To compare mean HbA(1c) between those with or without level 2 skills, t-tests were used, and multiple linear regression was used to investigate whether any differences were independent of duration of diabetes, diabetes education, demographic and socio-economic factors.
RESULTS
Literacy was not associated with achieved HbA(1c). In contrast, participants with numeracy skills at level 2 or above achieved a lower HbA(1c) than those with numeracy skills below level 2 (P = 0.027). Although higher socio-economic status was associated with lower mean HbA(1c), the relationship between numeracy and HbA(1c) appeared to be independent of socio-economic factors.
CONCLUSIONS
Low numeracy skills were associated with poorer diabetes control. Assessment of numeracy skills may be relevant to the structure of diabetes education programmes. |