title | abstract |
---|---|
Improving Monocular SLAM: using Depth Estimating CNN | To bring down the number of traffic accidents and increase people’s mobility, companies such as Robot Engineering Systems (RES) try to put automated vehicles on the road. RES is developing the WEpod, a shuttle capable of autonomously navigating through mixed traffic. This research has been done in cooperation with RES to improve the localization capabilities of the WEpod. The WEpod currently localizes using its GPS and lidar sensors. These have proven to be insufficiently accurate and reliable to safely navigate through traffic. Therefore, other methods of localization and mapping have been investigated. The primary method investigated in this research is monocular Simultaneous Localization and Mapping (SLAM). Based on literature and practical studies, ORB-SLAM has been chosen as the implementation of SLAM. Unfortunately, ORB-SLAM is unable to initialize when applied to WEpod images. Literature has shown that this problem can be solved by adding depth information to the inputs of ORB-SLAM. Obtaining depth information for the WEpod images is not a trivial task: the sensors on the WEpod are not capable of creating the required dense depth-maps. A Convolutional Neural Network (CNN) could be used to create the depth-maps. This research investigates whether adding a depth-estimating CNN solves this initialization problem and increases the tracking accuracy of monocular ORB-SLAM. A well-performing CNN is chosen and combined with ORB-SLAM. Images pass through the depth-estimating CNN to obtain depth-maps. These depth-maps, together with the original images, are used in ORB-SLAM, keeping the whole setup monocular. ORB-SLAM with the CNN is first tested on the Kitti dataset. The Kitti dataset is used since monocular ORB-SLAM initializes on Kitti images and ground-truth depth-maps can be obtained for Kitti images. Monocular ORB-SLAM’s tracking accuracy has been compared to ORB-SLAM with ground-truth depth-maps and to ORB-SLAM with estimated depth-maps. This comparison shows that adding estimated depth-maps increases the tracking accuracy of ORB-SLAM, but not as much as the ground-truth depth-maps. The same setup is tested on WEpod images. The CNN is fine-tuned on 7481 Kitti images as well as on 642 WEpod images. The performance of both CNN versions on WEpod images is compared, and both are used in combination with ORB-SLAM. The CNN fine-tuned on the WEpod images does not perform well, missing details in the estimated depth-maps. However, this is enough to solve the initialization problem of ORB-SLAM. The combination of ORB-SLAM and the Kitti fine-tuned CNN has a better tracking accuracy than ORB-SLAM with the WEpod fine-tuned CNN. These results show that adding the CNN solves the initialization problem of monocular ORB-SLAM on WEpod images and increases the tracking accuracy. This makes the setup applicable for improving the current localization methods on the WEpod. Using only this setup for localization on the WEpod is not yet possible; more research is necessary. Adding this setup to the current localization methods of the WEpod could improve the localization accuracy of the WEpod. This would make it safer for the WEpod to navigate through traffic. This research sets the next step towards creating a fully autonomous vehicle, which reduces traffic accidents and increases the mobility of people. |
Inverted index compression and query processing with optimized document ordering | Web search engines use highly optimized compression schemes to decrease inverted index size and improve query throughput, and many index compression techniques have been studied in the literature. One approach taken by several recent studies first performs a renumbering of the document IDs in the collection that groups similar documents together, and then applies standard compression techniques. It is known that this can significantly improve index compression compared to a random document ordering. We study index compression and query processing techniques for such reordered indexes. Previous work has focused on determining the best possible ordering of documents. In contrast, we assume that such an ordering is already given, and focus on how to optimize compression methods and query processing for this case. We perform an extensive study of compression techniques for document IDs and present new optimizations of existing techniques which can achieve significant improvements in both compression and decompression performance. We also propose and evaluate techniques for compressing frequency values for this case. Finally, we study the effect of this approach on query processing performance. Our experiments show very significant improvements in index size and query processing speed on the TREC GOV2 collection of 25.2 million web pages. |
Implementation of cancer pain guidelines by acute care nurse practitioners using an audit and feedback strategy. | PURPOSE
Despite the availability of clinical practice guidelines (CPGs) for cancer pain, consistent integration of these principles into practice has not been achieved. The optimal method for implementing CPGs and the impact of guidelines on healthcare outcomes remain uncertain. This study evaluated the effect of an audit and feedback (A/F) intervention on nurse practitioner (NP) implementation of cancer pain CPGs and on hospitalized patients' self-report of pain and satisfaction with pain relief.
DATA SOURCES
Eight NPs and two groups of 96 patients were the sources of data. Eligible patients in both groups completed the Brief Pain Inventory-Short Form (BPI-SF) within 24 h of admission and every 48 h until discharge. During A/F, NPs received weekly feedback on pain scores and guideline adherence.
CONCLUSIONS
Nurse practitioner adherence to CPGs increased during A/F. Pain intensity did not significantly differ between groups. Intervention group patients reported significantly less overall pain interference (p < .0001), interference with general activity (p = .0003), and sleep (p = .006). Satisfaction with pain relief increased from 68.4% to 95.1% during A/F (p < .0001).
IMPLICATIONS FOR PRACTICE
A/F is an effective strategy to promote CPG use. Improved functional status in the absence of decreased pain severity underscores the need to consider symptom clusters when studying pain. |
Advanced Voltage Support and Active Power Flow Control in Grid-Connected Converters Under Unbalanced Conditions | Supporting the grid and improving its reliability have recently become major requirements for large distributed generation units. Under most grid faults, the accuracy of the traditional voltage support schemes (VSSs) is dramatically affected due to the existence of the zero-sequence voltage. Also, the traditional VSSs have been used only in the STATCOM applications, where the active power is zero. This paper proposes an advanced VSS in the converter-interfaced units, called zero-sequence compensated voltage support (ZCVS), to accurately regulate the three-phase voltages of the connection point within the pre-set safety limits. The proposed scheme not only compensates the zero-sequence component but also considers the active power injection. Unlike the traditional methods, the proposed VSS is adapted even in resistive distribution systems. The contribution of this paper is, however, threefold. As the second contribution, the limited active power oscillation (LAPO) is proposed to be augmented to the ZCVS. This feature limits the oscillation to a specified value which provides an adjustable dc-link voltage oscillation setting while simultaneously supporting the ac host grid, even under severe unbalanced faults. Third, the maximum active power delivery (MAPD) to the ac grid is also formulated for the ZCVS. The successful results of the proposed support scheme and complementary strategies are verified using selected simulation and experimental test cases. |
Observation of two new N* resonances in the decay ψ(3686)→ppπ0. | Based on 106×10^6 ψ(3686) events collected with the BESIII detector at the BEPCII facility, a partial wave analysis of ψ(3686)→ppπ0 is performed. The branching fraction of this channel has been determined to be B(ψ(3686)→ppπ0) = (1.65±0.03±0.15)×10^{-4}. In this decay, 7 N* intermediate resonances are observed. Among these, two new resonances, N(2300) and N(2570) are significant, one 1/2^+ resonance with a mass of 2300^{+40+109}_{-30-0} MeV/c^2 and width of 340^{+30+110}_{-30-58} MeV/c^2, and one 5/2^- resonance with a mass of 2570^{+19+34}_{-10-10} MeV/c^2 and width of 250^{+14+69}_{-24-21} MeV/c^2. For the remaining 5 N* intermediate resonances [N(1440), N(1520), N(1535), N(1650) and N(1720)], the analysis yields mass and width values that are consistent with those from established resonances. |
Beyond csr? Business, poverty and social justice: an introduction | How far can Corporate Social Responsibility (CSR) initiatives help to address poverty, social exclusion and other development challenges? What is the balance of responsibilities between state, market and civil society in addressing these problems and meeting the UN Millennium Development Goals (MDGs)? What new tools, strategies and methodologies are required to harness the positive potential contribution of business to development and deter corporate irresponsibility? This special issue brings together a dynamic mix of academics and development specialists to address these themes in a focused and innovative way. In this introductory article, we consider some of the key cross-cutting themes and insights raised by the contributions. The aim of the introduction and the special issue is to start to fill the gap in our understanding of how, when and through what means business can help to reduce poverty, while recognising the equally powerful potential of the business community to exacerbate poverty. Taking particular CSR initiatives as a starting point, we seek to look at the broader developmental footprint of business-as-usual strategies, as well as those which fall under the banner of CSR, to gain a fuller picture of how business is implicated in the development process. Corporate Social Responsibility (CSR) has been adopted as an approach to international development. But who does it benefit, how and why? Does CSR have the potential to redefine the meaning of good business practice as meeting the needs of poor and marginalised groups? Or is there a danger that, by basing development policies around a business case, we fail to tackle, or worse, deepen, the multiple forms of inequality and social exclusion that characterise contemporary forms of poverty? International organisations such as the United Nations and the World Bank, and national development agencies such as the Department for International Development (DFID) in the UK, have embraced CSR in the hope that the private sector can play a key role in achieving developmental goals aimed at poverty alleviation. The UK’s DFID is confident that, ‘By following socially responsible practices, the growth generated by the private sector will be more inclusive, equitable and poverty reducing’. The idea that the market is a critical vehicle for tackling poverty is emphasised both in DFID’s report ‘Making Market Systems Work Better for the Poor’ and in the report Unleashing Entrepreneurship: Making Business Work for the Poor by the Commission on the Private Sector and Development, convened by former UN Secretary General, Kofi Annan. At the same time there is also an emerging business case for addressing poverty directly. Within the business community the notion that there is a fortune awaiting those entrepreneurs who target their products at the ‘bottom of the pyramid’ (BOP) has recently become very influential.
CK Prahalad and Stuart Hart, the key proponents of the idea, suggest that private firms can help reduce poverty, and make profits at the same time, by inventing new business models for providing products and services to the world’s poor—the four billion people who live on less than $2000 a year. It assumes that the poor have cumulatively a large amount of disposable income but that their needs are poorly served by firms, which are geared towards middle-income and high-income consumers. Therefore, partnerships with non-governmental organisations, development agencies and local communities are said to be able to help private firms to develop new markets, while providing the poor with access to markets and services. Although such an approach is not directly concerned with the broader social and environmental responsibilities of business, and its conceptualisation of poverty is itself problematic, the BOP idea echoes the focus on ‘win-win outcomes’ in contemporary CSR debates, namely the assumption that CSR can contribute to the welfare of ‘stakeholders’ while contributing to a firm’s financial bottom line. This win-win logic is dramatically emphasised in the title of the recent book by Craig and Peter Wilson: Make Poverty Business: Increase Profits and Reduce Risks by Engaging with the Poor. However, while there are clearly zones of compatibility between business-led CSR initiatives and efforts by the development community to engage business in efforts to tackle poverty, CSR as a business tool is distinct from CSR as a development tool. CSR emerged among leading firms and business schools as a public relations tool, a way to deflect criticism, engage critics and potentially capitalise on emerging business opportunities associated with doing, and being seen to be doing, good. This is a far cry, however, from constructing corporate strategies that are aligned with the pressing need to tackle poverty and social exclusion across the majority world. Despite recent interest in ‘new’ business models such as ‘bottom of the pyramid’, which present the business case for viewing poorer consumers as a huge untapped market for products targeted to their needs, poverty more often enters business thinking as a problem. For instance, firms producing and trading in oil or diamonds often operate in conflict zones, or areas of social unrest over contested land rights and resource revenues. There is a clear difference then between redefining the poor as business opportunity and viewing the poor as a problem, either an obstacle to smooth-running business operations or, more often than not, someone else’s problem, normally the government’s. |
A Counterexample to Theorems of Cox and Fine | Cox's well-known theorem justifying the use of probability is shown not to hold in finite domains. The counterexample also suggests that Cox's assumptions are insufficient to prove the result even in infinite domains. The same counterexample is used to disprove a result of Fine on comparative conditional probability. |
Recent Advancements in Retinal Vessel Segmentation | Retinal vessel segmentation is a key step towards the accurate visualization, diagnosis, early treatment and surgery planning of ocular diseases. For the last two decades, a tremendous amount of research has been dedicated to developing automated methods for segmentation of blood vessels from retinal fundus images. Despite these efforts, segmentation of retinal vessels still remains a challenging task due to the presence of abnormalities, varying size and shape of the vessels, non-uniform illumination and anatomical variability between subjects. In this paper, we carry out a systematic review of the most recent advancements in retinal vessel segmentation methods published in the last five years. The objectives of this study are as follows: first, we discuss the most crucial preprocessing steps that are involved in accurate segmentation of vessels. Second, we review the most recent state-of-the-art retinal vessel segmentation techniques, which are classified into different categories based on their main principle. Third, we quantitatively analyse these methods in terms of their sensitivity, specificity, accuracy, and area under the curve, and discuss newly introduced performance metrics in the current literature. Fourth, we discuss the advantages and limitations of the existing segmentation techniques. Finally, we provide an insight into active problems and possible future directions towards building a successful computer-aided diagnostic system. |
Anatomy of the normal diaphragm. | The thoracic diaphragm is a dome-shaped septum, composed of muscle surrounding a central tendon, which separates the thoracic and abdominal cavities. The function of the diaphragm is to expand the chest cavity during inspiration and to promote occlusion of the gastroesophageal junction. This article provides an overview of the normal anatomy of the diaphragm. |
Accurate Segmentation of Cervical Cytoplasm and Nuclei Based on Multiscale Convolutional Network and Graph Partitioning | In this paper, a multiscale convolutional network (MSCN) and graph-partitioning-based method is proposed for accurate segmentation of cervical cytoplasm and nuclei. Specifically, deep learning via the MSCN is explored to extract scale invariant features, and then, segment regions centered at each pixel. The coarse segmentation is refined by an automated graph partitioning method based on the pretrained feature. The texture, shape, and contextual information of the target objects are learned to localize the appearance of distinctive boundary, which is also explored to generate markers to split the touching nuclei. For further refinement of the segmentation, a coarse-to-fine nucleus segmentation framework is developed. The computational complexity of the segmentation is reduced by using superpixel instead of raw pixels. Extensive experimental results demonstrate that the proposed cervical nucleus cell segmentation delivers promising results and outperforms existing methods. |
Tree-Based Multi-dimensional Range Search on Encrypted Data with Enhanced Privacy | With searchable encryption, a data user is able to perform meaningful search on encrypted data stored in the public cloud without revealing data privacy. Besides handling simple queries (e.g., keyword queries), complex search functions, such as multi-dimensional (conjunctive) range queries, have also been studied in several approaches to support the search of multi-dimensional data. However, current works supporting multi-dimensional range queries either only achieve linear search complexity or reveal additional private information to the public cloud. In this paper, we propose a tree-based symmetric-key searchable encryption to support multi-dimensional range queries on encrypted data. Besides protecting data privacy, our proposed scheme is able to achieve faster-than-linear search, query privacy and single-dimension privacy simultaneously compared to previous solutions. More specifically, we formally define the security of our proposed scheme, prove that it is selectively secure, and demonstrate its efficiency with experiments over a real-world dataset. |
Stay On-Topic: Generating Context-Specific Fake Restaurant Reviews | Automatically generated fake restaurant reviews are a threat to online review systems. Recent research has shown that users have difficulties in detecting machine-generated fake reviews hiding among real restaurant reviews. The method used in this work (char-LSTM ) has one drawback: it has difficulties staying in context, i.e. when it generates a review for specific target entity, the resulting review may contain phrases that are unrelated to the target, thus increasing its detectability. In this work, we present and evaluate a more sophisticated technique based on neural machine translation (NMT) with which we can generate reviews that stay on-topic. We test multiple variants of our technique using native English speakers on Amazon Mechanical Turk. We demonstrate that reviews generated by the best variant have almost optimal undetectability (class-averaged F-score 47%). We conduct a user study with experienced users and show that our method evades detection more frequently compared to the state-of-the-art (average evasion 3.2/4 vs 1.5/4) with statistical significance, at level α = 1% (Section 4.3). We develop very effective detection tools and reach average F-score of 97% in classifying these. Although fake reviews are very effective in fooling people, effective automatic detection is still feasible. |
Melasma: treatment strategy. | Melasma, a hypermelanosis of the face, is a common skin problem of middle-aged women of all racial groups, especially those with a dark complexion. Its precise etio-pathogenesis remains elusive; genetic influences, exposure to sunlight, pregnancy, oral contraceptives, estrogen-progesterone therapies, thyroid dysfunction, cosmetics, and drugs have been proposed. Centro-facial, malar, and mandibular patterns are well recognized. Epidermal pigmentation appears brown/black, while dermal pigmentation is blue in color; the two can be distinguished by Wood's lamp illumination. The difference may be inapparent with the mixed type of melasma in skin types V and VI. An increase in melanin in the epidermis (basal and suprabasal layers) and/or dermis is the prime defect. There is an increased expression of tyrosinase-related protein-1, which is involved in eumelanin synthesis. The use of broad-spectrum sunscreen is important; lightening agents like retinoic acid (tretinoin) and azelaic acid, as well as combination therapies containing hydroquinone, tretinoin, and corticosteroids, have been used in the treatment of melasma and are thought to have increased efficacy as compared with monotherapy. Quasi-drugs, placental extracts, ellagic acid, chamomilla extract, butylresorcinol, tranexamic acid, methoxy potassium salicylate, adenosine monophosphate disodium salt, dipropyl-biphenyl-2,2'-diol, (4-hydroxyphenyl)-2-butanol, and tranexamic acid cetyl ester hydrochloride, in addition to kojic and ascorbic acid, have been used. Chemical peeling is a good adjunct. Laser treatment is worthwhile. |
Understanding Music Sharing Behavior on Social Network Services | Purpose – The purpose of this paper is to understand music sharing behaviour on social networking services (SNS). This study suggests and examines a research model which focuses on the influences of user motivations, such as self-expression, ingratiation, altruism, and interactivity, on music sharing behaviour in SNS through social motivation factors. Design/methodology/approach – Data were collected from 153 Korean SNS (i.e. Cyworld, Naver Blog, Daum Blog, and Tistory) users, who have experience in purchasing music and legally sharing it on SNS. The partial least squares method was used to analyse the measurement and structural models. Findings – The study shows that interactivity, perceived ease of use, self-expression, social presence, and social identity are significant positive predictors of music sharing intention on SNS. Research limitations/implications – This research is significant in light of recent interest in user activities in SNS. Better understanding of the music sharing behaviour on SNS can be prompted by reflecting cultural differences in selecting the SNS for validation with a larger sample size. Practical implications – The findings emphasise the importance of providing users with interactive, self-expressive, and easily manageable services in order to increase their intention to share music through SNS. Service providers need to focus on improving the user experience of the systems. Originality/value – SNS based online music services have been increasing and are a new business model of music content distribution. However no academic research has examined music related services on SNS. This study is the first empirical study analysing music sharing behaviour on SNS. |
Topic Aware Neural Response Generation | We consider incorporating topic information into a sequence-to-sequence framework to generate informative and interesting responses for chatbots. To this end, we propose a topic aware sequence-to-sequence (TA-Seq2Seq) model. The model utilizes topics to simulate prior human knowledge that guides them to form informative and interesting responses in conversation, and leverages topic information in generation by a joint attention mechanism and a biased generation probability. The joint attention mechanism summarizes the hidden vectors of an input message as context vectors by message attention and synthesizes topic vectors by topic attention from the topic words of the message obtained from a pre-trained LDA model, with these vectors jointly affecting the generation of words in decoding. To increase the possibility of topic words appearing in responses, the model modifies the generation probability of topic words by adding an extra probability item to bias the overall distribution. Empirical studies on both automatic evaluation metrics and human annotations show that TA-Seq2Seq can generate more informative and interesting responses, significantly outperforming state-of-the-art response generation models. |
Evaluating iterative optimization across 1000 datasets | While iterative optimization has become a popular compiler optimization approach, it is based on a premise which has never been truly evaluated: that it is possible to learn the best compiler optimizations across data sets. Up to now, most iterative optimization studies find the best optimizations through repeated runs on the same data set. Only a handful of studies have attempted to exercise iterative optimization on a few tens of data sets.
In this paper, we truly put iterative compilation to the test for the first time by evaluating its effectiveness across a large number of data sets. We therefore compose KDataSets, a data set suite with 1000 data sets for 32 programs, which we release to the public. We characterize the diversity of KDataSets, and subsequently use it to evaluate iterative optimization. We demonstrate that it is possible to derive a robust iterative optimization strategy across data sets: for all 32 programs, we find that there exists at least one combination of compiler optimizations that achieves 86% or more of the best possible speedup across all data sets using Intel's ICC (83% for GNU's GCC). This optimal combination is program-specific and yields speedups up to 1.71 on ICC and 2.23 on GCC over the highest optimization level (-fast and -O3, respectively). This finding makes the task of optimizing programs across data sets much easier than previously anticipated, and it paves the way for the practical and reliable usage of iterative optimization. Finally, we derive pre-shipping and post-shipping optimization strategies for software vendors. |
Practical Image and Video Processing Using MATLAB | Description: Up-to-date, technically accurate coverage of essential topics in image and video processing. This is the first book to combine image and video processing with a practical MATLAB®-oriented approach in order to demonstrate the most important image and video techniques and algorithms. Utilizing minimal math, the contents are presented in a clear, objective manner, emphasizing and encouraging experimentation. The book has been organized into two parts. Part I: Image Processing begins with an overview of the field, then introduces the fundamental concepts, notation, and terminology associated with image representation and basic image processing operations. Next, it discusses MATLAB® and its Image Processing Toolbox with the start of a series of chapters with hands-on activities and step-by-step tutorials. These chapters cover image acquisition and digitization; arithmetic, logic, and geometric operations; point-based, histogram-based, and neighborhood-based image enhancement techniques; the Fourier Transform and relevant frequency-domain image filtering techniques; image restoration; mathematical morphology; edge detection techniques; image segmentation; image compression and coding; and feature extraction and representation. Part II: Video Processing presents the main concepts and terminology associated with analog video signals and systems, as well as digital video formats and standards. It then describes the technically involved problem of standards conversion, discusses motion estimation and compensation techniques, shows how video sequences can be filtered, and concludes with an example of a solution to object detection and tracking in video sequences using MATLAB®. Extra features of this book include: more than 30 MATLAB® tutorials, which consist of step-by-step guides to exploring image and video processing techniques using MATLAB®; chapters supported by figures, examples, illustrative problems, and exercises; and useful websites and an extensive list of bibliographical references. This accessible text is ideal for upper-level undergraduate and graduate students in digital image and video processing courses, as well as for engineers, researchers, software developers, practitioners, and anyone who wishes to learn about these increasingly popular topics on their own. Supplemental resources for readers and instructors can be found at company website |
Traffic Density-Based Discovery of Hot Routes in Road Networks | Finding hot routes (traffic flow patterns) in a road network is an important problem. They are beneficial to city planners, police departments, real estate developers, and many others. Knowing the hot routes allows the city to better direct traffic or analyze congestion causes. In the past, this problem has largely been addressed with domain knowledge of the city. But in recent years, detailed information about vehicles in the road network has become available. With the development and adoption of RFID and other location sensors, an enormous amount of moving object trajectories are being collected and can be used towards finding hot routes. This is a challenging problem due to the complex nature of the data. If objects traveled in organized clusters, it would be straightforward to use a clustering algorithm to find the hot routes. But, in the real world, objects move in unpredictable ways. Variations in speed, time, route, and other factors cause them to travel in rather fleeting “clusters.” These properties make the problem difficult for a naive approach. To this end, we propose a new density-based algorithm named FlowScan. Instead of clustering the moving objects, road segments are clustered based on the density of common traffic they share. We implemented FlowScan and tested it under various conditions. Our experiments show that the system is both efficient and effective at discovering hot routes. |
Adult Image Classification by a Local-Context Aware Network | To build a healthy online environment, adult image recognition is a crucial and challenging task. Recent deep learning based methods have brought great advances to this task. However, the recognition accuracy and generalization ability need to be further improved. In this paper, a local-context aware network is proposed to improve the recognition accuracy and a corresponding curriculum learning strategy is proposed to guarantee a good generalization ability. The main idea is to integrate the global classification and the local sensitive region detection into one network and optimize them simultaneously. Such a strategy helps the classification networks focus more on suspicious regions and thus provide better recognition performance. Two datasets containing over 150,000 images have been collected to evaluate the performance of the proposed approach. From the experiment results, it is observed that our approach can always achieve the best classification accuracy compared with several state-of-the-art approaches investigated. |
Streaming Architecture for Large-Scale Quantized Neural Networks on an FPGA-Based Dataflow Platform | Deep neural networks (DNNs) are used by different applications that are executed on a range of computer architectures, from IoT devices to supercomputers. The footprint of these networks is huge as well as their computational and communication needs. In order to ease the pressure on resources, research indicates that in many cases a low precision representation (1-2 bit per parameter) of weights and other parameters can achieve similar accuracy while requiring less resources. Using quantized values enables the use of FPGAs to run NNs, since FPGAs are well fitted to these primitives; e.g., FPGAs provide efficient support for bitwise operations and can work with arbitrary-precision representation of numbers. This paper presents a new streaming architecture for running QNNs on FPGAs. The proposed architecture scales out better than alternatives, allowing us to take advantage of systems with multiple FPGAs. We also included support for skip connections, which are used in state-of-the-art NNs, and have shown that our architecture allows adding those connections almost for free. All this allowed us to implement an 18-layer ResNet for 224×224 image classification, achieving 57.5% top-1 accuracy. In addition, we implemented a full-sized quantized AlexNet. In contrast to previous works, we use 2-bit activations instead of 1-bit ones, which improves AlexNet's top-1 accuracy from 41.8% to 51.03% for the ImageNet classification. Both AlexNet and ResNet can handle 1000-class real-time classification on an FPGA. Our implementation of ResNet-18 consumes 5× less power and is 4× slower for ImageNet, when compared to the same NN on the latest Nvidia GPUs. Smaller NNs that fit a single FPGA run faster than on GPUs on small (32×32) inputs, while consuming up to 20× less energy and power. |
Salicylate (salsalate) in patients with type 2 diabetes: a randomized trial. | BACKGROUND
Short-duration studies show that salsalate improves glycemia in type 2 diabetes mellitus (T2DM).
OBJECTIVE
To assess 1-year efficacy and safety of salsalate in T2DM.
DESIGN
Placebo-controlled, parallel trial; computerized randomization and centralized allocation, with patients, providers, and researchers blinded to assignment. (ClinicalTrials.gov: NCT00799643).
SETTING
3 private practices and 18 academic centers in the United States.
PATIENTS
Persons aged 18 to 75 years with fasting glucose levels of 12.5 mmol/L or less (≤225 mg/dL) and hemoglobin A1c (HbA1c) levels of 7.0% to 9.5% who were treated for diabetes.
INTERVENTION
286 participants were randomly assigned (between January 2009 and July 2011) to 48 weeks of placebo (n = 140) or salsalate, 3.5 g/d (n = 146), in addition to current therapies, and 283 participants were analyzed (placebo, n = 137; salsalate, n = 146).
MEASUREMENTS
Change in hemoglobin A1c level (primary outcome) and safety and efficacy measures.
RESULTS
The mean HbA1c level over 48 weeks was 0.37% lower in the salsalate group than in the placebo group (95% CI, -0.53% to -0.21%; P < 0.001). Glycemia improved despite more reductions in concomitant diabetes medications in salsalate recipients than in placebo recipients. Lower circulating leukocyte, neutrophil, and lymphocyte counts show the anti-inflammatory effects of salsalate. Adiponectin and hematocrit levels increased more and fasting glucose, uric acid, and triglyceride levels decreased with salsalate, but weight and low-density lipoprotein cholesterol levels also increased. Urinary albumin levels increased but reversed on discontinuation; estimated glomerular filtration rates were unchanged.
LIMITATION
Trial duration and number of patients studied were insufficient to determine long-term risk-benefit of salsalate in T2DM.
CONCLUSION
Salsalate improves glycemia in patients with T2DM and decreases inflammatory mediators. Continued evaluation of mixed cardiorenal signals is warranted. |
Perceptual Control Theory 1 Perceptual Control Theory A Model for Understanding the Mechanisms and Phenomena of Control | Perceptual Control Theory (PCT) provides a general theory of functioning for organisms. At the conceptual core of the theory is the observation that living things control the perceived environment by means of their behavior. Consequently, the phenomenon of control takes center stage in PCT, with observable behavior playing an important but supporting role. The first part of the paper explains how the PCT model works. This explanation includes a definition of “control” as well as the basic equations from which one can see what is required for control to be possible. The second part of the paper describes demonstrations that the reader can download from the Internet and run, so as to learn the basics of control by experiencing and verifying the phenomenon directly. The third part of the paper shows examples of the application of PCT to different areas of psychological research including learning, developmental psychology, social psychology, and psychotherapy. This summary of the current state of the field celebrates the 50th Anniversary of the first major publication in PCT (Powers, Clark & MacFarland, 1960). |
Computational Creativity | Creative thinking is one of the hallmarks of human-level competence. Although it is still a poorly understood subject, speculative ideas about brain processes involved in creative thinking may be implemented in computational models. A review of different approaches to creativity, insight and intuition is presented. Two factors are essential for creativity: imagination and selection or filtering. Imagination should be constrained by experience, while filtering, in the case of creative use of words, may be based on semantic and phonological associations. Analysis of brain processes involved in the invention of new words leads to practical algorithms that create many interesting and novel names associated with a set of keywords. |
Sustained clearance of serum hepatitis C virus-RNA independently predicts long-term survival in liver transplant patients with recurrent hepatitis C. | The aim of this study was to analyze the impact of virological response to long-term antiviral therapy using interferon plus ribavirin on survival of 30 liver transplant patients with recurrent hepatitis C. Mean treatment duration is currently 46 months (range: 3-144 months). Sustained clearance of serum hepatitis C virus RNA was achieved in 18 patients (60%). Allograft biopsies demonstrated fibrosis progression in seven virological nonresponders (66.6%), and none of the recipients with viral elimination (0%; P<0.001). Univariately, low pretransplant viral loads, the absence of cytomegalovirus infection, as well as biochemical and virological response to antiviral therapy indicated a positive impact on outcome (P<0.05). Only antiviral treatment induced clearance of viremia, however, was identified as independent predictor of long-term survival (P=0.02). Our data indicate that an antiviral combination should aim at viral eradication in liver transplant patients with recurrent hepatitis C, because it improves survival. |
Analysis of lightning protection with transmission line arrester using ATP/EMTP: Case of an HV 220kV double circuit line | Over the last few decades, the electric utilities have seen a very significant increase in the application of metal oxide surge arresters on transmission lines in an effort to reduce lightning initiated flashovers, maintain high power quality and to avoid damages and disturbances especially in areas with high soil resistivity and lightning ground flash density. For economical insulation coordination in transmission and substation equipment, it is necessary to predict accurately the lightning surge overvoltages that occur on an electric power system. |
Android permissions: user attention, comprehension, and behavior | Android's permission system is intended to inform users about the risks of installing applications. When a user installs an application, he or she has the opportunity to review the application's permission requests and cancel the installation if the permissions are excessive or objectionable. We examine whether the Android permission system is effective at warning users. In particular, we evaluate whether Android users pay attention to, understand, and act on permission information during installation. We performed two usability studies: an Internet survey of 308 Android users, and a laboratory study wherein we interviewed and observed 25 Android users. Study participants displayed low attention and comprehension rates: both the Internet survey and laboratory study found that 17% of participants paid attention to permissions during installation, and only 3% of Internet survey respondents could correctly answer all three permission comprehension questions. This indicates that current Android permission warnings do not help most users make correct security decisions. However, a notable minority of users demonstrated both awareness of permission warnings and reasonable rates of comprehension. We present recommendations for improving user attention and comprehension, as well as identify open challenges. |
Cannabidiol Claims and Misconceptions. | Once a widely ignored phytocannabinoid, cannabidiol now attracts great therapeutic interest, especially in epilepsy and cancer. As with many rising trends, various myths and misconceptions have accompanied this heightened public interest and intrigue. This forum article examines and attempts to clarify some areas of contention. |
Personalised Query Suggestion for Intranet Search with Temporal User Profiling | Recent research has shown the usefulness of using collective user interaction data (e.g., query logs) to recommend query modification suggestions for Intranet search. However, most of the query suggestion approaches for Intranet search follow a “one size fits all” strategy, whereby different users who submit an identical query would get the same query suggestion list. This is problematic, as even with the same query, different users may have different topics of interest, which may change over time in response to the user's interaction with the system.
We address the problem by proposing a personalised query suggestion framework for Intranet search. For each search session, we construct two temporal user profiles: a click user profile using the user's clicked documents and a query user profile using the user's submitted queries. We then use the two profiles to re-rank the non-personalised query suggestion list returned by a state-of-the-art query suggestion method for Intranet search. Experimental results on a large-scale query logs collection show that our personalised framework significantly improves the quality of suggested queries. |
Behavioral Hierarchy: Exploration and Representation | Behavioral modules are units of behavior providing reusable building blocks that can be composed sequentially and hierarchically to generate extensive ranges of behavior. Hierarchies of behavioral modules facilitate learning complex skills and planning at multiple levels of abstraction and enable agents to incrementally improve their competence for facing new challenges that arise over extended periods of time. This chapter focusses on two features of behavioral hierarchy that appear to be less well recognized: its influence on exploratory behavior and the opportunity it affords to reduce the representational challenges of planning and learning in large, complex domains. Four computational examples are described that use methods of hierarchical reinforcement learning to illustrate the influence of behavioral hierarchy on exploration and representation. Beyond illustrating these features, the examples provide support for the central role of behavioral hierarchy in development and learning for both artificial and |
VHDL Implementation of UART with Status Register | In parallel communication, the cost as well as the complexity of the system increases due to simultaneous transmission of data bits on multiple wires. Serial communication alleviates this drawback and emerges as an effective candidate in many applications for long distance communication, as it reduces signal distortion because of its simple structure. This paper focuses on the VHDL implementation of a UART with a status register which supports asynchronous serial communication. The paper presents the architecture of the UART, which indicates, during reception of data, parity error, framing error, overrun error and break error using the status register. The whole design is functionally verified using the Xilinx ISE Simulator. |
Supporting the Construction and Evolution of Component Repositories | Repositories must be designed to meet the evolving and dynamic needs of software development organizations. Current software repository methods rely heavily on classification, which exacerbates acquisition and evolution problems by requiring costly classification and domain analysis efforts before a repository can be used effectively. This paper outlines an approach in which minimal initial structure is used to effectively find relevant software components while methods are employed to incrementally improve repository structures. The approach is demonstrated through PEEL, a tool to semi-automatically identify reusable components, and CodeFinder, a retrieval system that compensates for the lack of explicit knowledge structures through spreading activation retrieval and allows component representations to be incrementally improved while users are searching for information. The combination of these techniques yields a flexible software repository that minimizes up-front costs and improves its retrieval effectiveness as developers use it to find reusable software artifacts. |
“Anthropology-Lite”: An Education Perspective on the Ideology of Religious Studies | There has been considerable criticism of religious studies as a separate discipline, focusing on the category of religion. This article aims to develop this debate by defending the religion category but nevertheless criticizing religious studies in terms of its practical consequences for scholarship. Working on the view that scholarship aims to criticize accepted knowledge in pursuit of truth and foster a rational atmosphere for discussion, the article will argue that religious studies, as a separate discipline within the restrictions in which it often operates, potentially damages this aim. Solutions will be proposed to the religious studies problem and the article will aim to open up a debate. |
Cybercrime: Vandalizing the Information Society | Cybercrime has received significant coverage in recent years, with the media, law enforcers, and governments all working to bring the issue to our attention. This paper begins by presenting an overview of the problem, considering the scope and scale of reported incidents. From this, a series of common attack types are considered (focusing upon website defacement, denial of service and malware), with specific emphasis upon the potential for these to be automated and mounted by novices. Leading on from this, the problem of policing cybercrime is considered, with attention to the need for suitable legislation, and appropriate resourcing of law enforcers. It is concluded that cybercrime is an inevitable downside of the information society, and that organizations and individuals consequently have a stake in ensuring their own protection. |
The use of a portable breath analysis device in monitoring type 1 diabetes patients in a hypoglycaemic clamp: validation with SIFT-MS data. | Monitoring blood glucose concentrations is a necessary but tedious task for people suffering from diabetes. It has been noted that breath in people suffering with diabetes has a different odour and thus it may be possible to use breath analysis to monitor the blood glucose concentration. Here, we evaluate the analysis of breath using a portable device containing a single mixed metal oxide sensor during hypoglycaemic glucose clamps and compare that with the use of SIFT-MS described in previously published work on the same set of patients. Outputs from both devices have been correlated with the concentration of blood glucose in eight volunteers suffering from type 1 diabetes mellitus. The results demonstrate that acetone as measured by SIFT-MS and the sensor output from the breath sensing device both correlate linearly with blood glucose; however, the sensor response and acetone concentrations differ greatly between patients with the same blood glucose. It is therefore unlikely that breath analysis can entirely replace blood glucose testing. |
Chromatin-associated proteins HMGB1/2 and PDIA3 trigger cellular response to chemotherapy-induced DNA damage. | The identification of new molecular components of the DNA damage signaling cascade opens novel avenues to enhance the efficacy of chemotherapeutic drugs. High-mobility group protein 1 (HMGB1) is a DNA damage sensor responsive to the incorporation of nonnatural nucleosides into DNA; several nuclear and cytosolic proteins are functionally integrated with HMGB1 in the context of DNA damage response. The functional role of HMGB1 and HMGB1-associated proteins (high-mobility group protein B2, HMGB2; glyceraldehyde-3-phosphate dehydrogenase, GAPDH; protein disulfide isomerase family A member 3, PDIA3; and heat shock 70 kDa protein 8, HSPA8) in DNA damage response was assessed in human carcinoma cells A549 and UO31 by transient knockdown with short interfering RNAs. Using the cell proliferation assay, we found that knockdown of HMGB1-associated proteins resulted in 8-fold to 50-fold decreased chemosensitivity of A549 cells to cytarabine. Western blot analysis and immunofluorescent microscopy were used to evaluate genotoxic stress markers in knocked-down cancer cells after 24 to 72 hours of incubation with 1 micromol/L of cytarabine. Our results dissect the roles of HMGB1-associated proteins in DNA damage response: HMGB1 and HMGB2 facilitate p53 phosphorylation after exposure to genotoxic stress, and PDIA3 has been found essential for H2AX phosphorylation (no gamma-H2AX accumulated after 24-72 hours of incubation with 1 micromol/L of cytarabine in PDIA3 knockdown cells). We conclude that phosphorylation of p53 and phosphorylation of H2AX occur in two distinct branches of the DNA damage response. These findings identify new molecular components of the DNA damage signaling cascade and provide novel promising targets for chemotherapeutic intervention. |
Fleeting Things: English Poets and Poems. 1616-1660 | Offers new interpretations of poems by Milton, Jonson, Herrick, and Lovelace, and looks at five themes in seventeenth century English poetry. |
Understanding the canine intestinal microbiota and its modification by pro‐, pre‐ and synbiotics – what is the evidence? | Interest in the composition of the intestinal microbiota and possibilities of its therapeutic modifications has soared over the last decade and more detailed knowledge specific to the canine microbiota at different mucosal sites including the gut is available. Probiotics, prebiotics or their combination (synbiotics) are a way of modifying the intestinal microbiota and exert effects on the host immune response. Probiotics are proposed to exert their beneficial effects through various pathways, for example production of antimicrobial peptides, enhancing growth of favourable endogenous microorganisms, competition for epithelial colonisation sites and immune-modulatory functions. Despite widespread use of pro-, pre- and synbiotics, scientific evidence of their beneficial effects in different conditions of the dog is scarce. Specific effects of different strains, their combination or their potential side-effects have not been evaluated sufficiently. In some instances, in vitro results have been promising, but could not be transferred consistently into in vivo situations. Specific canine gastrointestinal (GI) diseases or conditions where probiotics would be beneficial, their most appropriate dosage and application have not been assessed extensively. This review summarises the current knowledge of the intestinal microbiome composition in the dog and evaluates the evidence for probiotic use in canine GI diseases to date. It wishes to provide veterinarians with evidence-based information on when and why these products could be useful in preventing or treating canine GI conditions. It also outlines knowledge about safety and approval of commercial probiotic products, and the potential use of faecal microbial transplantation, as they are related to the topic of probiotic usage. |
The Sonification Handbook | |
Generating Nonverbal Signals for a Sensitive Artificial Listener | In the Sensitive Artificial Listener project research is performed with the aim to design an embodied agent that not only generates the appropriate nonverbal behaviors that accompany speech, but that also displays verbal and nonverbal behaviors during the production of speech by its conversational partner. Apart from many applications for embodied agents where natural interaction between agent and human partner also require this behavior, the results of this project are also meant to play a role in research on emotional behavior during conversations. In this paper, our research and implementation efforts in this project are discussed and illustrated with examples of experiments, research approaches and interfaces in development. |
Corpus-driven Metaphor Harvesting | The paper presents a corpus-based method for finding metaphorically used lexemes and prevailing semantico-conceptual source domains, given a target domain corpus. It is exemplified by a case study on the target domain of European politics, based on a French 800,000 token corpus. |
A Soft Robotic Gripper With Gecko-Inspired Adhesive | Previous work has demonstrated the versatility of soft robotic grippers using simple control inputs. However, these grippers still face challenges in grasping large objects and in achieving high-strength grasps. This work investigates the combination of fluidic elastomer actuators and gecko-inspired adhesives to both enhance existing soft gripper properties and generate new capabilities. On rocky or dirty surfaces where adhesion is limited, the gripper retains the functionality of a pneumatically actuated elastomer gripper with no measured loss in performance. Design strategies for using the unique properties of the gecko-inspired adhesives are presented. By modeling fluidic elastomer actuators as a series of joints with associated joint torques, we designed an actuator that takes advantage of the unique properties of the gecko-inspired adhesive. Experiments showed higher strength grasps at lower pressures compared to nongecko actuators, in many cases enabling the gripper to actuate more quickly and use less energy. The gripper weighs 48.7 g, uses $7.25 of raw materials, and can support loads of over 50 N. A second gripper, using three fingers for a larger adhesive surface, demonstrated a grasping force of 111 N (25 lbf) when actuated at an internal pressure of 40 kPa. |
Dynamic risk tolerance: Motion planning by balancing short-term and long-term stochastic dynamic predictions | Identifying collision-free paths over long time windows in environments with stochastically moving obstacles is difficult, in part because long-term predictions of obstacle positions typically have low fidelity, and the region of possible obstacle occupancy is typically large. As a result, planning methods that are restricted to identifying paths with a low probability of collision may not be able to find a valid path. However, allowing paths with a higher probability of collision may limit detection of imminent collisions. In this paper, we present Dynamic Risk Tolerance (DRT), a framework that dynamically evaluates risk tolerance, a function which is formulated as a time-varying upper bound on the acceptable likelihood of collision for a given path. DRT is implemented with forward stochastic reachable sets to predict the exact distribution of obstacles in a scalable manner over an arbitrarily long time window. In effect, DRT identifies actions that balance risks posed by both near and far obstacles. We empirically compare DRT to other state-of-the-art methods that are capable of generating real-time solutions in highly crowded environments, and demonstrate a success rate for DRT that is 46% higher than that of the best-performing comparison method in the most difficult problem tested. |
Road Type Recognition Using Neural Networks for Vehicle Seat Vibration Damping | In modern vehicle systems, one of the main goals is driver safety, and many sophisticated systems are made for that purpose. Vibration isolation for the vehicle seat, and thus for the driver, is one of the challenging problems. Parameters of the controller used for the isolation can be tuned for different road types, making the isolation better (especially for vehicles such as dumpers, tractors, field machinery, bulldozers, etc.). In this paper we propose a method in which neural networks are used for road type recognition. The main goal is to obtain good road recognition for the purpose of better vibration damping of the driver's semi-active controllable seat. The recognition of a specific road type is based on measurable parameters of the vehicle. The Discrete Fourier Transform of these parameters is computed and used for neural network training. The dimension of the input vector, the main parameter that determines the speed of road recognition, is varied. |
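To make the described pipeline concrete — take a measured vehicle signal, compute its Discrete Fourier Transform, truncate the spectrum to a chosen input dimension, and train a neural network on the result — here is a minimal sketch. The signal source, trace length, number of frequency bins and network size are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def dft_features(signal, n_bins):
    """Magnitude spectrum of a measured vehicle signal, truncated to n_bins."""
    spectrum = np.abs(np.fft.rfft(signal))
    return spectrum[:n_bins]

# Hypothetical training data: one acceleration trace per example, labelled
# with a road type (0 = asphalt, 1 = gravel, 2 = off-road).
rng = np.random.default_rng(0)
n_examples, trace_len, n_bins = 300, 512, 64
traces = rng.normal(size=(n_examples, trace_len))
labels = rng.integers(0, 3, size=n_examples)

X = np.array([dft_features(t, n_bins) for t in traces])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
clf.fit(X, labels)

# Varying n_bins trades recognition speed against spectral detail,
# mirroring the input-dimension study described in the abstract.
```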
Reply to “Comment on ‘Gravitating magnetic monopole in the global monopole spacetime’ ” | In this Reply I present some arguments in favor of the stability of the topological defect composed by global and magnetic monopoles. |
Mathematics: Truth and Fiction? Review of Mark Balaguer's | Mark Balaguer’s project in this book is extremely ambitious; he sets out to defend both platonism and fictionalism about mathematical entities. Moreover, Balaguer argues that at the end of the day, platonism and fictionalism are on an equal footing. Not content to leave the matter there, however, he advances the anti-metaphysical conclusion that there is no fact of the matter about the existence of mathematical objects. Despite the ambitious nature of this project, for the most part Balaguer does not shortchange the reader on rigor; all the main theses advanced are argued for at length and with remarkable clarity and cogency. There are, of course, gaps in the account (some of which are described below) but these should not be allowed to overshadow the sig- |
Prediction of SAMPL2 aqueous solvation free energies and tautomeric ratios using the SM8, SM8AD, and SMD solvation models | We applied the solvation models SM8, SM8AD, and SMD in combination with the Minnesota M06-2X density functional to predict vacuum-water transfer free energies (Task 1) and tautomeric ratios in aqueous solution (Task 2) for the SAMPL2 test set. The bulk-electrostatic contribution to the free energy of solvation is treated as follows: SM8 employs the generalized Born model with the Coulomb field approximation, SM8AD employs the generalized Born approximation with asymmetric descreening, and SMD solves the nonhomogeneous Poisson equation. The non-bulk-electrostatic contribution arising from short-range interactions between the solute and solvent molecules in the first solvation shell is treated as a sum of terms that are products of geometry-dependent atomic surface tensions and solvent-accessible surface areas of the individual atoms of the solute. On average, the three models tested in the present work perform similarly. In particular, we achieved mean unsigned errors of 1.3 (SM8), 2.0 (SM8AD), and 2.6 kcal/mol (SMD) for the aqueous free energies of 30 out of 31 compounds with known reference data involved in Task 1 and mean unsigned errors of 2.7 (SM8), 1.8 (SM8AD), and 2.4 kcal/mol (SMD) in the free energy differences (tautomeric ratios) for 21 tautomeric pairs in aqueous solution involved in Task 2. |
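Schematically, the decomposition described above can be written as follows, using conventional SMx-style notation rather than symbols taken from this paper: the solvation free energy is the sum of a bulk-electrostatic term and a non-bulk-electrostatic (CDS) term, the latter being the surface-tension sum over atoms.

```latex
\Delta G_{\mathrm{solv}} = \Delta G_{\mathrm{EP}} + G_{\mathrm{CDS}},
\qquad
G_{\mathrm{CDS}} = \sum_{k} \sigma_k(\mathbf{R})\, A_k(\mathbf{R})
```

where \sigma_k(\mathbf{R}) is the geometry-dependent surface tension of atom k and A_k(\mathbf{R}) is its solvent-accessible surface area.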
A high efficiency solar array simulator implemented by an LLC resonant DC/DC converter | In order to save the cost and energy for PV system testing, a high efficiency solar array simulator (SAS) implemented by an LLC resonant DC/DC converter is proposed. This converter has zero voltage switching (ZVS) operation of the primary switches and zero current switching (ZCS) operation of the rectifier diodes. By frequency modulation control, the output impedance of an LLC converter can be regulated from zero to infinite without shunt or series resistor; hence, the efficiency of the proposed SAS can be significantly increased. According to the provided operation principles and design considerations of an LLC converter, a prototype is implemented to demonstrate the feasibility of the proposed SAS. |
Improving the performance of Naive Bayes multinomial in e-mail foldering by introducing distribution-based balance of datasets | E-mail foldering, or e-mail classification into user-predefined folders, can be viewed as a text classification/categorization problem. However, it has some intrinsic properties that make it more difficult to deal with, mainly the large cardinality of the class variable (i.e. the number of folders), the different number of e-mails per class state, and the fact that this is a dynamic problem, in the sense that e-mails arrive in our mail folders following a time-line. Perhaps because of these problems, standard text-oriented classifiers such as Naive Bayes Multinomial do not obtain good accuracy when applied to e-mail corpora. In this paper, we identify the imbalance among classes/folders as the main problem, and propose a new method based on learning and sampling probability distributions. Our experiments over a standard corpus (ENRON) with seven datasets (e-mail users) show that the results obtained by Naive Bayes Multinomial improve significantly when the balancing algorithm is applied first. For the sake of completeness, our experimental study also compares this approach with another standard balancing method (SMOTE) and with other classifiers. |
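As an illustration of where balancing sits in such a pipeline, the sketch below equalizes folder sizes by plain random over-sampling before training Naive Bayes Multinomial. The paper's method learns and samples probability distributions rather than resampling uniformly, so this is only a stand-in; the data shapes and folder count are assumptions.

```python
import numpy as np
from collections import Counter
from sklearn.naive_bayes import MultinomialNB

def balance_by_oversampling(X, y, rng):
    """Equalize class (folder) sizes by sampling with replacement.
    Stand-in for the distribution-based balancing proposed in the paper."""
    counts = Counter(y)
    target = max(counts.values())
    idx = []
    for cls in counts:
        cls_idx = np.flatnonzero(y == cls)
        idx.extend(rng.choice(cls_idx, size=target, replace=True))
    idx = np.array(idx)
    return X[idx], y[idx]

# Hypothetical bag-of-words counts for e-mails and their folder labels.
rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(200, 1000))
y = rng.integers(0, 7, size=200)          # seven folders, as in the ENRON study

Xb, yb = balance_by_oversampling(X, y, rng)
clf = MultinomialNB().fit(Xb, yb)
```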
Review and classification of gain cell eDRAM implementations | With the increasing requirement of a high-density, high-performance, low-power alternative to traditional SRAM, Gain Cell (GC) embedded DRAMs have gained a renewed interest in recent years. Several industrial and academic publications have presented GC memory implementations for various target applications, including high-performance processor caches, wireless communication memories, and biomedical system storage. In this paper, we review and compare the recent publications, examining the design requirements and the implementation techniques that lead to achievement of the required design metrics of these applications. |
Control of Variable Speed Wind Turbines | An uncontrolled wind turbine configuration, such as stall regulation, captures energy relative to the wind speed. This configuration requires a constant turbine speed because the directly coupled generator is connected to a fixed-frequency utility grid. In extremely strong wind conditions, only a fraction of the available energy is captured. Plants designed with such a configuration are economically unfeasible to run in these circumstances. Thus, wind turbines operating at variable speed are better alternatives. This paper focuses on a controller design methodology applied to a variable-speed, horizontal-axis wind turbine. A simple but rigid wind turbine model was used and linearised at several operating points to meet the desired objectives. By using blade pitch control, the deviation of the actual rotor speed from a reference value is minimised. The performances of PI and PID controllers were compared relative to a step wind disturbance. Results show comparable responses between these two controllers. The paper also concludes that, with the present methodology and despite the erratic wind data, the wind turbine still manages to operate in the stable region 88% of the time. |
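The stated control objective — use blade pitch to minimise the deviation of rotor speed from a reference — can be illustrated with a discrete PI loop acting on a toy one-state turbine model. The gains, sample time and plant coefficients below are assumptions for illustration, not the linearised model or tuning used in the paper.

```python
# Minimal discrete PI pitch controller that drives the rotor-speed error to zero.
dt = 0.1                       # sample time [s] (assumed)
kp, ki = 1.5, 0.8              # PI gains (assumed, not the paper's tuning)

omega, omega_ref = 0.0, 1.0    # rotor speed and its reference [p.u.]
integral, pitch = 0.0, 0.0

for _ in range(300):
    error = omega - omega_ref                      # positive when the rotor is too fast
    integral += error * dt
    pitch = max(0.0, kp * error + ki * integral)   # blade pitch command, >= 0
    # Toy first-order rotor dynamics: constant wind torque, aerodynamic torque
    # reduced by pitching, linear damping.
    omega += dt * (0.8 - 0.5 * pitch - 0.2 * omega)
```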
Volumetric intravascular ultrasound analysis of Paclitaxel-eluting and bare metal stents in acute myocardial infarction: the harmonizing outcomes with revascularization and stents in acute myocardial infarction intravascular ultrasound substudy. | BACKGROUND
Vascular responses to drug-eluting stents in ST-segment elevation myocardial infarction are unknown. In the prospective, multicenter Harmonizing Outcomes With Revascularization and Stents in Acute Myocardial Infarction (HORIZONS-AMI) trial, patients with ST-segment elevation myocardial infarction within 12 hours of symptom onset were randomized 3:1 to TAXUS EXPRESS paclitaxel-eluting stents (PES) or EXPRESS bare metal stents (BMS).
METHODS AND RESULTS
A formal intravascular ultrasound substudy enrolled 464 patients with baseline and 13-month follow-up imaging at 36 centers. Overall, 446 lesions in 402 patients were suitable for standard qualitative and quantitative analyses, which were performed at an independent blinded core laboratory. The primary prespecified end point was the in-stent percent net volume obstruction at follow-up. Median stent length measured 23.4 mm (first and third quartiles, 18.5 and 31.9 mm). PES compared with BMS significantly reduced 13-month percent net volume obstruction (6.5% [first and third quartiles, 2.2% and 10.8%] versus 15.6% [first and third quartiles, 7.2% and 28.8%]; P<0.0001). PES compared with BMS also resulted in more late-acquired stent malapposition (29.6% versus 7.9%; P=0.0005) resulting from positive vessel remodeling. Plaque and/or thrombus protrusion through stent struts was initially present in 70.4% of PES and 64.8% of BMS; all resolved during follow-up. New aneurysm formation, stent fracture, and subclinical thrombus were uncommon, although seen only in PES.
CONCLUSIONS
PES compared with BMS significantly reduce neointimal hyperplasia in patients with ST-segment elevation myocardial infarction but also result in a high frequency of late-acquired stent malapposition as a result of positive vessel remodeling. Ongoing long-term follow-up is required to establish the clinical significance of these findings. Clinical Trial Registration- URL: http://www.clinicaltrials.gov. Unique identifier: NCT00433966. |
Prototype development of single phase prepaid kWh meter | The single phase prepaid kWh meter consists of an energy metering and an STS system, based on the international standards IEC 62055-31, IEC 62055-41 and IEC 62055-51. The energy metering measures line voltage and current and calculates active, reactive and apparent power, energy, power factor, and RMS voltage and current. There are two separate inputs to measure line and ground and/or neutral current, enabling the meter to detect tampering and to continue operating. The development of the single phase meter is based on the needs of PLN, which amount to about 7 million kWh meter units up to the year 2014, especially for new customers. |
Design of an X-band pulsed SSPA based on a cascade technique | An X-band pulsed solid-state power amplifier (PSSPA) with high output power and high power added efficiency (PAE) is reported in this article. The high power amplifier (HPA) was implemented by a cascade approach, including an MMIC driving amplifier, an internally matched medium-power and a high-power GaAs FET. To achieve optimum electrical performance of the proposed PSSPA, considerations of the grounding, DC blocking circuit, bias network, microwave absorber, and isolation blocks have been taken into account in our design. Under the pulse condition of 8 kHz pulse repetition frequency (PRF) and 10% duty cycle, the pulse output power ranges between 45.8 and 46.6 dBm, and the PAE varies between 35.8% and 40.5% from 9.5 to 10.5 GHz. |
Review. Fungal endophytes and their interaction with plant pathogens | Endophytes are fungi which infect plants without causing symptoms. Fungi belonging to this group are ubiquitous, and plant species not associated with fungal endophytes are not known. In addition, there is a large biological diversity among endophytes, and it is not rare for some plant species to be hosts of more than one hundred different endophytic species. Different mechanisms of transmission, as well as symbiotic lifestyles, occur among endophytic species. Latent pathogens seem to represent a relatively small proportion of endophytic assemblages, which are also composed of latent saprophytes and mutualistic species. Some endophytes are generalists, being able to infect a wide range of hosts, while others are specialists, limited to one or a few hosts. Endophytes are gaining attention as a subject for research and applications in Plant Pathology. This is because in some cases plants associated with endophytes have shown increased resistance to plant pathogens, particularly fungi and nematodes. Several possible mechanisms by which endophytes may interact with pathogens are discussed in this review. Additional key words: biocontrol, biodiversity, symbiosis. |
Real Time Bid Optimization with Smooth Budget Delivery in Online Advertising | Today, billions of display ad impressions are purchased on a daily basis through a public auction hosted by real time bidding (RTB) exchanges. A decision has to be made for advertisers to submit a bid for each selected RTB ad request in milliseconds. Restricted by the budget, the goal is to buy a set of ad impressions to reach as many targeted users as possible. A desired action (conversion), which is advertiser specific, may include purchasing a product, filling out a form, signing up for emails, etc. In addition, advertisers typically prefer to spend their budget smoothly over time in order to reach a wider range of audience accessible throughout a day and have a sustainable impact. However, since conversions occur rarely and the occurrence feedback is normally delayed, it is very challenging to achieve both budget and performance goals at the same time. In this paper, we present an online approach to the smooth budget delivery while optimizing for the conversion performance. Our algorithm tries to select high quality impressions and adjust the bid price based on the prior performance distribution in an adaptive manner by distributing the budget optimally across time. Our experimental results from real advertising campaigns demonstrate the effectiveness of our proposed approach. |
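A minimal sketch of the smooth-delivery idea: spread the daily budget over time slots according to expected traffic and adapt the bid depending on whether spend is ahead of or behind plan. The traffic shares, base bid and adjustment rule below are assumptions, not the paper's algorithm.

```python
# Sketch of smooth budget delivery: allocate the daily budget to hourly slots
# in proportion to expected traffic, then scale the bid up or down depending
# on whether actual spend lags or leads the plan.
daily_budget = 1000.0
traffic_share = [1 / 24.0] * 24                   # expected share of requests per hour
slot_budget = [daily_budget * s for s in traffic_share]
base_bid = 2.0

def bid_for_hour(hour, spent_so_far):
    planned = sum(slot_budget[: hour + 1])        # budget that should be spent by now
    pace = spent_so_far / planned if planned > 0 else 1.0
    # Bid less when over-delivering (pace > 1), more when under-delivering.
    return base_bid * min(1.5, max(0.5, 1.0 / max(pace, 1e-6)))

print(bid_for_hour(hour=10, spent_so_far=380.0))  # slightly behind plan -> higher bid
```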
Agent-based modeling: methods and techniques for simulating human systems. | Agent-based modeling is a powerful simulation modeling technique that has seen a number of applications in the last few years, including applications to real-world business problems. After the basic principles of agent-based simulation are briefly introduced, its four areas of application are discussed by using real-world applications: flow simulation, organizational simulation, market simulation, and diffusion simulation. For each category, one or several business applications are described and analyzed. |
An application-specific protocol architecture for wireless microsensor networks | In recent years, advances in energy-efficient design and wireless technologies have enabled exciting new applications for wireless devices. These applications span a wide range, including real-time and streaming video and audio delivery, remote monitoring using networked microsensors, personal medical monitoring, and home networking of everyday appliances. While these applications require high performance from the network, they suffer from resource constraints that do not appear in more traditional wired computing environments. In particular, wireless spectrum is scarce, often limiting the bandwidth available to applications and making the channel error-prone, and the nodes are battery-operated, often limiting available energy. My thesis is that this harsh environment with severe resource constraints requires an application-specific protocol architecture, rather than the traditional layered approach, to obtain the best possible performance. This dissertation supports this claim using detailed case studies on microsensor networks and wireless video delivery. The first study develops LEACH (Low-Energy Adaptive Clustering Hierarchy), an architecture for remote microsensor networks that combines the ideas of energy-efficient cluster-based routing and media access together with application-specific data aggregation to achieve good performance in terms of system lifetime, latency, and application-perceived quality. This approach improves system lifetime by an order of magnitude compared to general-purpose approaches when the node energy is limited. The second study develops an unequal error protection scheme for MPEG compressed video delivery that adapts the level of protection applied to portions of a packet to the degree of importance of the corresponding bits. This approach obtains better application-perceived performance than current approaches for the same amount of transmission bandwidth. These two systems show that application-specific protocol architectures achieve the energy and latency efficiency and error robustness needed for wireless networks. |
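The LEACH idea of rotating the energy-intensive cluster-head role can be sketched with the commonly cited per-round election rule: every node that has not yet served in the current epoch elects itself with a threshold probability that rises as the epoch progresses. The parameter values below are illustrative.

```python
import random

P = 0.05                           # desired fraction of cluster heads per round (assumed)
n_nodes = 100
eligible = set(range(n_nodes))     # nodes that have not served in the current epoch

def leach_round(r):
    """Elect cluster heads for round r with the commonly cited LEACH threshold."""
    global eligible
    epoch = int(1 / P)
    if r % epoch == 0:                       # new epoch: every node is eligible again
        eligible = set(range(n_nodes))
    threshold = P / (1 - P * (r % epoch))    # rises toward 1 as the epoch progresses
    heads = {n for n in eligible if random.random() < threshold}
    eligible -= heads                        # heads sit out for the rest of the epoch
    return heads

for r in range(40):
    heads = leach_round(r)
    # Non-head nodes would now join the nearest head; heads aggregate member
    # data and forward a single fused packet to the base station.
```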
A man and his mouse [Resources Review] | In 1968, at the Fall Joint Computer Conference in San Francisco, Douglas Engelbart blew an audience away by showcasing a set of computing technologies then under development at the Stanford Research Institute (now SRI International) in Menlo Park, Calif. His demonstration was the first time the wider computing community had seen a mouse, word processing, dynamic links, shared-screen collaboration, and many other elements of what is now considered modern computing. |
Family effect on cultured pearl quality in black-lipped pearl oyster Pinctada margaritifera and insights for genetic improvement | Individual Pinctada margaritifera molluscs were collected from the Takapoto atoll (Tuamotu Archipelago, French Polynesia) and used to produce ten first generation full-sib families in a hatchery system, following artificial breeding protocols. After three years of culture, these progenies were transferred to Rangiroa atoll (Tuamotu Archipelago, French Polynesia) and tested for their potential as graft donors. A large-scale grafting experiment of 1500 grafts was conducted, in which a single professional grafter used ten individual donor oysters from each of the ten families, grafting 15 recipient oysters from each donor. The recipient oysters were all obtained from wild spat collection in Ahe (Tuamotu Archipelago, French Polynesia). After 18 months of culture, 874 pearls were harvested. Highly significant donor family effects were found for nucleus retention, nacre thickness, nacre weight, pearl colour darkness and visually-perceived colour (bodycolor and overtone), pearl shape categories, surface defects and lustre, the last two of which are components of the Tahitian classification grade. No significant difference was recorded between the ten G1 families for the absence or presence of rings. The progenies could be ranked from “best” (i.e., the donor whose grafts produced the greatest number of grade A pearls) to the “worst”. Some progenies had extreme characteristics: family B presented the greatest number of pearls with lustre (98%) and a high proportion of dark gray to black with green overtone pearls (70%). These results have important implications for the selective breeding of donor pearl oysters: it may be possible to reach a point where specific donor lines whose grafts produce pearls with specific quality traits could be identified and maintained as specific breeding lines. |
The consistency of task-based authorization constraints in workflow | Workflow management systems (WFMSs) have attracted a lot of interest both in academia and the business community. A workflow consists of a collection of tasks that are organized to facilitate some business process specification. To simplify the complexity of security administration, it is common to use role-based access control (RBAC) to grant authorization to roles and users. Typically, security policies are expressed as constraints on users, roles, tasks and the workflow itself. A workflow system can become very complex and involve several organizations or different units of an organization, thus the number of security policies may be very large and their interactions very complex. It is clearly important to know whether the existence of such constraints will prevent certain instances of the workflow from completing. Unfortunately, no existing constraint models have considered this problem satisfactorily. In this paper, we define a model for constrained workflow systems that includes local and global cardinality constraints, separation of duty constraints and binding of duty constraints. We define the notion of a workflow specification and of a constrained workflow authorization schema. Our main result is to establish necessary and sufficient conditions on the set of constraints that ensure a sound constrained workflow authorization schema, that is, for any user or role that is authorized to a task, there is at least one complete workflow instance in which this user or role executes this task. |
Deep Neural Network Concepts for Background Subtraction: A Systematic Review and Comparative Evaluation | Conventional neural networks provide a powerful framework for background subtraction in video acquired by static cameras. Indeed, the well-known SOBS method and its variants based on neural networks were the leading methods on the large-scale CDnet 2012 dataset for a long time. Recently, convolutional neural networks, which belong to deep learning methods, were employed with success for background initialization, foreground detection and deep learned features. Currently, the top background subtraction methods in CDnet 2014 are based on deep neural networks, with a large performance gap compared to the conventional unsupervised approaches based on multi-features or multi-cues strategies. Furthermore, a large number of papers have been published since 2016, when Braham and Van Droogenbroeck published their first work on CNNs applied to background subtraction, providing a regular gain of performance. In this context, we provide the first review of deep neural network concepts in background subtraction for novices and experts, in order to analyze this success and to provide further directions. For this, we first survey the methods used for background initialization, background subtraction and deep learned features. Then, we discuss the adequacy of deep neural networks for background subtraction. Finally, experimental results are presented on the CDnet 2014 dataset. |
Automatic detection of posterior subcapsular cataract opacity for cataract screening | Cataract is the leading cause of blindness and posterior subcapsular cataract (PSC) leads to significant visual impairment. An automatic approach for detecting PSC opacity in retro-illumination images is investigated. The features employed include intensity, edge, size and spatial location. The system was tested using 441 images. The automatic detection was compared with the human expert. The sensitivity and specificity are 82.6% and 80% respectively. The preliminary research indicates it is feasible to apply automatic detection in the clinical screening of PSC in the future. |
Inductive reasoning and bounded rationality | The type of rationality we assume in economics--perfect, logical, deductive rationality--is extremely useful in generating solutions to theoretical problems. But it demands much of human behavior--much more in fact than it can usually deliver. If we were to imagine the vast collection of decision problems economic agents might conceivably deal with as a sea or an ocean, with the easier problems on top and more complicated ones at increasing depth, then deductive rationality would describe human behavior accurately only within a few feet of the surface. For example, the game Tic-Tac-Toe is simple, and we can readily find a perfectly rational, minimax solution to it. But we do not find rational "solutions" at the depth of Checkers; and certainly not at the still modest depths of Chess and Go. |
IMBALANCED DATASET CLASSIFICATION AND SOLUTIONS: A REVIEW | The imbalanced data set problem occurs in classification, where the number of instances of one class is much lower than the number of instances of the other classes. The main challenge in the imbalance problem is that the small classes are often more useful, but standard classifiers tend to be weighed down by the huge classes and ignore the tiny ones. In machine learning, imbalanced datasets have become a critical problem, and they are usually found in many applications such as detection of fraudulent calls, bio-medical, engineering, remote-sensing, computer society and manufacturing industries. In order to overcome these problems, several approaches have been proposed. In this paper, a study of the imbalanced dataset problem and its solutions is given. |
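One widely cited family of solutions covered by such reviews is synthetic over-sampling of the minority class. The sketch below is a simplified SMOTE-style generator (interpolating between a minority sample and one of its nearest minority neighbours); it is not any specific implementation discussed in the paper, and the data are hypothetical.

```python
import numpy as np

def smote_like(minority, n_new, k=5, rng=None):
    """Create n_new synthetic minority samples by interpolating between a
    randomly chosen sample and one of its k nearest minority neighbours."""
    if rng is None:
        rng = np.random.default_rng()
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        d = np.linalg.norm(minority - x, axis=1)   # distances to other minority points
        neighbours = np.argsort(d)[1 : k + 1]      # skip the point itself
        j = rng.choice(neighbours)
        gap = rng.random()
        out.append(x + gap * (minority[j] - x))
    return np.array(out)

# Hypothetical imbalanced 2-D data: 20 minority points versus 500 majority points.
rng = np.random.default_rng(0)
minority = rng.normal(loc=2.0, size=(20, 2))
synthetic = smote_like(minority, n_new=480, rng=rng)
```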
Quantifying sediment transport across an undisturbed prairie landscape using cesium-137 and high resolution topography | Soil erosion is a global environmental problem, and anthropogenic fallout radionuclides offer a promising tool for describing and quantifying soil redistribution on decadal time scales. To date, applications of radioactive fallout to trace upland sediment transport have been developed primarily on lands disturbed by agriculture, grazing, and logging. Here we use 137Cs to characterize and quantify soil erosion at the Konza Prairie Long-Term Ecological Research (LTER) site, an undisturbed grassland in northeastern Kansas. We report on the small scale (<10 m) and landscape scale (10 to 1000 m) distribution of fallout 137Cs, and show significant variability in the concentrations and amounts of 137Cs in soils at our site. 137Cs soil concentrations and amounts typically vary by 10% to 30% on small scales, which most likely represents the spatial heterogeneity of the depositional processes. Landscape scale variability of soil 137Cs was significantly higher than small scale variability. Most notably, soils collected on convex (divergent) landforms had 137Cs inventories of 2500 to 3000 Bq m^-2, which is consistent with the expected atmospheric inputs to the study area during the 1950s and 1960s. Concave landforms, however, had statistically lower inventories of 1800 to 2300 Bq m^-2. The distribution of 137Cs on this undisturbed landscape contrasts significantly with distributions observed across disturbed sites, which generally have accumulations of radioactive fallout in valley bottoms. Because the upslope contributing area at each sampling point had a significant negative correlation with the soil inventory of 137Cs, we suggest that overland flow in convergent areas dominates soil erosion at Konza on time scales of decades. Very few points on our landscape had 137Cs inventories significantly above that which would be predicted from direct deposition of 137Cs on the soil surface; we conclude therefore that there is little net sediment storage on this undisturbed landscape. |
Sentinel node biopsy compared with complete axillary dissection for staging early breast cancer with clinically negative lymph nodes: results of randomized trial. | BACKGROUND
Sentinel lymph node (SLN) staging is currently used to avoid complete axillary dissection in breast cancer patients with negative SLNs. Evidence of a similar efficacy, in terms of survival and regional control, of this strategy as compared with axillary resection is based on few clinical trials. In 1998, we started a randomized study comparing the two strategies, and we present here its results.
MATERIALS AND METHODS
Patients were randomly assigned to sentinel lymph node biopsy (SLNB) and axillary dissection [axillary lymph node dissection (ALND arm)] or to SLNB plus axillary resection if SLNs contained metastases (SLNB arm). Main end points were overall survival (OS) and axillary recurrence.
RESULTS
One hundred and fifteen patients were assigned to the ALND arm and 110 to the SLNB arm. A positive SLN was found in 27 patients in the ALND arm and in 31 in the SLNB arm. Overall accuracy of SLNB was 93.0%. Sensitivity and negative predictive values were 77.1% and 91.1%, respectively. At a median follow-up of 5.5 years, no axillary recurrence was observed in the SLNB arm. OS and event-free survival were not statistically different between the two arms.
CONCLUSIONS
The SLNB procedure does not appear inferior to conventional ALND for the subset of patients here considered. |
Changes in biomechanical dysfunction and low back pain reduction with osteopathic manual treatment: results from the OSTEOPATHIC Trial. | The purpose of this study was to measure changes in biomechanical dysfunction following osteopathic manual treatment (OMT) and to assess how such changes predict subsequent low back pain (LBP) outcomes. Secondary analyses were performed with data collected during the OSTEOPATHIC Trial wherein a randomized, double-blind, sham-controlled, 2 × 2 factorial design was used to study OMT for chronic LBP. At baseline, prevalence rates of non-neutral lumbar dysfunction, pubic shear, innominate shear, restricted sacral nutation, and psoas syndrome were determined in 230 patients who received OMT. Five OMT sessions were provided at weeks 0, 1, 2, 4, and 6, and the prevalence of each biomechanical dysfunction was again measured at week 8 immediately before the final OMT session. Moderate pain improvement (≥30% reduction on a 100-mm visual analogue scale) at week 12 defined a successful LBP response to treatment. Prevalence rates at baseline were: non-neutral lumbar dysfunction, 124 (54%); pubic shear, 191 (83%); innominate shear, 69 (30%); restricted sacral nutation, 87 (38%), and psoas syndrome, 117 (51%). Significant improvements in each biomechanical dysfunction were observed with OMT; however, only psoas syndrome remission occurred more frequently in LBP responders than non-responders (P for interaction = 0.002). Remission of psoas syndrome was the only change in biomechanical dysfunction that predicted subsequent LBP response after controlling for the other biomechanical dysfunctions and potential confounders (odds ratio, 5.11; 95% confidence interval, 1.54-16.96). These findings suggest that remission of psoas syndrome may be an important and previously unrecognized mechanism explaining clinical improvement in patients with chronic LBP following OMT. |
Continuous integration processes for modern client-side web applications | Continuous Integration (CI) is very useful for applications that involve many files and multiple developers. Unfortunately, not all types of applications can easily apply this approach. In particular, CI has not gained much attention for Modern Client-Side Web Applications (MCSWA) because they require complicated testing, i.e. the running environments are browsers. There is no compiler or error warning when a developer writes bad code, and the build behavior in the usual CI practice is different from the build process in an MCSWA. If the integration process is done manually by an integration expert, unexpected errors are found not only while integrating but also when performing manual tests to verify the result. Moreover, it is problematic and laborious if a developer needs to test features by clicking through repeated user interactions in different browsers; human errors and missed steps are likely. This indicates that manual processes consume a lot of time and are stressful. Therefore, the intent of this paper is to demonstrate a new approach for a CI process which specifically applies to MCSWA. It also provides a precise cycle for web development teams, showing how important it is to run these repeated processes automatically with the expected outcomes. |
Study of compliance with a clinical pathway for suspected pulmonary embolism. | BACKGROUND/AIMS
Clinical pathways to guide the investigation of suspected pulmonary embolism have been increasingly adopted by emergency departments worldwide. This study evaluated the compliance with a clinical pathway that combines risk assessment (Wells score) with d-dimer, ventilation-perfusion scanning or computed tomographic pulmonary angiography (CTPA). The aims of this study were to identify factors that contribute to compliance and to assess patient outcomes and resource utilization.
METHODS
Repeated retrospective chart reviews of 239 patients who underwent investigation for pulmonary embolism through our emergency department extracted patient demographics, pathway parameters and patient outcomes. A phone interview at 3-month follow up was carried out.
RESULTS
Incidence of pulmonary embolism was 8.4% (n = 20). Compliance with the clinical pathway occurred in 120 subjects (50.2%). Non-compliance occurred in 71 subjects (29.7%). Forty-eight subjects (20.1%) underwent risk assessments, but subsequent diagnostic tests did not conform to the stated pathway (partial compliance). Compliance was poor in subjects assessed by non-emergency department doctors (χ² = 27.95, P ≤ 0.001). Compliance occurred less in pregnant subjects (χ² = 7.27, P = 0.007) and those with chronic respiratory disease (χ² = 5.31, P = 0.021). Subjects in the compliant group were less likely to undergo CTPA (odds ratio 2.07 (1.16-3.70), P = 0.012).
CONCLUSIONS
Compliance with this clinical pathway allowed emergency department doctors in an Australian university teaching hospital to complete diagnostic testing for suspected pulmonary embolism appropriately unless non-emergency department doctors became involved. Compliance with this pathway altered the distribution of diagnostic tests performed with less reliance on CTPA, but was not associated with better patient outcomes. |
Switched Beam Antenna Based on RF MEMS SPDT Switch on Quartz Substrate | This letter demonstrates a 20-GHz radio frequency microelectromechanical system (RF MEMS)-based electrically switchable antenna on a quartz substrate. Two quasi-Yagi antenna elements are monolithically integrated with a single-pole double-throw (SPDT) MEMS switch router network on a 21 mm × 8 mm chip. Electrical beam steering between two opposite directions is achieved using capacitive MEMS SPDT switches in the router. Port impedance and radiation patterns are studied numerically and experimentally. Measured results show that the switched beam antenna features a 27% impedance bandwidth (S11 = -10 dB), a gain of 4.6 dBi, and a front-to-back ratio of 14 dB at 20 GHz when the control voltage is applied to one of the switch pairs of the SPDT switch. |
A System for Automatic Image Categorization | Traditional multimedia classification techniques are based on the analysis of either low-level features or annotated textual information. However, bridging the semantic gap between raw data and its content is still a challenging task. In this paper, we describe a novel solution which automatically associates image analysis and processing algorithms to keywords and human annotation. We use the well-known Flickr system, which contains images, tags, keywords and sometimes useful annotation describing both the content of an image and personally interesting information describing the scene. We have carried out several experiments demonstrating that the proposed categorization process achieves quite good performance in terms of efficiency and effectiveness. |
Rényi Divergence and Kullback-Leibler Divergence | Rényi divergence is related to Rényi entropy much like Kullback-Leibler divergence is related to Shannon's entropy, and comes up in many settings. It was introduced by Rényi as a measure of information that satisfies almost the same axioms as Kullback-Leibler divergence, and depends on a parameter that is called its order. In particular, the Rényi divergence of order 1 equals the Kullback-Leibler divergence. We review and extend the most important properties of Rényi divergence and Kullback-Leibler divergence, including convexity, continuity, limits of σ-algebras, and the relation of the special order 0 to the Gaussian dichotomy and contiguity. We also show how to generalize the Pythagorean inequality to orders different from 1, and we extend the known equivalence between channel capacity and minimax redundancy to continuous channel inputs (for all orders) and present several other minimax results. |
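For discrete distributions the order-α divergence has a simple closed form, which makes the reduction to Kullback-Leibler divergence at order 1 easy to check numerically. A small sketch (natural-logarithm convention):

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(P || Q) for discrete distributions.
    For alpha = 1 it reduces to the Kullback-Leibler divergence."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.isclose(alpha, 1.0):
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
    return float(np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
# Orders close to 1 approach the KL divergence, as the abstract states.
print(renyi_divergence(p, q, 0.999), renyi_divergence(p, q, 1.0))
```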
The Other Side of the Coin: A Framework for Detecting and Analyzing Web-based Cryptocurrency Mining Campaigns | Mining for cryptocurrencies is usually performed on high-performance single-purpose hardware or GPUs. However, mining can be easily parallelized and distributed over many less powerful systems. Cryptojacking is a new threat on the Internet and describes code included in websites that uses a visitor's CPU to mine for cryptocurrencies without their consent. This paper introduces MiningHunter, a novel web crawling framework which is able to detect mining scripts even if they obfuscate their malicious activities. We scanned the Alexa Top 1 million websites for cryptojacking, collected more than 13,400,000 unique JavaScript files with a total size of 246 GB and found that 3,178 websites perform cryptocurrency mining without their visitors' consent. Furthermore, MiningHunter can be used to provide an in-depth analysis of cryptojacking campaigns. To show the feasibility of the proposed framework, three such campaigns are examined in detail. Our results provide the most comprehensive analysis to date of the spread of cryptojacking on the Internet. |
Big Data solutions in Healthcare: Problems and perspectives | Data in the healthcare sector is growing beyond the handling capacity of healthcare organizations and is expected to increase significantly in the coming years. The majority of healthcare data is unstructured, exists in silos and resides in imaging systems, medical prescription notes, insurance claims data, EPR (Electronic Patient Records), etc. Integrating these heterogeneous data and factoring them into advanced analytics is critical to improving healthcare outcomes. Either because data are isolated in disparate or incompatible formats, or due to the lack of processing capability to load and query large datasets in a timely fashion, healthcare organizations are not in a position to leverage the benefits of the vast data they have. With the convergence of advanced computing and numerous Big Data technological options such as commercial solutions, open source and the cloud, it is now possible to attain high performance and scalability at a relatively low cost. Big Data solutions often come with a set of innovative data management solutions and analytical tools which, when effectively implemented, can transform healthcare outcomes. |
Detrimental cross-talk between sepsis and acute kidney injury: new pathogenic mechanisms, early biomarkers and targeted therapies | This article is one of ten reviews selected from the Annual Update in Intensive Care and Emergency medicine 2016. Other selected articles can be found online at http://www.biomedcentral.com/collections/annualupdate2016. Further information about the Annual Update in Intensive Care and Emergency Medicine is available from http://www.springer.com/series/8901. |
Scaffolding game-based learning: Impact on learning achievements, perceived learning, and game experiences | One of the central challenges of integrating game-based learning in school settings is helping learners make the connections between the knowledge learned in the game and the knowledge learned at school, while maintaining a high level of engagement with game narrative and gameplay. The current study evaluated the effect of supplementing a business simulation game with an external conceptual scaffold, which introduces formal knowledge representations, on learners' ability to solve financial-mathematical word problems following the game, and on learners' perceptions regarding learning, flow, and enjoyment in the game. Participants (mean age = 10.10 years) were randomly assigned to three experimental conditions: a "study and play" condition that presented the scaffold first and then the game, a "play and study" condition, and a "play only" condition. Although no significant gains in problem-solving were found following the intervention, learners who studied with the external scaffold before the game performed significantly better in the post-game problem-solving assessment. Adding the external scaffold before the game reduced learners' perceived learning. However, the scaffold did not have a negative impact on reported flow and enjoyment. Flow was found to significantly predict perceived learning and enjoyment. Yet, perceived learning and enjoyment did not predict problem-solving, and flow directly predicted problem-solving only in the "play and study" condition. We suggest that presenting the scaffold may have "problematized" learners' understandings of the game by connecting them to disciplinary knowledge. Implications for the design of scaffolds for game-based learning are discussed. |
Project scheduling under resource and mode identity constraints: Model, complexity, methods, and application | A recurring problem in project management involves the allocation of scarce resources to the individual jobs comprising the project. In many situations such as audit-staff scheduling, timetabling, and course scheduling, the resources correspond to individuals (skilled labour). This naturally leads to an assignment-type project scheduling problem, i.e. a project has to be performed by assigning one or more of several individuals (resources) to each job. In this paper we consider the nonpreemptive variant of a resource-constrained project scheduling problem with mode identity. Mode identity refers to a generalization of the multi-mode case where the set of all jobs is partitioned into disjoint subsets while all jobs forming one subset have to be processed in the same mode. Both time and cost incurred by processing a subset of jobs depend on the resources assigned to it. This problem is a substantial and non-trivial generalization of the well-known multi-mode case. Regarding precedence and temporal relations as well as release dates and deadlines, the question arises to which jobs resources should be assigned in order to minimize overall costs. For solving this time-resource-cost-tradeoff problem we present a tailored parallel randomized solution approach called RAMSES into which both static and dynamic priority rules can be incorporated. The results of an extensive computational study on a practical application from the field of audit-staff scheduling indicate that RAMSES is capable of producing "good" solutions in negligible amounts of time. |
Investigating the Energy Consumption of a Wireless Network Interface in an Ad Hoc Networking Environment | Energy-aware design and evaluation of network protocols requires knowledge of the energy consumption behavior of actual wireless interfaces. But little practical information is available about the energy consumption behavior of well-known wireless network interfaces, and device specifications do not provide information in a form that is helpful to protocol developers. This paper describes a series of experiments which obtained detailed measurements of the energy consumption of an IEEE 802.11 wireless network interface operating in an ad hoc networking environment. The data is presented as a collection of linear equations for calculating the energy consumed in sending, receiving and discarding broadcast and point-to-point data packets of various sizes. Some implications for protocol design and evaluation in ad hoc networks are discussed. Keywords—energy consumption, IEEE 802.11, ad hoc networks |
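The deliverable described — a collection of linear equations of the form energy = m × packet_size + b, one per operation and traffic type — is straightforward to apply in a simulator or protocol evaluation. The coefficients in the sketch below are placeholders, not the measured values reported in the paper.

```python
# Linear per-packet energy model: energy [uJ] = m * size_bytes + b.
# Coefficients are illustrative placeholders, NOT the paper's measurements.
COEFFS = {
    ("send",    "point-to-point"): (1.9, 450.0),
    ("recv",    "point-to-point"): (0.5, 350.0),
    ("send",    "broadcast"):      (1.9, 270.0),
    ("recv",    "broadcast"):      (0.5, 50.0),
    ("discard", "broadcast"):      (0.1, 60.0),
}

def packet_energy(op, mode, size_bytes):
    m, b = COEFFS[(op, mode)]
    return m * size_bytes + b

# Energy for one 1024-byte point-to-point exchange, sender plus receiver:
total = packet_energy("send", "point-to-point", 1024) + \
        packet_energy("recv", "point-to-point", 1024)
print(total)
```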
Deep Models of Interactions Across Sets | We use deep learning to model interactions across two or more sets of objects, such as user–movie ratings, protein–drug bindings, or ternary user-item-tag interactions. The canonical representation of such interactions is a matrix (or a higher-dimensional tensor) with an exchangeability property: the encoding's meaning is not changed by permuting rows or columns. We argue that models should hence be Permutation Equivariant (PE): constrained to make the same predictions across such permutations. We present a parameter-sharing scheme and prove that it could not be made any more expressive without violating PE. This scheme yields three benefits. First, we demonstrate state-of-the-art performance on multiple matrix completion benchmarks. Second, our models require a number of parameters independent of the numbers of objects, and thus scale well to large datasets. Third, models can be queried about new objects that were not available at training time, but for which interactions have since been observed. In experiments, our models achieved surprisingly good generalization performance on this matrix extrapolation task, both within domains (e.g., new users and new movies drawn from the same distribution used for training) and even across domains (e.g., predicting music ratings after training on movies). |
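One simple layer with the permutation-equivariance property described above mixes each matrix entry with its row mean, column mean and the overall mean under shared weights. The sketch below is schematic and not necessarily the paper's exact parameterization.

```python
import numpy as np

def exchangeable_layer(X, w, b):
    """One permutation-equivariant layer for a ratings-style matrix X.
    The output mixes each entry with its row mean, column mean and the
    overall mean, so permuting rows or columns of X permutes the output
    in exactly the same way."""
    w1, w2, w3, w4 = w
    row_mean = X.mean(axis=1, keepdims=True)   # one value per row
    col_mean = X.mean(axis=0, keepdims=True)   # one value per column
    all_mean = X.mean()
    return np.tanh(w1 * X + w2 * row_mean + w3 * col_mean + w4 * all_mean + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 7))                    # e.g., 5 users x 7 movies
Y = exchangeable_layer(X, w=(0.5, 0.3, 0.3, 0.1), b=0.0)

# Equivariance check: permuting rows of the input permutes rows of the output.
perm = rng.permutation(5)
assert np.allclose(exchangeable_layer(X[perm], (0.5, 0.3, 0.3, 0.1), 0.0), Y[perm])
```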
INFANTS ’ ABILITY TO DISTINGUISH BETWEEN PURPOSEFUL AND NON-PURPOSEFUL BEHAVIORS | Prior studies (Gergely et al., 1995; Woodward, 1998) have found that infants focus on the goals of an action over other details. The current studies tested whether infants would distinguish between a behavior that seemed to be goal-directed and one that seemed not to be. Infants in one condition saw an actor grasp one of two toys that sat side by side on a stage. Infants in the other condition saw the actor drop her hand onto one of the toys in a manner that looked unintentional. Once infants had been habituated to these events, they were shown test events in which either the path of motion or the object that was touched had changed. Nine-month-olds differentiated between these two actions. When they saw the actor grasp the toy, they looked longer on trials with a change in goal object than on trials with a change in path. When they saw the actor drop her hand onto the toy, they looked equally at the two test events. These findings did not result from infants being more interested in grasping as compared to inert hands. In a second study, 5-month-old infants showed patterns similar to those seen in 9-month-olds. These findings have implications for theories of the development of the concept of intention. They argue against the claim that infants are innately predisposed to interpret any motion of an animate agent as intentional. |
The functional neuroanatomy of the placebo effect. | OBJECTIVE
Administration of placebo can result in a clinical response indistinguishable from that seen with active antidepressant treatment. Functional brain correlates of this phenomenon have not been fully characterized.
METHOD
Changes in brain glucose metabolism were measured by using positron emission tomography in hospitalized men with unipolar depression who were administered placebo as part of an inpatient imaging study of fluoxetine. Common and unique response effects to administration of placebo or fluoxetine were assessed after a 6-week, double-blind trial.
RESULTS
Placebo response was associated with regional metabolic increases involving the prefrontal, anterior cingulate, premotor, parietal, posterior insula, and posterior cingulate and metabolic decreases involving the subgenual cingulate, parahippocampus, and thalamus. Regions of change overlapped those seen in responders administered active fluoxetine. Fluoxetine response, however, was associated with additional subcortical and limbic changes in the brainstem, striatum, anterior insula, and hippocampus, sources of efferent input to the response-specific regions identified with both agents.
CONCLUSIONS
The common pattern of cortical glucose metabolism increases and limbic-paralimbic metabolism decreases in placebo and fluoxetine responders suggests that facilitation of these changes may be necessary for depression remission, regardless of treatment modality. Clinical improvement in the group receiving placebo as part of an inpatient study is consistent with the well-recognized effect that altering the therapeutic environment may significantly contribute to reducing clinical symptoms. The additional subcortical and limbic metabolism decreases seen uniquely in fluoxetine responders may convey additional advantage in maintaining long-term clinical response and in relapse prevention. |
Structural Basis of Zika Virus-Specific Antibody Protection | Zika virus (ZIKV) infection during pregnancy has emerged as a global public health problem because of its ability to cause severe congenital disease. Here, we developed six mouse monoclonal antibodies (mAbs) against ZIKV including four (ZV-48, ZV-54, ZV-64, and ZV-67) that were ZIKV specific and neutralized infection of African, Asian, and American strains to varying degrees. X-ray crystallographic and competition binding analyses of Fab fragments and scFvs defined three spatially distinct epitopes in DIII of the envelope protein corresponding to the lateral ridge (ZV-54 and ZV-67), C-C' loop (ZV-48 and ZV-64), and ABDE sheet (ZV-2) regions. In vivo passive transfer studies revealed protective activity of DIII-lateral ridge specific neutralizing mAbs in a mouse model of ZIKV infection. Our results suggest that DIII is targeted by multiple type-specific antibodies with distinct neutralizing activity, which provides a path for developing prophylactic antibodies for use in pregnancy or designing epitope-specific vaccines against ZIKV. |
Lana-Match Algorithm: A Parallel Version of the Rete-Match Algorithm | The Rete-Match algorithm is a matching algorithm used to develop production systems. Although this algorithm is the fastest known algorithm for many-pattern/many-object matching, it still requires a considerable amount of time due to the recursive nature of the problem. In this paper, a parallel version of the Rete-Match algorithm for distributed memory architectures is presented. Also, a theoretical analysis of its correctness and performance is discussed. |
Automatic multimedia cross-modal correlation discovery | Given an image (or video clip, or audio song), how do we automatically assign keywords to it? The general problem is to find correlations across the media in a collection of multimedia objects like video clips, with colors, and/or motion, and/or audio, and/or text scripts. We propose a novel, graph-based approach, "MMG", to discover such cross-modal correlations.Our "MMG" method requires no tuning, no clustering, no user-determined constants; it can be applied to any multimedia collection, as long as we have a similarity function for each medium; and it scales linearly with the database size. We report auto-captioning experiments on the "standard" Corel image database of 680 MB, where it outperforms domain specific, fine-tuned methods by up to 10 percentage points in captioning accuracy (50% relative improvement). |
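Since MMG is graph-based and tuning-free, a natural way to illustrate cross-modal correlation scoring on a mixed-media graph is a random walk with restarts from the query node. The sketch below shows that generic procedure and should not be read as the paper's exact algorithm; the toy graph and restart probability are assumptions.

```python
import numpy as np

def rwr_scores(adj, query, restart=0.35, n_iter=100):
    """Random-walk-with-restart steady-state scores from a query node on a
    mixed-media graph (nodes = images, caption words, regions, ...)."""
    A = np.asarray(adj, float)
    col_sums = A.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    P = A / col_sums                      # column-stochastic transition matrix
    e = np.zeros(A.shape[0]); e[query] = 1.0
    r = e.copy()
    for _ in range(n_iter):
        r = (1 - restart) * P @ r + restart * e
    return r                              # high score = strongly correlated node

# Toy graph: node 0 is an uncaptioned image linked to visually similar images
# (nodes 1-2), which are in turn linked to caption-word nodes 3-4.
adj = np.zeros((5, 5))
for a, b in [(0, 1), (0, 2), (1, 3), (2, 4), (1, 4)]:
    adj[a, b] = adj[b, a] = 1.0
print(rwr_scores(adj, query=0))
```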
Economic and environmental analysis of micro hydropower system for rural power supply | This paper discusses the advantages of using renewable energy sources in the architecture of an off-grid hybrid power system in rural areas. The studied system is composed of a diesel generator to which a micro hydropower plant is added. Simulations using the Hybrid Optimization Model for Electric Renewable (HOMER) are performed for given annual values of hydro resources, power demands and hybrid system component costs. The results highlight the cost-effectiveness character and the reduction of gas pollutant emissions achieved by using such a system rather than a diesel generator to supply the same load. |
Strategic Design of Engineering Education for the Flat World | We believe that two critical success factors for an engineer in the flat world are an ability to adapt to changes and to be able to work at the interface of different disciplines. Instead of educating traditional domain-specific and analysis-orientated engineers, we believe that the focus should be on educating and graduating strategic engineers who can realize complex systems for changing markets in a collaborative, globally distributed environment. We identify three key drivers that we believe are foundational to future engineering design education programs. These drivers are a) emphasis on strategic engineering, b) mass customization of courses, c) utilization of IT-enabled environments for distributed education. Strategic engineering is a field that relates to the design and creation of complex systems that are adaptable to changes. Mass customization of courses refers to adapting the course material to educational goals and learning styles of different students. IT enabled environments bring distributed students and instructors closer in the form of a virtual classroom. |
A user-centric evaluation framework for recommender systems | This research was motivated by our interest in understanding the criteria for measuring the success of a recommender system from the users' point of view. Even though existing work has suggested a wide range of criteria, the consistency and validity of the combined criteria have not been tested. In this paper, we describe a unifying evaluation framework, called ResQue (Recommender systems' Quality of user experience), which aims at measuring the qualities of the recommended items, the system's usability, usefulness, interface and interaction qualities, users' satisfaction with the systems, and the influence of these qualities on users' behavioral intentions, including their intention to purchase the products recommended to them and return to the system. We also show the results of applying psychometric methods to validate the combined criteria using data collected from a large user survey. The outcomes of the validation are able to 1) support the consistency, validity and reliability of the selected criteria; and 2) explain the quality of user experience and the key determinants motivating users to adopt the recommender technology. The final model consists of thirty-two questions and fifteen constructs, defining the essential qualities of an effective and satisfying recommender system, as well as providing practitioners and scholars with a cost-effective way to evaluate the success of a recommender system and identify important areas in which to invest development resources. |
Fault tolerant longitudinal aircraft control using non-linear integral sliding mode | This paper proposes a novel nonlinear fault tolerant scheme for longitudinal control of an aircraft system, comprising an integral sliding mode control allocation scheme and a backstepping structure. In fault free conditions, the closed loop system is governed by the backstepping controller and the integral sliding mode control allocation scheme only influences the performance if faults/failures occur in the primary control surfaces. In this situation the allocation scheme redistributes the control signals to the secondary control surfaces and the scheme is able to tolerate total failures in the primary actuator. A backstepping scheme taken from the existing literature is designed for flight path angle tracking (based on the nonlinear equations of motion) and this is used as the underlying baseline controller in nominal conditions. The efficacy of the scheme is demonstrated using a high fidelity aircraft benchmark model. Excellent results are obtained in the presence of plant/model uncertainty in both fault-free and faulty conditions. Keyword: Fault tolerant control (FTC), integral sliding mode (ISM) control. |
Building credit scoring models using genetic programming | Credit scoring models have been widely studied in the areas of statistics, machine learning, and artificial intelligence (AI). Many novel approaches such as artificial neural networks (ANNs), rough sets, or decision trees have been proposed to increase the accuracy of credit scoring models. Since an improvement in accuracy of a fraction of a percent might translate into significant savings, a more sophisticated model should be proposed to significantly improve the accuracy of credit scoring models. In this paper, genetic programming (GP) is used to build credit scoring models. Two numerical examples are employed to compare the error rate with those of other credit scoring models, including ANNs, decision trees, rough sets, and logistic regression. On the basis of the results, we can conclude that GP can provide better performance than the other models. |
A Robotic Context Query-Processing Framework Based on Spatio-Temporal Context Ontology | Service robots operating in indoor environments should recognize dynamic changes from sensors, such as RGB-depth (RGB-D) cameras, and recall the past context. Therefore, we propose a context query-processing framework, comprising spatio-temporal robotic context query language (ST-RCQL) and a spatio-temporal robotic context query-processing system (ST-RCQP), for service robots. We designed them based on spatio-temporal context ontology. ST-RCQL can query not only the current context knowledge, but also the past. In addition, ST-RCQL includes a variety of time operators and time constants; thus, queries can be written very efficiently. The ST-RCQP is a query-processing system equipped with a perception handler, working memory, and backward reasoner for real-time query-processing. Moreover, ST-RCQP accelerates query-processing speed by building a spatio-temporal index in the working memory, where percepts are stored. Through various qualitative and quantitative experiments, we demonstrate the high efficiency and performance of the proposed context query-processing framework. |
Sexual activity and Kaposi's sarcoma among human immunodeficiency virus type 1 and human herpesvirus type 8-coinfected men. | PURPOSE
There is notable heterogeneity in the progression to Kaposi's sarcoma (KS) among men coinfected with HIV-1 and human herpesvirus type 8 (HHV-8); additional determinants of KS likely exist. Here, we explore sexual activity as a proxy for a sexually transmitted determinant beyond HIV-1 and HHV-8.
METHODS
The association between sexual activity and incident KS was estimated with data from 1354 HIV-1- and HHV-8-coinfected homosexual men who were followed for up to 10 years in the Multicenter AIDS Cohort Study.
RESULTS
As expected, white race, low CD4 cell count, and the acquisition of HHV-8 after HIV-1 infection increased, whereas smoking decreased, the hazard of KS. The unadjusted hazard of KS among those with high sexual activity was 0.68 relative to the hazard of those with low sexual activity (95% confidence interval, 0.49-0.93) and was somewhat muted after adjustment for characteristics measured at study entry (i.e., race, smoking, CD4 cell count, infection order, history of sexual activity, and sexually transmitted diseases). However, adjustment for time-varying covariates, particularly CD4 cell count, resulted in a nullification of the association (adjusted hazard ratio = 1.06; 95% confidence interval, 0.77-1.48).
CONCLUSION
Once HIV-1 and HHV-8 coinfection is established in homosexual men, progression to KS does not appear to be caused by a third pathogen transmitted by sexual activity. |
CUDAAdvisor: LLVM-based runtime profiling for modern GPUs | General-purpose GPUs have been widely utilized to accelerate parallel applications. Given a relatively complex programming model and fast architecture evolution, producing efficient GPU code is nontrivial. A variety of simulation and profiling tools have been developed to aid GPU application optimization and architecture design. However, existing tools are either limited by insufficient insights or lacking in support across different GPU architectures, runtime and driver versions. This paper presents CUDAAdvisor, a profiling framework to guide code optimization in modern NVIDIA GPUs. CUDAAdvisor performs various fine-grained analyses based on the profiling results from GPU kernels, such as memory-level analysis (e.g., reuse distance and memory divergence), control flow analysis (e.g., branch divergence) and code-/data-centric debugging. Unlike prior tools, CUDAAdvisor supports GPU profiling across different CUDA versions and architectures, including CUDA 8.0 and Pascal architecture. We demonstrate several case studies that derive significant insights to guide GPU code optimization for performance improvement. |
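CUDAAdvisor's memory-level analysis includes reuse distance. As a plain-Python illustration of the metric itself (not CUDAAdvisor's actual LLVM-based instrumentation of GPU kernels), the reuse distance of an access is the number of distinct addresses touched since the previous access to the same address:

```python
def reuse_distances(trace):
    """For each access, the number of distinct addresses touched since the
    previous access to the same address (None for a first access)."""
    last_pos = {}           # address -> index of its previous access
    distances = []
    for i, addr in enumerate(trace):
        if addr in last_pos:
            # distinct addresses seen strictly between the two accesses
            window = set(trace[last_pos[addr] + 1 : i])
            distances.append(len(window))
        else:
            distances.append(None)
        last_pos[addr] = i
    return distances

trace = ["A", "B", "C", "A", "B", "B"]
print(reuse_distances(trace))   # [None, None, None, 2, 2, 0]
```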
Smallpox vaccine–induced antibodies are necessary and sufficient for protection against monkeypox virus | Vaccination with live vaccinia virus affords long-lasting protection against variola virus, the agent of smallpox. Its mode of protection in humans, however, has not been clearly defined. Here we report that vaccinia-specific B-cell responses are essential for protection of macaques from monkeypox virus, a variola virus ortholog. Antibody-mediated depletion of B cells, but not CD4+ or CD8+ T cells, abrogated vaccine-induced protection from a lethal intravenous challenge with monkeypox virus. In addition, passive transfer of human vaccinia-neutralizing antibodies protected nonimmunized macaques from severe disease. Thus, vaccines able to induce long-lasting protective antibody responses may constitute realistic alternatives to the currently available smallpox vaccine (Dryvax). |
Assessing the impact of diabetes on the quality of life of older adults living in a care home: validation of the ADDQoL Senior. | AIMS
Around a quarter of UK care-home residents have diabetes. Diabetes is known to impact quality of life but existing diabetes-specific quality of life measures are unsuitable for elderly care-home residents. We aimed to develop and evaluate a new measure for use with older adults, to be particularly suitable for use with care-home residents: the Audit of Diabetes-Dependent Quality of Life (ADDQoL) Senior.
METHODS
Content and format changes were made to the 19-domain ADDQoL, informed by related measures for people with visual impairments (12 domain-specific items were retained, four items were revised/added and three items were removed). This revision was modified further following cognitive debriefing interviews with three older adults living in a care home. Psychometric evaluation of the newly developed 17-domain ADDQoL Senior was conducted using data from 90 care-home residents with diabetes who took part in a broader intervention study.
RESULTS
The life domains most impacted by diabetes were 'independence' and 'freedom to eat as I wish'. The ADDQoL Senior demonstrated good factor structure and internal consistency (Cronbach's alpha = 0.924). Domain scores were, as expected, significantly intercorrelated.
CONCLUSIONS
The ADDQoL Senior measures the perceived impact of diabetes on quality of life in older adults, and has been found to be suitable for those living in care homes if administered by interview. The scale has demonstrated acceptability and excellent psychometric properties. It is anticipated that the number of items may be reduced in the future if our current findings can be replicated. |
A BERT Baseline for the Natural Questions | This technical note describes a new baseline for the Natural Questions (Kwiatkowski et al., 2019). Our model is based on BERT (Devlin et al., 2018) and reduces the gap between the model F1 scores reported in the original dataset paper and the human upper bound by 30% and 50% relative for the long and short answer tasks, respectively. This baseline has been submitted to the official NQ leaderboard. Code, preprocessed data and pretrained model are available. |
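The "30% / 50% relative" phrasing refers to closing the gap between a reported model F1 and the human upper bound. A tiny sketch of that arithmetic with made-up F1 numbers (the actual NQ scores are in the cited papers):

```python
def relative_gap_reduction(old_f1, new_f1, human_f1):
    """Fraction of the (human - old model) F1 gap closed by the new model."""
    old_gap = human_f1 - old_f1
    new_gap = human_f1 - new_f1
    return (old_gap - new_gap) / old_gap

# Hypothetical numbers purely for illustration.
print(f"{relative_gap_reduction(old_f1=0.55, new_f1=0.64, human_f1=0.85):.0%}")  # 30%
```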
Full observation of single-atom dynamics in cavity QED | We report the use of broadband heterodyne spectroscopy to perform continuous measurement of the interaction energy Eint between one atom and a high-finesse optical cavity, during individual transit events of ≈ 250 μs duration. We achieve a fractional sensitivity ≈ 4 × 10⁻⁴/√Hz to variations in Eint/h within a measurement bandwidth that covers 2.5 decades of frequency (1–300 kHz). Our basic procedure is to drop cold cesium atoms into the cavity from a magneto-optic trap while monitoring the cavity's complex optical susceptibility with a weak probe laser. The instantaneous value of the atom–cavity interaction energy, which in turn determines the coupled system's optical susceptibility, depends on both the atomic position and (Zeeman) internal state. Measurements over a wide range of atom–cavity detunings reveal the transition from resonant to dispersive coupling, via the transfer of atom-induced signals from the amplitude to the phase of light transmitted through the cavity. By suppressing all sources of excess technical noise, we approach a measurement regime in which the broadband photocurrent may be interpreted as a classical record of conditional quantum evolution in the sense of recently developed quantum trajectory theories. PACS: 03.65.Bz; 06.20.Dk; 42.50. Optical-cavity quantum electrodynamics (QED) in the strong-coupling regime [1] provides a unique experimental paradigm for real-time observation of quantum dynamical processes at the single-atom level. Whereas spectacular advances have certainly been made in the preparation and tomography of quantum states of motion for a single trapped ion [2, 3], all such experiments have involved the accumulation of ensemble-averaged data over many successive realizations of the process being studied. Recent studies of single-molecule dynamics have likewise demonstrated the "immediate" detection of photochemical [4] or conformational [5] events, but such experiments presently lack the potential that cavity QED provides for observing quantum processes on a timescale that makes coherent control/intervention a tangible possibility. We wish here to look beyond the mere detection of quantum jumps, and to focus on the development of a broadband, single-shot measurement technique that achieves a signal-to-noise ratio > 1 over a bandwidth that includes all characteristic frequencies of a quantum dynamical process. Real-time observation of quantum dynamics in many-atom systems has recently become an important theme in atomic physics, with notable demonstrations involving vibrational excitations of a trapped Bose–Einstein condensate [6] and the decay of coherent oscillations of an ensemble of atoms in an optical lattice [7, 8]. In contrast to programs like these, for which the scientific emphasis lies on noninvasive observation of a system's intrinsic dynamical processes, experiments in single-atom cavity QED hold great potential for enabling precise investigations of how measurement back-action alters the dynamical behavior of a continuously observed open quantum system [9–12, 55]. A sophisticated theoretical basis for understanding such issues is presently maturing in the form of quantum trajectory theories [13–16], but significant technical challenges remain to be solved before definitive experiments can be performed in the lab.
Our purpose in the present work is to report substantial progress towards surmounting such obstacles in the context of cavity QED, and hence towards achieving the essential experimental capabilities required to perform quantitative tests of measurement-based stochastic master equations. We ultimately hope to be able to implement some recently proposed "applications" of the continuous observation of dissipative quantum dynamics, in fields such as quantum measurement [17, 20], quantum chaos [18, 19], and quantum feedback control [20, 21, 23, 24]. This article focuses on a detailed description of our recent experiments that record the complete time-evolution of the interaction energy between one atom and a high-finesse optical cavity, during individual transit events of ≈ 250 μs duration. With characteristic atom–cavity interaction energies Eint/h ≈ 10 MHz, we achieve measurement sensitivities Sg ≈ 4.5 kHz/√Hz over a bandwidth that covers the dominant rates of variation in Eint (1–300 kHz). Unlike typical pump-probe measurements of scattering dynamics in real |
A comparative study of standard versus laparoendoscopic single-site surgery (LESS) totally extraperitoneal (TEP) inguinal hernia repair | Laparoscopic inguinal hernia repair has been around since the 1990s. A novel surgical approach known as laparoendoscopic single-site surgery (LESS) has been developed to reduce the port-related morbidities and improve the cosmetic outcomes of laparoscopic surgery, including totally extraperitoneal (TEP) inguinal hernia repair. The aim of the present study was to evaluate the safety and feasibility of the LESS TEP technique for inguinal hernia repair and compare the outcomes with the standard TEP approach. Between January and May 2009, 54 consecutive healthy patients (48 men and 6 women) underwent LESS TEP inguinal hernia repair at our institute. All procedures were performed using our homemade single port for simultaneous passage of the laparoscope and instruments. The perioperative data, including patient age, sex, body mass index (BMI), hernia characteristics, operative time, complications, length of hospital stay, return to normal activity, pain score, and cosmetic result, were prospectively collected. All LESS TEP procedures were completed successfully without conversion to standard laparoscopic or open surgery. A total of 98 LESS TEP hernia repairs were performed in 54 patients and compared with 152 standard TEP operations. The mean operative time was significantly shorter in the standard TEP series (61.8 ± 26.0 vs. 70.9 ± 23.8 min, p = 0.04). Other perioperative parameters, including the length of hospital stay, time until return to full activity, complication rate, pain score, and cosmetic result, were all comparable between the two techniques. Our short-term experience with LESS TEP inguinal hernia repair has shown that in experienced hands, inguinal hernia repair via the LESS TEP technique is as safe as the standard TEP technique. However, based on our evidence, we currently believe that the LESS TEP technique is not an efficacious surgical alternative to the standard TEP technique for inguinal hernias. |
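The operative-time comparison above (61.8 ± 26.0 vs. 70.9 ± 23.8 min, p = 0.04) is the kind of result an unpaired t-test from summary statistics produces. The sketch below assumes the reported means/SDs and uses the repair counts (152 standard, 98 LESS) as group sizes; the paper's exact grouping and test may differ, so the printed p-value will not necessarily reproduce the reported 0.04:

```python
from scipy import stats

# Reported operative times (mean, SD) and assumed group sizes.
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=61.8, std1=26.0, nobs1=152,   # standard TEP
    mean2=70.9, std2=23.8, nobs2=98,    # LESS TEP
    equal_var=False,                    # Welch's t-test
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```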
Correlation Assessment between Three-Dimensional Facial Soft Tissue Scan and Lateral Cephalometric Radiography in Orthodontic Diagnosis | Purpose. The aim of the present prospective study was to investigate correlations between 3D facial soft tissue scan and lateral cephalometric radiography measurements. Materials and Methods. The study sample comprised 312 subjects of Caucasian ethnic origin. Exclusion criteria were craniofacial anomalies, noticeable asymmetries, and previous or current orthodontic treatment. A cephalometric analysis was developed employing 11 soft tissue landmarks and 14 sagittal and 14 vertical angular measurements corresponding to skeletal cephalometric variables. Cephalometric analyses on lateral cephalometric radiographs were performed for all subjects. The measurements were analysed in terms of their reliability and gender- and age-specific differences. The soft tissue values were then analysed for correlations with lateral cephalometric radiography variables using Pearson correlation coefficient analysis. Results. Low, medium, and high correlations were found for sagittal and vertical measurements. Sagittal measurements appeared more reliable for providing a soft tissue diagnosis than vertical measurements. Conclusions. Sagittal parameters appeared more reliable in providing a soft tissue diagnosis similar to lateral cephalometric radiography, whereas vertical soft tissue measurements showed somewhat lower correlation with the corresponding cephalometric values, perhaps due to the low reproducibility of cranial base and mandibular landmarks. |
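The correlation analysis described above reduces to Pearson's r computed per pair of corresponding measurements. A minimal sketch with made-up paired angular values (degrees), not the study's data:

```python
from scipy.stats import pearsonr

# Hypothetical paired sagittal angles: 3D soft-tissue scan vs. lateral cephalogram.
soft_tissue = [82.1, 79.4, 85.0, 81.2, 78.8, 84.3]
cephalometric = [81.0, 78.2, 84.1, 80.5, 77.9, 83.0]

r, p = pearsonr(soft_tissue, cephalometric)
print(f"Pearson r = {r:.2f} (p = {p:.4f})")
```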
Building Software Reuse Library with Efficient Keyword based Search Mechanism | Software reuse is the use of existing software components to build a software system. Effective storage and retrieval of software components is essential to the reuse process. Researchers have developed a number of software reuse techniques for the storage and retrieval of components. No single technique is complete in itself; each has its own merits and demerits. This paper presents a new approach for building a software reuse library based on keyword searching, for the storage and fast retrieval of software components. |
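As a sketch of the kind of keyword-based storage and retrieval the abstract describes (the component descriptions and the match-count ranking are invented for illustration, not the paper's scheme): index each component under its description keywords and rank candidates by how many query keywords they match:

```python
from collections import defaultdict

def build_index(components):
    """Map each keyword to the set of component names whose description contains it."""
    index = defaultdict(set)
    for name, description in components.items():
        for word in description.lower().split():
            index[word].add(name)
    return index

def search(index, query):
    """Rank components by the number of query keywords they match."""
    hits = defaultdict(int)
    for word in query.lower().split():
        for name in index.get(word, ()):
            hits[name] += 1
    return sorted(hits.items(), key=lambda kv: kv[1], reverse=True)

components = {
    "QuickSorter": "in place array sort comparison",
    "CsvReader":   "parse csv file into records",
    "JsonReader":  "parse json file into records",
}
index = build_index(components)
print(search(index, "parse file records"))   # CsvReader and JsonReader rank first
```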
Two-stream RNN/CNN for action recognition in 3D videos | The recognition of actions from video sequences has many applications in health monitoring, assisted living, surveillance, and smart homes. Despite advances in sensing, in particular related to 3D video, the methodologies to process the data are still subject to research. We demonstrate superior results by a system which combines recurrent neural networks with convolutional neural networks in a voting approach. The gated-recurrent-unit-based neural networks are particularly well-suited to distinguish actions based on long-term information from optical tracking data; the 3D-CNNs focus more on detailed, recent information from video data. The resulting features are merged in an SVM which then classifies the movement. In this architecture, our method improves recognition rates of state-of-the-art methods by 14% on standard data sets. |
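Schematically, the fusion step described above amounts to concatenating the per-clip features from the two streams and letting an SVM decide the action class. The feature dimensions and data below are placeholders, not the paper's architecture:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips, n_classes = 200, 5

# Placeholder per-clip features from the two streams.
gru_features = rng.normal(size=(n_clips, 128))    # long-term motion (GRU on tracking data)
cnn3d_features = rng.normal(size=(n_clips, 256))  # short-term appearance (3D-CNN on video)
labels = rng.integers(0, n_classes, size=n_clips)

# Merge the streams and classify the movement with an SVM.
fused = np.concatenate([gru_features, cnn3d_features], axis=1)
clf = SVC(kernel="rbf").fit(fused[:150], labels[:150])
print("held-out accuracy:", clf.score(fused[150:], labels[150:]))
```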