title | abstract |
---|---|
JMASM Algorithms and Code Pseudo-Random Number Generators for Vector Processors and Multicore Processors | There is a lack of good pseudo-random number generators capable of utilizing the vector-processing and multiprocessing capabilities of modern computers. A suitable generator must have a feedback path long enough to fit the vector length, or permit multiple instances with different parameter sets. The risk that multiple random streams from identical generators have overlapping subsequences can be eliminated by combining two different generators. Combining two generators can also increase randomness by remedying weaknesses caused by the need for mathematical tractability. Larger applications need higher precision. The article discusses hitherto ignored imprecisions caused by quantization errors, both in the application of generators with a prime modulus and in the generation of uniformly distributed integers over an arbitrary interval length. A C++ software package that overcomes all these problems is offered. The RANVEC1 code combines a Mersenne Twister variant and a multiply-with-carry generator to produce vector output, and is suitable for processors with vector sizes up to 512 bits. Some theorists have argued that there is no theoretical proof that the streams from different generators are statistically independent. The article contends that the demand for such a proof misunderstands the nature of the problem, and that the mathematical tractability that would allow such a proof would also defeat it. This calls for a more fundamental philosophical discussion of the status of proofs in relation to deterministic pseudo-random sequences. |
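The two ideas in this abstract, combining unrelated generators and avoiding quantization error when mapping to an arbitrary interval, are easy to sketch. The Python below is illustrative, not the RANVEC1 code itself; the MWC multiplier and the 32-bit word size are assumptions.

```python
import random

def mwc32(seed, a=4294957665):
    """Minimal 32-bit multiply-with-carry stream (illustrative parameters)."""
    x, c = seed & 0xFFFFFFFF, 1
    while True:
        t = a * x + c
        x, c = t & 0xFFFFFFFF, t >> 32
        yield x

def combined_stream(seed1, seed2):
    """Combine a Mersenne Twister with an MWC generator by adding their
    outputs modulo 2^32. Per the article's argument, combining two
    different generators removes the risk that streams from identical
    generators share overlapping subsequences."""
    mt = random.Random(seed1)   # CPython's Random is a Mersenne Twister
    mwc = mwc32(seed2)
    while True:
        yield (mt.getrandbits(32) + next(mwc)) & 0xFFFFFFFF

def uniform_int(stream, n):
    """Draw a uniform integer in [0, n) by rejection sampling, avoiding
    the modulo bias (a quantization error) of simply taking x % n."""
    limit = (1 << 32) - ((1 << 32) % n)  # largest multiple of n below 2^32
    for x in stream:
        if x < limit:
            return x % n

rng = combined_stream(1, 2)
print([uniform_int(rng, 6) for _ in range(10)])
```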
Denoising diffusion-weighted magnitude MR images using rank and edge constraints. | PURPOSE
To improve signal-to-noise ratio for diffusion-weighted magnetic resonance images.
METHODS
A new method is proposed for denoising diffusion-weighted magnitude images. The proposed method formulates the denoising problem as a maximum a posteriori (MAP) estimation problem based on Rician/noncentral χ likelihood models, incorporating an edge prior and a low-rank model. The resulting optimization problem is solved efficiently using a half-quadratic method with an alternating minimization scheme.
RESULTS
The performance of the proposed method has been validated using simulated and experimental data. Diffusion-weighted images and noisy data were simulated based on the diffusion tensor imaging model and Rician/noncentral χ distributions. The simulation study (with known gold standard) shows substantial improvements in signal-to-noise ratio and diffusion tensor estimation after denoising. In vivo diffusion imaging data at different b-values were acquired. Based on the experimental data, qualitative improvement in image quality and quantitative improvement in diffusion tensor estimation were demonstrated. Additionally, the proposed method is shown to outperform one of the state-of-the-art nonlocal-means-based denoising algorithms, both qualitatively and quantitatively.
CONCLUSION
The signal-to-noise ratio of diffusion-weighted images can be effectively improved with rank and edge constraints, resulting in an improvement in diffusion parameter estimation accuracy. |
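For orientation, a hedged sketch of the kind of MAP objective the abstract describes; the exact penalty forms and weights are the paper's choices, and λ₁, λ₂ and the edge functional Φ are placeholders:

```latex
\hat{X} = \arg\min_{X} \; -\log p(Y \mid X; \sigma)
          \;+\; \lambda_1 \, \Phi_{\mathrm{edge}}(X)
          \;+\; \lambda_2 \, \operatorname{rank}(X),
\qquad
p(y \mid x; \sigma) = \frac{y}{\sigma^2}
  \exp\!\left(-\frac{y^2 + x^2}{2\sigma^2}\right)
  I_0\!\left(\frac{xy}{\sigma^2}\right),
```

where the second expression is the standard Rician likelihood of a magnitude measurement y given the noise-free intensity x, with I₀ the modified Bessel function of the first kind; the half-quadratic alternating minimization then handles this non-convex objective.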
Accelerated large-scale multiple sequence alignment | Multiple sequence alignment (MSA) is a fundamental analysis method used in bioinformatics and many comparative genomic applications. Prior MSA acceleration attempts with reconfigurable computing have only addressed the first stage of progressive alignment and consequently exhibit performance limitations according to Amdahl's Law. This work is the first known to accelerate the third stage of progressive alignment on reconfigurable hardware. We reduce subgroups of aligned sequences into discrete profiles before they are pairwise aligned on the accelerator. Using an FPGA accelerator, an overall speedup of up to 150× has been demonstrated on a large data set when compared to a 2.4 GHz Core2 processor. Our parallel algorithm and architecture accelerate large-scale MSA with reconfigurable computing and allow researchers to solve the larger problems that confront biologists today. Program source is available from http://dna.cs.byu.edu/msa/ . |
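The profile reduction mentioned above can be illustrated in a few lines. This Python sketch (the DNA alphabet and function shape are assumptions; the actual system prepares profiles in a hardware-friendly form) reduces a subgroup of aligned sequences to position-wise frequency vectors before profile-profile alignment:

```python
from collections import Counter

ALPHABET = "ACGT-"  # assumption: DNA with a gap symbol

def profile(aligned_seqs):
    """Reduce a subgroup of aligned sequences to a profile: one
    residue-frequency vector per alignment column (the representation
    pairwise-aligned on the accelerator)."""
    length = len(aligned_seqs[0])
    cols = []
    for i in range(length):
        counts = Counter(seq[i] for seq in aligned_seqs)
        total = sum(counts.values())
        cols.append({c: counts[c] / total for c in ALPHABET})
    return cols

prof = profile(["ACG-T", "AC-AT", "GCG-T"])
print(prof[0])  # column 0: A appears 2/3 of the time, G 1/3
```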
Outer-loop vectorization - revisited for short SIMD architectures | Vectorization has been an important method of using data-level parallelism to accelerate scientific workloads on vector machines such as Cray for the past three decades. In the last decade it has also proven useful for accelerating multi-media and embedded applications on short SIMD architectures such as MMX, SSE and AltiVec. Most of the focus has been directed at innermost loops, effectively executing their iterations concurrently as much as possible. Outer-loop vectorization refers to vectorizing a level of a loop nest other than the innermost, which can be beneficial if the outer loop exhibits greater data-level parallelism and locality than the innermost loop. Outer-loop vectorization has traditionally been performed by interchanging an outer loop with the innermost loop, followed by vectorization at the innermost position. A more direct unroll-and-jam approach can be used to vectorize an outer loop without involving loop interchange, which can be especially suitable for short SIMD architectures.
In this paper we revisit the method of outer-loop vectorization, paying special attention to the properties of modern short SIMD architectures. We show that even though current optimizing compilers for such targets do not apply outer-loop vectorization in general, it can provide significant performance improvements over innermost-loop vectorization. Our implementation of direct outer-loop vectorization, available in GCC 4.3, achieves speedup factors of 3.13 and 2.77 on average across a set of benchmarks, compared to 1.53 and 1.39 achieved by innermost-loop vectorization, when running on Cell BE SPU and PowerPC 970 processors, respectively. Moreover, outer-loop vectorization provides new reuse opportunities that can be vital for such short SIMD architectures, including efficient handling of alignment. We present an optimization tapping such opportunities, capable of further boosting the performance obtained by outer-loop vectorization to achieve average speedup factors of 5.26 and 3.64. |
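To make the unroll-and-jam transformation concrete, here is a schematic Python/NumPy sketch (not GCC's implementation; the row-reduction example and vector length VL=4 are illustrative) contrasting a scalar loop nest with direct outer-loop vectorization:

```python
import numpy as np

def reduce_rows_scalar(a):
    """Scalar loop nest: the inner loop is a serial reduction, which
    makes innermost-loop vectorization awkward."""
    m, n = a.shape
    out = np.zeros(m)
    for i in range(m):        # outer loop: independent iterations
        for j in range(n):    # inner loop: loop-carried dependence
            out[i] += a[i, j]
    return out

def reduce_rows_outer_vectorized(a, VL=4):
    """Direct outer-loop vectorization: VL consecutive outer iterations
    are jammed together, so each inner step becomes one vector add."""
    m, n = a.shape
    out = np.zeros(m)
    main = m - m % VL
    for i in range(0, main, VL):        # unroll-and-jam by VL
        acc = np.zeros(VL)              # one accumulator per SIMD lane
        for j in range(n):
            acc += a[i:i+VL, j]         # one vector add across VL rows
        out[i:i+VL] = acc
    for i in range(main, m):            # scalar epilogue for leftover rows
        out[i] = a[i].sum()
    return out

a = np.arange(40.0).reshape(10, 4)
assert np.allclose(reduce_rows_scalar(a), reduce_rows_outer_vectorized(a))
```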
Tachyonic potential in Bianchi type-I universe | We investigated the effects of a tachyonic field as a source of gravity in a Bianchi type-I metric. A tachyonic potential is constructed in the anisotropic metric and it is observed that a tachyonic field can be a possible candidate to drive the anisotropically expanding universe. The asymptotic nature of the potential is also discussed. |
ALGEBRAIC CATEGORIES WHOSE PROJECTIVES ARE EXPLICITLY FREE | Let M = (M, m, u) be a monad and let (MX, m) be the free M-algebra on the object X. Consider an M-algebra (A, a), a retraction r : (MX, m) → (A, a) and a section t : (A, a) → (MX, m) of r. The retract (A, a) is not free in general. We observe that for many monads with a 'combinatorial flavor' such a retract is not only a free algebra (MA₀, m), but it is also the case that the object A₀ of generators is determined in a canonical way by the section t. We give a precise form of this property, prove a characterization, and discuss examples from combinatorics, universal algebra, convexity and topos theory. |
Evolutionary architecture search for deep multitask networks | Multitask learning, i.e. learning several tasks at once with the same neural network, can improve performance in each of the tasks. Designing deep neural network architectures for multitask learning is a challenge: There are many ways to tie the tasks together, and the design choices matter. The size and complexity of this problem exceed human design ability, making it a compelling domain for evolutionary optimization. Using the existing state-of-the-art soft ordering architecture as the starting point, methods for evolving the modules of this architecture and for evolving the overall topology or routing between modules are evaluated in this paper. A synergetic approach of evolving custom routings with evolved, shared modules for each task is found to be very powerful, significantly improving the state of the art in the Omniglot multitask, multialphabet character recognition domain. This result demonstrates how evolution can be instrumental in advancing deep neural network and complex system design in general. |
Invariant diagrams with data refinement | Invariant based programming is an approach where we start to construct a program by first identifying the basic situations (pre- and post-conditions as well as invariants) that could arise during the execution of the algorithm. These situations are identified before any code is written. After that, we identify the transitions between the situations, which will give us the flow of control in the program. Data refinement is a technique of building correct programs working on concrete data structures as refinements of more abstract programs working on abstract data types. We study in this paper data refinement for invariant based programs and we apply it to the construction of the classical Deutsch–Schorr–Waite graph marking algorithm. Our results are formalized and mechanically proved in the Isabelle/HOL theorem prover. |
Term-Weighting Approaches in Automatic Text Retrieval | The experimental evidence accumulated over the past 20 years indicates that text indexing systems based on the assignment of appropriately weighted single terms produce retrieval results that are superior to those obtainable with other more elaborate text representations. These results depend crucially on the choice of effective term-weighting systems. This article summarizes the insights gained in automatic term weighting, and provides baseline single-term-indexing models with which other more elaborate content analysis procedures can be compared. 1. AUTOMATIC TEXT ANALYSIS In the late 1950s, Luhn [1] first suggested that automatic text retrieval systems could be designed based on a comparison of content identifiers attached both to the stored texts and to the users' information queries. Typically, certain words extracted from the texts of documents and queries would be used for content identification; alternatively, the content representations could be chosen manually by trained indexers familiar with the subject areas under consideration and with the contents of the document collections. In either case, the documents would be represented by term vectors of the form D = (t_i, t_j, ..., t_p) (1) where each t_k identifies a content term assigned to some sample document D. Analogously, the information requests, or queries, would be represented either in vector form, or in the form of Boolean statements. Thus, a typical query Q might be formulated as Q = (q_a, q_b, ..., q_r) (2) |
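As a concrete baseline of the kind the article evaluates, here is a minimal Python sketch of tf-idf weighting with cosine normalization (the tokenized toy documents and the exact idf form are illustrative assumptions, not the article's specific formulation):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Classic single-term weighting: term frequency times inverse
    document frequency, followed by cosine (unit-length) normalization."""
    N = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        w = {t: tf[t] * math.log(N / df[t]) for t in tf}
        norm = math.sqrt(sum(v * v for v in w.values())) or 1.0
        vectors.append({t: v / norm for t, v in w.items()})
    return vectors

docs = [["text", "retrieval", "term"], ["term", "weighting", "term"]]
for vec in tfidf_vectors(docs):
    print(vec)   # terms occurring in every document receive zero weight
```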
Learning Radio Resource Management in 5G Networks: Framework, Opportunities and Challenges | The fifth generation (5G) of mobile broadband will be a far more complex system than earlier generations due to advancements in radio and network technology, increased densification and heterogeneity of network and user equipment, a larger number of operating bands, as well as more stringent performance requirements. To cope with the increased complexity of the Radio Resource Management (RRM) of 5G systems, this manuscript advocates the need for a clean-slate design of the 5G RRM architecture. We propose to capitalize on the large amount of data readily available in the network from measurements and system observations, in combination with the most recent advances in the field of machine learning. The result is an RRM architecture based on a general-purpose learning framework capable of deriving specific RRM control policies directly from data gathered in the network. The potential of this approach is verified in three case studies, and future directions for the application of machine learning to RRM are discussed. |
Reactions of organic compounds in adsorbed monolayers. I. Ozonation of 3,7-dimethyloctyl acetate | Oxidation with ozone of 3,7-dimethyloctyl acetate (7) adsorbed on silica gel, alumina or barium sulphate proceeds in good yield and affords mainly the 7-hydroxy compound (8) and the 7-nor-ketone (9). Minor products include the 3-alcohol (10) and the 3,7-diol (11). The reaction involves interaction of gaseous ozone with the adsorbed organic substrate. The relative yields of the various products are related to the substrate loading. The results show that the very high regioselectivity exhibited by the reaction when conducted under suitable experimental conditions may be attributed to the mutual steric effect of one molecule upon the other in the adsorbed monolayer. |
Supporting Scalable Analytics with Latency Constraints | Recently there has been significant interest in building big data analytics systems that can handle both “big data” and “fast data”. Our work is strongly motivated by recent real-world use cases that point to the need for a general, unified data processing framework to support analytical queries with different latency requirements. Toward this goal, we start with an analysis of existing big data systems to understand the causes of high latency. We then propose an extended architecture with mini-batches as the granularity for computation and shuffling, and augment it with new model-driven resource allocation and runtime scheduling techniques to meet user latency requirements while maximizing throughput. Results from real-world workloads show that our techniques, implemented in Incremental Hadoop, reduce its latency from tens of seconds to sub-second, with a 2x-5x increase in throughput. Our system also outperforms state-of-the-art distributed stream systems, Storm and Spark Streaming, by 1-2 orders of magnitude when combining latency and throughput. |
How does it really feel to be in my shoes? Patients' experiences of compassion within nursing care and their perceptions of developing compassionate nurses | AIMS AND OBJECTIVES
To understand how patients experience compassion within nursing care and explore their perceptions of developing compassionate nurses.
BACKGROUND
Compassion is a fundamental part of nursing care. Individually, nurses have a duty of care to show compassion; an absence can lead to patients feeling devalued and lacking in emotional support. Despite recent media attention, primary research around patients' experiences and perceptions of compassion in practice and its development in nursing care remains in short supply.
DESIGN
A qualitative exploratory descriptive approach.
METHODS
In-depth, semi-structured interviews were conducted with a purposive sample of 10 patients in a large teaching hospital in the United Kingdom. Interviews were digitally recorded and transcribed verbatim. Thematic networks were used in analysis.
RESULTS
Three overarching themes emerged from the data: (1) what is compassion: knowing me and giving me your time, (2) understanding the impact of compassion: how it feels in my shoes and (3) being more compassionate: communication and the essence of nursing.
CONCLUSION
Compassion from nursing staff is broadly aligned with actions of care, which can often take time. However, for some, this element of time need only be fleeting to establish a compassionate connection. Despite recent calls for an increased focus on compassion at all levels of nurse education and training, patient opinion was divided on whether it can be taught or remains a moral virtue. Gaining understanding of the impact of uncompassionate actions presents an opportunity to change both individual and cultural behaviours.
RELEVANCE TO CLINICAL PRACTICE
This study serves as a timely reminder that the smallest of nursing actions can convey compassion. Introducing vignettes of real-life situations from the lens of the patient to engage practitioners in collaborative learning in the context of compassionate nursing could offer opportunities for valuable and legitimate professional development. |
Spam Filtering with Naive Bayes - Which Naive Bayes? | Naive Bayes is very popular in commercial and open-source anti-spam e-mail filters. There are, however, several forms of Naive Bayes, something the anti-spam literature does not always acknowledge. We discuss five different versions of Naive Bayes, and compare them on six new, non-encoded datasets that contain ham messages of particular Enron users and fresh spam messages. The new datasets, which we make publicly available, are more realistic than previous comparable benchmarks, because they maintain the temporal order of the messages in the two categories, and they emulate the varying proportion of spam and ham messages that users receive over time. We adopt an experimental procedure that emulates the incremental training of personalized spam filters, and we plot ROC curves that allow us to compare the different versions of NB over the entire tradeoff between true positives and true negatives. |
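For readers unfamiliar with the variants, the following Python sketch shows one of them, multinomial Naive Bayes with Laplace smoothing (a minimal illustration, not the paper's code; the toy messages are invented):

```python
import math
from collections import Counter, defaultdict

class MultinomialNB:
    """Multinomial Naive Bayes: class scores are log priors plus summed
    log word likelihoods with Laplace (add-alpha) smoothing."""
    def fit(self, messages, labels, alpha=1.0):
        self.alpha = alpha
        self.vocab = {w for msg in messages for w in msg}
        self.prior = Counter(labels)
        self.counts = defaultdict(Counter)   # per-class word counts
        for msg, y in zip(messages, labels):
            self.counts[y].update(msg)
        return self

    def predict(self, msg):
        scores = {}
        for y in self.prior:
            total = sum(self.counts[y].values())
            score = math.log(self.prior[y] / sum(self.prior.values()))
            for w in msg:
                if w in self.vocab:          # ignore unseen words
                    p = (self.counts[y][w] + self.alpha) / \
                        (total + self.alpha * len(self.vocab))
                    score += math.log(p)
            scores[y] = score
        return max(scores, key=scores.get)

nb = MultinomialNB().fit([["cheap", "pills"], ["meeting", "monday"]],
                         ["spam", "ham"])
print(nb.predict(["cheap", "meeting", "pills"]))  # "spam"
```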
Dual-Band Wide-Angle Scanning Planar Phased Array in X/Ku-Bands | A novel planar dual-band phased array, operational in the X/Ku-bands and with wide-angle scanning capability, is presented. The design, development and experimental demonstration are described. A new single-layer crossed L-bar microstrip antenna is used for the array design. The antenna has a low-profile architecture, measuring only 0.33λ × 0.33λ at the lower frequency band of operation, with flexible resonance tuning capability offered by the use of a plate-through-hole and field-matching ring arrangement. A 49-element planar (7 × 7) array demonstrator has been built and its performance validated, exhibiting good agreement with full-wave simulations. The dual-band array supports a large frequency ratio of nearly 1.8:1, and also maintains good sub-band bandwidths. Wide-angle scanning up to maxima of 60° and 50° is achieved at the low and high frequency bands of operation, respectively. |
Exploiting Ontology Lexica for Generating Natural Language Texts from RDF Data | The increasing amount of machine-readable data available in the context of the Semantic Web creates a need for methods that transform such data into human-comprehensible text. In this paper we develop and evaluate a Natural Language Generation (NLG) system that converts RDF data into natural language text based on an ontology and an associated ontology lexicon. While it follows a classical NLG pipeline, it diverges from most current NLG systems in that it exploits an ontology lexicon in order to capture context-specific lexicalisations of ontology concepts, and combines the use of such a lexicon with the choice of lexical items and syntactic structures based on statistical information extracted from a domain-specific corpus. We apply the developed approach to the cooking domain, providing both an ontology and an ontology lexicon in lemon format. Finally, we evaluate the fluency and adequacy of the generated recipes with respect to two target audiences: cooking novices and advanced cooks. |
Self-substrate-triggered technique to enhance turn-on uniformity of multi-finger ESD protection devices | A novel self-substrate-triggered (SST) technique is proposed to solve the nonuniform turn-on issue of multi-finger GGNMOS devices for ESD protection. The first turned-on center finger is used to trigger on all fingers in the GGNMOS structure. Thus, the turn-on uniformity and ESD robustness of the GGNMOS can be greatly improved by the newly proposed self-substrate-triggered technique. |
Ultra-wideband tightly coupled fractal octagonal phased array antenna | This paper presents a low-profile ultra-wideband tightly coupled phased array antenna with integrated feedlines. The aperture array consists of planar element pairs with fractal geometry. In each element, the pairs are set orthogonal to each other for dual polarisation. The design is an array of closely capacitively coupled pairs of fractal octagonal rings. The adjustment of the capacitive load at the tip end of the elements and the strong mutual coupling between the elements enable a wideband conformal performance. Adding a ground plane below the array partly compensates for the frequency variation of the array impedance, providing further enhancement of the array bandwidth. Additional improvement is achieved by placing another layer of conductive elements at a defined distance above the radiating elements. A Genetic Algorithm was scripted in MATLAB and combined with the HFSS simulator, providing an easy optimisation tool for the array unit cell design parameters across the operational bandwidth. The proposed antenna shows wide scanning ability with a low cross-polarisation level over a wide bandwidth. |
Amyloid-associated neuron loss and gliogenesis in the neocortex of amyloid precursor protein transgenic mice. | APP23 transgenic mice express mutant human amyloid precursor protein and develop amyloid plaques predominantly in neocortex and hippocampus progressively with age, similar to Alzheimer's disease. We have previously reported neuron loss in the hippocampal CA1 region of 14- to 18-month-old APP23 mice. In contrast, no neuron loss was found in neocortex. In the present study we have reinvestigated neocortical neuron numbers in adult and aged APP23 mice. Surprisingly, results revealed that 8-month-old APP23 mice have 13 and 14% more neocortical neurons compared with 8-month-old wild-type and 27-month-old APP23 mice, respectively. In 27-month-old APP23 mice we found an inverse correlation between amyloid load and neuron number. These results suggest that APP23 mice have more neurons until they develop amyloid plaques but then lose neurons in the process of cerebral amyloidogenesis. Supporting this notion, we found more neurons with a necrotic-apoptotic phenotype in the neocortex of 24-month-old APP23 mice compared with age-matched wild-type mice. Stimulated by recent reports that demonstrated neurogenesis after targeted neuron death in the mouse neocortex, we have also examined neurogenesis in APP23 mice. Strikingly, we found a fourfold to sixfold increase in newly produced cells in 24-month-old APP23 mice compared with both age-matched wild-type mice and young APP23 transgenic mice. However, subsequent cellular phenotyping revealed that none of the newly generated cells in neocortex had a neuronal phenotype. The majority were microglial and to a lesser extent astroglial cells. We conclude that cerebral amyloidosis in APP23 mice causes a modest neuron loss in neocortex and induces marked gliogenesis. |
Victorian Poets and Romantic Poems: Intertextuality and Ideology | Bringing together the critical strategies of new historicism and intertextual analysis, "Victorian Poets and Romantic Poems" questions the ideological operations of Victorian poems and the ideological dispositions of their authors, particularly in relation to Romantic precursors and pre-texts. By examining the works of eight Victorian poets - Matthew Arnold, Robert Browning, Alfred, Lord Tennyson, Dante Gabriel Rossetti, Christina Rossetti, Elizabeth Barrett Browning, William Morris, and A.C. Swinburne - Harrison demonstrates how the ideologies of Victorian poets are revealed by their self-consciously intertextual uses of precursors. |
In the Shadow of David’s Brutus | This essay rereads Jacques-Louis David’s The Lictors Bring to Brutus the Bodies of His Sons (1789)—long interpreted in terms of revolutionary virtue—in the light of Carl Schmitt’s theories of political foundation and sovereignty. By recovering the early complexity of the figure of Brutus, the essay shows how David’s painting anticipated the constitutional debates of September 1789. |
Towards a fully automated 3D printability checker | 3D printing has become one of the most popular evolutionary techniques, with diverse applications, even as a hobby for ordinary people. As printing targets, enormous numbers of 3D virtual models from the game industry and virtual reality flood the internet, shared in various online forums such as Thingiverse. But which of them can really be printed? In this paper we propose the 3D Printability Checker, which can be used to automatically answer this non-trivial question. One of the major novelties of this paper is the process of dependable software engineering we use to build this Printability Checker. Firstly, we prove that this question is decidable for a given 3D object and a list of printer profiles. Secondly, we design and implement such a checker. Finally, we show our experimental results and use them for a machine learning approach to improve our system automatically. The generic framework provides a useful basis for automatic self-improvement of the software by combining current techniques from the areas of formal methods, geometry modelling and machine learning. |
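One ingredient of any such decidability argument is that individual geometric criteria are mechanically checkable. A minimal Python sketch of one such criterion, watertightness, follows; a real checker of the kind described would also test wall thickness, overhang angles and the given printer profiles:

```python
from collections import Counter

def is_watertight(triangles):
    """One checkable printability criterion: a triangle mesh bounds a
    solid (is closed, with no holes) only if every edge is shared by
    exactly two triangles."""
    edges = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[frozenset((u, v))] += 1
    return all(n == 2 for n in edges.values())

# A tetrahedron (vertices 0..3) is closed; dropping one face opens a hole.
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tetra))       # True
print(is_watertight(tetra[:3]))   # False
```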
Prolegomena to any future artificial moral agent | As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory itself, and from computational limits to the implementation of such theories. In this paper the ethical disputes are surveyed, the possibility of a 'moral Turing Test' is considered and the computational difficulties accompanying the different types of approach are assessed. Human-like performance, which is prone to include immoral actions, may not be acceptable in machines, but moral perfection may be computationally unattainable. The risks posed by autonomous machines ignorantly or deliberately harming people and other sentient beings are great. The development of machines with enough intelligence to assess the effects of their actions on sentient beings and act accordingly may ultimately be the most important task faced by the designers of artificially intelligent automata. 1. Introduction A good web server is a computer that efficiently serves up html code. A good chess program is one that wins chess games. There are some grey areas and fuzzy edges, of course. Is a good chess program one that wins most games or just some? Against just ordinary competitors or against world-class players? But in one sense of the question, it is quite clear what it means to build a good computer or write a good program. A good one is one that fulfills the purpose we had in building it. However, if you wanted to build a computer or write a program that is good in a moral sense, that is, a good moral agent, it is much less clear what would count as success. Yet as artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. How to curb these capacities for harm is … |
Applications of Convolutional Neural Networks | In recent years, deep learning has been used extensively in a wide range of fields. In deep learning, Convolutional Neural Networks are found to give the most accurate results in solving real world problems. In this paper, we give a comprehensive summary of the applications of CNN in computer vision and natural language processing. We delineate how CNN is used in computer vision, mainly in face recognition, scene labelling, image classification, action recognition, human pose estimation and document analysis. Further, we describe how CNN is used in the field of speech recognition and text classification for natural language processing. We compare CNN with other methods to solve the same problem and explain why CNN is better than other methods. Keywords— Deep Learning, Convolutional Neural Networks, Computer Vision, Natural Language |
GraphChi: Large-Scale Graph Computation on Just a PC | Current systems for graph computation require a distributed computing cluster to handle very large real-world problems, such as analysis on social networks or the web graph. While distributed computational resources have become more accessible, developing distributed graph algorithms still remains challenging, especially to non-experts. In this work, we present GraphChi, a disk-based system for computing efficiently on graphs with billions of edges. By using a well-known method to break large graphs into small parts, and a novel Parallel Sliding Windows algorithm, GraphChi is able to execute several advanced data mining, graph mining and machine learning algorithms on very large graphs, using just a single consumer-level computer. We show, through experiments and theoretical analysis, that GraphChi performs well on both SSDs and rotational hard drives. We build on the basis of Parallel Sliding Windows to propose a new data structure, Partitioned Adjacency Lists, which we use to design an online graph database, GraphChi-DB. We demonstrate that, on a single PC, GraphChi-DB can process over one hundred thousand graph updates per second, while simultaneously performing computation. GraphChi-DB compares favorably to existing graph databases, particularly on data that is much larger than the available memory. We evaluate our work both experimentally and theoretically. Based on the Parallel Sliding Windows algorithm, we propose new I/O-efficient algorithms for solving fundamental graph problems. We also propose a novel algorithm for simulating billions of random walks in parallel on a single computer. By repeating experiments reported for existing distributed systems we show that, with only a fraction of the resources, GraphChi can solve the same problems in a very reasonable time. Our work makes large-scale graph computation available to anyone with a modern PC. |
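The "well-known method to break large graphs into small parts" can be sketched as follows: vertices are split into contiguous intervals, and edges are grouped into shards by destination interval. This Python sketch is in-memory and simplified; GraphChi keeps shards on disk and slides a window over each shard as it processes one interval at a time:

```python
def build_shards(edges, num_vertices, num_shards):
    """Partition edges as in the Parallel Sliding Windows layout:
    shard k holds every edge whose destination falls in vertex
    interval k, stored sorted by source vertex."""
    size = -(-num_vertices // num_shards)   # ceiling division
    shards = [[] for _ in range(num_shards)]
    for src, dst in edges:
        shards[dst // size].append((src, dst))
    for shard in shards:
        shard.sort()                        # sorted by source vertex
    return shards

edges = [(0, 3), (1, 2), (3, 0), (2, 5), (5, 1), (4, 4)]
for k, shard in enumerate(build_shards(edges, num_vertices=6, num_shards=3)):
    print(f"shard {k}: {shard}")
```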
BING: Binarized Normed Gradients for Objectness Estimation at 300fps | Training a generic objectness measure to produce a small set of candidate object windows has been shown to speed up the classical sliding window object detection paradigm. We observe that generic objects with a well-defined closed boundary can be discriminated by looking at the norm of gradients, with a suitable resizing of their corresponding image windows into a small fixed size. Based on this observation and for computational reasons, we propose to resize the window to 8 × 8 and use the norm of the gradients as a simple 64D feature to describe it, for explicitly training a generic objectness measure. We further show how the binarized version of this feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation, which requires only a few atomic operations (e.g. ADD, BITWISE SHIFT, etc.). Experiments on the challenging PASCAL VOC 2007 dataset show that our method efficiently (300fps on a single laptop CPU) generates a small set of category-independent, high quality object windows, yielding a 96.2% object detection rate (DR) with 1,000 proposals. By increasing the number of proposals and the color spaces used for computing BING features, our performance can be further improved to 99.5% DR. |
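A hedged sketch of the normed-gradients feature in Python/NumPy; the nearest-neighbour resize is an assumption made to keep the example dependency-free, and BING additionally binarizes this feature for its atomic-operation implementation:

```python
import numpy as np

def ng_feature(window):
    """Normed-gradients feature as described in the abstract: resize a
    candidate window to 8x8 and use the gradient magnitude at each
    pixel as a 64-D descriptor."""
    h, w = window.shape
    ys = np.arange(8) * h // 8          # nearest-neighbour row indices
    xs = np.arange(8) * w // 8          # nearest-neighbour column indices
    small = window[np.ix_(ys, xs)].astype(float)
    gy, gx = np.gradient(small)         # finite-difference gradients
    return np.hypot(gx, gy).ravel()     # 64 gradient norms

window = np.random.rand(37, 52)         # any candidate window size
print(ng_feature(window).shape)         # (64,)
```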
A wearable haptic game controller | This paper outlines the development of a wearable game controller incorporating vibrotactile haptic feedback that provides a low-cost, versatile and intuitive interface for controlling digital games. The device differs from many traditional haptic feedback implementations in that it combines vibrotactile haptic feedback with gesture-based input, thus becoming a two-way conduit between the user and the virtual environment. The device is intended to challenge what is considered an “interface” and draws on work in the area of Actor-Network theory to purposefully blur the boundary between man and machine. This allows for a more immersive experience: rather than making the user feel like they are controlling an aircraft, the intuitive interface allows the user to become the aircraft, controlled by the movements of the user's hand. This device invites playful action and thrill. It bridges new territory in portable and low-cost solutions for haptic controllers in a gaming context. |
Enhancing intentions to attend cervical cancer screening with a stage-matched intervention. | OBJECTIVE
The study evaluated the effects of a pros-enhancing intervention on the intention to attend cervical cancer screening. It was hypothesized that the pros-enhancement session (compared to an education session) would affect the intentions of preintentional women, whereas it was expected to have negligible effects among women in the intentional and actional stages of the health action process approach (HAPA). Thus, we tested the HAPA using stage-matched and stage-mismatched interventions. Further, a change in decisional balance was assumed to mediate the relationship between group assignment and intention, with age acting as a moderator.
DESIGN AND METHODS
Respondents (1,436 women, aged 18-60) were randomly assigned to either the intervention or control condition and filled out questionnaires before and directly after the manipulation (in one web-based session).
RESULTS
Direct effects of the group assignment were observed only among preintentional women. Across the stages, however, change in decisional balance mediated the effects of group assignment on the intention to attend screening. Further, among participants in the preintentional stage, this mediation became significant only for women aged 35 or older.
CONCLUSIONS
Although the direct effects are in line with the stage assumptions of the HAPA, mediational analysis among pre- and post-intentional women indicated that similar processes accounted for post-manipulation intention. Future research testing stage models should account for the mediating processes that explain the effects of the intervention. |
High-dose chemotherapy followed by reinfusion of selected CD34+ peripheral blood cells in patients with poor-prognosis breast cancer: a randomized multicentre study. | Seventy-one patients with poor-prognosis breast cancer were enrolled after informed consent in a multicentre randomized study to evaluate the use of selected peripheral blood CD34+ cells to support haematopoietic recovery following high-dose chemotherapy. Patients who responded to conventional chemotherapy were mobilized with chemotherapy (mainly high-dose cyclophosphamide) and/or recombinant human granulocyte colony-stimulating factor (rhG-CSF). Patients who reached the threshold of 20 CD34+ cells per microl of peripheral blood underwent apheresis and were randomized at that time to receive either unmanipulated mobilized blood cells or selected CD34+ cells. For patients in the study arm, CD34+ cells were selected from aphereses using the Isolex300 device. Fifteen patients failed to mobilize peripheral blood progenitors and nine other patients were excluded for various reasons. Forty-seven eligible patients were randomized into two comparable groups. CD34+ cells were selected from aphereses in the study group. Haematopoietic recovery occurred at similar times in both groups. No side-effect related to the infusion of selected cells was observed. The frequency of epithelial tumour cells in aphereses was low (8 out of 42 evaluated patients), as determined by immunocytochemistry. We conclude that selected CD34+ cells safely support haematopoietic recovery following high-dose chemotherapy in patients with poor-prognosis breast cancer. |
Modeling people flow: simulation analysis of international-departure passenger flows in an airport terminal | An entire airport terminal building is simulated to examine passenger flows, especially international departures. First, the times needed for passengers to be processed in the terminal building are examined. It is found that the waiting time for check-in accounts for more than 80 percent of the total waiting time passengers spend in the airport. A special-purpose data generator is designed and developed to create experimental data for executing the simulation. It is found that the number of passengers likely to miss their flights could be drastically reduced by adding supporting staff and by making use of first- and business-class check-in counters to process economy- and group-class passengers. |
The Intermediate Value Theorem is NOT Obvious---and I am Going to Prove It to You. | Stephen M. Walk ([email protected]) earned bachelor's and master's degrees from the University of Northern Iowa and a Ph.D. from the University of Notre Dame. He has taught at St. Cloud State University in Minnesota since 1999. His mathematical interests include computability theory and many-valued logic. His non-mathematical time tends to revolve around his three children (who created the art in the background of the photo). |
Motivation factors of blue-collar workers versus white-collar workers in Herzberg's Two Factors theory | Herzberg et al. (1959) developed the "Two Factors theory" to focus on the working conditions necessary for employees to be motivated. Since Herzberg examined only white-collar workers in his research, this article reviews later studies on the motivation factors of blue-collar workers versus white-collar workers and suggests some hypotheses for further research. |
Multiangle Social Network Recommendation Algorithms and Similarity Network Evaluation | Multiangle social network recommendation algorithms (MSN) and a new assessment method, called similarity network evaluation (SNE), are both proposed. From the viewpoint of six dimensions, the MSN are classified into six algorithms, including user-based algorithm from resource point (UBR), user-based algorithm from tag point (UBT), resource-based algorithm from tag point (RBT), resource-based algorithm from user point (RBU), tag-based algorithm from resource point (TBR), and tag-based algorithm from user point (TBU). Compared with the traditional recall/precision (RP) method, the SNE is more simple, effective, and visualized. The simulation results show that TBR and UBR are the best algorithms, RBU and TBU are the worst ones, and UBT and RBT are in the medium levels. |
Modeling, Control, and Implementation of DC–DC Converters for Variable Frequency Operation | In this paper, novel small-signal averaged models for dc-dc converters operating at variable switching frequency are derived. This is achieved by separately considering the on-time and the off-time of the switching period. The derivation is shown in detail for a synchronous buck converter and the model for a boost converter is also presented. The model for the buck converter is then used for the design of two digital feedback controllers, which exploit the additional insight into the converter dynamics. First, a digital multiloop PID controller is implemented, where the design is based on loop-shaping of the proposed frequency-domain transfer functions. Second, the design and implementation of a digital LQG state-feedback controller, based on the proposed time-domain state-space model, is presented for the same converter topology. Experimental results are given for the digital multiloop PID controller integrated on an application-specific integrated circuit in a 0.13 μm CMOS technology, as well as for the state-feedback controller implemented on an FPGA. Tight output voltage regulation and excellent dynamic performance are achieved, as the dynamics of the converter under variable frequency operation are considered during the design of both implementations. |
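For orientation, the textbook state-space averaged model of a synchronous buck converter, which the paper generalizes by treating the on-time and off-time separately (a standard sketch, not the paper's derivation):

```latex
L \frac{d i_L}{dt} = d\, V_{in} - v_o, \qquad
C \frac{d v_o}{dt} = i_L - \frac{v_o}{R}, \qquad
d = \frac{t_{on}}{t_{on} + t_{off}},
```

where i_L is the inductor current, v_o the output voltage, and d the duty cycle. Under fixed-frequency operation only t_on is modulated within a constant period T = t_on + t_off; under variable-frequency operation both t_on and t_off become control inputs, which is what motivates deriving the averaged model over the two sub-intervals separately.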
Collagen VI regulates peripheral nerve regeneration by modulating macrophage recruitment and polarization | Macrophages contribute to peripheral nerve regeneration and produce collagen VI, an extracellular matrix protein involved in nerve function. Here, we show that collagen VI is critical for macrophage migration and polarization during peripheral nerve regeneration. Nerve injury induces a robust upregulation of collagen VI, whereas lack of collagen VI in Col6a1 −/− mice delays peripheral nerve regeneration. In vitro studies demonstrated that collagen VI promotes macrophage migration and polarization via AKT and PKA pathways. Col6a1 −/− macrophages exhibit impaired migration abilities and reduced anti-inflammatory (M2) phenotype polarization, but are prone to skewing toward the proinflammatory (M1) phenotype. In vivo, macrophage recruitment and M2 polarization are impaired in Col6a1 −/− mice after nerve injury. The delayed nerve regeneration of Col6a1 −/− mice is induced by macrophage deficits and rejuvenated by transplantation of wild-type bone marrow cells. These results identify collagen VI as a novel regulator of peripheral nerve regeneration that modulates macrophage function. |
Expression and adhesive properties of basement membrane proteins in cerebral capillary endothelial cell cultures | Previous studies have indicated the importance of basement membrane components both for cellular differentiation in general and for the barrier properties of cerebral microvascular endothelial cells in particular. Therefore, we have examined the expression of basement membrane proteins in primary capillary endothelial cell cultures from adult porcine brain. By indirect immunofluorescence, we could detect type IV collagen, fibronectin, and laminin both in vivo (basal lamina of cerebral capillaries) and in vitro (primary culture of cerebral capillary endothelial cells). In culture, these proteins were secreted at the subcellular matrix. Moreover, the interaction between basement membrane constituents and cerebral capillary endothelial cells was studied in adhesion assays. Type IV collagen, fibronectin, and laminin proved to be good adhesive substrata for these cells. Although the number of adherent cells did not differ significantly between the individual proteins, spreading on fibronectin was more pronounced than on type IV collagen or laminin. Our results suggest that type IV collagen, fibronectin, and laminin are not only major components of the cerebral microvascular basal lamina, but also assemble into a protein network, which resembles basement membrane, in cerebral capillary endothelial cell cultures. |
Increasing the revenues from automatic milking by using individual variation in milking characteristics. | The objective of this study was to quantify individual variation in daily milk yield and milking duration in response to the length of the milking interval and to assess the economic potential of using this individual variation to optimize the use of an automated milking system. Random coefficient models were used to describe the individual effects of milking interval on daily milk yield and milking duration. The random coefficient models were fitted on a data set consisting of 4,915 records of normal uninterrupted milkings collected from 311 cows kept in 5 separate herds for 1 wk. The estimated random parameters showed considerable variation between individuals within herds in milk yield and milking duration in response to milking interval. In the actual situation, the herd consisted of 60 cows and the automatic milking system operated at an occupation rate (OR) of 64%. When maximizing daily milk revenues per automated milking system by optimizing individual milking intervals, the average milking interval was reduced from 0.421 d to 0.400 d, the daily milk yield at the herd level was increased from 1,883 to 1,909 kg/d, and milk revenues increased from €498 to €507/d. If an OR of 85% could be reached with the same herd size, the optimal milking interval would decrease to 0.238 d, milk yield would increase to 1,997 kg/d, and milk revenues would increase to €529/d. Consequently, more labor would be required for fetching the cows, and milking duration would increase. Alternatively, an OR of 85% could be achieved by increasing the herd size from 60 to 80 cows without decreasing the milking interval. Milk yield would then increase to 2,535 kg/d and milk revenues would increase to €673/d. For practical implementation on farms, a dynamic approach is recommended, by which the parameter estimates regarding the effect of interval length on milk yield and the effect of milk yield on milking duration are updated regularly and also the milk production response to concentrate intake is taken into account. |
Type-based race detection for Java | This paper presents a static race detection analysis for multithreaded Java programs. Our analysis is based on a formal type system that is capable of capturing many common synchronization patterns. These patterns include classes with internal synchronization, classes that require client-side synchronization, and thread-local classes. Experience checking over 40,000 lines of Java code with the type system demonstrates that it is an effective approach for eliminating race conditions. On large examples, fewer than 20 additional type annotations per 1000 lines of code were required by the type checker, and we found a number of races in the standard Java libraries and other test programs. |
Twitter Polarity Classification with Label Propagation over Lexical Links and the Follower Graph | There is high demand for automated tools that assign polarity to microblog content such as tweets (Twitter posts), but this is challenging due to the terseness and informality of tweets in addition to the wide variety and rapid evolution of language in Twitter. It is thus impractical to use standard supervised machine learning techniques dependent on annotated training examples. We do without such annotations by using label propagation to incorporate labels from a maximum entropy classifier trained on noisy labels and knowledge about word types encoded in a lexicon, in combination with the Twitter follower graph. Results on polarity classification for several datasets show that our label propagation approach rivals a model supervised with in-domain annotated tweets, and it outperforms the noisily supervised classifier it exploits as well as a lexicon-based polarity ratio classifier. |
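A generic form of the label propagation step is easy to state. The Python/NumPy sketch below (the clamping scheme and the two-column score matrix are assumptions; the paper's variant incorporates noisy classifier and lexicon labels and weights different edge types) spreads polarity scores from seed-labeled nodes across a graph:

```python
import numpy as np

def propagate(adj, seeds, iters=50):
    """Plain label propagation: push polarity scores from seed nodes
    (e.g. nodes labeled by a noisy classifier or lexicon) across graph
    edges (e.g. lexical links and follower edges) until they stabilize."""
    n = adj.shape[0]
    scores = np.zeros((n, 2))               # columns: [positive, negative]
    for node, label in seeds.items():
        scores[node, label] = 1.0
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    for _ in range(iters):
        scores = adj @ scores / deg         # average over neighbours
        for node, label in seeds.items():   # clamp the seed labels
            scores[node] = 0.0
            scores[node, label] = 1.0
    return scores.argmax(axis=1)

adj = np.array([[0, 1, 1, 0],               # tiny symmetric graph
                [1, 0, 0, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(propagate(adj, seeds={1: 0, 3: 1}))   # nodes pull toward nearest seed
```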
A reputation system for peer-to-peer networks | We investigate the design of a reputation system for decentralized unstructured P2P networks like Gnutella. Having reliable reputation information about peers can form the basis of an incentive system and can guide peers in their decision making (e.g., who to download a file from). The reputation system uses objective criteria to track each peer's contribution in the system and allows peers to store their reputations locally. Reputations are computed using either of two schemes: debit-credit reputation computation (DCRC) or credit-only reputation computation (CORC). Using a reputation computation agent (RCA), we design a public-key based mechanism that periodically updates the peer reputations in a secure, lightweight, and partially distributed manner. We use simulations to evaluate the performance tradeoffs inherent in the design of our system. |
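The debit-credit idea can be shown in a few lines. A deliberately minimal Python sketch; the field names and unit weights are illustrative, and the actual system has the RCA cryptographically sign updates so that locally stored reputations cannot be forged:

```python
class DebitCreditReputation:
    """Sketch of the DCRC idea: serving content earns credit,
    consuming it debits the score. A credit-only scheme (CORC)
    would simply omit the debit step."""
    def __init__(self):
        self.score = 0.0

    def on_upload(self, megabytes):
        self.score += megabytes   # credit for contribution

    def on_download(self, megabytes):
        self.score -= megabytes   # debit for consumption (skipped in CORC)

peer = DebitCreditReputation()
peer.on_upload(30)
peer.on_download(12)
print(peer.score)   # 18.0
```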
TOWARDS LEGGED AMPHIBIOUS MOBILE ROBOTICS | New areas of research focus on bridging the gap between mobile robotics on land and at sea. This paper describes the evolution of RHex, a power-autonomous legged land-based robot, into one capable of both sea and land-based locomotion. In working towards an amphibious robot, three versions of RHex with increasing levels of aquatic capability were created. While all three platforms share the same underlying software, electronic and mechanical architectures, varying emphasis on aspects of similar design criteria resulted in the development of varied platforms with increasing ability of amphibious navigation. |
The Effect of Reflexology on the Pain-Insomnia-Fatigue Disturbance Cluster of Breast Cancer Patients During Adjuvant Radiation Therapy. | OBJECTIVE
To evaluate the effects of reflexology treatment on quality of life, sleep disturbances, and fatigue in breast cancer patients during radiation therapy.
METHODS/SUBJECTS
A total of 72 women with breast cancer (stages 1-3) scheduled for radiation therapy were recruited.
DESIGN
Women were allocated, according to their preference, either to the group receiving reflexology treatments once a week concurrently with radiotherapy and continuing for 10 weeks, or to the control group (usual care).
OUTCOME MEASURES
The Lee Fatigue Scale, General Sleep Disturbance Scale, and Multidimensional Quality of Life Scale Cancer were completed by each patient in both arms at the beginning of the radiation treatment, after 5 weeks, and after 10 weeks of reflexology treatment.
RESULTS
The final analysis included 58 women. The reflexology-treated group demonstrated significantly lower levels of fatigue after 5 weeks of radiation therapy (p < 0.001) compared to the control group. Although the quality of life in the control group deteriorated after 5 and 10 weeks of radiation therapy (p < 0.01 and p < 0.05, respectively), it was preserved in the reflexology group, which also demonstrated a significant improvement in quality of sleep after 10 weeks of radiation treatment (p < 0.05). Similar patterns were obtained in the assessment of the pain levels experienced by the patients.
CONCLUSIONS
The results of the present study indicate that reflexology may have a positive effect on fatigue, quality of sleep, pain, and quality of life in breast cancer patients during radiation therapy. Reflexology prevented the decline in quality of life and significantly ameliorated the fatigue and quality of sleep of these patients. An encouraging trend was also noted in amelioration of pain levels. |
Processing and Rendering Massive 3D Geospatial Environments using WebGL – The examples of OpenWebGlobe and SmartMobileMapping | Generating and visualizing rich and interactive (geospatial) 3D content over the World Wide Web (WWW) using state-of-the-art web technologies such as WebGL, a native component of modern web browsers, is a continuously growing approach to modern geospatial data exploitation. In this paper we introduce OpenWebGlobe and 3DCityTV as two showcase applications that demonstrate the capabilities for exploiting massive 3D geospatial environments with current web technologies, without extensive use of third-party technologies such as browser plugins or extensions. Real-time rendering of massive 3D virtual worlds using WebGL, as well as the parallel processing and storage of large-scale datasets within common cloud computing services, will be discussed. An image-based approach to 3D modelling of vast stereo-vision-based mobile mapping data will be introduced, using an entirely web-based 3D exploitation solution. |
FLEXIBLE MANUFACTURING SYSTEMS MODELLING AND PERFORMANCE EVALUATION USING AUTOMOD | In recent times, flexible manufacturing systems have emerged as a powerful technology to meet continuously changing customer demands. An increase in the performance of flexible manufacturing systems is expected as a result of integrating shop floor activities such as machine and vehicle scheduling. The authors attempt to integrate machine and vehicle scheduling with the objective of minimizing makespan using AutoMod, a discrete event simulation package used to model and simulate a wide variety of issues in automated manufacturing systems. The key issues related to the design and operation of automated guided vehicles, such as flow path layout, number of vehicles and traffic control problems, are considered in the study. Performance measures such as throughput and machine and vehicle utilization are studied for different job dispatching and vehicle assignment rules in different flexible manufacturing system configurations. |
Robust Backstepping control of ball and beam system with external disturbance estimator | This paper presents a robust feedback controller for the ball and beam system (BBS). The BBS is a nonlinear system in which a ball has to be balanced at a particular position on a beam. The proposed nonlinear controller for the BBS is based on the backstepping control technique, which guarantees the boundedness of the tracking error. To tackle unknown disturbances, an external disturbance estimator (EDE) is employed. The stability of the overall closed-loop robust control system is analyzed in the sense of Lyapunov theory. Finally, simulation studies demonstrate the suitability of the proposed scheme. |
Praxiological Inquiries of Theorizing | It seems that social scientists somehow understand their papers despite the fact that it is impossible to establish a "clear" relation between those papers and what are called methodological criteria. Accordingly, in this paper, theory is addressed not as something to be improved, but as a phenomenon embedded in our work of writing and reading within the situations where that work is done. Two features - 'sign reading practice' and 'impression of rationality' - are examined as ways of making sense of [purported] SCIENCE. It is concluded that theorizing activities are never isolated from our everyday work. |
A General Approach to Network Configuration Analysis | We present an approach to detect network configuration errors, which combines the benefits of two prior approaches. Like prior techniques that analyze configuration files, our approach can find errors proactively, before the configuration is applied, and answer “what if” questions. Like prior techniques that analyze data-plane snapshots, our approach can check a broad range of forwarding properties and produce actual packets that violate checked properties. We accomplish this combination by faithfully deriving and then analyzing the data plane that would emerge from the configuration. Our derivation of the data plane is fully declarative, employing a set of logical relations that represent the control plane, the data plane, and their relationship. Operators can query these relations to understand identified errors and their provenance. We use our approach to analyze two large university networks with qualitatively different routing designs and find many misconfigurations in each. Operators have confirmed the majority of these as errors and have fixed their configurations accordingly. |
UrbanMatch - linking and improving Smart Cities Data | Urban-related data and geographic information are becoming mainstream in the Linked Data community due also to the popularity of Location-based Services. In this paper, we introduce the UrbanMatch game, a mobile gaming application that joins data linkage and data quality/trustworthiness assessment in an urban environment. By putting together Linked Data and Human Computation, we create a new interaction paradigm to consume and produce location-specific linked data by involving and engaging the final user. The UrbanMatch game is also offered as an example of value proposition and business model of a new family of linked data applications based on gaming in Smart Cities. |
Structure of bone morphogenetic protein 9 procomplex. | Bone morphogenetic proteins (BMPs) belong to the TGF-β family, whose 33 members regulate multiple aspects of morphogenesis. TGF-β family members are secreted as procomplexes containing a small growth factor dimer associated with two larger prodomains. As isolated procomplexes, some members are latent, whereas most are active; what determines these differences is unknown. Here, studies on pro-BMP structures and binding to receptors lead to insights into mechanisms that regulate latency in the TGF-β family and into the functions of their highly divergent prodomains. The observed open-armed, nonlatent conformation of pro-BMP9 and pro-BMP7 contrasts with the cross-armed, latent conformation of pro-TGF-β1. Despite markedly different arm orientations in pro-BMP and pro-TGF-β, the arm domain of the prodomain can similarly associate with the growth factor, whereas prodomain elements N- and C-terminal to the arm associate differently with the growth factor and may compete with one another to regulate latency and stepwise displacement by type I and II receptors. Sequence conservation suggests that pro-BMP9 can adopt both cross-armed and open-armed conformations. We propose that interactors in the matrix stabilize a cross-armed pro-BMP conformation and regulate transition between cross-armed, latent and open-armed, nonlatent pro-BMP conformations. |
An RFID warehouse robot | RFID is one of the latest trends in industry, with potential applications ranging from warehouse to library management. This project aims to build an autonomous robot with an RFID application. The project integrates an RFID reader and a PIC microcontroller as the main components. The movement control comprises servo motors with infrared sensors for line following. All programming was carried out in assembly language using MPLab 7.3. The robot has the ability to identify items by reading the tags attached to them. The robot picks up an item and navigates to the prescribed destination using the line-follower module to store the item at the appropriate place and location. A small white platform with a black line was built for demonstration and testing.
Towards a More Flexible Timing Definition Language | Time-triggered languages permit modeling the temporal behavior of real-time systems by assigning system activities to particular time instants. At these precise instants the system observes the controlled object and, depending on the analysis of its state, invokes the appropriate actions. This fine-grained control of the system's temporal evolution enables value- and time-deterministic programming. Current time-triggered frameworks allow modeling multi-modal and multi-modular real-time systems. However, their timing verification imposes some constraints on the computational task model and on system reactivity. By adapting scheduling analysis techniques based on processor demand instead of the processor utilization factor, these limitations can be overcome and a more flexible framework may be proposed.
Beauty and Brains: Detecting Anomalous Pattern Co-Occurrences | Our world is filled with both beautiful and brainy people, but how often does a Nobel Prize winner also win a beauty pageant? Let us assume that someone who is both very beautiful and very smart is rarer than we would expect from the numbers of beautiful and brainy people alone. Of course there will still always be some individuals that defy this stereotype; these beautiful brainy people are exactly the class of anomaly we focus on in this paper. They do not possess rare qualities, but it is the unexpected combination of factors that makes them stand out. In this paper we define the above-described class of anomaly and propose a method to quickly identify such anomalies in transaction data. Further, as we take a pattern-set-based approach, our method readily explains why a transaction is anomalous. The effectiveness of our method is thoroughly verified with a wide range of experiments on both real-world and synthetic data.
Visual evoked potentials in the diagnosis of headache before 5 years of age | Headache is a common complaint in children. Diagnosis of the type of headache and the therapeutic approach are predominantly clinical, based on a detailed history and physical examination. These data are often not available or informative in young children with headache, leading clinicians to look for diagnostic studies. We reviewed our experience with 53 children under the age of 5 years with headache. Of these, 42 (75%) children underwent neuro-imaging studies including CT scan (32 cases), MRI (10 cases), or both (6 cases). All were within normal limits except for two cases, with a small arachnoid cyst and cerebellar hypoplasia respectively, which were not directly related to the headache. Visual evoked potentials were performed in all children. There were no significant differences in the mean latencies of N1, P100, and N2 between children with and without clinically diagnosed migraine; however, the P100-N2 amplitudes of children with migraine were significantly larger compared to those of children without migraine. Even in young children with headache, neuro-imaging studies have a very limited diagnostic value. Visual evoked potentials may also be used in this age group as a diagnostic tool, particularly when clinical symptoms are either unavailable or not characteristic. Conclusion: The diagnosis of migraine in young children remains clinical, based on history obtained from children and parents; however, visual evoked potentials may support the diagnosis in this age group.
Query Graphs, Implementing Trees, and Freely-Reorderable Outerjoins | We determine when a join/outerjoin query can be expressed unambiguously as a query graph, without an explicit specification of the order of evaluation. To do so, we first characterize the set of expression trees that implement a given join/outerjoin query graph, and investigate the existence of transformations among the various trees. Our main theorem is that a join/outerjoin query is freely reorderable if the query graph derived from it falls within a particular class: every tree that “implements” such a graph evaluates to the same result.
The result has applications to language design and query optimization. Languages that generate queries within such a class do not require the user to indicate priority among join operations, and hence may present a simplified syntax. And it is unnecessary to add extensive analyses to a conventional query optimizer in order to generate legal reorderings for a freely-reorderable language. |
How data science can advance mental health research | The accessibility of powerful computers and the availability of so-called big data from a variety of sources mean that data science approaches are becoming pervasive. However, their application in mental health research is often considered to be at an earlier stage than in other areas, despite the complexity of mental health and illness making such a sophisticated approach particularly suitable. In this Perspective, we discuss current and potential applications of data science in mental health research using the UK Clinical Research Collaboration classification: underpinning research; aetiology; detection and diagnosis; treatment development; treatment evaluation; disease management; and health services research. We demonstrate that data science is already being widely applied in mental health research, but there is much more to be done now and in the future. The possibilities for data science in mental health research are substantial.
Decision Trees: Theory and Algorithms |
Open-Domain Semantic Parsing with Boxer | Boxer is a semantic parser for English texts with many input and output possibilities, and various ways to perform meaning analysis based on Discourse Representation Theory. This involves the ways in which meaning representations can be computed, as well as their possible semantic ingredients.
A Planar Magnetically Coupled Resonant Wireless Power Transfer System Using Printed Spiral Coils | A fully planar wireless power transfer (WPT) system via strongly coupled magnetic resonances is presented. In it, both the transmitter and the receiver are planarized with the use of coplanar printed spiral coils (PSCs) and a printed loop. An equivalent circuit model of the proposed planar WPT system is derived to facilitate the design, and a flowchart is provided for the optimization of the system under given size constraints. To realize high peak power transfer efficiency, the quality factor of the individual loop or resonator, the mutual coupling between resonators, and the frequency-splitting phenomenon of the system are analyzed, in addition to the effect of the input impedance of the system on the transmission efficiency. Furthermore, parallel current paths are created by applying auxiliary strips to the backside of the substrates and connecting them to the prime resonators using vias to decrease the resistance and increase the quality factor of the PSC resonators, which in turn further improves the transfer efficiency of the proposed planar WPT system. The measured results show that the proposed WPT system is able to provide stable wireless power transfer with up to 81.68% efficiency at a distance of 10 cm. The planar structure and the high transfer efficiency make the proposed design a suitable candidate for wireless power transfer to small portable electronic devices.
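For context, a standard figure of merit for magnetically coupled resonant links (a textbook relation, not a formula quoted from the abstract) ties the peak achievable efficiency to the coupling coefficient $k$ and the loaded quality factors $Q_1$, $Q_2$ of the two resonators:

```latex
\eta_{\max} \;=\; \frac{k^{2} Q_{1} Q_{2}}{\left(1 + \sqrt{1 + k^{2} Q_{1} Q_{2}}\right)^{2}}
```

This is why the abstract emphasizes raising the PSC quality factor (via the auxiliary strips) and managing the mutual coupling: both enter the efficiency only through the product $k^2 Q_1 Q_2$.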
On the impact of MOOCs on engineering education | Since the early 90s, online education has been continually reshaping the notion of open learning. Massive Open Online Courses (MOOCs) have generated a paradigm shift in online education by offering free high-quality education to anyone, anywhere with Internet access. Since 2012, MOOCs have attracted the attention of universities, the media, and entrepreneurs. These online courses may change the world by 2022. This paper discusses the major elements of MOOCs that can influence teaching and learning in engineering. It also explores the promise of online education for improving standard in-class engineering education.
Impact of increasing number of neurons on performance of neuromorphic architecture | Pattern recognition is used to classify input data into different classes based on extracted key features, and increasing the recognition rate of pattern recognition applications is a challenging task. Spiking neural networks, inspired by physiological brain architecture, are a neuromorphic hardware implementation of networks of neurons. A sample neuromorphic architecture has two layers of neurons, input and output. The number of input neurons is fixed, based on the input data patterns, while the number of output neurons can vary. The goal of this paper is the performance evaluation of a neuromorphic architecture in terms of recognition rate using different numbers of output neurons. For this purpose, the N2S3 simulation environment and the MNIST handwritten digits dataset are used. Our simulation results show that the recognition rate for 20, 30, 50, 100, 200, and 300 output neurons is 70%, 74%, 79%, 85%, 89%, and 91%, respectively.
A Comparison of Merged versus Non-merged Business Establishments in Britain: What Can we Learn from the Workplace Industrial Relations Survey? | The paper compares the structural characteristics, market conditions, organizational features, strategic behaviour and performance of merged versus unmerged private business establishments in the UK. The results are based on an analysis of the 1990 Workplace Industrial Relations Survey. The following conclusions are reached: merged establishments tend to be rather old, of small to medium size, more likely to be in manufacturing than in services, and to be part of conglomerate businesses. They are more likely to have an international market and to operate in oligopolistic markets. Nonetheless, they are perceived to operate in competitive conditions just as much as non-merged establishments. The merged manufacturing establishments are more likely to have been involved in restructuring strategies and to have cut jobs and achieved productivity gains. More merged establishments declare a below-average financial performance.
Collective Intelligence and Human Culture | Examples of selfless behavior abound in nature. Cells within an organism sacrifice themselves to prevent the spread of infections, worker bees in hives sacrifice their right to reproduce, and many female mammals will suckle one another's offspring. Human cooperation and collaboration cover vast areas of activity and behavior, with individuals often placing their own reproductive success on the line for the benefit of another individual. Since the publication of Darwin's Origin of Species, biologists have struggled to reconcile evolution's "selfishness" with the clear evidence of cooperation in nature. The dominant view of evolution followed Tennyson's description of nature as "red in tooth and claw." Charles Darwin proposed evolution by natural selection, in which individuals with desirable traits reproduce more than their peers and contribute more to the next generation. He called this competition the "struggle for life most severe." Evolution was commonly called "survival of the fittest." It appeared logical that one should not help a rival and should even cheat to win. Winning the game would be all that counts. Any observation of the many species of animals reveals a social nature. Indeed, it is unusual to see many animals apart from their group unless they are lost, injured, or ill. We have many English words to describe the grouping of animals; a covey of quail, a pod of whales, a herd of sheep, a pride of lions, a pack of wolves, and an
Thread Lift: Classification, Technique, and How to Approach the Patient | Background: The thread lift technique has become popular because it is less invasive, requires a shorter operation time, entails less downtime, and results in fewer postoperative complications. The advantage of the technique is that the thread can be inserted under the skin without the need for long incisions. Currently, there are many thread lift techniques, with specific types of thread used on specific areas such as the mid-face, lower face, or neck. Objective: To review thread lift techniques for specific areas according to the type of thread and patient selection, and how to match the most appropriate technique to the patient. Materials and Methods: A literature review was conducted by searching PubMed and MEDLINE; the findings were then compiled and summarized. Result: We have divided our protocols into two sections: protocols for short-suture techniques, and protocols for long-suture techniques. We also created 3D pictures for each technique to enhance understanding and application in a clinical setting. Conclusion: There are advantages and disadvantages to both short-suture and long-suture techniques. The best outcome for each patient depends on appropriate patient selection and on determining the most suitable technique for the defect and the area of patient concern. Keywords—Thread lift, thread lift method, thread lift technique, thread lift procedure, threading.
Impact of Communicating Familial Risk of Diabetes on Illness Perceptions and Self-Reported Behavioral Outcomes | OBJECTIVE
To assess the potential effects of communicating familial risk of diabetes on illness perceptions and self-reported behavioral outcomes.
RESEARCH DESIGN AND METHODS
Individuals with a family history of diabetes were randomized to receive risk information based on familial and general risk factors (n = 59) or general risk factors alone (n = 59). Outcomes were assessed using questionnaires at baseline, 1 week, and 3 months.
RESULTS
Compared with individuals receiving general risk information, those receiving familial risk information perceived heredity to be a more important cause of diabetes (P < 0.01) at 1-week follow-up, perceived greater control over preventing diabetes (P < 0.05), and reported having eaten more healthily (P = 0.01) after 3 months. Behavioral intentions did not differ between the groups.
CONCLUSIONS
Communicating familial risk increased perceived personal control and thus did not result in fatalism. Although the intervention did not influence intentions to change behavior, there was some evidence to suggest that it increases healthy behavior.
Non-negative Tensor Factorization with missing data for the modeling of gene expressions in the Human Brain | Non-negative Tensor Factorization (NTF) has become a prominent tool for analyzing high-dimensional multi-way structured data. In this paper we set out to analyze gene expression across brain regions in multiple subjects based on data from the Allen Human Brain Atlas [1], with more than 40% of the data missing in our problem. Our analysis is based on the non-negativity constrained Canonical Polyadic (CP) decomposition, where we handle the missing data using marginalization and consider three prominent alternating least squares procedures: multiplicative updates, column-wise updating, and row-wise updating of the component matrices. We examine three gene expression prediction scenarios based on data missing at random, whole genes missing, and whole areas missing within a subject. We find that the column-wise updating approach, also known as HALS, is the most efficient when fitting the model. We further observe that the non-negativity constrained CP model predicts gene expressions better than predicting by the subject average when data is missing at random. When whole genes and whole areas are missing, it is in general better to predict by subject averages; however, when whole genes are missing from all subjects, the model-based predictions are useful. When analyzing the structure of the components derived for one of the best-predicting model orders, the identified components in general constitute localized regions of the brain. Non-negative tensor factorization based on marginalization thus forms a promising framework for imputing missing values and characterizing gene expression in the human brain. However, care has to be taken, in particular when predicting the genetic expression levels of a whole brain region that is missing, as our analysis indicates that this requires a substantial number of subjects with data for that region in order for the model predictions to be reliable.
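As a concrete illustration of marginalization, the following minimal numpy sketch fits a 3-way non-negative CP model with multiplicative updates, where a binary mask restricts the fit to observed entries (missing entries simply drop out of both numerator and denominator). This is an illustrative implementation, not the authors' code; the HALS column-wise rule the paper favors would replace the multiplicative update.

```python
import numpy as np

def masked_ntf_cp(X, W, rank, n_iter=200, eps=1e-9):
    """Non-negative CP decomposition of a 3-way tensor X with binary mask W
    (1 = observed, 0 = missing), via masked multiplicative updates that
    marginalize over the missing entries."""
    I, J, K = X.shape
    A, B, C = (np.random.rand(d, rank) for d in (I, J, K))
    for _ in range(n_iter):
        Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
        num = np.einsum('ijk,jr,kr->ir', W * X, B, C)      # (W*X)_(1) (C ⊙ B)
        den = np.einsum('ijk,jr,kr->ir', W * Xhat, B, C) + eps
        A *= num / den
        Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
        num = np.einsum('ijk,ir,kr->jr', W * X, A, C)
        den = np.einsum('ijk,ir,kr->jr', W * Xhat, A, C) + eps
        B *= num / den
        Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
        num = np.einsum('ijk,ir,jr->kr', W * X, A, B)
        den = np.einsum('ijk,ir,jr->kr', W * Xhat, A, B) + eps
        C *= num / den
    return A, B, C
```

Reconstructing `np.einsum('ir,jr,kr->ijk', A, B, C)` at the observed positions then imputes the missing entries, which is the prediction mechanism evaluated in the three missing-data scenarios.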
Psoriasis: is it the tip of the iceberg for the quality of life of patients and their families? | OBJECTIVE
To evaluate the impact of psoriasis on patients' and their relatives' quality of life (QoL).
METHODS
Eighty patients and their accompanying family members were included in the study. To measure the health-related QoL (HRQoL) of patients with psoriasis, two questionnaires were used: the Short Form 36 Health Survey (SF-36) and the EuroQol (EQ-5D). Disease-specific HRQoL was assessed with the Dermatology Life Quality Index. To measure the quality of life of patients' relatives, a questionnaire specific to dermatological diseases was used (the Family Dermatology Life Quality Index, FDLQI).
RESULTS
Of our patients, 88.3% reported that their disease affects their QoL in many different ways, whereas only 11.2% reported that psoriasis does not influence their life at all. Regarding the FDLQI, 90% of the participating family members responded that their relative's psoriasis affected their own QoL.
CONCLUSIONS
Psoriasis is a chronic disease that cumulatively affects the quality of life of both patients and their close relatives.
More on the fragility of performance: choking under pressure in mathematical problem solving. | In 3 experiments, the authors examined mathematical problem solving performance under pressure. In Experiment 1, pressure harmed performance on only unpracticed problems with heavy working memory demands. In Experiment 2, such high-demand problems were practiced until their answers were directly retrieved from memory. This eliminated choking under pressure. Experiment 3 dissociated practice on particular problems from practice on the solution algorithm by imposing a high-pressure test on problems practiced 1, 2, or 50 times each. Infrequently practiced high-demand problems were still performed poorly under pressure, whereas problems practiced 50 times each were not. These findings support distraction theories of choking in math, which contrasts with considerable evidence for explicit monitoring theories of choking in sensorimotor skills. This contrast suggests a skill taxonomy based on real-time control structures. |
Therapeutic plasma exchange in patients with neurologic diseases: retrospective multicenter study. | Therapeutic plasma exchange (TPE) is commonly used in many neurological disorders in which an immune etiology is known or suspected. We report our experience with TPE performed for neuroimmunologic disorders at four university hospitals. The study was a retrospective review of the medical records of neurological patients (n=57) consecutively treated with TPE between April 2006 and May 2007. TPE indications in neurological diseases included Guillain-Barré syndrome (GBS) (n=41), myasthenia gravis (MG) (n=11), acute disseminated encephalomyelitis (ADEM) (n=3), chronic inflammatory demyelinating polyneuropathy (CIDP) (n=1) and multiple sclerosis (MS) (n=1). The median patient age was 49, with a predominance of males. Twenty-two patients had a history of other therapy, including intravenous immunoglobulin (IVIG), steroids, azathioprine, and pyridostigmine, prior to TPE; the other 35 patients had not received any treatment prior to TPE. All patients were classified according to the Hughes functional grading score pre-TPE and on the first day post-TPE for early clinical evaluation. TPE was carried out at 1-1.5 times the predicted plasma volume every other day. Two hundred and ninety-four procedures were performed on 57 patients. The median number of TPE sessions per patient was five, and the median processed plasma volume was 3075 mL per cycle. Although the pre-TPE median Hughes score of all patients was 4, it decreased to grade 1 after TPE. While the pre-TPE median Hughes score for GBS and MG patients was 4, post-TPE scores decreased to grade 1. Additionally, there was a statistically significant difference between post-TPE Hughes scores for GBS patients with TPE as front-line therapy and patients receiving IVIG as front-line therapy (1 vs. 3.5; p=0.034). Although there was no post-TPE improvement in Hughes scores in patients with ADEM and CIDP, the patient with MS had an improved Hughes score from 4 to 1. Mild and manageable complications such as hypotension and hypocalcemia were also observed. TPE may be preferable for controlling symptoms of neuroimmunological disorders in the early stage of the disease, especially in GBS.
Pharmacokinetics of sodium valproate in epileptic patients: Prediction of maintenance dosage by single-dose study | Pharmacokinetic analysis of the plasma valproic acid concentration-time course, following a single oral dose (600 mg) of sodium valproate, was performed in 20 epileptic patients as an aid to the prediction of a proper chronic dosage regimen. A simple one-compartment model was found inadequate to describe the drug concentration-time course in 15 of the 20 patients studied. The average elimination (β phase) half-life of 9 h was shorter than that previously reported in healthy subjects. The latter observation and the wide variation in plasma valproic acid clearance observed between patients (0.09–0.53 ml/kg/min) may have been related to its altered disposition by concomitant anticonvulsant therapy. Sodium valproate maintenance therapy, determined by single-dose pharmacokinetic prediction of steady-state plasma valproic acid levels, did not require dosage adjustment because of unwanted effects. However, the occurrence of drug-related adverse events led to dosage reduction in 4 of 9 patients whose chronic therapy was not pharmacokinetically predicted. Moreover, the pharmacokinetic variability demonstrated for sodium valproate by patients on multiple therapy, whose chronic sodium valproate therapy was pharmacokinetically predicted, indicates the value of monitoring plasma valproic acid levels for the regulation of anticonvulsant therapy. |
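For reference, the simple one-compartment model (first-order absorption and elimination) that the study found inadequate predicts a plasma concentration-time course of the familiar Bateman form, with the elimination half-life tied to the elimination rate constant. Here D is dose, F bioavailability, V volume of distribution, and k_a, k_e the absorption and elimination rate constants; the notation is ours, not the paper's:

```latex
C(t) \;=\; \frac{F\,D\,k_a}{V\,(k_a - k_e)}\,\bigl(e^{-k_e t} - e^{-k_a t}\bigr),
\qquad t_{1/2} \;=\; \frac{\ln 2}{k_e}
```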
Compact Offset Microstrip-Fed MIMO Antenna for Band-Notched UWB Applications | A compact multiple-input-multiple-output (MIMO) antenna is presented for ultrawideband (UWB) applications with a band-notched function. The proposed antenna is composed of two offset microstrip-fed antenna elements with UWB performance. To achieve high isolation and polarization diversity, the antenna elements are placed perpendicular to each other. A parasitic T-shaped strip between the radiating elements is employed as a decoupling structure to further suppress the mutual coupling. In addition, the notched band at 5.5 GHz is realized by etching a pair of L-shaped slits on the ground. The antenna prototype, with a compact size of 38.5 × 38.5 mm2, has been fabricated and measured. Experimental results show that the antenna has an impedance bandwidth of 3.08-11.8 GHz with reflection coefficient less than -10 dB, except for the rejection band of 5.03-5.97 GHz. In addition, port isolation, envelope correlation coefficient, and radiation characteristics are also investigated. The results indicate that the MIMO antenna is suitable for band-notched UWB applications.
Personality-Based User Modeling for Music Recommender Systems | Applications are becoming increasingly interconnected. Although this interconnectedness provides new ways to gather information about the user, not all user information is ready to be used directly to provide a personalized experience. Therefore, a general model is needed to which users' behavior, preferences, and needs can be connected. In this paper we present our work on a personality-based music recommender system in which we use users' personality traits as a general model. We identified relationships between users' personality and their behavior, preferences, and needs, and also investigated different ways to infer users' personality traits from user-generated data on social networking sites (i.e., Facebook, Twitter, and Instagram). Our work contributes new ways to mine and infer personality-based user models, and shows how these models can be implemented in a music recommender system to positively contribute to the user experience.
Spirituality and Social Work | Contents: Preface. Part I Concepts and Contexts: Religion and spirituality; Social work and spirituality. Part II Spirituality Over the Lifespan: Childhood; Youth; Relationships; Work; Ageing. Part III Spirituality and Lived Experience: Rituals; Creativity; Place; Believing and belonging. Conclusion. Bibliography. Index.
A Space Vector Modulation Scheme of the Quasi-Z-Source Three-Level T-Type Inverter for Common-Mode Voltage Reduction | The conventional three-level inverter is limited to voltage buck operation. In order to provide both voltage buck and boost capability, the quasi-Z-source three-level T-type inverter (3LT$^2$I) has been proposed. This paper further proposes a space vector modulation (SVM) scheme for the quasi-Z-source 3LT$^2$I to reduce the magnitude and slew rate of the common-mode voltage (CMV). By properly selecting the shoot-through phase, the shoot-through states are inserted within the zero vector so as not to affect the active states and the output voltage. In doing so, the CMV generated by the quasi-Z-source 3LT$^2$I is restricted to within one-sixth of the dc-link voltage, and voltage boosting and CMV reduction are realized simultaneously. In addition, high dc-link voltage utilization is maintained. The proposed scheme has been verified in both simulations and experiments, and comparisons are conducted with the conventional SVM method and the phase-shifted sinusoidal PWM method.
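For context, the common-mode voltage being bounded here is conventionally defined as the average of the three inverter pole voltages measured with respect to the dc-link midpoint O (a standard definition, not quoted from the paper); the inequality expresses the paper's claim for the proposed SVM:

```latex
v_{\mathrm{CM}} \;=\; \frac{v_{aO} + v_{bO} + v_{cO}}{3},
\qquad \lvert v_{\mathrm{CM}} \rvert \;\le\; \frac{V_{\mathrm{dc}}}{6}
```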
A convolutional approach to reflection symmetry | We present a convolutional approach to reflection symmetry detection in 2D. Our model, built on the products of complex-valued wavelet convolutions, simplifies previous edge-based pairwise methods. Being parameter-centered, as opposed to feature-centered, it has certain computational advantages when the object sizes are known a priori, as demonstrated in an ellipse detection application. The method outperforms the best-performing algorithm on the CVPR 2013 Symmetry Detection Competition Database in the single-symmetry case. Code and a new database for 2D symmetry detection are available.
A flexible coupling approach to multi-agent planning under incomplete information | Multi-agent planning (MAP) approaches are typically oriented toward solving loosely coupled problems and are ineffective at dealing with more complex, strongly related problems. In most cases, agents work under complete information, building complete knowledge bases. The present article introduces a general-purpose MAP framework designed to tackle problems of any coupling level under incomplete information. Agents in our MAP model are partially unaware of the information managed by the rest of the agents and share only the critical information that affects other agents, thus maintaining a distributed vision of the task. Agents solve MAP tasks through an iterative refinement planning procedure that uses single-agent planning technology. In particular, agents devise refinements through the partial-order planning paradigm, a flexible framework for building refinement plans that leave unsolved details to be gradually completed by subsequent refinements. Our proposal is supported by the implementation of a fully operative MAP system, and we report experiments running our system on different types of MAP problems, from the most strongly related to the most loosely coupled.
Performance of AdaBoost classifier in recognition of superposed modulations for MIMO TWRC with physical-layer network coding | Modulation recognition algorithms have recently received a great deal of attention in academia and industry. In addition to their application in the military field, these algorithms have found civilian use in reconfigurable systems such as cognitive radios. Most previously existing algorithms focus on the recognition of a single modulation. However, a multiple-input multiple-output two-way relaying channel (MIMO TWRC) with physical-layer network coding (PLNC) requires the recognition of the pair of source modulations from the superposed constellation at the relay. In this paper, we propose an algorithm for the recognition of source modulations for MIMO TWRC with PLNC. The proposed algorithm is divided into two steps. The first step uses higher-order-statistics-based features in conjunction with a genetic algorithm for feature selection, while the second step employs AdaBoost as a classifier. Simulation results show the ability of the proposed algorithm to provide good recognition performance at acceptable signal-to-noise ratios.
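A hedged sketch of the classification step: fourth-order-cumulant features feed a scikit-learn AdaBoost classifier. The feature pair, the synthetic superposed constellations, and all names below are illustrative stand-ins; the paper's genetic-algorithm feature selection is omitted, and its exact feature set may differ.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def hos_features(x):
    """Fourth-order-cumulant features of a complex baseband block
    (a common HOS choice for modulation recognition)."""
    x = x / np.sqrt(np.mean(np.abs(x) ** 2))      # power-normalize
    m20, m21 = np.mean(x ** 2), np.mean(np.abs(x) ** 2)
    c40 = np.mean(x ** 4) - 3 * m20 ** 2
    c42 = np.mean(np.abs(x) ** 4) - np.abs(m20) ** 2 - 2 * m21 ** 2
    return [np.abs(c40), np.abs(c42)]

# Synthetic stand-in: two superposed-constellation classes observed at the
# relay (e.g., BPSK+BPSK vs. QPSK+QPSK) in additive noise.
rng = np.random.default_rng(1)
def block(pair):
    bpsk = lambda n: rng.choice([-1, 1], n).astype(complex)
    qpsk = lambda n: (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
    src = bpsk if pair == 0 else qpsk
    return src(256) + src(256) + 0.1 * (rng.normal(size=256) + 1j * rng.normal(size=256))

X = np.array([hos_features(block(c)) for c in range(2) for _ in range(200)])
y = np.repeat([0, 1], 200)
clf = AdaBoostClassifier(n_estimators=100).fit(X, y)
```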
Fast Kernels for String and Tree Matching | In this paper we present a new algorithm suitable for matching discrete objects such as strings and trees in linear time, thus obviating dynamic programming with its quadratic time complexity. Furthermore, the prediction cost can in many cases be reduced to cost linear in the length of the sequence to be classified, regardless of the number of support vectors. This improvement over currently available algorithms makes string kernels a viable alternative for the practitioner.
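For intuition, here is a hash-based sketch of one simple member of the string-kernel family, the k-spectrum kernel, which already runs in time linear in the combined string lengths; the paper's suffix-tree construction achieves linearity for a more general weighted family, so this is an illustration of the quantity computed, not the authors' algorithm.

```python
from collections import Counter

def spectrum_kernel(s, t, k=3):
    """k-spectrum string kernel: inner product of k-mer count vectors.
    Runs in O(len(s) + len(t)) expected time via hashing."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[w] * ct[w] for w in cs.keys() & ct.keys())

print(spectrum_kernel("banana", "bandana"))   # shared 3-mers: ban, ana(x2) -> 3
```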
Comprehensive analysis of the MYB-NFIB gene fusion in salivary adenoid cystic carcinoma: Incidence, variability, and clinicopathologic significance. | PURPOSE
The objectives of this study were to determine the incidence of the MYB-NFIB fusion in salivary adenoid cystic carcinoma (ACC), to establish the clinicopathologic significance of the fusion, and to analyze the expression of MYB in ACCs in the context of the MYB-NFIB fusion.
EXPERIMENTAL DESIGN
We did an extensive analysis involving 123 cancers of the salivary gland, including primary and metastatic ACCs, and non-ACC salivary carcinomas. MYB-NFIB fusions were identified by reverse transcriptase-PCR (RT-PCR) and sequencing of the RT-PCR products, and confirmed by fluorescence in situ hybridization. MYB RNA expression was determined by quantitative RT-PCR and protein expression was analyzed by immunohistochemistry.
RESULTS
The MYB-NFIB fusion was detected in 28% of primary and 35% of metastatic ACCs, but not in any of the non-ACC salivary carcinomas analyzed. Different exons in both the MYB and NFIB genes were involved in the fusions, resulting in the expression of multiple chimeric variants. Notably, MYB was overexpressed in the vast majority of the ACCs, although MYB expression was significantly higher in tumors carrying the MYB-NFIB fusion. The presence of the MYB-NFIB fusion was significantly associated (P = 0.03) with patients older than 50 years of age. No correlation with other clinicopathologic factors or with survival was found.
CONCLUSIONS
We conclude that the MYB-NFIB fusion characterizes a subset of ACCs and contributes to MYB overexpression. Additional mechanisms may be involved in MYB overexpression in ACCs lacking the MYB-NFIB fusion. These findings suggest that MYB may be a specific novel target for tumor intervention in patients with ACC. |
Digital receivers and transmitters using polyphase filter banks for wireless communications | This paper provides a tutorial overview of multichannel wireless digital receivers and the relationships between channel bandwidth, channel separation, and channel sample rate. The overview makes liberal use of figures to support the underlying mathematics. A multichannel digital receiver simultaneously down-converts a set of frequency-division-multiplexed (FDM) channels residing in a single sampled-data signal stream. In a similar way, a multichannel digital transmitter simultaneously up-converts a number of baseband signals to assemble a set of FDM channels in a single sampled-data signal stream. The polyphase filter bank has become the architecture of choice to accomplish these tasks efficiently. This architecture uses three interacting processes to assemble or to disassemble the channelized signal set. In a receiver, these processes are an input commutator to effect spectral folding or aliasing due to a reduction in sample rate, a polyphase M-path filter to time-align the partitioned and resampled time series in each path, and a discrete Fourier transform to phase-align and separate the multiple baseband aliases. In a transmitter, these same processes operate in a related manner to alias baseband signals to high-order Nyquist zones while increasing the sample rate with the output commutator. This paper presents a sequence of simple modifications to sampled-data structures based on analog prototype systems to obtain the basic polyphase structure. We further discuss ways to incorporate small modifications in the operation of the polyphase system to accommodate secondary performance requirements. MATLAB simulations of a 10-, 40-, and 50-channel resampling receiver are included in the electronic version of this paper. An animated version of the ten-channel resampling receiver illustrates the time and frequency response of the filter bank when driven by a slowly varying linear FM sweep.
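As a simplified illustration of the three interacting processes (commutator, path filters, DFT), the following numpy sketch implements a critically sampled analysis channelizer. It is a didactic sketch, not the paper's MATLAB code: the commutator phase and path-ordering conventions are simplified relative to a production design.

```python
import numpy as np

def analysis_channelizer(x, h, M):
    """Critically sampled M-channel polyphase analysis filter bank:
    commutate M input samples per step into the M paths, filter each path
    with its polyphase component of the prototype h, and separate the
    channels with an M-point (I)DFT."""
    assert len(h) % M == 0
    taps = h.reshape(-1, M)                   # row m holds h[mM:(m+1)M]
    P = taps.shape[0]                         # taps per path
    n_out = len(x) // M - P + 1
    y = np.empty((n_out, M), dtype=complex)
    for n in range(n_out):
        block = x[n * M:(n + P) * M].reshape(P, M)
        paths = np.sum(taps * block, axis=0)  # per-path inner products
        y[n] = M * np.fft.ifft(paths)         # DFT phase-aligns the aliases
    return y

# Example: 8 taps/path lowpass prototype for M = 4 channels; a tone at
# normalized frequency 0.25 lands in channel 1.
M = 4
h = np.hanning(8 * M) * np.sinc(np.arange(8 * M) / M - 4)
y = analysis_channelizer(np.exp(2j * np.pi * 0.25 * np.arange(256)), h, M)
```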
Organizing a Global Coordinate System from Local Information on an Ad Hoc Sensor Network | We demonstrate that it is possible to achieve accurate localization and tracking of a target in a randomly placed wireless sensor network composed of inexpensive components of limited accuracy. The crucial enabler for this is a reasonably accurate local coordinate system aligned with the global coordinates. We present an algorithm for creating such a coordinate system without the use of global control, globally accessible beacon signals, or accurate estimates of inter-sensor distances. The coordinate system is robust and automatically adapts to the failure or addition of sensors. Extensive theoretical analysis and simulation results are presented. Two key theoretical results are: there is a critical minimum average neighborhood size of 15 for good accuracy, and there is a fundamental limit on the resolution of any coordinate system determined strictly from local communication. Our simulation results show that we can achieve position accuracy to within 20% of the radio range even when there is variation of up to 10% in the signal strength of the radios. The algorithm improves with finer quantization of inter-sensor distance estimates: with six levels of quantization, position errors better than 10% are achieved. Finally, we show how the algorithm gracefully generalizes to target tracking tasks.
Distributional regularity and phonotactic constraints are useful for segmentation | In order to acquire a lexicon, young children must segment speech into words, even though most words are unfamiliar to them. This is a non-trivial task because speech lacks any acoustic analog of the blank spaces between printed words. Two sources of information that might be useful for this task are distributional regularity and phonotactic constraints. Informally, distributional regularity refers to the intuition that sound sequences that occur frequently and in a variety of contexts are better candidates for the lexicon than those that occur rarely or in few contexts. We express that intuition formally by a class of functions called DR functions. We then put forth three hypotheses: First, that children segment using DR functions. Second, that they exploit phonotactic constraints on the possible pronunciations of words in their language. Specifically, they exploit both the requirement that every word must have a vowel and the constraints that languages impose on word-initial and word-final consonant clusters. Third, that children learn which word-boundary clusters are permitted in their language by assuming that all permissible word-boundary clusters will eventually occur at utterance boundaries. Using computational simulation, we investigate the effectiveness of these strategies for segmenting broad phonetic transcripts of child-directed English. The results show that DR functions and phonotactic constraints can be used to significantly improve segmentation. Further, the contributions of DR functions and phonotactic constraints are largely independent, so using both yields better segmentation than using either one alone. Finally, learning the permissible word-boundary clusters from utterance boundaries does not degrade segmentation performance. |
Agency Theory Implications for Strategic Human Resource Management: Effects of CEO Ownership, Administrative HRM, and Incentive Alignment on Firm Performance | Agency theory is used to expand the research in strategic human resource management (SHRM) by viewing the construct underlying SHRM as control over all employees. We develop hypotheses on the effects of CEO ownership, administrative HRM, and incentive stock ownership on firm performance. The results indicate that administrative HRM has a negative effect on stock price. Incentive alignment via stock ownership has a positive effect on stock price and productivity. CEO ownership has a positive effect on sales but a negative impact on productivity. Implications for theory and practice are discussed. |
A Practical and Highly Optimized Convolutional Neural Network for Classifying Traffic Signs in Real-Time | Classifying traffic signs is an indispensable part of Advanced Driver Assistance Systems. This strictly requires that the traffic sign classification model accurately classify the images while consuming as few CPU cycles as possible, so as to immediately release the CPU for other tasks. In this paper, we first propose a new ConvNet architecture. Then, we propose a new method for creating an optimal ensemble of ConvNets with the highest possible accuracy and the lowest number of ConvNets. Our experiments show that the ensemble of our proposed ConvNets (constructed using our method) reduces the number of arithmetic operations by 88% and 73% compared with two state-of-the-art ensembles of ConvNets. In addition, our ensemble is 0.1% more accurate than one of the state-of-the-art ensembles and only 0.04% less accurate than the other when tested on the same dataset. Moreover, the ensemble of our compact ConvNets reduces the number of multiplications by 95% and 88%, while the classification accuracy drops only 0.2% and 0.4% compared with these two ensembles. Besides, we also evaluate the cross-dataset performance of our ConvNet and analyze its transferability power in different layers. We show that our network scales easily to new datasets with a much larger number of traffic sign classes, and that it only needs to fine-tune the weights starting from the last convolution layer. We also assess our ConvNet through different visualization techniques. Finally, we propose a new method for finding the minimum additive noise that causes the network to incorrectly classify the image, by minimum difference compared with the highest score in the loss vector.
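A hedged PyTorch sketch of the transfer recipe implied by the transferability analysis: freeze everything before the last convolution layer and retrain the remainder on the new dataset. The tiny model and the module names (`features`, `last_conv`, `classifier`) are illustrative, not the paper's architecture.

```python
import torch.nn as nn

class TinyConvNet(nn.Module):
    """Illustrative stand-in for a traffic-sign ConvNet."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU())
        self.last_conv = nn.Conv2d(16, 32, 3)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        z = self.pool(self.last_conv(self.features(x))).flatten(1)
        return self.classifier(z)

def prepare_for_transfer(model, n_new_classes):
    for p in model.features.parameters():
        p.requires_grad = False            # freeze the early feature layers
    model.classifier = nn.Linear(model.classifier.in_features, n_new_classes)
    return model                           # train last_conv + classifier only
```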
Taxonomies of workflow scheduling problem and techniques in the cloud | Scientific workflows, like other applications, benefit from cloud computing, which offers access to virtually unlimited resources provisioned elastically on demand. In order to execute a workflow efficiently in the cloud, scheduling is required to address the many new aspects introduced by cloud resource provisioning. In the last few years, many techniques have been proposed to tackle the different cloud environments enabled by the flexible nature of the cloud, leading to techniques of different designs. In this paper, taxonomies of the cloud workflow scheduling problem and its techniques are proposed based on an analytical review. We identify and explain the aspects and classifications unique to workflow scheduling in the cloud environment in three categories, namely scheduling process, task, and resource. Lastly, several scheduling techniques are reviewed and classified according to the proposed taxonomies. We hope that our taxonomies serve as a stepping stone for those entering this research area and for the further development of scheduling techniques.
Challenges to the Hypothesis of Extended Cognition | In recent decades, an intriguing view of human cognition has garnered increasing support. According to this view, which I will call 'the hypothesis of extended cognition' ('HEC', hereafter), human cognitive processing literally extends into the environment surrounding the organism, and human cognitive states literally comprise—as wholes do their proper parts—elements in that environment; in consequence, while the skin and scalp may encase the human organism, they do not delimit the thinking subject. The hypothesis of extended cognition should provoke our critical interest. Acceptance of HEC would alter our approach to research and theorizing in cognitive science and, it would seem, significantly change our conception of persons. Thus, if HEC faces substantive difficulties, these should be brought to light; this paper is meant to do just that, exposing some of the problems HEC must overcome if it is to stand among leading views of the nature of human cognition. The essay unfolds as follows: The first section consists of preliminary remarks, mostly about the scope and content of HEC as I will construe it. Sections II and III clarify HEC by situating it with respect to related theses one finds in the literature—the hypothesis of embedded cognition and content-externalism. The remaining sections develop a series of objections to HEC and the arguments that have been offered in its support. The first objection appeals to common sense: HEC implies highly counterintuitive attributions of belief. Of course, HEC-theorists can take, and have taken, a naturalistic stand. They claim that HEC need not be responsive to commonsense objections, for HEC is being offered as a theoretical postulate of cognitive science; whether we should accept HEC depends, they say, on the value of the empirical work premised upon it. Thus, I consider a series of arguments meant to show that HEC is a promising causal-explanatory hypothesis, concluding that these arguments fail and that, ultimately, HEC appears to be of marginal interest as part of a philosophical foundation for cognitive science. If the cases canvassed here are any indication, adopting HEC results in a significant loss of explanatory power or, at the …
Structure and Evolution of the Bering Sea Shelf: Abstract | The Bering shelf is underlain by two Cenozoic structural provinces: an inner and generally coastal one of essentially undeformed Cenozoic basins that occupy the large reentrants of this sea, and an outer continental-margin province with parallel basement ridges and basins overlain by as much as ten kilometers of Cenozoic sediment. The inner province includes three major structural sags of Mesozoic and older basement rock: Bristol, Anadyr and Norton basins, containing three kilometers or more of neritic Cenozoic strata. A fourth basin, St. Matthew, trends NE-SW between St. Lawrence and St. Matthew Islands. It may be associated with the offshore extension of the Kaltag fault. The sedimentary fill in the basin (as much as 1.3 km) is not appreciably disturbed, suggesting that the Kaltag fault has not been active since the Early Tertiary. The outer structural province comprises ten elongate basins. These are grabens or half-grabens, and they contain up to ten kilometers of Mesozoic(?) and Cenozoic sediment. The basins are bounded by active normal (growth) faults, suggesting that the outer shelf area may be collapsing and rifting away from the inner shelf. We speculate that the outer structural province is a fragmented and submerged Mesozoic fold belt that formed above oceanic crust along an oblique convergence zone extending northwestward from Alaska to Siberia. A belt of highly magnetic volcanic and plutonic rocks that passes through St. Matthew Island is the magmatic arc associated with the Mesozoic subduction zone. The inner province basins, like those of the outer province, appear to be superimposed on inherited late Mesozoic trends, but they are in part or wholly underlain by Mesozoic and older continental crust.
Pavlov's Dog Associative Learning Demonstrated on Synaptic-Like Organic Transistors | In this letter, we present an original demonstration of an associative-learning neural network inspired by the famous Pavlov's dog experiment. A single nanoparticle organic memory field effect transistor (NOMFET) is used to implement each synapse. We show how the physical properties of this dynamic memristive device can be used to perform low-power write operations for learning and to implement short-term association using temporal coding and spike-timing-dependent-plasticity-based learning. An electronic circuit was built to validate the proposed learning scheme with packaged devices, with good reproducibility despite the complex synaptic-like dynamics of the NOMFET in the pulse regime.
The microbiota-gut-brain axis in obesity. | Changes in microbial diversity and composition are increasingly associated with several disease states including obesity and behavioural disorders. Obesity-associated microbiota alter host energy harvesting, insulin resistance, inflammation, and fat deposition. Additionally, intestinal microbiota can regulate metabolism, adiposity, homoeostasis, and energy balance as well as central appetite and food reward signalling, which together have crucial roles in obesity. Moreover, some strains of bacteria and their metabolites might target the brain directly via vagal stimulation or indirectly through immune-neuroendocrine mechanisms. Therefore, the gut microbiota is becoming a target for new anti-obesity therapies. Further investigations are needed to elucidate the intricate gut-microbiota-host relationship and the potential of gut-microbiota-targeted strategies, such as dietary interventions and faecal microbiota transplantation, as promising metabolic therapies that help patients to maintain a healthy weight throughout life. |
The Strengths and Difficulties Questionnaire as a Predictor of Parent-Reported Diagnosis of Autism Spectrum Disorder and Attention Deficit Hyperactivity Disorder | The Strengths and Difficulties Questionnaire (SDQ) is widely used as an international standardised instrument measuring child behaviour. The primary aim of our study was to examine whether behavioral symptoms measured by the SDQ were elevated among children with autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD) relative to the rest of the population, and to examine the predictive value of the SDQ for the outcome of parent-reported clinical diagnosis of ASD/ADHD. A secondary aim was to examine the extent of overlap in symptoms between children diagnosed with these two disorders, as measured by the SDQ subscales. A cross-sectional secondary analysis of data from the Millennium Birth Cohort (n = 19,519) was conducted. Data were weighted to be representative of the UK population as a whole. ADHD or ASD identified by a medical doctor or health professional was reported by parents in 2008, and this was the case definition of diagnosis (ADHD n = 173, ASD n = 209, excluding twins and triplets). Study children's ages ranged from 6.3-8.2 years (mean 7.2 years). Logistic regression was used to examine the association between the parent-reported clinical diagnosis of ASD/ADHD and teacher- and parent-reported SDQ subscales. All SDQ subscales were strongly associated with both ASD and ADHD, and there was substantial co-occurrence of behavioral difficulties between children diagnosed with ASD and those diagnosed with ADHD. After adjustment for other subscales, the final model for ADHD contained hyperactivity/inattention and impact symptoms only and had a sensitivity of 91% and specificity of 90%; area under the curve (AUC) = 0.94 (95% CI, 0.90-0.97). The final model for ASD was composed of all subscales except the 'peer problems' scale, an indication of the complexity of the behavioural difficulties that may accompany ASD. A threshold of 0.03 produced model sensitivity and specificity of 79% and 93% respectively; AUC = 0.90 (95% CI, 0.86-0.95). The results support changes to DSM-5 removing exclusivity clauses.
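A hedged illustration of the analysis pattern (not the authors' code or data): a logistic regression from subscale scores to a diagnosis indicator, summarized by AUC, with a probability cut-off used as the screen. The arrays below are synthetic stand-ins for the Millennium Cohort data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # five stand-in subscale scores
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1.5).astype(int)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, risk))
# A probability cut-off (the paper reports 0.03 for the ASD model) turns
# the fitted risk into a binary screen, trading sensitivity for specificity.
screen = risk >= 0.03
```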
Printed Paper Robot Driven by Electrostatic Actuator | Effective design and fabrication of 3-D electronic circuits are among the most pressing issues for future engineering. Although a variety of flexible devices have been developed, most of them are still designed two-dimensionally. In this letter, we introduce a novel idea for fabricating a 3-D wiring board. We produced the 3-D wiring board with a single desktop inkjet printer by printing a conductive pattern and a 2-D pattern that induces self-folding. We printed silver ink onto paper to realize the conductive trace; meanwhile, the 3-D structure was constructed through self-folding induced by water-based ink printed from the same printer. The paper self-folds along the printed line, and the printed silver ink is sufficiently thin to remain flexible. Even after the silver ink is printed, the paper can self-fold or self-bend to form the 3-D wiring board. A scratch-driven paper robot was developed using this method. The robot traveled 56 mm in 15 s through vibration induced by the electrostatic force on the printed electrode. The size of the robot is 30 × 15 × 10 mm. This work proposes a new method for designing 3-D wiring boards and shows extended possibilities for printed paper mechatronics.
Automatic Speech Recognition for Mixed Dialect Utterances by Mixing Dialect Language Models | This paper presents an automatic speech recognition (ASR) system that accepts a mixture of various kinds of dialects. The system recognizes dialect utterances on the basis of the statistical simulation of vocabulary transformation and combinations of several dialect models. Previous dialect ASR systems were based on handcrafted dictionaries for several dialects, which involved costly processes. The proposed system statistically trains transformation rules between a common language and dialects, and simulates a dialect corpus for ASR on the basis of a machine translation technique. The rules are trained with small sets of parallel corpora to make up for the lack of linguistic resources on dialects. The proposed system also accepts mixed-dialect utterances that contain a variety of vocabularies. In fact, spoken language is not a single dialect but a mixture of dialects influenced by the circumstances of a speaker's background (e.g., the native dialects of their parents or where they live). We propose two methods for combining several dialects appropriately for each speaker. The first is recognition with mixed-dialect language models whose weights are automatically estimated to maximize the recognition likelihood. This method performed best, but its calculation was very expensive because it conducted grid searches over combinations of dialect mixing proportions. The second is the integration of recognition results from each single-dialect language model. The improvements with this method were slightly smaller than those with the first, but its calculation cost is low and it works in real time on general workstations. Both methods achieved higher recognition accuracies for all speakers than the single-dialect models and the common-language model, and a suitable model for ASR can be chosen taking into consideration computational cost and recognition accuracy.
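A hedged sketch of the first combination method: choose per-speaker mixture weights over dialect language models by grid search, keeping the mixture that maximizes the recognition likelihood. `recognition_ll` stands in for a full ASR decoding pass with the interpolated LM (which is what makes the search expensive); the toy scorer at the end is only there to make the sketch runnable.

```python
import itertools
import numpy as np

def best_mixture(n_dialects, recognition_ll, step=0.1):
    """Grid-search dialect mixing proportions that maximize the
    recognition likelihood; cost grows as (1/step + 1)**n_dialects."""
    grid = np.round(np.arange(0.0, 1.0 + step / 2, step), 10)
    best_w, best_ll = None, -np.inf
    for w in itertools.product(grid, repeat=n_dialects):
        if abs(sum(w) - 1.0) > 1e-9:
            continue                      # weights must form a distribution
        ll = recognition_ll(w)            # one full decoding pass per point
        if ll > best_ll:
            best_w, best_ll = w, ll
    return best_w

# Toy stand-in: a speaker whose speech is roughly 70% dialect A, 30% dialect B.
true_mix = np.array([0.7, 0.3, 0.0])
ll = lambda w: -np.sum((np.array(w) - true_mix) ** 2)
print(best_mixture(3, ll))               # -> (0.7, 0.3, 0.0)
```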
Glass Blowing on a Wafer Level | A fabrication process for the simultaneous shaping of arrays of glass shells on a wafer level is introduced in this paper. The process is based on etching cavities in silicon, followed by anodic bonding of a thin glass wafer to the etched silicon wafer. The bonded wafers are then heated inside a furnace at a temperature above the softening point of the glass, and due to the expansion of the trapped gas in the silicon cavities the glass is blown into three-dimensional spherical shells. An analytical model which can be used to predict the shape of the glass shells is described and demonstrated to match the experimental data. The ability to blow glass on a wafer level may enable novel capabilities including the mass production of microscopic spherical gas confinement chambers, microlenses, and complex microfluidic networks.
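The analytical model itself is not reproduced in the abstract. A plausible sketch of its basis, assuming the trapped gas behaves ideally and the blown shell is a spherical cap of height h over a cavity opening of radius a (our assumptions and notation, not the paper's), balances the gas state before and after heating against the furnace ambient pressure:

```latex
\frac{P_0\,V_{\mathrm{cavity}}}{T_0} \;=\; \frac{P_f\,\bigl(V_{\mathrm{cavity}} + V_{\mathrm{cap}}\bigr)}{T_f},
\qquad V_{\mathrm{cap}} \;=\; \frac{\pi h}{6}\,\bigl(3a^{2} + h^{2}\bigr)
```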
Perceptions of users and providers on barriers to utilizing skilled birth care in mid- and far-western Nepal: a qualitative study | Background Although skilled birth care contributes significantly to the prevention of maternal and newborn morbidity and mortality, utilization of such care is poor in mid- and far-western Nepal. This study explored the perceptions of service users and providers regarding barriers to skilled birth care. Design We conducted 24 focus group discussions, 12 each with service users and service providers from different health institutions in mid- and far-western Nepal. All discussions examined the perceptions and experiences of service users and providers regarding barriers to skilled birth care and explored possible solutions to overcoming such barriers. Results Our results determined that major barriers to skilled birth care include inadequate knowledge of the importance of services offered by skilled birth attendants (SBAs), distance to health facilities, unavailability of transport services, and poor availability of SBAs. Other barriers included poor infrastructure, meager services, inadequate information about services/facilities, cultural practices and beliefs, and low prioritization of birth care. Moreover, the tradition of isolating women during and after childbirth decreased the likelihood that women would utilize delivery care services at health facilities. Conclusions Service users and providers perceived inadequate availability and accessibility of skilled birth care in remote areas of Nepal, and overall utilization of these services was poor. Therefore, training and recruiting locally available health workers, helping community groups establish transport mechanisms, upgrading physical facilities and services at health institutions, and increasing community awareness of the importance of skilled birth care will help bridge these gaps. |
Implant cementation: clinical problems and solutions. | |
Self-Service Data Preparation: Research to Practice | It is widely accepted that the majority of time in any data analysis project is devoted to preparing the data [25]. In 2012, noted data science leader DJ Patil put the fraction of time spent on data preparation at 80%, based on informal discussions in his team at LinkedIn [28]. Analysts we interviewed in an academic study around the same time put the percent time “munging” or “wrangling” data at “greater than half” [19]. Judging from these user stories, the inefficiency of data preparation is the single biggest problem in data analytics. The database community does have a tradition of research on related topics. Most common is algorithmic work (covered in various surveys, e.g. [15, 4, 3, 26]) that focuses on automating certain aspects of data integration and cleaning, or that uses humans as passive computational elements via crowdsourcing. What computer science researchers often miss are the skills realities in real-world organizations. In practice, the primary bottleneck is not the quality of the inner-loop algorithms, but rather the lack of technology enabling domain experts to perform end-to-end data preparation without programming experience. Fully preparing a dataset requires an iterated cycle of input data quality assessment, transform specification, and output quality assessment—all in service of a larger “preparedness” goal that tends to shift fluidly as the work reveals additional properties of the data. Traditionally there has been a divide between the people who know the data and use case best, and the people who have the skills to prepare data using traditional programmatic approaches. This results in the data preparation cycle being split across parties and across time: domain experts try to express their desired outcomes for prepared data to developers or IT professionals, who in turn try to satisfy the needs. A single iteration of this cycle can take from hours to weeks in a typical organization, and rarely produces a satisfying outcome: typically either the end-user did not specify their desires well, or the developer did not achieve the desired outcome. Neither tends to enjoy the experience. In short, the primary problem in data preparation is self-service: we need to enable the people who know the data best to prepare it themselves. Research focused on these user-facing concerns is scattered across the fields of databases, HCI and programming languages (e.g., [7, 29, 13, 24, 17, 9, 16]). While under-investigated in the research community, the topic has become an important force in the industry. In 2015, industry analysts began publishing rankings in an emerging new market category dubbed “Self-Service” or “End-User” Data Preparation [5, 1]. Two years later, the established analyst firm Forrester did a first annual Forrester Wave report on Data Preparation [23]: a mark of arrival for this category. Another major analyst firm, Gartner, has weighed in with various reports on the Data Preparation market (e.g. [35, 8]). Meanwhile, in 2017 Google Cloud Platform was the first cloud provider to launch Self-Service Data Preparation as a native service in their cloud [27], while Azure and Amazon Web Services also announced partnerships with data preparation vendors in 2018. Market size estimates for Data Preparation start at $1 billion [35] and go well upwards depending on the analyst and projected time horizon. Not bad for a technology that was seeded from academic research, and did not even have a name four years ago. |
Towards Effective Research-Paper Recommender Systems and User Modeling based on Mind Maps | |
Bayesian Nonparametric Models | A Bayesian nonparametric model is a Bayesian model on an infinite-dimensional parameter space. The parameter space is typically chosen as the set of all possible solutions for a given learning problem. For example, in a regression problem the parameter space can be the set of continuous functions, and in a density estimation problem the space can consist of all densities. A Bayesian nonparametric model uses only a finite subset of the available parameter dimensions to explain a finite sample of observations, with the set of dimensions chosen depending on the sample, such that the effective complexity of the model (as measured by the number of dimensions used) adapts to the data. Classical adaptive problems, such as nonparametric estimation and model selection, can thus be formulated as Bayesian inference problems. Popular examples of Bayesian nonparametric models include Gaussian process regression, in which the correlation structure is refined with growing sample size, and Dirichlet process mixture models for clustering, which adapt the number of clusters to the complexity of the data. Bayesian nonparametric models have recently been applied to a variety of machine learning problems, including regression, classification, clustering, latent variable modeling, sequential modeling, image segmentation, source separation and grammar induction. |
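The adaptive-complexity behavior described above, where the number of clusters grows with the data rather than being fixed in advance, can be demonstrated with the partition distribution underlying Dirichlet process mixtures, the Chinese restaurant process. A minimal sketch in plain Python, with an illustrative concentration parameter; this is not code from the article:

```python
import random

def crp_assignments(n_customers, alpha, seed=0):
    """Sample cluster assignments from a Chinese restaurant process.

    Customer i joins an existing table with probability proportional to
    the table's size, or opens a new table with probability proportional
    to alpha, so the number of tables (clusters) adapts to the sample.
    """
    rng = random.Random(seed)
    assignments = []
    table_sizes = []
    for _ in range(n_customers):
        # Unnormalized probabilities: existing tables by size, new table by alpha.
        weights = table_sizes + [alpha]
        table = rng.choices(range(len(weights)), weights=weights)[0]
        if table == len(table_sizes):
            table_sizes.append(1)  # open a new table (new cluster)
        else:
            table_sizes[table] += 1
        assignments.append(table)
    return assignments

if __name__ == "__main__":
    for n in (10, 100, 1000):
        z = crp_assignments(n, alpha=1.0)
        print(n, "observations ->", len(set(z)), "clusters")
```

Running this shows the effective number of clusters growing slowly (roughly logarithmically) with the sample size, the hallmark of the nonparametric adaptivity the abstract describes.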
A review of content-based image retrieval systems in medical applications - clinical benefits and future directions | Content-based visual information retrieval (CBVIR) or content-based image retrieval (CBIR) has been one of the most active research areas in the field of computer vision over the last 10 years. The availability of large and steadily growing amounts of visual and multimedia data, and the development of the Internet, underline the need to create thematic access methods that offer more than simple text-based queries or requests based on matching exact database fields. Many programs and tools have been developed to formulate and execute queries based on visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large, varied databases containing documents of differing sorts and characteristics. Many questions with respect to speed, semantic descriptors, and objective image interpretation remain unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. The Radiology Department of the University Hospital of Geneva alone produced more than 12,000 images a day in 2002. Cardiology is currently the second-largest producer of digital images, especially with videos of cardiac catheterization (approximately 1800 exams per year containing almost 2000 images each). The total amount of cardiologic image data produced in the Geneva University Hospital was around 1 TB in 2002. Endoscopic videos can equally produce enormous amounts of data. With digital imaging and communications in medicine (DICOM), a standard for image communication has been set, and patient information can be stored with the actual image(s), although a few standardization problems remain. In several articles, content-based access to medical images has been proposed to support clinical decision-making and ease the management of clinical data, and scenarios for integrating content-based access methods into picture archiving and communication systems (PACS) have been created. This article gives an overview of the available literature on content-based access to medical image data and of the technologies used in the field. Section 1 introduces generic content-based image retrieval and its technologies. Section 2 explains the propositions for the use of image retrieval in medical practice and the various approaches; example systems and application areas are described. Section 3 describes the techniques used in the implemented systems, their datasets, and their evaluations. Section 4 identifies possible clinical benefits of image retrieval systems in clinical practice as well as in research and education, and defines new research directions that may prove useful. The article also offers explanations for some of the problems outlined in the field: many system proposals come from the medical domain, while research prototypes are developed in computer science departments using medical datasets, yet very few systems appear to be used in clinical practice. It should be noted that the goal is not, in general, to replace existing text-based retrieval methods but to complement them with visual search tools. |
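As a concrete illustration of the generic CBIR idea this review surveys, the sketch below implements the classic color-histogram baseline: each image is reduced to a normalized joint color histogram, and candidates are ranked by histogram intersection with the query. This is a minimal example under stated assumptions (NumPy, hypothetical function names, synthetic images), not any of the systems surveyed:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Quantize an RGB image (H x W x 3, uint8) into a joint color
    histogram; the flattened, normalized histogram serves as the
    image's content descriptor."""
    q = (image // (256 // bins)).reshape(-1, 3).astype(int)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def retrieve(query, database, k=3):
    """Rank database images by histogram intersection with the query
    (larger score = more similar) and return the top-k indices."""
    qh = color_histogram(query)
    scores = [np.minimum(qh, color_histogram(img)).sum() for img in database]
    return np.argsort(scores)[::-1][:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = [rng.integers(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(5)]
    print(retrieve(db[2], db, k=3))  # index 2 should rank itself first
```

The review's open questions about semantic descriptors are visible even here: two images can share a color distribution while depicting entirely different anatomy, which is why medical CBIR systems layer richer features on top of such baselines.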
Ocean Related Senior Design Projects For Mechanical Engineers At Umass Dartmouth | This paper discusses several ocean-related capstone design projects completed by mechanical engineering students at the University of Massachusetts Dartmouth. Some projects are detailed analytical projects involving complex simulations; others are systems engineering projects involving manufacturing and prototype testing. Different aspects of the design process are emphasized by the different styles of project, and all met with success. Several projects were performed in collaboration with other departments at UMD or with local institutions that work in the oceanographic field. Neither students nor faculty advisors were assigned specific projects, yet nearly one third of the capstone design projects completed in the last three years have had an ocean or marine emphasis. Introduction Mechanical Engineering students at the University of Massachusetts Dartmouth (UMD) are required to take a capstone design course in their senior year. Until fall 2002, this 4-credit course was offered during the spring semester only. Beginning in the fall of 2002, the course was changed to a 2-semester sequence, offering 2 credits for each semester. In the fall semester, students are expected to form design teams, select a design project, secure a faculty advisor for that project, write a project proposal that includes both a schedule and a budget, and begin design work. Another major component of the fall course is practical engineering ethics. During the spring semester, the students are expected to complete their design projects, write a comprehensive final report, and publicly present their design projects before the faculty, a panel of judges from industry, and their fellow students. Students meet weekly with both their project advisor and the course facilitator, and write weekly memos updating their progress. Although there is no formal program at UMD for ocean or marine-related engineering, several recent senior design projects have had a marine emphasis. On occasion, these projects have a connection with other departments or facilities of the University of Massachusetts or local research institutions. For example, one project was conducted through the Center for Marine Science and Technology (CMAST), a UMass-affiliated research laboratory that has recently started a Ph.D. program. One project was done in conjunction with the Woods Hole Oceanographic Institution, one of the US premier ocean research facilities; and another project was completed jointly with students |
Graph Cuts and Efficient N-D Image Segmentation | Combinatorial graph cut algorithms have been successfully applied to a wide range of problems in vision and graphics. This paper focuses on possibly the simplest application of graph cuts: segmentation of objects in image data. Despite its simplicity, this application epitomizes the best features of combinatorial graph cut methods in vision: global optima, practical efficiency, numerical robustness, ability to fuse a wide range of visual cues and constraints, unrestricted topological properties of segments, and applicability to N-D problems. Graph-cut-based approaches to object extraction have also been shown to have interesting connections with earlier segmentation methods such as snakes, geodesic active contours, and level sets. The segmentation energies optimized by graph cuts combine boundary regularization with region-based properties in the same fashion as Mumford-Shah-style functionals. We present the motivation and a detailed technical description of the basic combinatorial optimization framework for image segmentation via s/t graph cuts. After the general concept of using binary graph cut algorithms for object segmentation was first proposed and tested in Boykov and Jolly (2001), this idea was widely studied in the computer vision and graphics communities. We provide links to a large number of known extensions based on iterative parameter re-estimation and learning, multi-scale or hierarchical approaches, narrow bands, and other techniques for demanding photo, video, and medical applications. |
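The s/t-cut construction the abstract refers to can be sketched on a toy 1-D "image" using networkx's generic minimum_cut. The terminal-edge capacities below (squared distance to assumed foreground/background means) and the fixed smoothness penalty are illustrative assumptions in the spirit of Boykov-Jolly energies, not the paper's exact formulation:

```python
import networkx as nx

def segment_1d(pixels, fg_mean, bg_mean, smoothness):
    """Binary segmentation of a 1-D intensity signal via an s/t min cut.

    Terminal edges (t-links) encode region likelihoods; neighbor edges
    (n-links) charge `smoothness` for every label discontinuity. The
    minimum cut is the globally optimal labeling for this energy.
    """
    G = nx.DiGraph()
    s, t = "source", "sink"
    for i, v in enumerate(pixels):
        # Cutting s->i puts pixel i on the sink side (background),
        # so its capacity is the cost of the background label, and
        # symmetrically for i->t and the foreground label.
        G.add_edge(s, i, capacity=(v - bg_mean) ** 2)
        G.add_edge(i, t, capacity=(v - fg_mean) ** 2)
    for i in range(len(pixels) - 1):
        # n-links in both directions between neighboring pixels.
        G.add_edge(i, i + 1, capacity=smoothness)
        G.add_edge(i + 1, i, capacity=smoothness)
    cut_value, (source_side, _) = nx.minimum_cut(G, s, t)
    return [1 if i in source_side else 0 for i in range(len(pixels))]

if __name__ == "__main__":
    row = [0.1, 0.2, 0.15, 0.9, 0.85, 0.95, 0.2]
    print(segment_1d(row, fg_mean=0.9, bg_mean=0.1, smoothness=0.05))
```

Raising `smoothness` suppresses isolated label flips, which is the boundary-regularization term the abstract contrasts with the region-based data term; the same construction extends directly to 2-D and N-D grids by adding n-links for each neighbor pair.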