title | abstract
---|---
One-against-all multi-class SVM classification using reliability measures | The support vector machine (SVM) was originally designed for binary classification. To extend it to the multi-class scenario, a typical approach is to decompose an M-class problem into a series of two-class problems, for which one-against-all is the earliest and one of the most widely used implementations. However, theoretical analysis reveals a drawback: the competence of each classifier is entirely neglected when the results from the multiple classifiers are combined for the final decision. To overcome this limitation, this paper introduces reliability measures into the multi-class framework. Two measures are designed: a static reliability measure (SRM) and a dynamic reliability measure (DRM). SRM works on a collective basis and yields a constant value regardless of the location of the test sample. DRM, on the other hand, accounts for the spatial variation of the classifier's performance. Based on these two reliability measures, a new decision strategy for the one-against-all method is proposed, which is tested on benchmark data sets and demonstrates its effectiveness. |
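A minimal sketch of the general idea, not the paper's exact SRM/DRM: one binary SVM per class in one-against-all fashion, with each classifier's decision value weighted by a simple reliability proxy (here, its accuracy on the training split). The dataset, kernel, and weighting rule are assumptions for illustration only.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classes = np.unique(y_tr)
models, reliability = [], []
for c in classes:
    target = (y_tr == c).astype(int)              # one-against-all relabelling
    clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, target)
    models.append(clf)
    # crude "static" reliability proxy: accuracy of this binary classifier
    reliability.append(clf.score(X_tr, target))

# combine: reliability-weighted decision values, pick the largest
scores = np.stack([r * m.decision_function(X_te)
                   for m, r in zip(models, reliability)], axis=1)
pred = classes[np.argmax(scores, axis=1)]
print("test accuracy:", (pred == y_te).mean())
```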
Cell-permeable nanobodies for targeted immunolabelling and antigen manipulation in living cells. | Functional antibody delivery in living cells would enable the labelling and manipulation of intracellular antigens, which constitutes a long-sought goal in cell biology and medicine. Here we present a modular strategy to create functional cell-permeable nanobodies capable of targeted labelling and manipulation of intracellular antigens in living cells. The cell-permeable nanobodies are formed by the site-specific attachment of intracellularly stable (or cleavable) cyclic arginine-rich cell-penetrating peptides to camelid-derived single-chain VHH antibody fragments. We used this strategy for the non-endocytic delivery of two recombinant nanobodies into living cells, which enabled the relocalization of the polymerase clamp PCNA (proliferating cell nuclear antigen) and tumour suppressor p53 to the nucleolus, and thereby allowed the detection of protein-protein interactions that involve these two proteins in living cells. Furthermore, cell-permeable nanobodies permitted the co-transport of therapeutically relevant proteins, such as Mecp2, into the cells. This technology constitutes a major step in the labelling, delivery and targeted manipulation of intracellular antigens. Ultimately, this approach opens the door towards immunostaining in living cells and the expansion of immunotherapies to intracellular antigen targets. |
Reduction of Speckle Noise and Image Enhancement of Images Using Filtering Technique | Reducing noise in medical images, satellite images, and similar data is a challenge for researchers in digital image processing, and several approaches exist for noise reduction. Speckle noise is commonly found in synthetic aperture radar images, satellite images, and medical images. This paper proposes filtering techniques for the removal of speckle noise from digital images. Quantitative evaluation is performed using the signal-to-noise ratio, and the noise level is measured by the standard deviation. |
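A minimal sketch under stated assumptions (the paper does not name this exact filter): median filtering as a common speckle-reduction baseline on a synthetic image, followed by the signal-to-noise ratio and residual noise standard deviation described above.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))            # synthetic image
noisy = clean * (1 + 0.2 * rng.standard_normal(clean.shape))      # multiplicative speckle

denoised = median_filter(noisy, size=3)                           # simple speckle-reduction baseline

def snr_db(reference, estimate):
    """Signal-to-noise ratio in dB of an estimate against the clean reference."""
    noise = reference - estimate
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

print("SNR before filtering: %.2f dB" % snr_db(clean, noisy))
print("SNR after filtering:  %.2f dB" % snr_db(clean, denoised))
print("residual noise standard deviation:", np.std(clean - denoised))
```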
Designing Attractive Gamification Features for Collaborative Storytelling Websites | Gamification design is considered a predictor of collaborative storytelling websites' success. Although previous studies have mentioned a broad range of factors that may influence gamification, they have neither depicted the actual design features nor the relative attractiveness among them. This study aims to identify attractive gamification features for collaborative storytelling websites. We first constructed a hierarchical system structure of gamification design for collaborative storytelling websites and conducted a focus group interview with eighteen frequent users to identify 35 gamification features. This study then determined the relative attractiveness of these gamification features by administering an online survey to 6333 collaborative storytelling website users. The results indicated that the top 10 most attractive gamification features account for more than 50% of the attractiveness of all 35 gamification features. The feature of unpredictable time pressure is important to website users, yet it has not been revealed in previous relevant studies. Implications of the findings are discussed. |
Centrality and normality in protomodular categories | We analyse the classical property of centrality of equivalence relations in terms of normal monomorphisms. For this purpose, the internal structure of a connector is introduced, allowing us to clarify classical results in Maltsev categories and to prove new ones in protomodular categories. This approach allows us to work in the general context of finitely complete categories, without requiring the usual Barr exactness assumption. |
Assisting Users in a World Full of Cameras: A Privacy-Aware Infrastructure for Computer Vision Applications | Computer vision based technologies have seen widespread adoption in recent years. This use is not limited to the rapid adoption of facial recognition technology but extends to facial expression recognition, scene recognition and more. These developments raise privacy concerns and call for novel solutions to ensure adequate user awareness, and ideally, control over the resulting collection and use of potentially sensitive data. While cameras have become ubiquitous, most of the time users are not even aware of their presence. In this paper we introduce a novel distributed privacy infrastructure for the Internet-of-Things and discuss in particular how it can help enhance users' awareness of and control over the collection and use of video data about them. The infrastructure, which has undergone early deployment and evaluation on two campuses, supports the automated discovery of IoT resources and the selective notification of users. This includes the presence of computer vision applications that collect data about users. In particular, we describe an implementation of functionality that helps users discover nearby cameras and choose whether or not they want their faces to be denatured in the video streams. |
Improving Size Estimates Using Historical Data | …business objects (Obj), 16: basic abstract geoscience business objects, including projects, wells, well logs, markers, zones, and seismic data. Concrete business object input/output (I/O), 15: concrete input/output behaviors for abstract business object persistence mechanisms. Data import/export filters (Filter), 14: abstract factories, strategies, and GUI components for importing and exporting "foreign" data formats. Application data factory (Data), 6: abstract data object factory providing GUI components for user data selection, delivering observers of abstract business objects to application contexts. Reusable GUI components (Component), 22: reusable unit-labeling (feet/meters) text fields, data selection components, and calculation configuration components. Application support components (Prefs), 15: user preference, window layout, and session save/reinstantiation support. Application GUI (App), 73: user data display and editing, object property dialogs, and calculation setup dialogs and utilities that are not directly traceable to specific requirements (Tools).
Estimation Study: The estimation tools available to our development team included Cocomo II and Estimate Pro V2.0 [2,3]. (See the "Estimation Techniques" sidebar for a description of these and other estimation techniques.) Both of these tools generate estimates for the amount of effort, time, or cost, but they require input specifying the size of the work to be completed. The problem we faced throughout the project was estimating size. No one on the development team was trained in function-point analysis, so we loosely based our attempts at prediction on analogy methods and the Delphi principle of reaching consensus with individual, "expert" estimates. Without a defined, repeatable size-estimation process, these predictions were little better than outright guesses. Finally, our failure to record our predictions—and subsequently compare the actual size against the …
Sidebar, Estimation Techniques: Software estimation techniques generally fall into one of three categories: empirical techniques that relate observations of past performance to predictions of future efforts; regression models that are derived from historical data and describe the mathematical relationships among project variables; and theory-based techniques that are based on the underlying theoretical considerations of software development processes [1]. For the purposes of this discussion, I merely draw a distinction between techniques that might help estimate size versus those used to estimate effort, schedule, and cost. A well-known and widely used regression technique is Cocomo (constructive cost model) [2,3]. The Cocomo II model estimates the effort, schedule, and cost required to develop a software product, accounting for different project phases and activities. This type of estimation method uses regression equations (developed from historical data) to compute schedule and cost by factoring in various project drivers such as team experience, the type of system under development, system size, nonfunctional product attributes, and so on. The SLIM (software lifecycle management) method [4] is a theory-based technique [1] that uses two equations to estimate development effort and schedule. The software equation, derived from empirical observations about productivity levels, expresses development effort in terms of project size and development time. The manpower equation expresses the buildup of manpower as a function of development time. Sizing techniques rely primarily on empirical methods. A few of these include the Standard and Wideband Delphi estimation methods, analogy techniques, the software sizing model (SSM), and function-point analysis. Observations and an understanding of historical project information can help predict the size of future efforts. The Delphi methods [2,5] employ techniques for decomposing a project into individual work activities, letting a team of experts generate individual estimates for each activity and form a consensus estimate for the project. Estimation by analogy involves examining similarities and differences between former and current efforts and extrapolating the qualities of measured past work to future efforts. The SSM decomposes a project into individual modules and employs different methods to estimate the relative size of software modules through pair-wise comparisons, PERT (identifying the lowest, most likely, and highest possible sizes) estimates, sorting, and ranking techniques. The estimates are calibrated to local conditions by including at least two reference modules of known size. The technique generates a size for each module and for the overall project [6]. Function-point analysis [7] measures a software system's size in terms of system functionality, independent of implementation language. The function-point method is considered an empirical estimation approach [1] due to the observed relationship between the effort required to build a system and identifiable system features, such as external inputs, interface files, outputs, queries, and logical internal tables. Counts of system features are adjusted using weighting and complexity factors to arrive at a size expressed in function points. Although function-point analysis was originally developed in a world of database and procedural programming, the method has mapped well into the object-oriented development paradigm [8].
References: [1] R.E. Fairley, "Recent Advances in Software Estimation Techniques," Proc. 14th Int'l Conf. Software Eng., ACM Press, New York, 1992. [2] B. Boehm, Software Engineering Economics, Prentice Hall, Upper Saddle River, N.J., 1981. [3] B. Boehm et al., "Cost Models for Future Software Life Cycle Processes: Cocomo 2.0," Annals of Software Engineering, Special Volume on Software Process and Product Measurement, J.D. Arthur and S.M. Henry, eds., Science Publishers, Amsterdam, The Netherlands, Vol. 1, 1995, pp. 45–60. [4] L.H. Putnam, "A General Empirical Solution to the Macro Software Sizing and Estimating Problem," IEEE Trans. Software Eng., Vol. 4, No. 4, Apr. 1978, pp. 345–361. [5] K. Wiegers, "Stop Promising Miracles," Software Development, Vol. 8, No. 2, Feb. 2000, p. 49. [6] G.J. Bozoki, "Performance Simulation of SSM (Software Sizing Model)," Proc. 13th Conf., Int'l Soc. of Parametric Analysts, New Orleans, 1991, pp. CM–14. [7] A. Albrecht, "Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation," IEEE Trans. Software Eng., Vol. SE-9, No. 6, 1983. [8] T. Fetcke, A. Abran, and T.H. Nguyen, "Mapping the OO-Jacobson Approach into Function Point Analysis," Proc. TOOLS-23 '97, IEEE Press, Piscataway, N.J., 1998. |
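As an aside on the Cocomo II regression form mentioned in the sidebar, the sketch below spells out the basic effort equation in Python. The coefficients are the approximate published Cocomo II.2000 calibration values, and the scale-factor and effort-multiplier inputs in the example are hypothetical placeholders, not figures from this article.

```python
# Sketch of the Cocomo II effort equation: PM = A * Size^E * prod(EM_i),
# with E = B + 0.01 * sum(SF_j). A and B are the (approximate) published
# Cocomo II.2000 calibration values; local calibration is still expected.
from math import prod

def cocomo2_effort(ksloc, scale_factors, effort_multipliers, A=2.94, B=0.91):
    """Return estimated effort in person-months for a project of `ksloc` KSLOC."""
    E = B + 0.01 * sum(scale_factors)
    return A * ksloc ** E * prod(effort_multipliers)

# Example: a hypothetical 30 KSLOC project with five placeholder scale factors
# and all seventeen effort multipliers left at their nominal value of 1.0.
print(round(cocomo2_effort(30.0, scale_factors=[3.0] * 5,
                           effort_multipliers=[1.0] * 17), 1))
```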
Emulating Soft Real-Time Scheduling Using Traditional Operating System Schedulers | Real-time scheduling algorithms are usually only available in the kernels of real-time operating systems, and not in more general purpose operating systems, like Unix. For some soft real-time problems, a traditional operating system may be the development platform of choice. This paper addresses methods of emulating real-time scheduling algorithms on top of standard time-share schedulers. We examine (through simulations) three strategies for priority assignment within a traditional multi-tasking environment. The results show that the emulation algorithms are comparable in performance to the real-time algorithms and in some instances outperform them. |
How to interpret a genome-wide association study. | Genome-wide association (GWA) studies use high-throughput genotyping technologies to assay hundreds of thousands of single-nucleotide polymorphisms (SNPs) and relate them to clinical conditions and measurable traits. Since 2005, nearly 100 loci for as many as 40 common diseases and traits have been identified and replicated in GWA studies, many in genes not previously suspected of having a role in the disease under study, and some in genomic regions containing no known genes. GWA studies are an important advance in discovering genetic variants influencing disease but also have important limitations, including their potential for false-positive and false-negative results and for biases related to selection of study participants and genotyping errors. Although these studies are clearly many steps removed from actual clinical use, and specific applications of GWA findings in prevention and treatment are actively being pursued, at present these studies mainly represent a valuable discovery tool for examining genomic function and clarifying pathophysiologic mechanisms. This article describes the design, interpretation, application, and limitations of GWA studies for clinicians and scientists for whom this evolving science may have great relevance. |
Growth characteristics of infantile hemangiomas: implications for management. | OBJECTIVES
Infantile hemangiomas often are inapparent at birth and have a period of rapid growth during early infancy followed by gradual involution. More precise information on growth could help predict short-term outcomes and make decisions about when referral or intervention, if needed, should be initiated. The objective of this study was to describe growth characteristics of infantile hemangioma and compare growth with infantile hemangioma referral patterns.
METHODS
A prospective cohort study involving 7 tertiary care pediatric dermatology practices was conducted. Growth data were available for a subset of 526 infantile hemangiomas in 433 patients from a cohort study of 1096 children. Inclusion criteria were age younger than 18 months at time of enrollment and presence of at least 1 infantile hemangioma. Growth stage and rate were compared with clinical characteristics and timing of referrals.
RESULTS
Eighty percent of hemangioma size was reached during the early proliferative stage, at a mean age of 3 months. Differences in growth between hemangioma subtypes included that deep hemangiomas tended to grow later and longer than superficial hemangiomas and that segmental hemangiomas tended to exhibit more continued growth after 3 months of age. The mean age at the first visit was 5 months. Factors that predicted a need for follow-up included ongoing proliferation, larger size, a deep component, and segmental and indeterminate morphologic subtypes.
CONCLUSIONS
Most infantile hemangioma growth occurs before 5 months, yet 5 months was also the mean age at the first visit to a specialist. Recognition of growth characteristics and of factors that predict the need for follow-up could aid clinical decision-making. The first few weeks to months of life are a critical time in hemangioma growth. Infants with hemangiomas need close observation during this period, and those who need specialty care should be referred and seen as early as possible within this critical growth period. |
Half-Mode Substrate Integrated Waveguide (HMSIW) Leaky-Wave Antenna for Millimeter-Wave Applications | A novel leaky-wave antenna is demonstrated and developed at Ka-band in this work based on the newly proposed half-mode substrate integrated waveguide (HMSIW). This antenna is accurately simulated by using a full-wave electromagnetic simulator and then fabricated through a single-layer printed circuit board (PCB) process. Wide bandwidth and a quasi-omnidirectional radiation pattern are obtained. The proposed antenna is therefore a good candidate for millimeter-wave applications. Measured results are in good agreement with simulated results. |
Semantic Annotation of the ACL Anthology Corpus for the Automatic Analysis of Scientific Literature | This paper describes the process of creating a corpus annotated for concepts and semantic relations in the scientific domain. A part of the ACL Anthology Corpus was selected for annotation, but the annotation process itself is not specific to the computational linguistics domain and could be applied to any scientific corpus. Concepts were identified and annotated fully automatically, based on a combination of terminology extraction and available ontological resources. A typology of semantic relations between concepts is also proposed. This typology, consisting of 18 domain-specific and 3 generic relations, is the result of a corpus-based investigation of the text sequences occurring between concepts in sentences. A sample of 500 abstracts from the corpus is currently being manually annotated with these semantic relations. Only explicit relations are taken into account, so that the data could serve to train or evaluate pattern-based semantic relation classification systems. |
Integrating real-time and batch processing in a polystore | This paper describes a stream processing engine called S-Store and its role in the BigDAWG polystore. Fundamentally, S-Store acts as a frontend processor that accepts input from multiple sources, cleans it of errors (data cleaning), and translates it into a form that can be efficiently ingested into BigDAWG. S-Store also acts as an intelligent router that sends input tuples to the appropriate components of BigDAWG. All updates to S-Store's shared memory are done in a transactionally consistent (ACID) way, thereby eliminating new errors caused by non-synchronized reads and writes. The ability to migrate data from component to component of BigDAWG is crucial. We describe a migrator from S-Store to Postgres that we have implemented as a first proof of concept, and we report some interesting results using this migrator that impact the evaluation of query plans. |
Prognosis and prognostic research: validating a prognostic model. | Internal validation—A common approach is to split the dataset randomly into two parts (often 2:1), develop the model using the first portion (often called the "training" set), and assess its predictive accuracy on the second portion. This approach will tend to give optimistic results because the two datasets are very similar. Non-random splitting (for example, by centre) may be preferable as it reduces the similarity of the two sets of patients [1, 4]. If the available data are limited, the model can be developed on the whole dataset and techniques of data re-use, such as cross validation and bootstrapping, applied to assess performance [1]. Internal validation is helpful, but it cannot provide information about the model's performance elsewhere. Temporal validation—An alternative is to evaluate the performance of a model on subsequent patients from the same centre(s) [6, 10]. Temporal validation is no different in principle from splitting a single dataset by time. There will clearly be many similarities between the two sets of patients and between the clinical and laboratory techniques used in evaluating them. However, temporal validation is a prospective evaluation of a model, independent of the original data and development process. Temporal validation can be considered external in time and thus intermediate between internal validation and external validation. External validation—Neither internal nor temporal validation examines the generalisability of the model, for which it is necessary to use new data collected from an appropriate (similar) patient population in a different centre. The data can be retrospective data and so external validation is possible for prediction models that need long follow-up to gather enough outcome events. Clearly, the second dataset must include data on all the variables in the model. Fundamental design issues for external validation, such as sample selection and sample size, have received limited attention [11]. |
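To illustrate the internal-validation options described above (random split-sample validation and bootstrap re-use of the full dataset), here is a minimal sketch on simulated data. The simulated predictors, the logistic model, and the use of the AUC as the accuracy measure are assumptions made for the example, not details from the article, and the optimism correction usually applied to bootstrap estimates is omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 4))                                  # four prognostic variables (simulated)
y = (X @ np.array([0.8, -0.5, 0.3, 0.0]) + rng.normal(size=600) > 0).astype(int)

# split-sample internal validation: develop on 2/3, assess on the held-out 1/3
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("split-sample AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# bootstrap re-use of the full dataset: refit on resamples, score on the originals
aucs = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))                      # bootstrap resample indices
    m = LogisticRegression().fit(X[idx], y[idx])
    aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
print("bootstrap mean AUC:", np.mean(aucs))
```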
mrFPGA: A novel FPGA architecture with memristor-based reconfiguration | In this paper, we introduce a novel FPGA architecture with memristor-based reconfiguration (mrFPGA). The proposed architecture is based on the existing CMOS-compatible memristor fabrication process. The programmable interconnects of mrFPGA use only memristors and metal wires so that the interconnects can be fabricated over logic blocks, resulting in significant reduction of overall area and interconnect delay but without using a 3D die-stacking process. Using memristors to build up the interconnects can also provide capacitance shielding from unused routing paths and reduce interconnect delay further. Moreover we propose an improved architecture that allows adaptive buffer insertion in interconnects to achieve more speedup. Compared to the fixed buffer pattern in conventional FPGAs, the positions of inserted buffers in mrFPGA are optimized on demand. A complete CAD flow is provided for mrFPGA, with an advanced P&R tool named mrVPR that was developed for mrFPGA. The tool can deal with the novel routing structure of mrFPGA, the memristor shielding effect, and the algorithm for optimal buffer insertion. We evaluate the area, performance and power consumption of mrFPGA based on the 20 largest MCNC benchmark circuits. Results show that mrFPGA achieves 5.18x area savings, 2.28x speedup and 1.63x power savings. Further improvement is expected with combination of 3D technologies and mrFPGA. |
Affective patterns in triadic family interactions: Associations with adolescent depression. | Affective family processes are associated with the development of depression during adolescence. However, empirical description of these processes is generally based on examining affect at the individual or dyadic level. The purpose of this study was to examine triadic patterns of affect during parent-adolescent interactions in families with or without a depressed adolescent. We used state space grid analysis to characterize the state of all three actors simultaneously. Compared to healthy controls, triads with depressed adolescents displayed a wider range of affect, demonstrated less predictability of triadic affective sequences, spent more time in and returned more quickly to discrepant affective states, and spent less time in and returned more slowly to matched affective states, particularly while engaged in a problem-solving interaction. Furthermore, we identified seven unique triadic states in which triads with depressed adolescents spent significantly more time than triads with healthy controls. The present study enhances understanding of family affective processes related to depression by taking a more systemic approach and revealing triadic patterns that go beyond individual and dyadic analyses. |
Security challenges in software defined network and their solutions | The main purpose of Software Defined Networking (SDN) is to allow network engineers to respond quickly to changing network requirements. This network technology focuses on making the network as adaptable and active as a virtual server. SDN physically separates the control plane from the data plane and centralizes the control plane to manage the underlying infrastructure. Hence, SDN permits network administrators to adjust network-wide traffic flows from a centralized control console without having to touch individual switches and routers, and to provide services wherever they are needed in the network. However, because the control plane is decoupled from the underlying forwarding plane, SDN is susceptible to many security challenges, such as Denial of Service (DoS) attacks, Distributed DoS (DDoS) attacks, and volumetric attacks. In this paper, we highlight some of these security challenges and evaluate some security solutions. |
Architectural Style Classification Using Multinomial Latent Logistic Regression | Architectural style classification differs from standard classification tasks due to the rich inter-class relationships between different styles, such as re-interpretation, revival, and territoriality. In this paper, we adopt Deformable Part-based Models (DPM) to capture the morphological characteristics of basic architectural components and propose Multinomial Latent Logistic Regression (MLLR), which introduces probabilistic analysis and tackles the multi-class problem in latent variable models. Due to the lack of publicly available datasets, we release a new large-scale architectural style dataset containing twenty-five classes. Experimentation on this dataset shows that MLLR, in combination with standard global image features, obtains the best classification results. We also present interpretable probabilistic explanations for the results, such as the styles of individual buildings and a style relationship network, to illustrate inter-class relationships. |
Image Interpolation and Resampling | This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions—in other words, a mixed convolution equation. An important issue is the choice of adequate synthesis functions that satisfy interpolation properties. Examples of finite-support ones are the square pulse (nearest-neighbor interpolation), the hat function (linear interpolation), the cubic Keys' function, and various truncated or windowed versions of the sinc function. On the other hand, splines provide examples of infinite-support interpolation functions that can be realized exactly at a finite, surprisingly small computational cost. We discuss implementation issues and illustrate the performance of each synthesis function. We also highlight several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking and blurring. We explain why the approximation order inherent in the synthesis function is important to limit these interpolation artifacts, which motivates the use of splines as a tunable way to keep them in check without any significant cost penalty. I. INTRODUCTION Interpolation is a technique that pervades many an application. Interpolation is almost never the goal in itself, yet it affects both the desired results and the ways to obtain them. Notwithstanding its nearly universal relevance, some authors give it less importance than it deserves, perhaps because considerations on interpolation are felt as being paltry when compared to the description of a more inspiring grand scheme of things of some algorithm or method. Due to this indifference, it appears as if the basic principles that underlie interpolation might be sometimes cast aside, or even misunderstood. The goal of this chapter is to refresh the notions encountered in classical interpolation, as well as to introduce the reader to more general approaches. 1.1. Definition What is interpolation? Several answers coexist. One of them defines interpolation as an informed estimate of the unknown [1]. We prefer the following—admittedly less concise—definition: model-based recovery of continuous data from discrete data within a known range of abscissa. The reason for this preference is to allow for a clearer distinction between interpolation and extrapolation. The former postulates the existence of a known range where the model applies, and asserts that the deterministically recovered continuous data is entirely described by the discrete data, while the latter authorizes the use of the model outside of the known range, with the implicit assumption that the model is "good" near data samples, and possibly less good elsewhere. Finally, the three most important hypotheses for interpolation are: |
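To make the "sum of weighted and shifted synthesis functions" view concrete, here is a small sketch of 1-D linear interpolation written explicitly as a sum of hat functions. The sample values are arbitrary, and the result is checked against numpy's built-in linear interpolation.

```python
import numpy as np

def hat(t):
    """Linear-interpolation synthesis function (the 'hat'/triangle kernel)."""
    return np.maximum(0.0, 1.0 - np.abs(t))

samples = np.array([0.0, 1.0, 0.5, 2.0, 1.5])     # f(k) on the integer grid k = 0..4
x = np.linspace(0, 4, 9)                          # evaluation points inside the known range

# f(x) = sum_k f(k) * hat(x - k): weighted, shifted copies of the synthesis function
recon = sum(fk * hat(x - k) for k, fk in enumerate(samples))

assert np.allclose(recon, np.interp(x, np.arange(len(samples)), samples))
print(recon)
```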
Reversible inactivation of cytochrome P450 by alkaline earth metal ions: auxiliary metal ion induced conformation change and formation of inactive P420 species in CYP101. | The effects of the divalent alkaline-earth metal ions (Ca2+ and Mg2+) on the substrate binding affinity, spin-state transition at the heme active site, conformational properties, as well as the stability of the active form of cytochrome P450cam (CYP 101) have been investigated using various spectroscopic and kinetic methods. The divalent cations were found to have two types of effects on the enzyme. At the initial stage, the alkaline-earth metal ion facilitated enhanced binding of the substrate and formation of the high-spin form of the heme active center of the enzyme compared to that in the absence of any metal ion. However, analogous to many other mono-valent metal ions, the alkaline-earth metal ions were also less efficient than K+ in promoting the substrate binding and spin-transition properties of the enzyme. The auxiliary metal ions were shown to cause a small but distinct change in the circular dichroism spectra of the substrate-free enzyme in the visible region, indicating that the tertiary structure around the heme was perturbed on binding of the auxiliary metal ion to the enzyme. The effect of the auxiliary metal ion was found to be more prominent in the WT enzyme compared to the Y96F mutant of P450cam, suggesting that the Tyr 96 residue plays an important role in mediating the effects of the auxiliary metal ions to the active site of the enzyme. At the second stage of interaction, the alkaline-earth metal ions were found to slowly convert the enzyme into an inactive P420 form, which could be reversibly re-activated by addition of KCl. The results have been discussed in the light of understanding the mechanism of inactivation of certain mammalian P450 enzymes by these alkaline-earth metal ions. |
Intravenous or intramuscular anti-HBs immunoglobulin for the prevention of hepatitis B reinfection after orthotopic liver transplantation. | To prevent reinfection with hepatitis B virus after orthotopic liver transplantation, patients receive long-term intravenous anti-HBs immunoprophylaxis. We compared the pharmacokinetics of intravenously and intramuscularly administered commercially available hepatitis B virus immunoglobulins. The study group consisted of 12 patients on immunoprophylaxis after orthotopic liver transplantation, who were HBs antigen negative; 11 were anti-HBe positive and one was HBe positive. The patients first received intravenous immunoglobulin, and six of them were then transferred to intramuscular immunoglobulin. Our findings show that with fortnightly intramuscular application of 1000 IU of anti-HBs, reproducible and stable antibody titers above 100 IU of anti-HBs can be achieved. Side effects of intramuscular immunoprophylaxis are minimal and the method is safe. The switch from intravenous (1500 IU of anti-HBs) to intramuscular (1000 IU of anti-HBs) administration reduced the cost of immunoprophylaxis by more than 50%. |
A randomized study of the effects of exercise training on patients with atrial fibrillation. | BACKGROUND
Exercise training is beneficial in ischemic and congestive heart disease. However, the effect on atrial fibrillation (AF) is unknown.
METHODS
Forty-nine patients with permanent AF (age [mean ± SD], 70.2 ± 7.8 years; male-to-female ratio, 0.75; body mass index [mean ± SD], 29.7 ± 4.3 kg/m(2)) were randomized to 12-week aerobic exercise training or a control group. Exercise capacity, 6-minute walk test (6MWT), cardiac output, quality of life, and natriuretic peptides were measured. Cardiac output was measured at rest and during ergometer testing, and atrial natriuretic peptide and N-terminal pro-B-type natriuretic peptide were measured before and after the training period. Quality of life was evaluated using the Short-Form 36 and Minnesota Living With Heart Failure (MLHF-Q) questionnaires.
RESULTS
Improved exercise capacity and 6MWT were observed in the active patients (P < .001), and at study end, there was a significant difference between the active patients and the controls (P = .002). Resting pulse decreased in the active patients (94.8 ± 22.4 to 86.3 ± 22.5 beats/min, P = .049) but remained unchanged in the controls. Cardiac output was unchanged from baseline to end-of-study period. The MLHF-Q score improved in the active group (21.1 ± 18.0 vs 15.4 ± 17.5, P = .03). Active patients showed progress in 3 of the 8 Short-Form 36 subscales: physical functioning (P = .02), general health perceptions (P = .001), and vitality (P = .02). Natriuretic peptides were unchanged.
CONCLUSION
Twelve weeks of exercise training increased exercise capacity and 6MWT and decreased resting pulse rate significantly in patients with AF. Overall quality of life increased significantly as measured by the cardiology-related MLHF-Q. Cardiac output and natriuretic peptides were unchanged in both groups. |
Analogy-driven 3D style transfer | Style transfer aims to apply the style of an exemplar model to a target one, while retaining the target’s structure. The main challenge in this process is to algorithmically distinguish style from structure, a high-level, potentially ill-posed cognitive task. Inspired by cognitive science research we recast style transfer in terms of shape analogies. In IQ testing, shape analogy queries present the subject with three shapes: source, target and exemplar, and ask them to select an output such that the transformation, or analogy, from the exemplar to the output is similar to that from the source to the target. The logical process involved in identifying the source-to-target analogies implicitly detects the structural differences between the source and target and can be used effectively to facilitate style transfer. Since the exemplar has a similar structure to the source, applying the analogy to the exemplar will provide the output we seek. The main technical challenge we address is to compute the source to target analogies, consistent with human logic. We observe that the typical analogies we look for consist of a small set of simple transformations, which when applied to the exemplar generate a continuous, seamless output model. To assemble a shape analogy, we compute an optimal set of source-to-target transformations, such that the assembled analogy best fits these criteria. The assembled analogy is then applied to the exemplar shape to produce the desired output model. We use the proposed framework to seamlessly transfer a variety of style properties between 2D and 3D objects and demonstrate significant improvements over the state of the art in style transfer. We further show that our framework can be used to successfully complete partial scans with the help of a user provided structural template, coherently propagating scan style across the completed surfaces. |
Synthesizing geometry constructions | In this paper, we study the problem of automatically solving ruler/compass based geometry construction problems. We first introduce a logic and a programming language for describing such constructions and then phrase the automation problem as a program synthesis problem. We then describe a new program synthesis technique based on three key insights: (i) reduction of symbolic reasoning to concrete reasoning (based on a deep theoretical result that reduces verification to random testing), (ii) extending the instruction set of the programming language with higher level primitives (representing basic constructions found in textbook chapters, inspired by how humans use their experience and knowledge gained from chapters to perform complicated constructions), and (iii) pruning the forward exhaustive search using a goal-directed heuristic (simulating backward reasoning performed by humans). Our tool can successfully synthesize constructions for various geometry problems picked up from high-school textbooks and examination papers in a reasonable amount of time. This opens up an amazing set of possibilities in the context of making classroom teaching interactive. |
Multi-View Clustering and Feature Learning via Structured Sparsity | Combining information from various data sources has become an important research topic in machine learning with many scientific applications. Most previous studies employ kernels or graphs to integrate different types of features, which routinely assume one weight for one type of features. However, for many problems, the importance of features in one source to an individual cluster of data can be varied, which makes the previous approaches ineffective. In this paper, we propose a novel multi-view learning model to integrate all features and learn the weight for every feature with respect to each cluster individually via new joint structured sparsity-inducing norms. The proposed multi-view learning framework allows us not only to perform clustering tasks, but also to deal with classification tasks by an extension when the labeling knowledge is available. A new efficient algorithm is derived to solve the formulated objective with rigorous theoretical proof on its convergence. We applied our new data fusion method to five broadly used multi-view data sets for both clustering and classification. In all experimental results, our method clearly outperforms other related state-of-the-art methods. |
Efficiency of dynamic elastic response prosthetic feet. | Following years of accepting the Solid Ankle Cushion Heel (SACH) foot as the optimum compromise between durability and functional effectiveness, as well as being of reasonable cost, several new feet with dynamic elastic response qualities have been designed. The stimulus for these new designs is the recent development of materials which offer the potential to "store and release energy" in a manner that facilitates walking and running. Numerous new prosthetic feet have become available commercially. The effectiveness of these designs is not known, though each has its strong clinical advocates (1,2,3,4). It is claimed that these feet reduce the energy required for walking, and increase mobility. Four dynamic elastic response (DER) feet representative of this design were selected for study. The objectives of this project were to compare the efficiency of four DER prosthetic foot designs (Seattle, Flex-Foot, Carbon Copy II, Sten) to that of the traditional SACH foot; define the gait mechanics induced by each foot; and determine the relative effectiveness and cost/benefit ratio of these new feet for the dysvascular and traumatic amputee populations. |
Image recognition of plant diseases based on principal component analysis and neural networks | Plant disease identification based on image processing can quickly and accurately provide useful information for the prediction and control of plant diseases. In this study, 21 color features, 4 shape features and 25 texture features were extracted from the images of two kinds of wheat diseases (wheat stripe rust and wheat leaf rust) and two kinds of grape diseases (grape downy mildew and grape powdery mildew). Principal component analysis (PCA) was performed to reduce the dimensions of the feature data, and then neural networks including backpropagation (BP) networks, radial basis function (RBF) neural networks, generalized regression neural networks (GRNNs) and probabilistic neural networks (PNNs) were used as the classifiers to identify the wheat diseases and grape diseases, respectively. The results showed that these neural networks could be used for image recognition of these diseases after dimension reduction with PCA, and acceptable fitting and prediction accuracies could be obtained. For the two kinds of wheat diseases, the optimal recognition result was obtained when image recognition was conducted based on PCA and BP networks, and the fitting accuracy and the prediction accuracy were both 100%. For the two kinds of grape diseases, the optimal recognition results were obtained when GRNNs and PNNs were used as the classifiers after reducing the dimensions of the feature data with PCA, and the prediction accuracies were 94.29% with fitting accuracies equal to 100%. |
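A minimal sketch of the kind of pipeline described above (PCA for dimension reduction followed by a neural-network classifier). The random feature matrix, the two-class labels, and the particular network settings are placeholders standing in for the 50 extracted colour/shape/texture features, not the study's data or configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))            # 200 images x 50 features (placeholder data)
y = rng.integers(0, 2, size=200)          # 2 disease classes (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# reduce the 50 features to 10 principal components, then classify with an MLP
model = make_pipeline(
    PCA(n_components=10),
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("fitting accuracy:   ", model.score(X_tr, y_tr))
print("prediction accuracy:", model.score(X_te, y_te))
```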
Infant-Guided, Co-Regulated Feeding in the Neonatal Intensive Care Unit. Part II: Interventions to Promote Neuroprotection and Safety. | Feeding skills of preterm neonates in a neonatal intensive care unit are in an emergent phase of development and require careful support to minimize stress. The underpinnings that influence and enhance both neuroprotection and safety were discussed in Part I. An infant-guided, co-regulated approach to feeding can protect the vulnerable neonate's neurologic development, support the parent-infant relationship, and prevent feeding problems that may endure. Contingent interventions are used to maintain subsystem stability and enhance self-regulation, development, and coping skills. This co-regulation between caregiver and neonate forms the foundation for a positive infant-guided feeding experience. Caregivers select evidence-based interventions contingent to the newborn's communication. When these interventions are then titrated from moment to moment, neuroprotection and safety are fostered. |
Rebooting Research on Detecting Repackaged Android Apps: Literature Review and Benchmark | Repackaging is a serious threat to the Android ecosystem as it deprives app developers of their benefits, contributes to spreading malware on users’ devices, and increases the workload of market maintainers. In the space of six years, the research around this specific issue has produced 57 approaches which do not readily scale to millions of apps or are only evaluated on private datasets without, in general, tool support available to the community. Through a systematic literature review of the subject, we argue that the research is slowing down, where many state-of-the-art approaches have reported high-performance rates on closed datasets, which are unfortunately difficult to replicate and to compare against. In this work, we propose to reboot the research in repackaged app detection by providing a literature review that summarises the challenges and current solutions for detecting repackaged apps and by providing a large dataset that supports replications of existing solutions and implications of new research directions. We hope that these contributions will re-activate the direction of detecting repackaged apps and spark innovative approaches going beyond the current state-of-the-art. |
Stacked Graphs – Geometry & Aesthetics | In February 2008, the New York Times published an unusual chart of box office revenues for 7500 movies over 21 years. The chart was based on a similar visualization, developed by the first author, that displayed trends in music listening. This paper describes the design decisions and algorithms behind these graphics, and discusses the reaction on the Web. We suggest that this type of complex layered graph is effective for displaying large data sets to a mass audience. We provide a mathematical analysis of how this layered graph relates to traditional stacked graphs and to techniques such as ThemeRiver, showing how each method is optimizing a different "energy function". Finally, we discuss techniques for coloring and ordering the layers of such graphs. Throughout the paper, we emphasize the interplay between considerations of aesthetics and legibility. |
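For readers who want the layering made concrete, the sketch below computes layer boundaries for two baseline choices discussed in this line of work: the traditional stacked graph (baseline fixed at zero) and a symmetric, ThemeRiver-style baseline. The toy series and these two simple baseline formulas are illustrative; they are not the paper's final layout, which optimizes its own energy function.

```python
import numpy as np

# three layers (rows) over four time steps (columns); toy data
f = np.array([[1.0, 2.0, 3.0, 2.0],
              [2.0, 1.0, 1.0, 3.0],
              [1.0, 1.0, 2.0, 1.0]])

def layer_boundaries(f, baseline):
    """Return boundaries g_0..g_n, where layer i is drawn between g_{i-1} and g_i."""
    g0 = baseline(f)
    return np.vstack([g0, g0 + np.cumsum(f, axis=0)])

stacked    = layer_boundaries(f, lambda f: np.zeros(f.shape[1]))      # baseline at zero
themeriver = layer_boundaries(f, lambda f: -0.5 * f.sum(axis=0))      # symmetric about the x-axis

print(stacked)
print(themeriver)
```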
Inattentional blindness for a gun during a simulated police vehicle stop | People often fail to notice unexpected objects and events when they are focusing attention on something else. Most studies of this "inattentional blindness" use unexpected objects that are irrelevant to the primary task and to the participant (e.g., gorillas in basketball games or colored shapes in computerized tracking tasks). Although a few studies have examined noticing rates for personally relevant or task-relevant unexpected objects, few have done so in a real-world context with objects that represent a direct threat to the participant. In this study, police academy trainees (n = 100) and experienced police officers (n = 75) engaged in a simulated vehicle traffic stop in which they approached a vehicle to issue a warning or citation for running a stop sign. The driver was either passive and cooperative or agitated and hostile when complying with the officer's instructions. Overall, 58% of the trainees and 33% of the officers failed to notice a gun positioned in full view on the passenger dashboard. The driver's style of interaction had little effect on noticing rates for either group. People can experience inattentional blindness for a potentially dangerous object in a naturalistic real-world context, even when noticing that object would change how they perform their primary task and even when their training focuses on awareness of potential threats. |
StructSLAM: Visual SLAM With Building Structure Lines | We propose a novel 6-degree-of-freedom (DoF) visual simultaneous localization and mapping (SLAM) method based on the structural regularity of man-made building environments. The idea is that we use the building structure lines as features for localization and mapping. Unlike other line features, the building structure lines encode the global orientation information that constrains the heading of the camera over time, eliminating the accumulated orientation errors and reducing the position drift in consequence. We extend the standard extended Kalman filter visual SLAM method to adopt the building structure lines with a novel parameterization method that represents the structure lines in dominant directions. Experiments have been conducted in both synthetic and real-world scenes. The results show that our method performs remarkably better than the existing methods in terms of position error and orientation error. In the test of indoor scenes of the public RAWSEEDS data sets, with the aid of a wheel odometer, our method produces bounded position errors about 0.79 m along a 967-m path although no loop-closing algorithm is applied. |
Corporate social responsibility : the 3 CSR model | Purpose – To develop a model that bridges the gap between CSR definitions and strategy and offers guidance to managers on how to connect socially committed organisations with the growing numbers of ethically aware consumers to simultaneously achieve economic and social objectives. Design/methodology/approach – This paper offers a critical evaluation of the theoretical foundations of corporate responsibility (CR) and proposes a new strategic approach to CR, which seeks to overcome the limitations of normative definitions. To address this perceived issue, the authors propose a new processual model of CR, which they refer to as the 3C-SR model. Findings – The 3C-SR model can offer practical guidelines to managers on how to connect with the growing numbers of ethically aware consumers to simultaneously achieve economic and social objectives. It is argued that many of the redefinitions of CR for a contemporary audience are normative exhortations (“calls to arms”) that fail to provide managers with the conceptual resources to move from “ought” to “how”. Originality/value – The 3C-SR model offers a novel approach to CR in so far as it addresses strategy, operations and markets in a single framework. |
The Toyota Way in Services : The Case of Lean Product Development | Executive Overview Toyota’s Production System (TPS) is based on “lean” principles including a focus on the customer, continual improvement and quality through waste reduction, and tightly integrated upstream and downstream processes as part of a lean value chain. Most manufacturing companies have adopted some type of “lean initiative,” and the lean movement recently has gone beyond the shop floor to white-collar offices and is even spreading to service industries. Unfortunately, most of these efforts represent limited, piecemeal approaches—quick fixes to reduce lead time and costs and to increase quality—that almost never create a true learning culture. We outline and illustrate the management principles of TPS that can be applied beyond manufacturing to any technical or service process. It is a true systems approach that effectively integrates people, processes, and technology—one that must be adopted as a continual, comprehensive, and coordinated effort for change and learning across the organization. |
On an EPQ model for deteriorating items under permissible delay in payments | This paper derives a production model for the lot-size inventory system with a finite production rate, taking into consideration the effect of decay and the condition of permissible delay in payments. The restrictive assumption of a permissible delay is relaxed so that, at the end of the credit period, the retailer makes a partial payment on the total purchasing cost to the supplier and pays off the remaining balance with a loan from the bank. First, this paper shows that there exists a unique optimal cycle time that minimizes the total variable cost per unit time. Then, a theorem is developed to determine the optimal ordering policies, and bounds for the optimal cycle time are provided to develop an algorithm. Numerical examples reveal that our optimization procedure is very accurate and rapid. Finally, it is shown that the model developed by Huang [1] can be treated as a special case of this paper. |
The relation of strength of stimulus to rapidity of habit-formation in the kitten. | In connection with a study of various aspects of the modifiability of behavior in the dancing mouse a need for definite knowledge concerning the relation of strength of stimulus to rate of learning arose. It was for the purpose of obtaining this knowledge that we planned and executed the experiments which are now to be described. Our work was greatly facilitated by the advice and assistance of Doctor E. G. MARTIN, Professor G. W. PIERCE, and Professor A. E. KENNELLY, and we desire to express here both our indebtedness and our thanks for their generous services. |
The integrated curriculum in medical education: AMEE Guide No. 96. | The popularity of the term "integrated curriculum" has grown immensely in medical education over the last two decades, but what does this term mean and how do we go about its design, implementation, and evaluation? Definitions and application of the term vary greatly in the literature, spanning from the integration of content within a single lecture to the integration of a medical school's comprehensive curriculum. Taking into account the integrated curriculum's historic and evolving base of knowledge and theory, its support from many national medical education organizations, and the ever-increasing body of published examples, we deem it necessary to present a guide to review and promote further development of the integrated curriculum movement in medical education with an international perspective. We introduce the history and theory behind integration and provide theoretical models alongside published examples of common variations of an integrated curriculum. In addition, we identify three areas of particular need when developing an ideal integrated curriculum, leading us to propose the use of a new, clarified definition of "integrated curriculum", and offer a review of strategies to evaluate the impact of an integrated curriculum on the learner. This Guide is presented to assist educators in the design, implementation, and evaluation of a thoroughly integrated medical school curriculum. |
Towards a scalable software-defined network virtualization platform | Software-defined networking (SDN) has emerged to circumvent the difficulty of introducing new functionality into the network. The widespread adoption of SDN technologies, such as OpenFlow, can facilitate the deployment of novel network functions and new services. Network infrastructure providers can significantly benefit from the SDN paradigm by leasing network slices with SDN support to Service Providers and end-users. Currently, the deployment of arbitrary virtual SDN topologies entails significant configuration overhead for SDN operators. To this end, we present a SDN virtualization layer that orchestrates the deployment and management of virtual SDNs (vSDN). The so-called SDN hypervisor generates and installs the forwarding entries required for vSDN setup and also coordinates the necessary switch flow table modifications for seamless resource migration. Furthermore, the hypervisor transparently rewrites all control messages enforcing flowspace isolation while giving to the vSDN operator the illusion of exclusive access control. We explore the design space and prerequisites for SDN virtualization, including the selection and encoding of packet identifiers, the resolution of flowspace identifiers, and the configuration and consolidation of multiple virtual flow tables onto a single switch in order to provide support for arbitrary topologies. Furthermore, we discuss the scalability of the SDN control and data plane. |
Application of Content-Based Approach in Research Paper Recommendation System for a Digital Library | Recommender systems are software applications that provide or suggest items to intended users. These systems use filtering techniques to provide recommendations, the major ones being collaborative filtering, content-based filtering, and hybrid algorithms. The motivation for this work is the need to integrate a recommendation feature into digital libraries in order to reduce information overload. A content-based technique is adopted because of its suitability in domains or situations where items outnumber users. TF-IDF (Term Frequency-Inverse Document Frequency) weighting and cosine similarity were used to determine how relevant or similar a research paper is to a user's query or profile of interest. Research papers and the user's query were represented as vectors of weights using a keyword-based vector space model, where the weights indicate the degree of association between a research paper and a user's query. This paper also presents an algorithm that provides recommendations based on the user's query, employing both the TF-IDF weighting scheme and the cosine similarity measure. Based on the results of the system, integrating a recommendation feature into digital libraries will help library users find the research papers most relevant to their needs. Keywords—Recommender Systems; Content-Based Filtering; Digital Library; TF-IDF; Cosine Similarity; Vector Space Model |
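A minimal sketch of the described approach, assuming scikit-learn's TF-IDF vectorizer as a stand-in for the system's own weighting code; the toy corpus and query below are placeholders, not the library's actual data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = [
    "content based recommender systems for digital libraries",
    "collaborative filtering with matrix factorization",
    "tf idf weighting and the vector space model",
]
query = ["recommender system for a digital library using tf idf"]

vectorizer = TfidfVectorizer(stop_words="english")
paper_vecs = vectorizer.fit_transform(papers)      # one TF-IDF weight vector per paper
query_vec = vectorizer.transform(query)            # the query in the same vector space

scores = cosine_similarity(query_vec, paper_vecs).ravel()
ranking = scores.argsort()[::-1]                   # most relevant papers first
for i in ranking:
    print(f"{scores[i]:.3f}  {papers[i]}")
```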
Predictive Mechanisms in Idiom Comprehension | Prediction is pervasive in human cognition and plays a central role in language comprehension. At an electrophysiological level, this cognitive function contributes substantially in determining the amplitude of the N400. In fact, the amplitude of the N400 to words within a sentence has been shown to depend on how predictable those words are: The more predictable a word, the smaller the N400 elicited. However, predictive processing can be based on different sources of information that allow anticipation of upcoming constituents and integration in context. In this study, we investigated the ERPs elicited during the comprehension of idioms, that is, prefabricated multiword strings stored in semantic memory. When a reader recognizes a string of words as an idiom before the idiom ends, she or he can develop expectations concerning the incoming idiomatic constituents. We hypothesized that the expectations driven by the activation of an idiom might differ from those driven by discourse-based constraints. To this aim, we compared the ERP waveforms elicited by idioms and two literal control conditions. The results showed that, in both cases, the literal conditions exhibited a more negative potential than the idiomatic condition. Our analyses suggest that before idiom recognition the effect is due to modulation of the N400 amplitude, whereas after idiom recognition a P300 for the idiomatic sentence has a fundamental role in the composition of the effect. These results suggest that two distinct predictive mechanisms are at work during language comprehension, based respectively on probabilistic information and on categorical template matching. |
Greenhouse gas mitigation in agriculture. | Agricultural lands occupy 37% of the earth's land surface. Agriculture accounts for 52 and 84% of global anthropogenic methane and nitrous oxide emissions. Agricultural soils may also act as a sink or source for CO2, but the net flux is small. Many agricultural practices can potentially mitigate greenhouse gas (GHG) emissions, the most prominent of which are improved cropland and grazing land management and restoration of degraded lands and cultivated organic soils. Lower, but still significant mitigation potential is provided by water and rice management, set-aside, land use change and agroforestry, livestock management and manure management. The global technical mitigation potential from agriculture (excluding fossil fuel offsets from biomass) by 2030, considering all gases, is estimated to be approximately 5500-6000Mt CO2-eq.yr-1, with economic potentials of approximately 1500-1600, 2500-2700 and 4000-4300Mt CO2-eq.yr-1 at carbon prices of up to 20, up to 50 and up to 100 US$ t CO2-eq.-1, respectively. In addition, GHG emissions could be reduced by substitution of fossil fuels for energy production by agricultural feedstocks (e.g. crop residues, dung and dedicated energy crops). The economic mitigation potential of biomass energy from agriculture is estimated to be 640, 2240 and 16 000Mt CO2-eq.yr-1 at 0-20, 0-50 and 0-100 US$ t CO2-eq.-1, respectively. |
AMON: a wearable multiparameter medical monitoring and alert system | This paper describes an advanced care and alert portable telemedical monitor (AMON), a wearable medical monitoring and alert system targeting high-risk cardiac/respiratory patients. The system includes continuous collection and evaluation of multiple vital signs, intelligent multiparameter medical emergency detection, and a cellular connection to a medical center. By integrating the whole system in an unobtrusive, wrist-worn enclosure and applying aggressive low-power design techniques, continuous long-term monitoring can be performed without interfering with the patients' everyday activities and without restricting their mobility. In the first two and a half years of this EU IST sponsored project, the AMON consortium has designed, implemented, and tested the described wrist-worn device, a communication link, and a comprehensive medical center software package. The performance of the system has been validated by a medical study with a set of 33 subjects. The paper describes the main concepts behind the AMON system and presents details of the individual subsystems and solutions as well as the results of the medical validation. |
Children's trust in previously inaccurate informants who were well or poorly informed: when past errors can be excused. | Past research demonstrates that children learn from a previously accurate speaker rather than from a previously inaccurate one. This study shows that children do not necessarily treat a previously inaccurate speaker as unreliable. Rather, they appropriately excuse past inaccuracy arising from the speaker's limited information access. Children (N= 67) aged 3, 4, and 5 years aimed to identify a hidden toy in collaboration with a puppet as informant. When the puppet had previously been inaccurate despite having full information, children tended to ignore what they were told and guess for themselves: They treated the puppet as unreliable in the longer term. However, children more frequently believed a currently well-informed puppet whose past inaccuracies arose legitimately from inadequate information access. |
A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks | The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and in the types of abnormal attacks found, but also provides an effective tool for the study and analysis of intrusion detection in large networks. |
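The following is a simplified sketch of the cluster-then-classify idea behind SCDNN on synthetic data; KMeans stands in for the spectral-clustering step and a small multilayer perceptron for the deep network, so it illustrates the pipeline rather than reproducing the paper's method or datasets.

```python
# Cluster-then-classify sketch: partition the training data, fit one network
# per partition, and route each test sample to the classifier of its nearest
# cluster centre. KMeans and MLPClassifier are stand-ins; data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=800, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

k = 3
clusterer = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_train)

models = {}
for c in range(k):
    mask = clusterer.labels_ == c
    if np.unique(y_train[mask]).size < 2:
        # Degenerate single-class cluster: simply predict that class.
        model = DummyClassifier(strategy="most_frequent")
    else:
        model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    models[c] = model.fit(X_train[mask], y_train[mask])

# Route each test point to the model of its nearest cluster centre.
assignments = clusterer.predict(X_test)
predictions = np.array([models[c].predict(x.reshape(1, -1))[0]
                        for c, x in zip(assignments, X_test)])
print("accuracy on synthetic data:", (predictions == y_test).mean())
```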
Low plasma testosterone and elevated carotid intima-media thickness: importance of low-grade inflammation in elderly men. | CONTEXT AND OBJECTIVE
An inverse correlation between plasma testosterone levels and carotid intima-media thickness (IMT) has been reported in men. We investigated whether this association could be mediated or modified by traditional cardiovascular risk factors as well as inflammatory status.
METHODS
In the Three-City population-based cohort study, 354 men aged 65 and over had available baseline data on hormone levels and carotid ultrasonography. Plasma concentrations of testosterone (total and bioavailable), estradiol and sex hormone-binding globulin (SHBG), together with cardiovascular risk factors, were measured. IMT at plaque-free sites and atherosclerotic plaques in the extracranial carotid arteries were determined using a standardized protocol. Multiple linear regression models were used to analyze this association and to test for interactions.
RESULTS
Analyses with and without adjustment for cardiovascular risk factors showed that carotid IMT was inversely and significantly correlated with total and bioavailable testosterone levels but not with SHBG and estradiol levels. This association depended on C-reactive protein (CRP) levels (p for interaction <0.05). Among men with low-grade inflammation (CRP ≥2 mg/L), mean IMT was higher in subjects with bioavailable testosterone ≤ 3.2 ng/mL than in those with bioavailable testosterone > 3.2 ng/mL (0.76 mm and 0.70 mm respectively, p < 0.01). By contrast, among men with CRP ≤ 2 mg/L, mean IMT was similar in both groups (0.72 mm and 0.71 mm respectively, p = 0.77). Similar results were found for total testosterone although not significant. No association was found between plasma hormones levels and atherosclerotic plaques.
CONCLUSION
In elderly men, low plasma testosterone is associated with elevated carotid intima-media thickness only in those with low-grade inflammation. Traditional risk factors have no mediator role. |
Risk, predictors, and mortality associated with non-AIDS events in newly diagnosed HIV-infected patients: role of antiretroviral therapy. | OBJECTIVE
We aimed to characterize non-AIDS events (NAEs) occurring in newly diagnosed HIV-infected patients in a contemporary cohort.
METHODS
The Cohort of the AIDS Research Network (CoRIS) is a prospective, multicenter cohort of HIV-infected adults antiretroviral naive at entry, established in 2004. We evaluated the incidence of and the mortality due to NAEs and AIDS events through October 2010. Poisson regression was used to investigate factors associated with a higher incidence of NAEs.
RESULTS
Overall, 5185 patients (13,306 person-years of follow-up), median age (interquartile range) 36 (29-43) years, participated in the study. A total of 86.5% of patients had been diagnosed in 2004 or later. The incidence rate of NAEs was 28.93 per 1000 person-years [95% confidence interval (CI) 26.15-32.07], and of AIDS-defining events 25.23 per 1000 person-years (95% CI 22.60-28.16). The most common NAEs were psychiatric, hepatic, malignant, renal, and cardiovascular related. After adjustment, age, higher HIV-viral load, and lower CD4 cell count at cohort entry were associated with the occurrence of NAEs, whereas the likelihood significantly decreased with sexual transmission and higher educational level. Additionally, antiretroviral therapy was inversely associated with the development of some NAEs, specifically of psychiatric [incidence rate ratio (95% CI) 0.54 (0.30-0.96)] and renal-related [incidence rate ratio (95% CI) 0.31 (0.13-0.72)] events. One hundred and seventy-three (3.33%) patients died during the study period. NAEs contributed to 28.9% of all deaths, with an incidence rate (95% CI) of 3.75 (2.84-4.94) per 1000 person-years.
CONCLUSION
In patients newly diagnosed with HIV infection, NAEs are a significant cause of morbidity and mortality. Our results suggest a protective effect of antiretroviral therapy in the occurrence of NAEs, in particular of psychiatric and renal-related events. |
A century of the phage: past, present and future | Viruses that infect bacteria (bacteriophages; also known as phages) were discovered 100 years ago. Since then, phage research has transformed fundamental and translational biosciences. For example, phages were crucial in establishing the central dogma of molecular biology — information is sequentially passed from DNA to RNA to proteins — and they have been shown to have major roles in ecosystems, and help drive bacterial evolution and virulence. Furthermore, phage research has provided many techniques and reagents that underpin modern biology — from sequencing and genome engineering to the recent discovery and exploitation of CRISPR–Cas phage resistance systems. In this Timeline, we discuss a century of phage research and its impact on basic and applied biology. |
Building a Chatbot with Serverless Computing | Chatbots are emerging as the newest platform used by millions of consumers worldwide due in part to the commoditization of natural language services, which provide developers with many building blocks to create chatbots inexpensively. However, it is still difficult to build and deploy chatbots. Developers need to handle the coordination of the cognitive services to build the chatbot interface, integrate the chatbot with external services, and worry about extensibility, scalability, and maintenance. In this work, we present the architecture and prototype of a chatbot using a serverless platform, where developers compose stateless functions together to perform useful actions. We describe our serverless architecture based on function sequences, and how we used these functions to coordinate the cognitive microservices in the Watson Developer Cloud to allow the chatbot to interact with external services. The serverless model improves the extensibility of our chatbot, which currently supports 6 abilities: location-based weather reports, jokes, date, reminders, and a simple music tutor. |
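To make the "compose stateless functions into sequences" idea concrete, here is a minimal sketch in plain Python; the function names and message format are hypothetical and do not correspond to the actual Watson Developer Cloud or serverless platform APIs.

```python
# Toy function sequence: each stage is a stateless function that takes a
# message dict and returns an (augmented) message dict.
def classify_intent(msg):
    msg["intent"] = "weather" if "weather" in msg["text"].lower() else "smalltalk"
    return msg

def fetch_weather(msg):
    if msg["intent"] == "weather":
        msg["reply"] = "It is sunny today."   # a real action would call an external service
    return msg

def fallback(msg):
    msg.setdefault("reply", "Sorry, I did not understand that.")
    return msg

def sequence(*functions):
    """Chain stateless functions so the output of one is the input of the next."""
    def run(msg):
        for f in functions:
            msg = f(msg)
        return msg
    return run

chatbot = sequence(classify_intent, fetch_weather, fallback)
print(chatbot({"text": "What is the weather like?"})["reply"])
```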
The OAD Survey-Taxonomy of General Traits 1 | The OAD Survey (Organization Analysis and Design) is an adjective-based organization diagnostic, selection, and development instrument comprising two matched questionnaires. Each questionnaire contains 110 identical adjectives. For the first set of 110 adjectives, respondents are asked to check those words which best describe themselves. For the second set of 110 items, respondents are asked to check those words that describe how they must behave in their current (or previous) job. |
Comparison and Combination of Ear and Face Images in Appearance-Based Biometrics | Researchers have suggested that the ear may have advantages over the face for biometric recognition. Our previous experiments with ear and face recognition, using the standard principal component analysis approach, showed lower recognition performance using ear images. We report results of similar experiments on larger data sets that are more rigorously controlled for relative quality of face and ear images. We find that recognition performance is not significantly different between the face and the ear, for example, 70.5 percent versus 71.6 percent, respectively, in one experiment. We also find that multimodal recognition using both the ear and face results in statistically significant improvement over either individual biometric, for example, 90.9 percent in the analogous experiment. |
Interrater reliability of a new handwriting assessment battery for adults. | OBJECTIVE
The objective of this study was to develop, pilot, and evaluate the interrater reliability of a new Handwriting Assessment Battery for adults.
DESIGN
Test development included item selection and interrater reliability involving two raters.
METHOD
The test assessed pen control and manipulation, writing speed, and writing legibility. Ten people with brain injury completed the test with two occupational therapists independently rating 10 writing samples. Results were analyzed for reliability using kappa and intraclass correlation coefficients (ICC2,1).
RESULTS
Pen control and manipulation subtests showed high to perfect agreement (line drawing subtest, kappa = 1.0; dot subtest, kappa = 0.80). The speed subtest showed perfect agreement (ICC= 1.0). Writing legibility showed high agreement for all five subtests (ICC = 0.71-0.83), although a ceiling effect was evident for two subtests.
CONCLUSION
Although the test showed excellent interrater reliability, further reliability and validity testing are needed before the test is used clinically. |
Direct control method applied for improved incremental conductance MPPT using SEPIC converter | Solar energy is a major alternative for power generation, supporting sustainable development with reduced greenhouse emissions. To improve the efficiency of MPPT in photovoltaic (PV) systems, this paper presents a technique that combines an improved incremental conductance (Inc Cond) MPPT algorithm with a direct control method using a SEPIC converter. Several improvements to the existing technique are proposed, covering converter design aspects, system simulation, and DSP programming. For the control part, a dsPIC30F2010 is programmed to obtain the maximum power point for different illuminations; the DSP controller also interfaces the PV array with the load. The improved Inc Cond method obtains point-to-point values accurately to track the MPP under different atmospheric conditions. MATLAB and Simulink were employed for simulation studies and validation of the proposed technique. Experimental results demonstrate the improvement over the existing method. |
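The decision rule at the core of incremental conductance MPPT with direct duty-cycle control can be sketched as follows; the step size is a placeholder and the sign convention assumes that increasing the SEPIC duty cycle lowers the PV operating voltage, so this is an illustrative outline rather than the paper's exact DSP implementation.

```python
# Incremental-conductance decision rule with direct duty-cycle control.
# A real controller would read v and i from the converter's ADCs each period.
def inc_cond_duty_update(v, i, v_prev, i_prev, duty, step=0.005):
    """Return the updated duty cycle after one MPPT iteration."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return duty                     # operating point unchanged: stay put
        # Irradiance changed: move toward the new MPP.
        return duty - step if di > 0 else duty + step
    # At the MPP dP/dV = 0, which is equivalent to dI/dV = -I/V.
    incremental, instantaneous = di / dv, -i / v
    if incremental == instantaneous:
        return duty                         # already at the MPP
    if incremental > instantaneous:
        return duty - step                  # left of the MPP: raise the PV voltage
    return duty + step                      # right of the MPP: lower the PV voltage
```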
If Your Bug Database Could Talk | We have mined the Eclipse bug and version databases to map failures to Eclipse components. The resulting data set lists the defect density of all Eclipse components. As we demonstrate in three simple experiments, the bug data set can be easily used to relate code, process, and developers to defects. The data set is publicly available for download. |
Overlapping, Rare Examples and Class Decomposition in Learning Classifiers from Imbalanced Data | This paper deals with inducing classifiers from imbalanced data, where one class (a minority class) is under-represented in comparison to the remaining classes (majority classes). The minority class is usually of primary interest and it is required to recognize its members as accurately as possible. Class imbalance constitutes a difficulty for most algorithms learning classifiers as they are biased toward the majority classes. The first part of this study is devoted to discussing the main properties of data that cause this difficulty. Following the review of earlier, related research, several types of artificial, imbalanced data sets affected by critical factors have been generated. Decision trees and rule-based classifiers have been generated from these data sets. Results of the first experiments show that a too small number of examples from the minority class is not the main source of difficulty. These results confirm the initial hypothesis that the degradation of classification performance is more related to the decomposition of the minority class into small sub-parts. Another critical factor concerns the presence of a relatively large number of borderline examples from the minority class in the overlapping region between classes, in particular for non-linear decision boundaries. A novel observation is the impact of rare examples from the minority class located inside the majority class. The experiments show that stepwise increasing the number of borderline and rare examples in the minority class has a larger influence on the considered classifiers than increasing the decomposition of this class. The second part of this paper is devoted to studying an improvement of classifiers by pre-processing of such data with resampling methods. Subsequent experiments examine the influence of the identified critical data factors on the performance of 4 different pre-processing re-sampling methods: two versions of random over-sampling, focused under-sampling NCR and the hybrid method SPIDER. Results show that if the data are sufficiently disturbed by borderline and rare examples, SPIDER and partly NCR work better than over-sampling. |
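For concreteness, the snippet below implements plain random over-sampling of the minority class, one of the baseline re-sampling methods compared in the paper; the more elaborate NCR and SPIDER procedures are not reproduced here, and the arrays are placeholders.

```python
# Plain random over-sampling of the minority class (a baseline method only).
import numpy as np

def random_oversample(X, y, minority_label, rng=None):
    """Duplicate randomly chosen minority examples until classes are balanced."""
    rng = np.random.default_rng(rng)
    minority_idx = np.flatnonzero(y == minority_label)
    majority_count = np.sum(y != minority_label)
    extra = majority_count - minority_idx.size
    if extra <= 0:
        return X, y                       # already balanced (or minority is larger)
    picks = rng.choice(minority_idx, size=extra, replace=True)
    X_bal = np.vstack([X, X[picks]])
    y_bal = np.concatenate([y, y[picks]])
    return X_bal, y_bal
```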
An RFID attendance and monitoring system for university applications | The main objective of this paper is to enhance the university's monitoring system, taking into account factors such as reliability, time savings, and ease of control. The proposed system consists of a mobile RFID solution in a logical context. The system prototype and its small-scale application were a complete success. However, the more practical phase will not be immediately ready because a large setup is required and a part of the existing system has to be completely disabled. Some software modifications to the RFID system can easily be made to prepare it for a new application. In this paper, advantages and disadvantages of the proposed RFID system will be presented. |
Piaget's Constructivism, Papert's Constructionism: What's the Difference? | What is the difference between Piaget's constructivism and Papert's "constructionism"? Beyond the mere play on words, I think the distinction holds, and that integrating both views can enrich our understanding of how people learn and grow. Piaget's constructivism offers a window into what children are interested in, and able to achieve, at different stages of their development. The theory describes how children's ways of doing and thinking evolve over time, and under which circumstances children are more likely to let go of, or hold onto, their currently held views. Piaget suggests that children have very good reasons not to abandon their worldviews just because someone else, be it an expert, tells them they're wrong. Papert's constructionism, in contrast, focuses more on the art of learning, or 'learning to learn', and on the significance of making things in learning. Papert is interested in how learners engage in a conversation with [their own or other people's] artifacts, and how these conversations boost self-directed learning, and ultimately facilitate the construction of new knowledge. He stresses the importance of tools, media, and context in human development. Integrating both perspectives illuminates the processes by which individuals come to make sense of their experience, gradually optimizing their interactions with the world. |
An improved categorization of classifier's sensitivity on sample selection bias | A recent paper categorizes classifier learning algorithms according to their sensitivity to a common type of sample selection bias where the chance of an example being selected into the training sample depends on its feature vector x but not (directly) on its class label y. A classifier learner is categorized as "local" if it is insensitive to this type of sample selection bias, otherwise, it is considered "global". In that paper, the true model is not clearly distinguished from the model that the algorithm outputs. In their discussion of Bayesian classifiers, logistic regression and hard-margin SVMs, the true model (or the model that generates the true class label for every example) is implicitly assumed to be contained in the model space of the learner, and the true class probabilities and model estimated class probabilities are assumed to asymptotically converge as the training data set size increases. However, in the discussion of naive Bayes, decision trees and soft-margin SVMs, the model space is assumed not to contain the true model, and these three algorithms are instead argued to be "global learners". We argue that most classifier learners may or may not be affected by sample selection bias; this depends on the dataset as well as the heuristics or inductive bias implied by the learning algorithm and their appropriateness to the particular dataset. |
Inducing semantic relations from conceptual spaces: A data-driven approach to plausible reasoning | Commonsense reasoning patterns such as interpolation and a fortiori inference have proven useful for dealing with gaps in structured knowledge bases. An important difficulty in applying these reasoning patterns in practice is that they rely on fine-grained knowledge of how different concepts and entities are semantically related. In this paper, we show how the required semantic relations can be learned from a large collection of text documents. To this end, we first induce a conceptual space from the text documents, using multi-dimensional scaling. We then rely on the key insight that the required semantic relations correspond to qualitative spatial relations in this conceptual space. Among others, in an entirely unsupervised way, we identify salient directions in the conceptual space which correspond to interpretable relative properties such as 'more fruity than' (in a space of wines), resulting in a symbolic and interpretable representation of the conceptual space. To evaluate the quality of our semantic relations, we show how they can be exploited by a number of commonsense reasoning based classifiers. We experimentally show that these classifiers can outperform standard approaches, while being able to provide intuitive explanations of classification decisions. A number of crowdsourcing experiments provide further insights into the nature of the extracted semantic relations. |
Internet of Things to Smart IoT Through Semantic, Cognitive, and Perceptual Computing | Rapid growth in the Internet of Things (IoT) has resulted in a massive growth of data generated by these devices and sensors put on the Internet. Physical-cyber-social (PCS) big data consist of this IoT data, complemented by relevant Web-based and social data of various modalities. Smart data is about exploiting this PCS big data to get deep insights and make it actionable, and making it possible to facilitate building intelligent systems and applications. This article discusses key AI research in semantic computing, cognitive computing, and perceptual computing. Their synergistic use is expected to power future progress in building intelligent systems and applications for rapidly expanding markets in multiple industries. Over the next two years, this column on IoT will explore many challenges and technologies on intelligent use and applications of IoT data. |
Taxonomy-based job recommender systems on Facebook and LinkedIn profiles | This paper presents taxonomy-based recommender systems that propose relevant jobs to Facebook and LinkedIn users. They are being developed by Work4, a San Francisco-based software company and the Global Leader in Social and Mobile Recruiting, which offers Facebook recruitment solutions. To use its applications, Facebook or LinkedIn users explicitly grant access to some parts of their data, and they are presented with the jobs whose descriptions best match their profiles. In this paper, we use the O*NET-SOC taxonomy, a taxonomy that defines the set of occupations across the world of work, to develop a new taxonomy-based vector model for social network users and job descriptions suited to the task of job recommendation; we propose two similarity functions based on the fuzzy-logic AND and OR operators, suited to the proposed vector model. We compare the performance of our proposed vector model to the TF-IDF model using our proposed similarity functions and the classic heuristic measures; the results show that the taxonomy-based vector model outperforms the TF-IDF model. We then use SVMs (Support Vector Machines) with a mechanism to handle unbalanced datasets to learn similarity functions from our data; the learnt models yield better results than heuristic similarity measures. The comparison of our methods to two methods from the literature (a matrix factorization method and the Collaborative Topic Regression) shows that our best method yields better results than those two methods in terms of AUC. The proposed taxonomy-based vector model leads to an efficient dimensionality reduction method in the task of job recommendation. |
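As a hedged illustration of taxonomy-based matching, the sketch below scores a user profile against a job description represented as weight vectors over taxonomy occupations, using the textbook fuzzy AND (min) and OR (max) operators; the normalizations and weights are assumptions made for the example, not the paper's exact similarity functions.

```python
# Illustrative fuzzy similarity between taxonomy weight vectors.
import numpy as np

user_profile = np.array([0.8, 0.1, 0.4, 0.0])   # weights over 4 taxonomy occupations
job_profile  = np.array([0.6, 0.0, 0.5, 0.2])

def fuzzy_and_similarity(u, v):
    # Overlap of the two fuzzy sets relative to the smaller one (min = fuzzy AND).
    return np.minimum(u, v).sum() / min(u.sum(), v.sum())

def fuzzy_or_similarity(u, v):
    # Overlap relative to the union of the two fuzzy sets (max = fuzzy OR).
    return np.minimum(u, v).sum() / np.maximum(u, v).sum()

print(fuzzy_and_similarity(user_profile, job_profile))
print(fuzzy_or_similarity(user_profile, job_profile))
```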
An Exact Method for the Determination of Differential Leakage Factors in Electrical Machines With Non-Symmetrical Windings | An exact and simple method for the determination of differential leakage factors in polyphase ac electrical machines with non-symmetrical windings is presented in this paper. The method relies on the properties of Görges polygons that are used to transform an infinite series expressing the differential leakage factor into a finite sum in order to significantly simplify the calculations. Some examples are shown and discussed in order to practically demonstrate the effectiveness of the proposed method. |
Sexual behaviour and condom use among university students in Madagascar. | Although the number of known HIV-infected students in Madagascar increased significantly between 1989 and 1995, very little is known about student behaviour with regard to AIDS. The study objectives were: to describe Malagasy students' sexual behaviour and condom use; to document students' perceptions about condoms; and to study the relationships between students' socio-demographic characteristics, their perceptions about condoms, and their condom use. The survey used a cross-sectional design and was conducted at the Antananarivo's university campus sites. Anonymous questionnaires were self-administered to 320 randomly selected students. Descriptive statistics and 95% confidence intervals were calculated. Logistic regressions were performed to identify the predictors of condom use. Participants' average age was 24 years. Approximately 80% of the participants reported sexual experiences, and the average age at sexual debut was 19 years. Only 5.7% reported consistent condom use. Common reasons for non-use were steady relationships (75.6%), the perception that condoms were useful only during ovulation periods (8.7%), and the decrease of pleasure (6.4%). The predictors of condom use were male gender, and the perception that condoms were useful during ovulation periods. Risky sexual behaviours with regard to AIDS were prevalent in this community. An HIV prevention programme is recommended. |
A Generic Approach for Extracting Aspects and Opinions of Arabic Reviews | New opportunities and challenges arise with the growing availability of online Arabic reviews. Sentiment analysis of these reviews can help the beneficiary by summarizing the opinions of others about entities or events. Also, for opinions to be comprehensive, analysis should be provided for each aspect or feature of the entity. In this paper, we propose a generic approach that extracts the entity aspects and their attitudes for reviews written in modern standard Arabic. The proposed approach does not exploit predefined sets of features or a domain ontology hierarchy. Instead, we add sentiment tags to the patterns and roots of an Arabic lexicon and use these tags to extract the opinion-bearing words and their polarities. The proposed system is evaluated at the entity level using two datasets: 500 movie reviews, with 96% accuracy, and 1000 restaurant reviews, with 86.7% accuracy. Then the system is evaluated at the aspect level using 500 Arabic reviews in different domains (Novels, Products, Movies, Football game events and Hotels). It extracted aspects at 80.8% recall and 77.5% precision with respect to the aspects defined by domain experts. |
Multiwalled carbon nanotube-induced gene signatures in the mouse lung: potential predictive value for human lung cancer risk and prognosis. | Concerns over the potential for multiwalled carbon nanotubes (MWCNT) to induce lung carcinogenesis have emerged. This study sought to (1) identify gene expression signatures in the mouse lungs following pharyngeal aspiration of well-dispersed MWCNT and (2) determine if these genes were associated with human lung cancer risk and progression. Genome-wide mRNA expression profiles were analyzed in mouse lungs (n = 160) exposed to 0, 10, 20, 40, or 80 μg of MWCNT by pharyngeal aspiration at 1, 7, 28, and 56 d postexposure. By using pairwise statistical analysis of microarray (SAM) and linear modeling, 24 genes were selected, which have significant changes in at least two time points, have a more than 1.5-fold change at all doses, and are significant in the linear model for the dose or the interaction of time and dose. Additionally, a 38-gene set was identified as related to cancer from 330 genes differentially expressed at d 56 postexposure in functional pathway analysis. Using the expression profiles of the cancer-related gene set in 8 mice at d 56 postexposure to 10 μg of MWCNT, a nearest centroid classification accurately predicts human lung cancer survival with a significant hazard ratio in training set (n = 256) and test set (n = 186). Furthermore, both gene signatures were associated with human lung cancer risk (n = 164) with significant odds ratios. These results may lead to development of a surveillance approach for early detection of lung cancer and prognosis associated with MWCNT in the workplace. |
Flexible Radio Access Beyond 5G: A Future Projection on Waveform, Numerology, and Frame Design Principles | To address the vast variety of user requirements, applications, and channel conditions, flexibility support is strongly highlighted for 5G radio access technologies (RATs). For this purpose, usage of multiple orthogonal frequency division multiplexing (OFDM) numerologies, i.e., different parameterization of OFDM-based subframes, within the same frame has been proposed in the third-generation partnership project discussions for 5G new radio. This concept will likely meet the current expectations in multiple service requirements to some extent. However, since the quantity of wireless devices, applications, and heterogeneity of user requirements will keep increasing toward the next decade, the sufficiency of the aforementioned flexibility consideration remains quite disputable for future services. Therefore, novel RATs facilitating much more flexibility are needed to address various technical challenges, e.g., power efficiency, massive connectivity, latency, spectral efficiency, robustness against channel dispersions, and so on. In this paper, we discuss the potential directions to achieve further flexibility in RATs beyond 5G, such as future releases of 5G and 6G. In this context, a framework for developing flexible waveform, numerology, and frame design strategies is proposed along with sample methods. We also discuss their potential role to handle various upper-level system issues, including the ones in orthogonal and nonorthogonal multiple accessing schemes and cellular networks. By doing so, we aim to contribute to the future vision of designing flexible RATs and to point out the possible research gaps in the related fields. |
Non-locally Enhanced Encoder-Decoder Network for Single Image De-raining | Single image rain streaks removal has recently witnessed substantial progress due to the development of deep convolutional neural networks. However, existing deep learning based methods either focus on the entrance and exit of the network by decomposing the input image into high and low frequency information and employing residual learning to reduce the mapping range, or focus on the introduction of cascaded learning scheme to decompose the task of rain streaks removal into multi-stages. These methods treat the convolutional neural network as an encapsulated end-to-end mapping module without deepening into the rationality and superiority of neural network design. In this paper, we delve into an effective end-to-end neural network structure for stronger feature expression and spatial correlation learning. Specifically, we propose a non-locally enhanced encoder-decoder network framework, which consists of a pooling indices embedded encoder-decoder network to efficiently learn increasingly abstract feature representation for more accurate rain streaks modeling while perfectly preserving the image detail. The proposed encoder-decoder framework is composed of a series of non-locally enhanced dense blocks that are designed to not only fully exploit hierarchical features from all the convolutional layers but also well capture the long-distance dependencies and structural information. Extensive experiments on synthetic and real datasets demonstrate that the proposed method can effectively remove rain-streaks on rainy image of various densities while well preserving the image details, which achieves significant improvements over the recent state-of-the-art methods. |
Evaluation of two commercial omalizumab/free IgE immunoassays: implications of use during therapy. | BACKGROUND
The anti-IgE monoclonal antibody, omalizumab, is approved in the US as add-on therapy for patients ≥12 years of age with moderate-to-severe persistent allergic asthma. Omalizumab is administered according to the US Food and Drug Administration approved dosing table included in the prescribing information. The dosing table was developed using Genentech's free IgE assay and is designed to achieve free serum IgE levels of <50 ng/mL, known to be associated with clinical benefit. Lack of clinical benefit in a subset of patients on omalizumab has prompted demand for commercial free IgE assays to guide omalizumab dosing. To date, two commercial free IgE assays marketed by ViraCor-IBT (no longer offered) and BioTeZ have been available to physicians.
OBJECTIVE
This study compares the results generated from the two commercial free IgE assays with the free IgE levels generated by the Genentech assay.
METHODS
Two serum sample sets were prepared using 20 samples from patients with a wide range of IgE and omalizumab from an omalizumab clinical trial and 36 samples from omalizumab-naïve patients. Different amounts of omalizumab were added to the 36 omalizumab-naïve samples based on measured total IgE levels to ensure that a good range of IgE and omalizumab was represented in the study samples. Samples were randomized for blinded analysis of free IgE levels using the Genentech, ViraCor-IBT and BioTeZ free serum IgE assays. Analysis of samples in the ViraCor-IBT assay was conducted by ViraCor-IBT, and analysis of samples using the Genentech and BioTeZ assay methods was conducted by a third-party contract research organization.
RESULTS
The ViraCor-IBT and BioTeZ free IgE assays demonstrated significantly higher free IgE levels than the Genentech free IgE assay. Twenty-nine of 56 samples tested <50 ng/mL in the Genentech assay; of these, 12/29 (41%) and 20/29 (69%) tested >50 ng/mL in the BioTeZ and ViraCor-IBT assays, respectively. In the BioTeZ free IgE evaluations, 11/20 samples that were re-tested had inter-assay differences ranging from 40-190%.
CONCLUSIONS
Free ligand (such as IgE) measurements are challenging and dependent on the method and reagents used. The Viracor-IBT and BioTeZ methods tend to over-estimate free serum IgE levels compared with the Genentech free IgE assay. Using these assays to monitor therapy and adjust omalizumab doses post treatment is considered off-label use and could lead to a potential risk for unnecessary treatment and/or risk to patient safety. |
Impossibility of Distributed Consensus with One Faulty Process | The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem. |
The ADHD Concomitant Difficulties Scale (ADHD-CDS), a Brief Scale to Measure Comorbidity Associated to ADHD | INTRODUCTION
Although the critical feature of attention-deficit/hyperactivity disorder (ADHD) is a persistent pattern of inattention and/or hyperactivity/impulsivity behavior, the disorder is clinically heterogeneous, and concomitant difficulties are common. Children with ADHD are at increased risk for experiencing lifelong impairments in multiple domains of daily functioning. In the present study we aimed to build a brief ADHD impairment-related tool, the ADHD Concomitant Difficulties Scale (ADHD-CDS), to assess the presence of some of the most important comorbidities that usually appear associated with ADHD, such as emotional/motivational management, fine motor coordination, problem-solving/management of time, disruptive behavior, sleep habits, academic achievement and quality of life. The two main objectives of the study were (i) to discriminate those profiles with several and important ADHD functional difficulties and (ii) to create a brief clinical tool that fosters a comprehensive evaluation process and can be easily used by clinicians.
METHODS
The total sample included 399 parents of children with ADHD aged 6-18 years (M = 11.65; SD = 3.1; 280 males) and 297 parents of children without a diagnosis of ADHD (M = 10.91; SD = 3.2; 149 male). The scale construction followed an item improved sequential process.
RESULTS
Factor analysis showed a 13-item single-factor model with good fit indices. Higher scores on inattention predicted higher scores on ADHD-CDS for both the clinical sample (β = 0.50; p < 0.001) and the whole sample (β = 0.85; p < 0.001). The ROC curve for the ADHD-CDS (against the ADHD diagnostic status) gave an area under the curve (AUC) of .979 (95% CI = [0.969, 0.990]).
DISCUSSION
The ADHD-CDS has shown preliminary adequate psychometric properties, with high convergent validity and good sensitivity for different ADHD profiles, which makes it a potentially appropriate and brief instrument that may be easily used by clinicians, researchers, and health professionals in dealing with ADHD. |
Saving face on Facebook: privacy concerns, social benefits, and impression management | Use of online social networks is nearly ubiquitous. Use of these services generally entails substantial personal disclosure and elicits significant privacy concerns. This research uses Social Exchange Theory and the impression management literature to examine how privacy concerns can be counterbalanced by the perceived social benefits afforded by a social network’s ability to support impression management. We frame social network use as an attempt to engage in impression management, and we highlight the importance of a social network’s impression management capabilities in predicting social benefits from, and use of, a social network. We test our model with a sample of 244 Facebook users, finding strong support for the proposed relationships. Our theory has important implications for researchers and practitioners interested in privacy issues within social networks. |
Quantify-me: Consumer Acceptance of Wearable Self-tracking Devices | The usage of wearable self-tracking technology has recently emerged as a new big trend in lifestyle and personal optimization in terms of health, fitness and well-being. Currently, only little is known about why people plan or start using such devices. Thus, in our research project, we aim at answering the question of what drives the usage intention of wearable self-tracking technology. Therefore, based on established technology acceptance theories, we deductively develop an acceptance model for wearable self-tracking technologies which sheds light on the pre-adoption criteria of such devices. We validate our proposed model by means of structural equation modeling using empirical data collected in a survey among 206 potential users. Our study identifies perceived usefulness, perceived enjoyment, social influence, trust, personal innovativeness, and perceived support of well-being as the strongest drivers for the intention to use wearable self-tracking technologies. By accounting for the influence of the demographic factors age and gender, we provide a further refined picture. |
Drone to the Rescue: Relay-Resilient Authentication using Ambient Multi-sensing | Many mobile and wireless authentication systems are prone to relay attacks whereby two non co-presence colluding entities can subvert the authentication functionality by simply relaying the data between a legitimate prover (P) and verifier (V). Examples include payment systems involving NFC and RFID devices, and zero-interaction token-based authentication approaches. Utilizing the contextual information to determine P-V proximity, or lack thereof, is a recently proposed approach to defend against relay attacks. Prior work considered WiFi, Bluetooth, GPS and Audio as different contextual modalities for the purpose of relay-resistant authentication. In this paper, we explore purely ambient physical sensing capabilities to address the problem of relay attacks in authentication systems. Specifically, we consider the use of four new sensor modalities, ambient temperature, precision gas, humidity, and altitude, for P-V proximity detection. Using an off-the-shelf ambient sensing platform, called Sensordrone, connected to Android devices, we show that combining these different modalities provides a robust proximity detection mechanism, yielding very low false positives (security against relay attacks) and very low false negatives (good usability). Such use of multiple ambient sensor modalities offers unique security advantages over traditional sensors (WiFi, Bluetooth, GPS or Audio) because it requires the attacker to simultaneously manipulate the multiple characteristics of the physical environment. |
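A minimal sketch of the multi-modality co-presence check is shown below; the modalities follow the abstract, but the thresholds, readings, and majority rule are hypothetical values chosen for illustration rather than the parameters used in the paper.

```python
# Illustrative co-presence check combining several ambient modalities.
MODALITY_THRESHOLDS = {
    "temperature_c": 1.0,   # maximum allowed difference per modality (placeholder)
    "humidity_pct": 5.0,
    "altitude_m": 3.0,
    "gas_ppm": 50.0,
}

def co_present(prover_ctx, verifier_ctx, min_agreeing=3):
    """Accept only if enough modalities agree between prover and verifier."""
    agreeing = 0
    for modality, threshold in MODALITY_THRESHOLDS.items():
        if modality in prover_ctx and modality in verifier_ctx:
            if abs(prover_ctx[modality] - verifier_ctx[modality]) <= threshold:
                agreeing += 1
    return agreeing >= min_agreeing

# Readings relayed from a distant location typically disagree on several
# modalities at once, so a relayed transaction would be rejected.
prover = {"temperature_c": 21.3, "humidity_pct": 40.0, "altitude_m": 120.0, "gas_ppm": 410.0}
verifier = {"temperature_c": 21.1, "humidity_pct": 42.0, "altitude_m": 121.0, "gas_ppm": 395.0}
print(co_present(prover, verifier))   # True: all four modalities agree here
```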
A cluster-randomized controlled trial to study the effectiveness of a protocol-based lifestyle program to prevent type 2 diabetes in people with impaired fasting glucose | BACKGROUND
Effective diabetes prevention strategies are needed that can be implemented in daily practice without large amounts of money or personnel. The Dutch Diabetes Federation developed a protocol for coaching people with impaired fasting glucose (IFG; according to WHO criteria: 6.1 to 6.9 mmol/l) to a sustainable healthy lifestyle change: 'the road map towards diabetes prevention' (abbreviated Road Map: RM). This protocol is applied within a primary health care setting by a general practitioner and a practice nurse. The feasibility and (cost-)effectiveness of care provided according to the RM protocol will be evaluated.
METHODS/DESIGN
A cluster randomised clinical trial is performed, with randomisation at the level of the general practices. Both opportunistic screening and active case finding took place among clients with high risk factors for diabetes. After IFG is diagnosed, motivated people in the intervention practices receive 3-4 consultations by the practice nurse within one year. During these consultations they are coached to increase the level of physical activity and healthy dietary habits. If necessary, participants are referred to a dietician, physiotherapist, lifestyle programs and/or local sports activities. The control group receives care as usual. The primary outcome measure in this study is change in Body Mass Index (BMI). Secondary outcome measures are waist circumference, physical activity, total and saturated fat intake, systolic blood pressure, blood glucose, total cholesterol, HDL cholesterol, triglycerides and behaviour determinants like risk perception, perceived knowledge and motivation. Based on a sample size calculation 120 people in each group are needed. Measurements are performed at baseline, and after one (post-intervention) and two years follow up. Anthropometrics and biochemical parameters are assessed in the practices and physical activity, food intake and their determinants by a validated questionnaire. The cost-effectiveness is estimated by using the Chronic Disease Model (CDM). Feasibility will be tested by interviews among health care professionals.
DISCUSSION
The results of the study will provide valuable information for both health care professionals and policy makers. If this study shows the RM to be both effective and cost-effective the protocol can be implemented on a large scale.
TRIAL REGISTRATION
ISRCTN41209683. Ethical approval number: NL31342.075.10. |
1 Kernels for graphs | This chapter discusses the construction of kernel functions between labeled graphs. We provide a unified account of a family of kernels called label sequence kernels that are defined via label sequences generated by graph traversal. For cyclic graphs, dynamic programming techniques cannot simply be applied, because the kernel is based on an infinite dimensional feature space. We show that the kernel computation boils down to obtaining the stationary state of a discrete-time linear system, which is efficiently performed by solving simultaneous linear equations. Promising empirical results are presented in classification of chemical compounds. |
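To illustrate the "stationary state via simultaneous linear equations" point, the sketch below computes a generic random-walk-style graph kernel over the direct product graph by solving a linear system with NumPy; the decay factor and the small example graphs are placeholders, and this generic formulation is not the chapter's exact label sequence kernel.

```python
# Generic random-walk graph kernel computed by solving a linear system
# instead of summing an infinite series or inverting a matrix explicitly.
import numpy as np

def random_walk_kernel(A1, A2, decay=0.1):
    """k(G1, G2) = 1^T (I - decay * Ax)^{-1} 1 over the direct product graph."""
    Ax = np.kron(A1, A2)                        # adjacency of the product graph
    n = Ax.shape[0]
    ones = np.ones(n)
    # Solve (I - decay * Ax) x = 1; the decay must keep the series convergent.
    x = np.linalg.solve(np.eye(n) - decay * Ax, ones)
    return ones @ x

# Two small unlabeled graphs: a triangle and a path of length 2.
A_triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
A_path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(random_walk_kernel(A_triangle, A_path))
```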
IR night vision video-based estimation of heart and respiration rates | With the aging of the population comes an increased incidence of chronic diseases affecting the cardiac and respiratory systems. Monitoring of these chronic conditions at home via family members or in institutions via healthcare providers is usually adequate during the day. Non-intrusive video-based monitoring approaches have been proposed using optical cameras, whose performance significantly deteriorates in low light conditions. This paper proposes the use of infrared night vision cameras to monitor the heart and respiration rates in low light conditions and in complete darkness. An infrared camera in conjunction with a video magnification method is used to capture and analyze the video of subjects in dark conditions. To validate the extracted heart rate, a finger photoplethysmograph (PPG) device that can display the real-time heart rate was used. To validate the respiration rate, a BioHarness chest strap was used. The proposed framework was tested on different sizes of regions of interest (ROIs) and different distances between the subject and the camera. A post-processing procedure was applied to the video magnification signal to reduce noise. To characterize and rule out artifacts, an experiment on inanimate objects was also conducted. Results indicate that the non-intrusive approach based on infrared night vision cameras and the video magnification method can accurately extract heart and respiration rates, and can be used for continuous healthcare monitoring at night. |
Phase I and pharmacokinetics study of crotoxin (cytotoxic PLA(2), NSC-624244) in patients with advanced cancer. | A Phase I clinical trial was performed on patients with solid tumors refractory to conventional therapy. Crotoxin was administered i.m. for 30 consecutive days at doses ranging from 0.03 to 0.22 mg/m(2). Patients entered the study after providing a written informed consent. Although 26 patients were entered only 23 were evaluated. Reversible, nonlimiting neuromuscular toxicity evidenced as diplopia because of pareses of the external ocular muscles was present in 13 patients. It started at doses of 0.18 mg/m(2) and lasted from 2 to 6 h. These episodes did not require dose adjustment and disappeared in 1-3 weeks of treatment. Three patients experienced palpebral ptosis, nystagmus (grade 2), and anxiety (grade 2-3) at the dose-limiting toxicity of 0.22 mg/m(2). Also at dose-limiting toxicity, 1 patient showed nystagmus (grade 2) and anxiety (grade 3) without evidence of palpebral ptosis. Transient increases (grades 1-3) in the levels of creatinine kinase, aspartate aminotransferase, and alanine transaminase attributed to crotoxin myotoxicity were observed but returned to normal by the last week of treatment. At 0.21 mg/m(2) there was a case of grade-3 anaphylactic reaction on day 31, which required treatment. Hypersensitivity was regarded as an adverse drug-related reaction, and the patient was removed from the protocol. Two patients at different doses (0.12 mg/m(2) and 0.22 mg/m(2)) had sialorrhea. Four patients had asymptomatic transient increase in blood pressure (up to 20 mm Hg) 12 h after the first injection, which lasted 24 h. No treatment was required and toxicity did not reappear. Six patients experienced slight eosinophilia during the first 2 weeks. The maximum tolerated dose was set at 0.21 mg/m(2). Objective measurable partial responses (>50% reduction of tumor mass) were noted in 2 patients treated at 0.21 mg/m(2) and 1 at 0.12 mg/m(2). One patient (at 0.21 mg/m(2)) presented a complete response on day 110. Crotoxin pharmacokinetics showed rapid absorption from the injection site to blood (t(1/2 A) = 5.2 +/- 0.6 min). Plasma concentration reached a peak (C(max) = 0.79 +/- 0.1 ng/ml) at tau(max) = 19 +/- 3 min. The half-life of the distribution (alpha) phase is 22 +/- 2 min. Starting at 1.5 h after injection, the decrease in plasma concentration becomes slower, reaching 14 +/- 3 pg/ml 24 h after injection. The profile is dominated by the elimination (beta) phase with a half-life of 5.2 +/- 0.6 h. Consequently, 24 h after the injection ( approximately 5 half-life) 97% of the product was eliminated. The area under plasma concentration versus time curve was 0.19 +/- 0.05 microg/min/ml. Assuming availability (F) approximately 1, the clearance is C(L) = 26.3 +/- 7 ml/min, and the apparent volume of distribution is V(d) = 12 +/- 3 liter/kg. The recommended dose for a Phase II study is 0.18 mg/m(2). |
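A back-of-the-envelope check of the reported elimination figure can be made from the β-phase half-life alone; the arithmetic below ignores the absorption and distribution phases, which is why the rough estimate lands at about 96% rather than exactly 97%.

```latex
\[
  n = \frac{t}{t_{1/2,\beta}} = \frac{24\ \mathrm{h}}{5.2\ \mathrm{h}} \approx 4.6,
  \qquad
  \text{fraction remaining} = \left(\tfrac{1}{2}\right)^{4.6} \approx 0.04
\]
```

That is, roughly 96-97% of the dose has been eliminated 24 h after injection, consistent with the value quoted above.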
On the stability of discrete-time active damping methods for VSI converters with a LCL input filter | The use of LCL filters with VSI converters is interesting since they present good attenuation of current ripple at high frequencies. Nevertheless, they also present a high resonance peak which can cause undesired oscillations, and even instability problems. Passive and Active Damping are methods which try to reduce those resonance effects. In this paper, Passive Damping is briefly reviewed and Active Damping is formally analyzed in continuous-time and, specially, in discrete-time. It will be shown that the one period delay included in sampled-data control systems and the grid impedance play a very important role in stability conditions. The presented results are useful to design discrete-time Active Damping controllers. Moreover, important implications for higher level controls can be deduced since the Active Damping is the first control loop in the power converter's controller. |
CoRR: a computing research repository | This paper describes the decisions by which the Association for Computing Machinery integrated good features from the Los Alamos e-print (physics) archive and from Cornell University's Networked Computer Science Technical Reference Library to form their own open, permanent, online "computing research repository" (CoRR). Submitted papers are not refereed and anyone can browse and extract CoRR material for free, so CoRR's eventual success could revolutionize computer science publishing. But several serious challenges remain: some journals forbid online preprints, the CoRR user interface is cumbersome, submissions are only self-indexed (no professional library staff manages the archive), and long-term funding is uncertain. |
On the Unfairness of Blockchain | The success of Bitcoin largely relies on the perception of a fair underlying peer-to-peer protocol: blockchain. Fairness here essentially means that the reward (in bitcoins) given to any participant that helps maintain the consistency of the protocol by mining, is proportional to the computational power devoted by that participant to the mining task. Without such perception of fairness, honest miners might be disincentivized to maintain the protocol, leaving the space for dishonest miners to reach a majority and jeopardize the consistency of the entire system. We prove, in this paper, that blockchain is actually unfair, even in a distributed system of only two honest miners. In a realistic setting where message delivery is not instantaneous, the ratio between the (expected) number of blocks committed by two miners is at least exponential in the product of the message delay and the difference between the two miners’ hashrates. To obtain our result, we model the growth of blockchain, which may be of independent interest. We also apply our result to explain recent empirical observations and vulnerabilities. |
The management of penile Mondor's phlebitis: superficial dorsal penile vein thrombosis. | Superficial dorsal penile vein thrombosis was diagnosed 8 times in 7 patients between 19 and 40 years old (mean age 27 years). All patients related the onset of the thrombosis to vigorous sexual intercourse. No other etiological medications, drugs or constricting devices were implicated. Three patients were treated acutely with anti-inflammatory medications, while 4 were managed expectantly. The mean interval to resolution of symptoms was 7 weeks. Followup ranged from 3 to 30 months (mean 11) at which time all patients noticed normal erectile function. Only 1 patient had recurrent thrombosis 3 months after the initial episode, again related to intercourse. We conclude that this is a benign self-limited condition. Anti-inflammatory agents are useful for acute discomfort but they do not affect the rate of resolution. |
JML: notations and tools supporting detailed design in Java | JML is a notation for specifying the detailed design of Java classes and interfaces. JML’s assertions are stated using a slight extension of Java’s expression syntax. This should make it easy to use. Tools for JML aid in static analysis, verification, and run-time debugging of Java code. |
Stochastic models underlying Croston's method for intermittent demand forecasting | Croston's method is widely used to predict inventory demand when it is intermittent. However, it is an ad hoc method with no properly formulated underlying stochastic model. In this paper, we explore possible models underlying Croston's method and three related methods, and we show that any underlying model will be inconsistent with the properties of intermittent demand data. However, we find that the point forecasts and prediction intervals based on such underlying models may still be useful. [JEL: C53, C22, C51] |
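For reference, a minimal implementation of Croston's method is sketched below: separate exponential smoothing of the non-zero demand sizes and of the inter-demand intervals, with the per-period forecast given by their ratio; the demand series and smoothing parameter are placeholder values.

```python
# Croston's method for intermittent demand forecasting.
def croston_forecast(demand, alpha=0.1):
    """Return the one-step-ahead per-period demand forecast."""
    z_hat = None                 # smoothed non-zero demand size
    p_hat = None                 # smoothed interval between non-zero demands
    periods_since_demand = 1
    for d in demand:
        if d > 0:
            if z_hat is None:    # initialise with the first non-zero observation
                z_hat, p_hat = float(d), float(periods_since_demand)
            else:
                z_hat += alpha * (d - z_hat)
                p_hat += alpha * (periods_since_demand - p_hat)
            periods_since_demand = 1
        else:
            periods_since_demand += 1
    if z_hat is None:            # no demand observed at all
        return 0.0
    return z_hat / p_hat

series = [0, 0, 5, 0, 0, 0, 3, 0, 4, 0, 0, 6]
print(croston_forecast(series, alpha=0.2))
```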
A Scalable and Adaptive Method for Finding Semantically Equivalent Cue Words of Uncertainty | Scientific knowledge is constantly subject to a variety of changes due to new discoveries, alternative interpretations, and fresh perspectives. Understanding uncertainties associated with various stages of scientific inquiries is an integral part of scientists’ domain expertise and it serves as the core of their metaknowledge of science. Despite the growing interest in areas such as computational linguistics, systematically characterizing and tracking the epistemic status of scientific claims and their evolution in scientific disciplines remains a challenge. We present a unifying framework for the study of uncertainties explicitly and implicitly conveyed in scientific publications. The framework aims to accommodate a wide range of uncertain types, from speculations to inconsistencies and controversies. We introduce a scalable and adaptive method to recognize semantically equivalent cues of uncertainty across different fields of research and accommodate individual analysts’ unique perspectives. We demonstrate how the new method can be used to expand a small seed list of uncertainty cue words and how the validity of the expanded candidate cue words are verified. We visualize the mixture of the original and expanded uncertainty cue words to reveal the diversity of expressions of uncertainty. These cue words offer a novel resource for the study of uncertainty in scientific assertions. |
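The seed-expansion step can be illustrated with a toy example based on cosine similarity in an embedding space; the three-dimensional vectors and the threshold below are made up for the illustration and do not reflect the representation or validation procedure actually used in the study.

```python
# Toy expansion of a seed list of uncertainty cue words by embedding similarity.
import numpy as np

embeddings = {
    "may":      np.array([0.90, 0.10, 0.00]),   # made-up vectors for illustration
    "might":    np.array([0.88, 0.15, 0.02]),
    "possibly": np.array([0.80, 0.20, 0.10]),
    "proved":   np.array([0.10, 0.90, 0.30]),
    "however":  np.array([0.30, 0.20, 0.90]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def expand_cues(seeds, vocab, threshold=0.98):
    """Add any vocabulary word whose similarity to some seed exceeds the threshold."""
    expanded = set(seeds)
    for word, vec in vocab.items():
        if any(cosine(vec, vocab[s]) >= threshold for s in seeds):
            expanded.add(word)
    return expanded

print(expand_cues({"may"}, embeddings))   # picks up "might" and "possibly"
```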
Biochemistry of Statins. | Cardiovascular disease (CVD) is the leading cause of morbidity and mortality worldwide. Elevated blood lipids may be a major risk factor for CVD. Due to consistent and robust association of higher low-density lipoprotein (LDL)-cholesterol levels with CVD across experimental and epidemiologic studies, therapeutic strategies to decrease risk have focused on LDL-cholesterol reduction as the primary goal. Current medication options for lipid-lowering therapy include statins, bile acid sequestrants, a cholesterol-absorption inhibitor, fibrates, nicotinic acid, and omega-3 fatty acids, which all have various mechanisms of action and pharmacokinetic properties. The most widely prescribed lipid-lowering agents are the HMG-CoA reductase inhibitors, or statins. Since their introduction in the 1980s, statins have emerged as the one of the best-selling medication classes to date, with numerous trials demonstrating powerful efficacy in preventing cardiovascular outcomes (Kapur and Musunuru, 2008 [1]). The statins are commonly used in the treatment of hypercholesterolemia and mixed hyperlipidemia. This chapter focuses on the biochemistry of statins including their structures, pharmacokinetics, and mechanism of actions as well as the potential adverse reactions linked to their clinical uses. |
Physics-motivated features for distinguishing photographic images and computer graphics | The increasing photorealism of computer graphics has made computer graphics a convincing form of image forgery. Therefore, classifying photographic images and photorealistic computer graphics has become an important problem for image forgery detection. In this paper, we propose a new geometry-based image model, motivated by the physical image generation process, to tackle this problem. The proposed model reveals certain physical differences between the two image categories, such as the gamma correction in photographic images and the sharp structures in computer graphics. For the problem of image forgery detection, we propose two levels of image authenticity definition, i.e., imaging-process authenticity and scene authenticity, and analyze our technique against these definitions. Such definitions are important for making the concept of image authenticity computable. Apart from offering physical insights, our technique achieves a classification accuracy of 83.5%, outperforming the prior work, i.e., wavelet features at 80.3% and cartoon features at 71.0%. We also consider a recapturing attack scenario and propose a counter-attack measure. In addition, we constructed a publicly available benchmark dataset with images of diverse content and computer graphics of high photorealism. |
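One physics-motivated cue mentioned above is gamma correction in photographs. The snippet below shows a crude, illustrative gamma estimator (assuming a roughly mid-grey linear scene); it stands in for, and is much simpler than, the geometry-based features proposed in the paper.

```python
import numpy as np

def estimate_gamma(gray, eps=1e-6):
    """Crude gamma estimate: if the linear scene's median is ~0.5 and the
    image was encoded as I = I_linear ** gamma, then
    median(I) ~ 0.5 ** gamma, so gamma ~ log(median(I)) / log(0.5)."""
    gray = np.clip(np.asarray(gray, dtype=np.float64), eps, 1.0)
    return np.log(np.median(gray)) / np.log(0.5)

# Synthetic example: apply a known gamma to a linear ramp and recover it.
linear = np.linspace(0.0, 1.0, 256)        # median = 0.5
encoded = linear ** (1 / 2.2)              # typical display gamma encoding
print(round(estimate_gamma(encoded), 2))   # ~0.45, i.e. close to 1/2.2
```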
The Impact of Internet Addiction on Life Satisfaction and Life Engagement in Young Adults | This study examined the impact of Internet addiction (IA) on life satisfaction and life engagement in young adults. A total of 210 university students participated in the study. Multivariate regression analysis showed that the model was significant, accounting for 8% of the variance in life satisfaction (adjusted R²=.080, p<.001) and 2.8% of the variance in life engagement (adjusted R²=.028, p<.05). The unstandardized regression coefficients (B) indicate that a one-unit increase in the raw Internet addiction score leads to a .168-unit decrease in the raw life satisfaction score (B=-.168, p<.001) and a .066-unit decrease in the raw life engagement score (B=-.066, p<.05). Means and standard deviations of the scores on IA and its dimensions showed that the most commonly reported purposes of Internet use are online discussion, adult chatting, online gaming, chatting, cyber affairs and watching pornography. Means and standard deviations of the IA scores and its dimensions across different types of social networking sites further indicate that people who frequently use Skype, Twitter and Facebook have relatively higher IA scores. Correlations of different aspects of Internet use with the major variables indicate significant and positive correlations of Internet use with IA, neglect of duty and virtual fantasies. Implications of the findings for theory, research and practice are discussed. |
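For readers who want to reproduce this kind of analysis, here is a small sketch of an ordinary-least-squares fit with an adjusted R² computation on synthetic data; the numbers it produces come from the simulated sample below, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 210                                    # sample size matching the study
ia = rng.normal(50, 15, n)                 # hypothetical Internet-addiction scores
life_sat = 25 - 0.17 * ia + rng.normal(0, 6, n)   # synthetic outcome

# Ordinary least squares: life_sat = b0 + b1 * ia
X = np.column_stack([np.ones(n), ia])
coef, *_ = np.linalg.lstsq(X, life_sat, rcond=None)

resid = life_sat - X @ coef
r2 = 1 - resid.var() / life_sat.var()
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 2)  # one predictor
print(coef[1], adj_r2)                     # unstandardized B and adjusted R^2
```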
Research on nursing handoffs for medical and surgical settings: an integrative review. | AIMS
To synthesize outcomes from research on handoffs to guide future computerization of the process on medical and surgical units.
BACKGROUND
Handoffs can create important information gaps, omissions and errors in patient care. Authors call for the computerization of handoffs; however, a synthesis of the literature is not yet available that might guide computerization.
DATA SOURCES
PubMed, CINAHL, Cochrane, PsycINFO, Scopus and a handoff database from Cohen and Hilligoss.
DESIGN
Integrative literature review.
REVIEW METHODS
This integrative review included studies from 1980-March 2011 in peer-reviewed journals. Exclusions were studies outside medical and surgical units, handoff education and nurses' perceptions.
RESULTS
The search strategy yielded a total of 247 references; 81 were retrieved, read and rated for relevance and research quality. A set of 30 articles met relevance criteria.
CONCLUSION
Studies about handoff functions and rituals are saturated topics. Verbal handoffs serve important functions beyond information transfer and should be retained. Greater consideration is needed on analysing handoffs from a patient-centred perspective. Handoff methods should be highly tailored to nurses and their contextual needs. The current preference for bedside handoffs is not supported by available evidence. The specific handoff structure for all units may be less important than having a structure for contextually based handoffs. Research on pertinent information content for contextually based handoffs is an urgent need. Without it, handoff computerization is not likely to be successful. Researchers need to use more sophisticated experimental research designs, control for individual and unit differences and improve sampling frames. |
Is expanding retrieval a superior method for learning text materials? | Expanding retrieval practice refers to the idea that gradually increasing the spacing interval between repeated tests ought to promote optimal long-term retention. Belief in the superiority of this technique is widespread, but empirical support is scarce. In addition, virtually all research on expanding retrieval has examined the learning of word pairs in paired-associate tasks. We report two experiments in which we examined the learning of text materials with expanding and equally spaced retrieval practice schedules. Subjects studied brief texts and recalled them in an initial learning phase. We manipulated the spacing of the repeated recall tests and examined final recall 1 week later. Overall we found that (1) repeated testing enhanced retention more than did taking a single test, (2) testing with feedback (restudying the passages) produced better retention than testing without feedback, but most importantly (3) there were no differences between expanding and equally spaced schedules of retrieval practice. Repeated retrieval enhanced long-term retention, but how the repeated tests were spaced did not matter. |
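The manipulation at the heart of these experiments can be made concrete by writing out the two schedule types; the specific intervals below are illustrative and not the ones used in the experiments.

```python
def expanding_schedule(first_gap, growth, n_tests):
    """Test positions with gradually increasing gaps, e.g. 1-6-15."""
    positions, t = [], 0
    for i in range(n_tests):
        t += first_gap + i * growth
        positions.append(t)
    return positions

def equal_schedule(gap, n_tests):
    """Test positions with a constant gap, e.g. 5-10-15."""
    return [gap * (i + 1) for i in range(n_tests)]

# Same total retention interval (15 units), different spacing patterns.
print(expanding_schedule(first_gap=1, growth=4, n_tests=3))  # [1, 6, 15]
print(equal_schedule(gap=5, n_tests=3))                      # [5, 10, 15]
```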
Evolutionary computing in recommender systems: a review of recent research | One of the main current applications of intelligent systems is recommender systems (RS). RS can help users find relevant items in huge information spaces in a personalized way. Several techniques have been investigated for the development of RS. One of them is evolutionary computation (EC), an emerging trend with various application areas. The increasing interest in using EC for web personalization, information retrieval and RS has fostered the publication of survey papers on the subject. However, these surveys have analyzed only a small number of publications, around ten. This study provides a comprehensive review of more than 65 research publications, focusing on five aspects we consider relevant for such an analysis: the recommendation technique used, the datasets and the evaluation methods adopted in the experimental parts, the baselines employed in the experimental comparison of proposed approaches, and the reproducibility of the reported experiments. At the end of this review, we discuss negative and positive aspects of these papers, as well as point out opportunities, challenges and possible future research directions. To the best of our knowledge, this is the most comprehensive review of approaches using EC in RS. Thus, we believe this review will be relevant material for researchers interested in EC and RS. |
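As a toy illustration of how EC is commonly applied in RS, the sketch below evolves the weights that blend several ranking signals for one user; the signals, the fitness function and the GA settings are all assumptions made for the example rather than a method from any surveyed paper.

```python
import random

random.seed(1)

# Hypothetical per-item scores from three ranking signals for one user,
# plus that user's observed relevance feedback (1 = liked).
signals = [(0.9, 0.2, 0.4), (0.1, 0.8, 0.3), (0.7, 0.6, 0.9), (0.2, 0.1, 0.2)]
liked = [1, 0, 1, 0]

def fitness(weights):
    """How well the weighted blend ranks liked items at the top (precision@2)."""
    scores = [sum(w * s for w, s in zip(weights, sig)) for sig in signals]
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    return sum(liked[i] for i in ranked[:2]) / 2

def evolve(pop_size=20, generations=30, mutation=0.1):
    pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)     # crossover + Gaussian mutation
            children.append([(x + y) / 2 + random.gauss(0, mutation)
                             for x, y in zip(a, b)])
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # evolved blending weights for the three signals
```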
How to Login from an Internet Cafe Without Worrying about Keyloggers | Roaming users who use untrusted machines to access password protected accounts have few good options. An internet café machine can easily be running a keylogger. The roaming user has no reliable way of determining whether it is safe, and has no alternative to typing the password. We describe a simple trick the user can employ that is entirely effective in concealing the password. We verify its efficacy against the most popular keylogging programs. |
Pragmatics in human-computer conversations | This paper provides a pragmatic analysis of some human-computer conversations carried out during the past six years within the context of the Loebner Prize Contest, an annual competition in which computers participate in Turing Tests. The Turing Test posits that to be granted intelligence, a computer should imitate human conversational behavior so well as to be indistinguishable from a real human being. We carried out an empirical study exploring the relationship between computers’ violations of Grice’s cooperative principle and conversational maxims, and their success in imitating human language use. Based on conversation analysis and a large survey, we found that different maxims have different effects when violated, but more often than not, when computers violate the maxims, they reveal their identity. The results indicate that Grice’s cooperative principle is at work during conversations with computers. On the other hand, studying human-computer communication may require some modifications of existing frameworks in pragmatics because of certain characteristics of these conversational environments. Pragmatics constitutes a serious challenge to computational linguistics. While existing programs have other significant shortcomings, it may be that the biggest hurdle in developing computer programs which can successfully carry out conversations will be modeling the ability to ‘cooperate’. |
Signed Distance Fields: A Natural Representation for Both Mapping and Planning | How to represent a map of the environment is a key question of robotics. In this paper, we focus on suggesting a representation well-suited for online map building from vision-based data and online planning in 3D. We propose to combine a commonly-used representation in computer graphics and surface reconstruction, projective Truncated Signed Distance Field (TSDF), with a representation frequently used for collision checking and collision costs in planning, Euclidean Signed Distance Field (ESDF), and validate this combined approach in simulation. We argue that this type of map is better-suited for robotic applications than existing representations. |
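The ESDF half of this combination can be sketched directly from a 2D occupancy grid, as below; building the projective TSDF from vision data, and the fusion between the two representations, are not reproduced here.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def esdf_from_occupancy(occupied, voxel_size=1.0):
    """Euclidean signed distance: positive in free space, negative inside
    obstacles. `occupied` is a boolean grid where True marks obstacle cells."""
    dist_outside = distance_transform_edt(~occupied)  # distance to nearest obstacle
    dist_inside = distance_transform_edt(occupied)    # distance to nearest free cell
    return (dist_outside - dist_inside) * voxel_size

# Toy 2D map: a 3x3 obstacle block in a 10x10 grid with 0.1 m cells.
grid = np.zeros((10, 10), dtype=bool)
grid[4:7, 4:7] = True
sdf = esdf_from_occupancy(grid, voxel_size=0.1)
print(sdf[0, 0], sdf[5, 5])   # positive far in free space, negative inside the block
```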
Interactive HDR Environment Map Capturing on Mobile Devices | Real world illumination, captured by digitizing devices, is beneficial to solve many problems in computer graphics. Therefore, practical methods for capturing this illumination are of high interest. In this paper, we present a novel method for capturing environmental illumination by a mobile device. Our method is highly practical as it requires only a consumer mobile phone and the result can be instantly used for rendering or material estimation. We capture the real light in high dynamic range (HDR) to preserve its high contrast. Our method utilizes the moving camera of a mobile phone in auto-exposure mode to reconstruct HDR values. The projection of the image to the spherical environment map is based on the orientation of the mobile device. Both HDR reconstruction and projection run on the mobile GPU to enable interactivity. Moreover, an additional image alignment step is performed. Our results show that the presented method faithfully captures the real environment and that the rendering with our reconstructed environment maps achieves high quality, comparable to reality. |
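The HDR-reconstruction step can be sketched as a weighted merge of differently exposed frames into relative radiance (a Debevec-style average with a simple hat weighting and an assumed linear camera response); camera-response recovery, the GPU implementation and the orientation-based spherical projection are omitted.

```python
import numpy as np

def merge_hdr(frames, exposure_times):
    """Merge LDR frames (values in [0,1]) taken at different exposure times
    into relative radiance, assuming a linear camera response."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: trust mid-tones, ignore clipping
        num += w * img / t                  # radiance estimate from this frame
        den += w
    return num / np.maximum(den, 1e-6)

# Toy example: the same scene captured at three exposure times.
radiance = np.array([[0.05, 0.4], [1.5, 6.0]])              # "true" relative radiance
times = [1 / 30, 1 / 125, 1 / 500]
frames = [np.clip(radiance * t * 30, 0, 1) for t in times]  # simulated captures
print(merge_hdr(frames, times))                             # recovers radiance up to scale
```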
Distributed aggregation for data-parallel computing: interfaces and implementations | Data-intensive applications are increasingly designed to execute on large computing clusters. Grouped aggregation is a core primitive of many distributed programming models, and it is often the most efficient available mechanism for computations such as matrix multiplication and graph traversal. Such algorithms typically require non-standard aggregations that are more sophisticated than traditional built-in database functions such as Sum and Max. As a result, the ease of programming user-defined aggregations, and the efficiency of their implementation, is of great current interest.
This paper evaluates the interfaces and implementations for user-defined aggregation in several state-of-the-art distributed computing systems: Hadoop, databases such as Oracle Parallel Server, and DryadLINQ. We show that: the degree of language integration between user-defined functions and the high-level query language has an impact on code legibility and simplicity; the choice of programming interface has a material effect on the performance of computations; some execution plans perform better than others on average; and that in order to get good performance on a variety of workloads a system must be able to select between execution plans depending on the computation. The interface and execution plan described in the MapReduce paper, and implemented by Hadoop, are found to be among the worst-performing choices. |
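The aggregation interface discussed above can be illustrated with a user-defined average split into initial, combine and final phases, so that partial aggregates can be computed near the data and merged later; this is a generic sketch, not the actual API of Hadoop, DryadLINQ or a parallel database.

```python
from typing import Iterable, Tuple

# A user-defined aggregation split into the three phases a distributed
# engine needs: per-partition accumulation, merging of partial aggregates
# (the "combiner"), and finalization.
def initial(values: Iterable[float]) -> Tuple[float, int]:
    total, count = 0.0, 0
    for v in values:
        total += v
        count += 1
    return total, count

def combine(a: Tuple[float, int], b: Tuple[float, int]) -> Tuple[float, int]:
    return a[0] + b[0], a[1] + b[1]

def final(partial: Tuple[float, int]) -> float:
    total, count = partial
    return total / count if count else 0.0

# Simulate three partitions of one grouped key processed independently.
partitions = [[1.0, 2.0], [3.0], [4.0, 5.0, 6.0]]
partials = [initial(p) for p in partitions]
merged = partials[0]
for p in partials[1:]:
    merged = combine(merged, p)
print(final(merged))   # 3.5, the mean of 1..6
```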
The Paradigm Shift in Mathematics Education: Explanations and Implications of Reforming Conceptions of Teaching and Learning | In this article, we argue that the debates about mathematics education that have arisen in the United States over the past decade are the result of a major shift in how we conceptualize mathematical knowledge and mathematics learning. First, we examine past efforts to change mathematics education and argue they are grounded in a common traditional paradigm. Next, we describe the emergence of a new paradigm that has grown out of a coalescence of theories from cognitive psychology, an awareness of the importance of culture to learning, and the belief that all students can and should learn meaningful mathematics. Reforms grounded in the new paradigm have the potential to dramatically alter the way in which students (as well as which students) experience success in school mathematics. We discuss some implications of these reforms related to how mathematics educators might work with teachers of mathematics. |
Detecting Emotional Contagion in Massive Social Networks | Happiness and other emotions spread between people in direct contact, but it is unclear whether massive online social networks also contribute to this spread. Here, we elaborate a novel method for measuring the contagion of emotional expression. With data from millions of Facebook users, we show that rainfall directly influences the emotional content of their status messages, and it also affects the status messages of friends in other cities who are not experiencing rainfall. For every one person affected directly, rainfall alters the emotional expression of about one to two other people, suggesting that online social networks may magnify the intensity of global emotional synchrony. |
Bod-IDE: An Augmented Reality Sandbox for eFashion Garments | Electronic fashion (eFashion) garments use technology to augment the human body with wearable interaction. In developing ideas, eFashion designers need to prototype the role and behavior of the interactive garment in context; however, current wearable prototyping toolkits require semi-permanent construction with physical materials that cannot easily be altered. We present Bod-IDE, an augmented reality 'mirror' that allows eFashion designers to create virtual interactive garment prototypes. Designers can quickly build, refine, and test on-the-body interactions without the need to connect or program electronics. By envisioning interaction with the body in mind, eFashion designers can focus more on reimagining the relationship between bodies, clothing, and technology. |
Case management for blood pressure and lipid level control after minor stroke: PREVENTION randomized controlled trial. | BACKGROUND
Optimization of systolic blood pressure and lipid levels are essential for secondary prevention after ischemic stroke, but there are substantial gaps in care, which could be addressed by nurse- or pharmacist-led care. We compared 2 types of case management (active prescribing by pharmacists or nurse-led screening and feedback to primary care physicians) in addition to usual care.
METHODS
We performed a prospective randomized controlled trial involving adults with recent minor ischemic stroke or transient ischemic attack whose systolic blood pressure or lipid levels were above guideline targets. Participants in both groups had a monthly visit for 6 months with either a nurse or pharmacist. Nurses measured cardiovascular risk factors, counselled patients and faxed results to primary care physicians (active control). Pharmacists did all of the above as well as prescribed according to treatment algorithms (intervention).
RESULTS
Most of the 279 study participants (mean age 67.6 yr, mean systolic blood pressure 134 mm Hg, mean low-density lipoprotein [LDL] cholesterol 3.23 mmol/L) were already receiving treatment at baseline (antihypertensives: 78.1%; statins: 84.6%), but none met guideline targets (systolic blood pressure ≤ 140 mm Hg, fasting LDL cholesterol ≤ 2.0 mmol/L). Substantial improvements were observed in both groups after 6 months: 43.4% of participants in the pharmacist case manager group met both systolic blood pressure and LDL guideline targets compared with 30.9% in the nurse-led group (12.5% absolute difference; number needed to treat = 8, p = 0.03).
INTERPRETATION
Compared with nurse-led case management (risk factor evaluation, counselling and feedback to primary care providers), active case management by pharmacists substantially improved risk factor control at 6 months among patients who had experienced a stroke.
TRIAL REGISTRATION
ClinicalTrials.gov, no. NCT00931788. |
Quality improvement guidelines for percutaneous transhepatic cholangiography, biliary drainage, and percutaneous cholecystostomy. | The membership of the Society of Interventional Radiology (SIR) Standards of Practice Committee represents experts in a broad spectrum of interventional procedures from both the private and academic sectors of medicine. Generally, Standards of Practice Committee members dedicate the vast majority of their professional time to performing interventional procedures; as such, they represent a valid broad expert constituency of the subject matter under consideration for standards production. |
Feature recognition and shape design in sneakers |