title | abstract
---|---|
Workplace pedagogic practices: Participation and learning. | This paper advances tentative conceptual bases for understanding workplace pedagogic practices. It proposes that, whether arising through everyday work activities or guided learning in workplaces, learning is shaped by workplace participatory practices. This learning is held to be co-participative: the reciprocal process of how the workplace affords participation, and therefore learning, and how individuals elect to engage with the work practice (Billett 2001b). In order to make a space to understand workplaces as learning environments, it is necessary for them to be discussed and conceptualised on their own terms. Describing learning through work as 'informal' is negative, imprecise and denies key premises about participation in and learning through work. Access to workplace activities and guidance, and the distribution of opportunities to participate, are structured by workplace factors. Much of this structuring has intentionality associated with the continuity of the work practice through participants' learning. Workplace experiences (activities and interactions) are, therefore, not 'ad hoc' or 'informal'; they are a product of the historical, cultural and situational factors that constitute the work practice and its enactment, and of individuals' engagement in those practices. These factors shape the activities, goals and interactions afforded by the work practice and how individuals construe and learn through them. Learning is conceptualised as arising inter-psychologically through participation in social practices such as workplaces. It is not reserved exclusively for or peculiar to particular experiences. However, particular kinds of experiences (e.g. routine or non-routine activities) are likely to have particular kinds of learning consequences. Learning through participation nevertheless needs to be considered critically. Although intersubjectivity (shared understanding) is seen as an important goal in the development of vocational practice, it offers a limited conception of goals for learning, as it is largely reproductive. The appropriation of individuals' knowledge through workplace practices needs to be seen in terms of its worth and adaptability, not just its salience at the time and place of learning. Therefore, in considering the kinds of processes adopted and the outcomes arising from participatory practices in workplaces, a critical stance is warranted. Learning through work: This paper discusses and proposes bases for understanding workplace participatory practices as learning experiences that are constituted in the activities and interactions in which individuals engage. It aims to contribute to a larger project of developing a workplace pedagogy. Learning through participation in a social practice, such as a workplace, is described in sociocultural theories of learning and development derived from Vygotsky (1987) as an inter-psychological process: one occurring between the individual and social partners, artefacts, symbols and the physical environment. That is, learning occurs as a product of interactions within the social world from which the knowledge to be learnt is sourced (Scribner 1985, Rogoff 1990, 1995). The knowledge required for vocational practice has its geneses in historical, cultural and situational sources (Billett 1998). It does not emanate from within individuals. Therefore, this knowledge must be accessed through social sources.
However, the knowledge required for work performance is not always easy to access: it may be hidden, and there are impediments that inhibit learning, as workplaces are far from benign environments. A key focus then becomes how individuals or cohorts of workers participate in workplaces and how opportunities for participation and, therefore, learning are accessed. This includes the kinds of activities individuals are able to engage in and the interactions they can access through these experiences. It seems that regardless of whether the contributions to learning through everyday work activity or those from guided learning in the workplace are being |
ePluribus: Ethnicity on Social Networks | We propose an approach to determine the ethnic breakdown of a population based solely on people’s names and data provided by the U.S. Census Bureau. We demonstrate that our approach is able to predict the ethnicities of individuals as well as the ethnicity of an entire population better than natural alternatives. We apply our technique to the population of U.S. Facebook users and uncover the demographic characteristics of ethnicities and how they relate. We also discover that while Facebook has always been diverse, diversity has increased over time leading to a population that today looks very similar to the overall U.S. population. We also find that different ethnic groups relate to one another in an assortative manner, and that these groups have different profiles across demographics, beliefs, and usage of site features. |
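A minimal sketch of the name-based estimation idea in the abstract above: given per-surname ethnicity distributions (which the paper derives from U.S. Census Bureau surname tables; the numbers below are hypothetical), a population's ethnic breakdown can be estimated by averaging the per-name posteriors.

```python
# Minimal sketch of name-based ethnicity estimation in the spirit of the
# ePluribus abstract. The per-surname distributions here are hypothetical;
# the paper derives them from U.S. Census Bureau surname tables.

from collections import defaultdict

# P(ethnicity | surname), illustrative values only.
SURNAME_PRIORS = {
    "garcia": {"hispanic": 0.92, "white": 0.05, "other": 0.03},
    "smith":  {"white": 0.73, "black": 0.22, "other": 0.05},
    "nguyen": {"asian": 0.96, "other": 0.04},
}

def population_breakdown(surnames):
    """Estimate the ethnic mix of a population by averaging per-name posteriors."""
    totals = defaultdict(float)
    counted = 0
    for name in surnames:
        dist = SURNAME_PRIORS.get(name.lower())
        if dist is None:
            continue  # names absent from the surname table are skipped here
        counted += 1
        for ethnicity, p in dist.items():
            totals[ethnicity] += p
    return {e: t / counted for e, t in totals.items()} if counted else {}

print(population_breakdown(["Garcia", "Smith", "Nguyen", "Smith"]))
```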
FPGA: what's in it for a database? | While there seems to be a general agreement that next years' systems will include many processing cores, it is often overlooked that these systems will also include an increasing number of different cores (we already see dedicated units for graphics or network processing). Orchestrating the diversity of processing functionality is going to be a major challenge in the upcoming years, be it to optimize for performance or for minimal energy consumption.
We expect field-programmable gate arrays (FPGAs or "programmable hardware") to soon play the role of yet another processing unit, found in commodity computers. It is clear that the new resource is going to be too precious to be ignored by database systems, but it is unclear how FPGAs could be integrated into a DBMS. With a focus on database use, this tutorial introduces the emerging technology, demonstrates its potential, but also pinpoints some challenges that need to be addressed before FPGA-accelerated database systems can go mainstream. Attendees will gain an intuition of an FPGA development cycle, receive guidelines for a "good" FPGA design, but also learn the limitations that hardware-implemented database processing faces. Our more high-level ambition is to spur a broader interest in database processing on novel hardware technology. |
Writing about emotional experiences as a therapeutic process | For the past decade, an increasing number of studies have demonstrated that when individuals write about emotional experiences, significant physical and mental health improvements follow. The basic paradigm and findings are summarized along with some boundary conditions. Although a reduction in inhibition may contribute to the disclosure phenomenon, changes in basic cognitive and linguistic processes during writing predict better health. Implications for theory and treatment are discussed. Virtually all forms of psychotherapy, from psychoanalysis to behavioral and cognitive therapies, have been shown to reduce distress and to promote physical and mental well-being (Mumford, Schlesinger, & Glass, 1983; Smith, Glass, & Miller, 1980). A process common to most therapies is labeling the problem and discussing its causes and consequences. Further, participating in therapy presupposes that the individual acknowledges the existence of a problem and openly discusses it with another person. As discussed in this article, the mere act of disclosure is a powerful therapeutic agent that may account for a substantial percentage of the variance in the healing process. PARAMETERS OF WRITING AND TALKING ASSOCIATED WITH HEALTH IMPROVEMENTS. Over the past decade, several laboratories have been exploring the value of writing or talking about emotional experiences. Confronting deeply personal issues has been found to promote physical health, subjective well-being, and selected adaptive behaviors. In this section, the general findings of the disclosure paradigm are discussed. Whereas individuals have been asked to disclose personal experiences through talking in a few studies, most studies involve writing. The Basic Writing Paradigm. The standard laboratory writing technique has involved randomly assigning each participant to one of two or more groups. All writing groups are asked to write about assigned topics for 3 to 5 consecutive days, 15 to 30 min each day. Writing is generally done in the laboratory with no feedback given. Participants assigned to the control conditions are typically asked to write about superficial topics, such as how they use their time. The standard instructions for those assigned to the experimental group are a variation on the following: For the next 3 days, I would like for you to write about your very deepest thoughts and feelings about an extremely important emotional issue that has affected you and your life. In your writing, I'd like you to really let go and explore your very deepest emotions and thoughts. You might tie your topic to your relationships with others, including parents, lovers, friends, or relatives; to your past, your present, or your future; or to who you have been, who you would like to be, or who you are now. You may write about the same general issues or experiences on all days of writing or on different topics each day. All of your writing will be completely confidential. Don't worry about spelling, sentence structure, or grammar. The only rule is that once you begin writing, continue to do so until your time is up. The writing paradigm is exceptionally powerful. Participants, from children to the elderly, from honor students to maximum-security prisoners, disclose a remarkable range and depth of traumatic experiences. Lost loves, deaths, incidents of sexual and physical abuse, and tragic failures are common themes in all of the studies. If nothing
else, the paradigm demonstrates that when individuals are given the opportunity to disclose deeply personal aspects of their lives, they readily do so. Even though a large number of participants report crying or being deeply upset by the experience, the overwhelming majority report that the writing experience was valuable and meaningful in their lives. Effects of Disclosure on Outcome Measures. Researchers have relied on a variety of physical and mental health measures to evaluate the effect of writing. As depicted in Table 1, writing or talking about emotional experiences, relative to writing about superficial control topics, has been found to be associated with significant drops in physician visits from before to after writing among relatively healthy samples. Writing or talking about emotional topics has also been found to have beneficial influences on immune function, including t-helper cell growth (using a blastogenesis procedure with the mitogen phytohemagglutinin), antibody response to Epstein-Barr virus, and antibody response to hepatitis B vaccination. Disclosure also has produced short-term changes in autonomic activity (e.g., lowered heart rate and electrodermal activity) and muscular activity (i.e., reduced phasic corrugator activity). Self-reports also suggest that writing about upsetting experiences, although painful in the days of writing, produces long-term improvements in mood and indicators of well-being compared with writing about control topics. Although a number of studies have failed to find consistent effects on mood or self-reported distress, Smyth's (1996) recent meta-analysis on written-disclosure studies indicates that, in general, writing about emotional topics is associated with significant reductions in distress. Behavioral changes have also been found. Students who write about emotional topics show improvements in grades in the months following the study. Senior professionals who have been laid off from their jobs get new jobs more quickly after writing. Consistent with the direct health measures, university staff members who write about emotional topics are subsequently absent from their work at lower rates than control participants. Interestingly, relatively few reliable changes emerge using self-reports of health-related behaviors. That is |
Bidirectional Expansion For Keyword Search on Graph Databases | Relational, XML and HTML data can be represented as graphs with entities as nodes and relationships as edges. Text is associated with nodes and possibly edges. Keyword search on such graphs has received much attention lately. A central problem in this scenario is to efficiently extract from the data graph a small number of the “best” answer trees. A Backward Expanding search, starting at nodes matching keywords and working up toward confluent roots, is commonly used for predominantly text-driven queries. But it can perform poorly if some keywords match many nodes, or some node has very large degree. In this paper we propose a new search algorithm, Bidirectional Search, which improves on Backward Expanding search by allowing forward search from potential roots towards leaves. To exploit this flexibility, we devise a novel search frontier prioritization technique based on spreading activation. We present a performance study on real data, establishing that Bidirectional Search significantly outperforms Backward Expanding search. |
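A compact sketch of the baseline Backward Expanding search that the abstract above improves on: expand shortest paths "backward" from every keyword-matching node, and treat any node reached from all keyword groups as a candidate answer root. The graph and keyword matches below are hypothetical, and the paper's bidirectional variant with spreading-activation prioritization adds considerably more machinery than this.

```python
# Minimal Backward Expanding search sketch (BANKS-style baseline).
import heapq

def backward_expanding(adj, keyword_nodes):
    """adj: {node: [(neighbor, weight), ...]} over *reversed* edges.
    keyword_nodes: list of node sets, one set per keyword.
    Returns candidate roots with the summed distance to each keyword group."""
    ngroups = len(keyword_nodes)
    dist = [dict() for _ in range(ngroups)]          # per-group shortest distances
    heap = [(0.0, g, n) for g, nodes in enumerate(keyword_nodes) for n in nodes]
    heapq.heapify(heap)
    while heap:                                      # lazy multi-source Dijkstra
        d, g, u = heapq.heappop(heap)
        if u in dist[g]:
            continue
        dist[g][u] = d
        for v, w in adj.get(u, []):
            if v not in dist[g]:
                heapq.heappush(heap, (d + w, g, v))
    roots = set.intersection(*(set(d) for d in dist))
    return sorted((sum(d[r] for d in dist), r) for r in roots)

# Reversed edges: in the original graph, authorA points to both papers.
adj = {"paper1": [("authorA", 1.0)], "paper2": [("authorA", 1.0)], "authorA": []}
print(backward_expanding(adj, [{"paper1"}, {"paper2"}]))  # [(2.0, 'authorA')]
```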
Detection of Covert Channel Encoding in Network Packet Delays | Covert channels are mechanisms for communicating information in ways that are difficult to detect. Data exfiltration can be an indication that a computer has been compromised by an attacker even when other intrusion detection schemes have failed to detect a successful attack. Covert timing channels use packet inter-arrival times, not header or payload embedded information, to encode covert messages. This paper investigates the channel capacity of Internet-based timing channels and proposes a methodology for detecting covert timing channels based on how close a source comes to achieving that channel capacity. A statistical approach is then used for the special case of binary codes. |
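An illustrative sketch of the binary covert timing channel the abstract analyzes: bits are encoded as short versus long inter-packet delays and decoded with a threshold. The delay values and jitter model below are made-up assumptions; the paper's contribution is estimating channel capacity and flagging sources that come close to achieving it, which this snippet does not implement.

```python
# Toy binary covert timing channel: bits ride on inter-packet delays.
import random

SHORT, LONG, THRESHOLD = 0.05, 0.20, 0.125   # seconds; illustrative values only

def encode(bits):
    """Map a bit string to a sequence of inter-packet delays."""
    return [LONG if b == "1" else SHORT for b in bits]

def add_network_jitter(delays, sigma=0.02):
    """Crude stand-in for network-induced timing noise."""
    return [max(0.0, d + random.gauss(0, sigma)) for d in delays]

def decode(delays):
    return "".join("1" if d > THRESHOLD else "0" for d in delays)

msg = "1011001"
received = add_network_jitter(encode(msg))
print(msg, decode(received))   # jitter occasionally flips a bit
```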
Opinion mining in Twitter: How to make use of sarcasm to enhance sentiment analysis | Opinion mining and sentiment analysis refer to the identification and aggregation of attitudes or opinions expressed by internet users towards a specific topic. However, due to the limitation in terms of characters (i.e. 140 characters per tweet) and the use of informal language, state-of-the-art sentiment analysis approaches perform worse on Twitter than on longer texts. Moreover, the presence of sarcasm makes the task even more challenging. Sarcasm occurs when a person conveys implicit information, usually the opposite of what is literally said, within the message he or she transmits. In this paper we propose a method that makes use of a minimal set of features yet efficiently classifies tweets regardless of their topic. We also study the importance of detecting sarcastic tweets automatically, and demonstrate how the accuracy of sentiment analysis can be enhanced by knowing which tweets are sarcastic and which are not. |
Document Context Neural Machine Translation with Memory Networks | We present a document-level neural machine translation model which takes both source and target document context into account using memory networks. We model the problem as a structured prediction problem with interdependencies among the observed and hidden variables, i.e., the source sentences and their unobserved target translations in the document. The resulting structured prediction problem is tackled with a neural translation model equipped with two memory components, one each for the source and target side, to capture the document-level interdependencies. We train the model end-to-end, and propose an iterative decoding algorithm based on block coordinate descent. Experimental results of English translations from French, German, and Estonian documents show that our model is effective in exploiting both source and target document context, and statistically significantly outperforms the previous work in terms of BLEU and METEOR. |
A hybrid multi-objective evolutionary algorithm using an inverse neural network for aircraft control system design | This study introduces a hybrid multi-objective evolutionary algorithm (MOEA) for the optimization of aircraft control system design. The suggested strategy is composed mainly of two stages. The first stage consists of training an artificial neural network (ANN), with objective values as inputs and decision variables as outputs, to model an approximation of the inverse of the objective function used. The second stage consists of a local improvement phase in objective space that preserves the relationships among objectives, and a mapping process to decision variables using the trained ANN. Both the hybrid MOEA and the original MOEA were applied to an aircraft control system design application for assessment. |
Ranking-Based Emotion Recognition for Music Organization and Retrieval | Determining the emotion of a song that best characterizes the affective content of the song is a challenging issue, due to the difficulty of collecting reliable ground truth data and the semantic gap between human perception and the music signal of the song. To address this issue, we represent an emotion as a point in the Cartesian space with valence and arousal as the dimensions, and determine the coordinates of a song by the relative emotion of the song with respect to other songs. We also develop an RBF-ListNet algorithm to optimize the ranking-based objective function of our approach. The cognitive load of annotation, the accuracy of emotion recognition, and the subjective quality of the proposed approach are extensively evaluated. Experimental results show that this ranking-based approach simplifies emotion annotation and enhances the reliability of the ground truth. The performance of our algorithm for valence recognition reaches 0.326 in terms of the Gamma statistic. |
Real-Time Dense Monocular SLAM With Online Adapted Depth Prediction Network | Considerable advances have been achieved in estimating the depth map from a single image via convolutional neural networks (CNNs) during the past few years. Combining depth prediction from CNNs with conventional monocular simultaneous localization and mapping (SLAM) is promising for accurate and dense monocular reconstruction, in particular addressing the two long-standing challenges in conventional monocular SLAM: low map completeness and scale ambiguity. However, depth estimated by pretrained CNNs usually fails to achieve sufficient accuracy for environments of different types from the training data, which are common for certain applications such as obstacle avoidance of drones in unknown scenes. Additionally, inaccurate depth prediction by the CNN could yield large tracking errors in monocular SLAM. In this paper, we present a real-time dense monocular SLAM system, which effectively fuses direct monocular SLAM with an online-adapted depth prediction network for achieving accurate depth prediction of scenes of different types from the training data and providing absolute scale information for tracking and mapping. Specifically, on one hand, tracking pose (i.e., translation and rotation) from direct SLAM is used for selecting a small set of highly effective and reliable training images, which acts as ground truth for tuning the depth prediction network on-the-fly toward better generalization ability for scenes of different types. A stage-wise Stochastic Gradient Descent algorithm with a selective update strategy is introduced for efficient convergence of the tuning process. On the other hand, the dense map produced by the adapted network is applied to address scale ambiguity of direct monocular SLAM, which in turn improves the accuracy of both tracking and overall reconstruction. The system, with the assistance of both CPUs and GPUs, can achieve real-time performance with progressively improved reconstruction accuracy. Experimental results on public datasets and live application to obstacle avoidance of drones demonstrate that our method outperforms the state-of-the-art methods with greater map completeness and accuracy, and a smaller tracking error. |
EasiMed: A remote health care solution | An embedded remote health care system based on wireless sensor network technology was established. Firstly, a new system architecture was proposed, which introduces a scalable wireless personal medical sensor network around the human body. Then the designs of several sensor nodes and the care base-station are presented. The wireless communication between the sensor nodes and the care base-station uses the IEEE 802.15.4/ZigBee standard, whilst the care base-station and the remote central server are connected in one of the following ways: computer network, GSM short messages, or telephone modem. The system can be used for remote health care at home or in hospitals. |
The influence of community-based participatory research principles on the likelihood of participation in health research in American Indian communities. | OBJECTIVES
Advocates of community-based participatory research (CBPR) have emphasized the need for such efforts to be collaborative, and close partnerships with the communities of interest are strongly recommended in developing study designs. However, to date, no systematic, empiric inquiry has been made into whether CBPR principles might influence an individual's decision to participate in research.
DESIGN, SETTING, AND PARTICIPANTS
Using vignettes that described various types of research, we surveyed 1066 American Indian students from three tribal colleges/universities to ascertain the extent to which respondent age, gender, education, cultural affiliation, tribal status, and prior experience with research may interact with the implementation of critical CBPR principles to increase or decrease the likelihood of participating in health research.
RESULTS
Many factors significantly increased the odds of participation, including the study's being conducted by a tribal college/university or national organization, involving the community in study development, having an American Indian lead the study, addressing serious health problems of concern to the community, bringing money into the community, providing new treatments or services, compensation, anonymity, and using the information to answer new questions. Decreased odds of participation were related to possible discrimination against one's family, tribe, or racial group; lack of confidentiality; and possible physical harm.
CONCLUSIONS
Employing CBPR principles, such as involving the community in all phases of the research, considering the potential benefits of the research, building on strengths and resources within the community, and considering how results will be used, is essential to conceptualizing, designing, and implementing successful health research in partnership with American Indians. |
TCP ex machina: computer-generated congestion control | This paper describes a new approach to end-to-end congestion control on a multi-user network. Rather than manually formulate each endpoint's reaction to congestion signals, as in traditional protocols, we developed a program called Remy that generates congestion-control algorithms to run at the endpoints.
In this approach, the protocol designer specifies their prior knowledge or assumptions about the network and an objective that the algorithm will try to achieve, e.g., high throughput and low queueing delay. Remy then produces a distributed algorithm---the control rules for the independent endpoints---that tries to achieve this objective.
In simulations with ns-2, Remy-generated algorithms outperformed human-designed end-to-end techniques, including TCP Cubic, Compound, and Vegas. In many cases, Remy's algorithms also outperformed methods that require intrusive in-network changes, including XCP and Cubic-over-sfqCoDel (stochastic fair queueing with CoDel for active queue management).
Remy can generate algorithms both for networks where some parameters are known tightly a priori, e.g. datacenters, and for networks where prior knowledge is less precise, such as cellular networks. We characterize the sensitivity of the resulting performance to the specificity of the prior knowledge, and the consequences when real-world conditions contradict the assumptions supplied at design-time. |
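A small sketch of the kind of designer-specified objective the abstract above describes Remy optimizing over candidate congestion-control rules: reward throughput, penalize queueing delay. The exact functional form and the delta weight below are illustrative assumptions, not Remy's actual code.

```python
# Hypothetical congestion-control design objective of the shape the paper
# describes: high throughput, low queueing delay, summed over senders.
import math

def objective(throughputs, delays, delta=0.5):
    """Sum over senders of log(throughput) - delta * log(queueing delay).
    Higher is better; the logs build in a preference for fair allocations."""
    return sum(math.log(t) - delta * math.log(d)
               for t, d in zip(throughputs, delays))

# Two candidate algorithms measured in simulation (numbers made up):
print(objective([10.0, 9.5], [0.040, 0.050]))   # balanced, low-delay allocation
print(objective([18.0, 1.5], [0.150, 0.150]))   # unfair, bufferbloated one
```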
The impact of perceived interpersonal functioning on treatment for adolescent depression: IPT-A versus treatment as usual in school-based health clinics. | OBJECTIVE
Aspects of depressed adolescents' perceived interpersonal functioning were examined as moderators of response to treatment among adolescents treated with interpersonal psychotherapy for depressed adolescents (IPT-A; Mufson, Dorta, Moreau, & Weissman, 2004) or treatment as usual (TAU) in school-based health clinics.
METHOD
Sixty-three adolescents (12-18 years of age) participated in a clinical trial examining the effectiveness of IPT-A (Mufson, Dorta, Wickramaratne, et al., 2004). The sample was 84.1% female and 15.9% male (mean age = 14.67 years). Adolescents were 74.6% Latino, 14.3% African American, 1.6% Asian American, and 9.5% other. They came primarily from low-income families. Adolescents were randomly assigned to receive IPT-A or TAU delivered by school-based mental health clinicians. Assessments, completed at baseline and at Weeks 4, 8, and 12 (or at early termination), included the Hamilton Rating Scale for Depression (Hamilton, 1967), the Conflict Behavior Questionnaire (Robin & Foster, 1989), and the Social Adjustment Scale-Self-Report (Weissman & Bothwell, 1976).
RESULTS
Multilevel modeling indicated that treatment condition interacted with adolescents' baseline reports of conflict with their mothers and social dysfunction with friends to predict the trajectory of adolescents' depressive symptoms over the course of treatment, controlling for baseline levels of depression. The benefits of IPT-A over TAU were particularly strong for the adolescents who reported high levels of conflict with their mothers and social dysfunction with friends.
CONCLUSIONS
If replicated with larger samples, these findings would suggest that IPT-A may be particularly helpful for depressed adolescents who report high levels of conflict with their mothers or interpersonal difficulties with friends. |
Urban Land, Household Debt and Overseas Borrowing | A Great Financial Crisis can only occur as a surprise. If it is anticipated by the financial players, there may be a series of little financial crises as the players adjust their expectations but the imbalances which cause great crises will not build up. Great crises are therefore consequences of selective blindness. They result from disregard of the first and perhaps only worthwhile lesson in economics, which is that everything depends on everything else. Despite the plague of dishonest accounting (itself a sign of trouble), in the boom which begat the current GFC the selective blindness was not due to lack of data - throughout the boom the national accounts of the Anglophone countries documented an unsustainable reliance on growing consumer debt as a source of demand. The blinkers were imposed by neo-liberal economics, that legacy of European intellectual turmoil and American Cold War evangelism which became the hegemonic economic theory following the end of full employment. This was ultimately an economics of process: provided all economic decisions were subject to demand and supply in competitive markets, the invisible hand would guarantee economic efficiency and the optimum use of resources. This being the case, there was no perceived need to check the macroeconomic balance sheets for signs of impending trouble. With honourable exceptions, faith in competition brought blindness, and this ingrained faith still hinders an appreciation of what went wrong. For a full appreciation of present problems, it is necessary to examine the macroeconomic balance sheets (ABS 2008). |
where science and magic meet the illusion of a science | Recent articles calling for a scientific study of magic have been the subject of widespread interest. This article considers the topic from a broader perspective, and argues that to engage in a science of magic, in any meaningful sense, is misguided. It argues that those who have called for a scientific theory of magic have failed to explain either how or why such a theory might be constructed, that a shift of focus to a neuroscience of magic is simply unwarranted, and that a science of magic is itself an inherently unsound idea. It seeks to provide a more informed view of the relationship between science and magic, and suggests a more appropriate way forward for scientists. |
Skeletal Quads: Human Action Recognition Using Joint Quadruples | Recent advances on human motion analysis have made the extraction of human skeleton structure feasible, even from single depth images. This structure has been proven quite informative for discriminating actions in a recognition scenario. In this context, we propose a local skeleton descriptor that encodes the relative position of joint quadruples. Such a coding implies a similarity normalisation transform that leads to a compact (6D) view-invariant skeletal feature, referred to as skeletal quad. Further, the use of a Fisher kernel representation is suggested to describe the skeletal quads contained in a (sub)action. A Gaussian mixture model is learnt from training data, so that the generation of any set of quads is encoded by its Fisher vector. Finally, a multi-level representation of Fisher vectors leads to an action description that roughly carries the order of sub-action within each action sequence. Efficient classification is here achieved by linear SVMs. The proposed action representation is tested on widely used datasets, MSRAction3D and HDM05. The experimental evaluation shows that the proposed method outperforms state-of-the-art algorithms that rely only on joints, while it competes with methods that combine joints with extra cues. |
The sharing economy: Why people participate in collaborative consumption | Information and communications technologies (ICTs) have enabled the rise of so-called "Collaborative Consumption" (CC): the peer-to-peer-based activity of obtaining, giving, or sharing access to goods and services, coordinated through community-based online services. CC has been expected to alleviate societal problems such as hyper-consumption, pollution, and poverty by lowering the cost of economic coordination within communities. However, beyond anecdotal evidence, there is a dearth of understanding of why people participate in CC. Therefore, in this article we investigate people's motivations to participate in CC. The study employs survey data (N = 168) gathered from people registered on a CC site. The results show that participation in CC is motivated by many factors, such as its sustainability, enjoyment of the activity, as well as economic gains. An interesting detail in the results is that sustainability is not directly associated with participation unless it is at the same time also associated with positive attitudes towards CC. This suggests that sustainability might only be an important factor for those people for whom ecological consumption is important. Furthermore, the results suggest that in CC an attitude-behavior gap might exist; people perceive the activity positively and say good things about it, but this good attitude does not necessarily translate into action. |
Shadow And Highlight Invariant Colour Segmentation Algorithm For Traffic Signs | Shadows and highlights represent a challenge to computer vision researchers due to the variance in brightness on the surfaces of the objects under consideration. This paper presents a new colour detection and segmentation algorithm for road signs in which the effects of shadows and highlights are suppressed to obtain better colour segmentation results. Images are taken by a digital camera mounted in a car. The RGB images are converted into HSV colour space and the shadow-highlight invariant method is applied to extract the colours of the road signs under shadow and highlight conditions. The method is tested on hundreds of outdoor images under such light conditions, and it shows high robustness; more than 95% correct segmentation is achieved. |
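A minimal sketch of the hue/saturation-based segmentation idea in the abstract above: leaving the V (brightness) channel unconstrained is what buys tolerance to shadows and highlights. The input filename and threshold values are illustrative assumptions, not the paper's tuned parameters.

```python
# Hue/saturation thresholding in HSV space; brightness (V) is ignored so that
# shaded and highlighted sign regions still pass the colour test.
import cv2
import numpy as np

img = cv2.imread("road_scene.jpg")            # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)    # OpenCV hue range is 0..179

# Red road signs: hue wraps around 0, so combine two ranges. V is left wide
# open (0..255), which is the shadow/highlight tolerance.
lo1 = cv2.inRange(hsv, np.array([0, 80, 0]),   np.array([10, 255, 255]))
lo2 = cv2.inRange(hsv, np.array([170, 80, 0]), np.array([179, 255, 255]))
mask = cv2.bitwise_or(lo1, lo2)

cv2.imwrite("red_sign_mask.png", mask)
```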
Efficacy and Safety of Botulinum Toxin A Injection Compared with Topical Nitroglycerin Ointment for the Treatment of Chronic Anal Fissure: A Prospective Randomized Study | OBJECTIVES:To evaluate the efficacy and safety of botulinum toxin A injection compared with topical nitroglycerin ointment for the treatment of chronic anal fissure (CAF).METHODS:Fifty outpatients with CAF were randomized to receive either a single botulinum toxin injection (30 IU Botox®) or topical nitroglycerin ointment 0.2% b.i.d. for 2 wk. If the initial therapy failed, patients were assigned to the other treatment group for a further 2 wk. If CAF still showed no healing at wk 4, patients received combination therapy of botulinum toxin and nitroglycerin for 4 additional wk. Persisting CAF at wk 8 was treated according to the investigator's decision. Healing rates, symptoms, and side effects of the therapy were recorded at wk 2, 4, 8, 12, and 24 after randomization.RESULTS:The group initially treated with nitroglycerin showed a higher healing rate of CAF (13 of 25, 52%) as compared with the botulinum toxin group (6 of 25, 24%) after the first 2 wk of therapy (p < 0.05). At the end of wk 4, CAF healed in three additional patients, all receiving nitroglycerin after initial botulinum toxin injection. Mild side effects occurred in 13 of 50 (26%) patients, all except one were on nitroglycerin.CONCLUSIONS:Nitroglycerin ointment was superior to the more expensive and invasive botulinum toxin injection for initial healing of CAF, but was associated with more but mild side effects. |
A longitudinal study of the relation between adolescent boys and girls' computer use with friends and friendship quality: Support for the social compensation or the rich-get-richer hypothesis? | Using computers with friends either in person or online has become ubiquitous in the life of most adolescents; however, little is known about the complex relation between this activity and friendship quality. This study examined direct support for the social compensation and rich-get-richer hypotheses among adolescent girls and boys by including social anxiety as a moderating factor. A sample of 1050 adolescents completed a survey in grade 9 and then again in grades 11 and 12. For girls, there was a main effect of using computers with friends on friendship quality, providing support for both hypotheses. For adolescent boys, however, social anxiety moderated this relation, supporting the social compensation hypothesis. These findings were identical for online communication and were stable throughout adolescence. Furthermore, participating in organized sports did not compensate for social anxiety for either adolescent girls or boys. Therefore, characteristics associated with using computers with friends may create a comfortable environment for socially anxious adolescents to interact with their peers, which may be distinct from other more traditional adolescent activities. |
Comparison of Production, Meat Yield, and Meat Quality Traits of NWAC103 Line Channel Catfish, Norris Line Channel Catfish, and Female Channel Catfish × Male Blue Catfish F1 Hybrids | NWAC103 line channel catfish Ictalurus punctatus, Norris line channel catfish, and Norris line female channel catfish × Dycus Farm line male blue catfish I. furcatus F1 hybrids were compared for production, meat yield, and meat quality traits. Juvenile fish from each genetic group were stocked at 12,000 fish/ha into three 0.04-ha ponds per genetic group. Fish were fed once daily to satiation from June through October, and fed on days when afternoon water temperatures were above 17°C from November through December. Fish were harvested, weighed, and counted in January, and 150 fish per genetic group (50 fish per pond) were processed and measured for meat and body component yield. Instrumental and sensory panel evaluations of quality were measured on fresh, frozen-thawed, and baked fillets. Stocking weight, harvest weight, and net production (kg/ha) were highest for the NWAC103 line channel catfish, intermediate for the hybrid, and lowest for the Norris line channel catfish. Growth at unit size (a), percent weight gain, survival, and feed conversion were not significantly different among genetic groups. Carcass yield (relative to whole weight) and fillet yield were higher for the hybrid than for the two channel catfish lines, and higher for females than for males in all genetic groups. Head yield and total viscera yield were higher for the channel catfish lines than the hybrid. Head yield was higher for males than for females, and total viscera yield was higher for females than for males. Visceral fat yield was higher for the hybrid than for the two channel catfish lines. Instrumental and sensory panel analysis indicated only minor differences among genetic groups for fillet quality. Thus, catfish producers and processors can improve important traits and increase profits by utilizing catfish lines with superior performance. Production traits (growth, feed conversion, and survival) and processing traits (meat yield and quality) are economically important traits of farm-raised catfish, and improving these traits will benefit the catfish farming industry. Genetic improvement programs have resulted in substantial improvements in production and processing traits in other meat animal species, and similar improvements could be achieved in farm-raised catfish. Evaluation, identification, and use of superior germplasm from existing genetic resources could allow rapid improvements in the production and processing traits of farm-raised catfish. Part of the mission of the USDA–ARS Catfish Genetics Research Unit is to produce genetically superior germplasm for release to the catfish farming industry. The NWAC103 line of channel catfish Ictalurus punctatus (a line of catfish developed jointly between the USDA–ARS and Mississippi State University) is being compared with other genetic groups of catfish having potential commercial use. The objective of this study was to compare the NWAC103 line channel catfish, Norris line channel catfish (a line of fish currently being used by some commercial producers), and Norris line female channel catfish × Dycus Farm line male blue catfish I. furcatus F1 hybrids for production, meat yield, and meat quality traits. |
Comparison of penile length at 6–24 months between children with unilateral cryptorchidism and a healthy normal cohort | Purpose
Urologic diseases affected by testosterone can be associated with smaller penis size compared to the normal population. We sought to compare penile length in children with unilateral cryptorchidism and normative data from a cohort of healthy Korean boys.
Materials and Methods
This study was performed in 259 Korean boys (212, normal cohort; 47, cryptorchidism) aged 6-24 months, each of whom had been brought to an outpatient clinic at one of five tertiary hospitals (Gyeongsangnam-do Province) between April 2014 and June 2015. Penile length was measured via stretched penile length (SPL) and testicular size was measured using orchidometry (mL).
Results
SPL in children with cryptorchidism was significantly shorter than in a cohort of healthy Korean boys aged 6-24 months (3.7±0.5 cm vs. 4.3±0.8 cm, p<0.001), although there were no differences with regard to height, body weight, or contralateral testicular size between the two groups. Across the stratified age groups (6-12, 12-18, and 18-24 months), SPL in children with cryptorchidism was consistently shorter than in those without.
Conclusions
The penile length of children aged 6-24 months with unilateral cryptorchidism may be shorter than that of a cohort of healthy Korean boys. |
Compositeness Effects in the Anomalous Weak-Magnetic Moment of Leptons | We investigate the effects induced by excited leptons, at the one-loop level, in the anomalous magnetic and weak-magnetic form factors of the leptons. Using a general effective Lagrangian approach to describe the couplings of the excited leptons, we compute their contributions to the weak-magnetic moment of the $\tau$ lepton, which can be measured on the $Z$ peak, and we compare it with the contributions to $g_\mu - 2$, measured at low energies. |
Who are the crowdworkers?: shifting demographics in mechanical turk | Amazon Mechanical Turk (MTurk) is a crowdsourcing system in which tasks are distributed to a population of thousands of anonymous workers for completion. This system is increasingly popular with researchers and developers. Here we extend previous studies of the demographics and usage behaviors of MTurk workers. We describe how the worker population has changed over time, shifting from a primarily moderate-income, U.S.-based workforce towards an increasingly international group with a significant population of young, well-educated Indian workers. This change in population points to how workers may treat Turking as a full-time job, which they rely on to make ends meet. |
Level Playing Field for Million Scale Face Recognition | Face recognition has the perception of a solved problem; however, when tested at the million scale it exhibits dramatic variation in accuracies across different algorithms [11]. Are the algorithms very different? Is access to good/big training data their secret weapon? Where should face recognition improve? To address these questions, we created a benchmark, MF2, that requires all algorithms to be trained on the same data and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7M photos, created with the goal of leveling the playing field for large-scale face recognition. We contrast our results with findings from the other two large-scale benchmarks, MegaFace Challenge and MS-Celebs-1M, where groups were allowed to train on any private/public/big/small set. Some key discoveries: 1) algorithms trained on MF2 were able to achieve state-of-the-art results comparable to algorithms trained on massive private sets, 2) some algorithms outperformed their own previous results once trained on MF2, and 3) invariance to aging suffers from low accuracies, as in MegaFace, identifying the need for larger age variations, possibly within identities, or adjustment of algorithms in future testing. |
Comparison of pharmacy-based measures of medication adherence | BACKGROUND
Pharmacy databases are commonly used to assess medication usage, and a number of measures have been developed to measure patients' adherence to medication. An extensive literature now supports these measures, although few studies have systematically compared the properties of different adherence measures.
METHODS
As part of an 18-month randomized clinical trial to assess the impact of automated telephone reminders on adherence to inhaled corticosteroids (ICS) among 6903 adult members of a managed care organization, we computed eight pharmacy-based measures of ICS adherence using outpatient pharmacy dispensing records obtained from the health plan's electronic medical record. We used simple descriptive statistics to compare the relative performance characteristics of these measures.
RESULTS
Comparative analysis found a relative upward bias in adherence estimates for those measures that require at least one dispensing event in order to be calculated. Measurement strategies that require a second dispensing event show even greater upward bias. These biases are greatest with shorter observation times. Furthermore, requiring a dispensing event meant that these measures could not be defined for large numbers of individuals (17-32% of participants in this study). Measurement strategies that do not require a dispensing event to be calculated appear least vulnerable to these biases and can be calculated for everyone. However, they do require additional assumptions and data (e.g., pre-intervention dispensing data) to support their validity.
CONCLUSIONS
Many adherence measures require one, or sometimes two, dispensings in order to be defined. Since such measures assume all dispensed medication is used as directed, they have a built in upward bias that is especially pronounced when they are calculated over relatively short timeframes (< 9 months). Less biased measurement strategies that do not require a dispensing event are available, but require additional data to support their validity.
TRIAL REGISTRATION
The study was funded by grant R01HL83433 from the National Heart, Lung and Blood Institute (NHLBI) and is filed as study NCT00414817 in the clinicaltrials.gov database. |
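A sketch of one common pharmacy-based adherence measure, proportion of days covered (PDC), illustrating the bias the abstract describes: measures that condition on a first dispensing are simply undefined for never-fillers. The field names and one-year window are illustrative; the study itself compared eight such measures.

```python
# Proportion of days covered (PDC) from dispensing records, with the
# "undefined for non-fillers" property that drives the upward bias.
from datetime import date, timedelta

def pdc(dispensings, start, end):
    """dispensings: list of (fill_date, days_supply). Returns the fraction of
    days in [start, end] covered by medication, or None when no fill exists."""
    if not dispensings:
        return None                     # undefined: the source of the bias
    window = (end - start).days + 1
    covered = set()
    for fill_date, days_supply in dispensings:
        for i in range(days_supply):
            day = fill_date + timedelta(days=i)
            if start <= day <= end:
                covered.add(day)        # overlapping fills are not double-counted
    return len(covered) / window

start, end = date(2024, 1, 1), date(2024, 12, 31)
print(pdc([(date(2024, 1, 10), 90), (date(2024, 5, 1), 90)], start, end))
print(pdc([], start, end))   # a never-filler drops out of the estimate entirely
```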
Relations and effects of transformational leadership: a comparative analysis with traditional leadership styles. | This study has two main goals: (a) to compare the relationship between transformational leadership and other important leadership styles (i.e., democratic versus autocratic or relations- and task-oriented leadership) and (b) to compare the effects of transformational leadership and the other styles on some important organizational outcomes such as employees' satisfaction and performance. For this purpose, a sample of 147 participants, working in 35 various work-teams, was used. Results show high correlations between transformational leadership, relations-oriented, democratic, and task-oriented leadership. On the other hand, according to the literature, transformational leadership, especially high levels, significantly increases the percentage of variance accounted for by other leadership styles in relevant organizational outcome variables (subordinates' performance, satisfaction and extra effort). |
A stochastic approach to traffic congestion costs | The real world is a complex, dynamic and stochastic environment. This is especially true for the traffic moving daily on our roads. As such, accurate modeling that correctly considers the real-world dynamics and the inherent stochasticity is very important, especially if government will base its road tax decisions on the outcomes of these models. Contemporary traffic prices, where they exist, do not however reflect the external congestion costs. In order to induce road users to make the correct decision, marginal external costs should be internalized. To assess these costs, public sector managers need accurate operational models. We show in this article that using a better representation and characterization of road traffic, via stochastic queueing models, leads to a more adequate reflection of the congestion costs involved. Using extensive numerical experiments, we show the superiority of the stochastic traffic flow models. |
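A sketch of the marginal-external-cost logic above using the simplest possible stochastic queue, M/M/1 (an assumption chosen for illustration; the article's models are richer). Expected time in system is W = 1/(mu - lam), and an extra road user imposes delay on everyone already in the queue; that externality is what congestion pricing should internalize. The value-of-time figure and rates are made up.

```python
# Marginal external congestion cost in an M/M/1 queue (illustrative model).

def w(lam, mu):
    """Expected time in system for an M/M/1 queue (requires lam < mu)."""
    return 1.0 / (mu - lam)

def marginal_external_cost(lam, mu, value_of_time=20.0, eps=1e-6):
    """Extra delay cost imposed on *other* users by one more arrival per hour:
    lam * dW/dlam * value_of_time, via a finite-difference derivative."""
    dw = (w(lam + eps, mu) - w(lam, mu)) / eps
    return lam * dw * value_of_time

mu = 2000.0                      # road "service rate", vehicles/hour
for lam in (1000.0, 1500.0, 1900.0):
    # The cost explodes nonlinearly as demand approaches capacity.
    print(lam, round(marginal_external_cost(lam, mu), 2))
```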
Reinforcement learning algorithms for solving classification problems | We describe a new framework for applying reinforcement learning (RL) algorithms to solve classification tasks by letting an agent act on the inputs and learn value functions. This paper describes how classification problems can be modeled using classification Markov decision processes and introduces the Max-Min ACLA algorithm, an extension of the novel RL algorithm called actor-critic learning automaton (ACLA). Experiments are performed using 8 datasets from the UCI repository, where our RL method is combined with multi-layer perceptrons that serve as function approximators. The RL method is compared to conventional multi-layer perceptrons and support vector machines, and the results show that our method slightly outperforms the multi-layer perceptron and performs equally well as the support vector machine. Finally, we describe many possible extensions to our basic method, leaving considerable room for future research to improve it further. |
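A much-simplified sketch of the classification-as-RL idea above, not the paper's Max-Min ACLA algorithm: episodes are one step long, the state is the input vector, the action is a predicted class label, and the reward is +1/-1 for correct/incorrect. The toy data and linear value approximator are assumptions for illustration.

```python
# One-step classification MDP with a linear action-value approximator.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: class = sign of the sum of the two features.
X = rng.normal(size=(500, 2))
y = (X.sum(axis=1) > 0).astype(int)

n_actions, lr = 2, 0.05
W = np.zeros((n_actions, 2))             # one weight row per action (class)

for epoch in range(20):
    for x, label in zip(X, y):
        q = W @ x                        # value of each action in this state
        a = int(np.argmax(q)) if rng.random() > 0.1 else int(rng.integers(n_actions))
        r = 1.0 if a == label else -1.0  # reward from the "environment"
        W[a] += lr * (r - q[a]) * x      # one-step temporal-difference update

preds = (X @ W.T).argmax(axis=1)         # greedy policy = classifier
print("train accuracy:", (preds == y).mean())
```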
Acoustic event detection in real life recordings | This paper presents a system for acoustic event detection in recordings from real-life environments. The events are modeled using a network of hidden Markov models, whose size and topology are chosen based on a study of isolated-event recognition. We also studied the effect of ambient background noise on event classification performance. On real-life recordings, we tested recognition of isolated sound events and event detection. For event detection, the system performs recognition and temporal positioning of a sequence of events. An accuracy of 24% was obtained in classifying isolated sound events into 61 classes. This corresponds to the accuracy of classifying between 61 events when mixed with ambient background noise at 0 dB signal-to-noise ratio. In event detection, the system is capable of recognizing almost one third of the events, but the temporal positioning of the events is not correct 84% of the time. |
Vision-based intelligent vehicles: State of the art and perspectives | Recently, a large emphasis has been devoted to Automatic Vehicle Guidance since the automation of driving tasks carries a large number of benefits, such as the optimization of the use of transport infrastructures, the improvement of mobility, the minimization of risks, travel time, and energy consumption. This paper surveys the most common approaches to the challenging task of Autonomous Road Following reviewing the most promising experimental solutions and prototypes developed worldwide using AI techniques to perceive the environmental situation by means of artificial vision. The most interesting results and trends in this field as well as the perspectives on the evolution of intelligent vehicles in the next decades are also sketched out. © 2000 Elsevier Science B.V. All rights reserved. |
An e-service infrastructure for power distribution | Delivering Web services based on data collected from distributed networks of smart devices presents several business and data-integration challenges for providers. Enterprise users need a scalable, standards-based mechanism that lets them run services without becoming experts in the technical aspects of service delivery. The Inside software infrastructure attempts to provide such a solution. Inside uses the Java 2 Enterprise Edition, the Open Services Gateway Initiative, and a suite of mediation components to address issues of scalability, dynamism, and transparency. |
CHAPTER 12 WORKFLOW ENGINE FOR CLOUDS | A workflow models a process as consisting of a series of steps that simplifies the complexity of execution and management of applications. Scientific workflows in domains such as high-energy physics and life sciences utilize distributed resources in order to access, manage, and process a large amount of data from a higher level. Processing and managing such large amounts of data require the use of a distributed collection of computation and storage facilities. These resources are often limited in supply and are shared among many competing users. The recent progress in virtualization technologies and the rapid growth of cloud computing services have opened a new paradigm in distributed computing for utilizing existing (and often cheaper) resource pools for on-demand and scalable scientific computing. Scientific Workflow Management Systems (WfMS) need to adapt to this new paradigm in order to leverage the benefits of cloud services. Cloud services vary in the levels of abstraction and hence the type of service they present to application users. Infrastructure virtualization enables providers such as Amazon to offer virtual hardware for use in compute- and data-intensive workflow applications. Platform-as-a-Service (PaaS) clouds expose a higher-level development and runtime environment for building and deploying workflow applications on cloud infrastructures. Such services may also expose domain-specific concepts for rapid application development. Further up in the cloud stack are Software-as-a-Service providers who offer end users with |
Antenna-filter-antenna arrays as a class of bandpass frequency-selective surfaces | A method is introduced for designing bandpass frequency-selective surfaces (FSSs) using arrays of antenna-filter-antenna (AFA) modules. An AFA module is a filter with radiation ports, which is obtained by integrating two antennas and a nonradiating resonant structure in between. AFA modules are designed based on circuit models and microwave filter design techniques. Three types of these AFA modules are designed using microstrip antennas and coplanar-waveguide resonators, and are used to form FSSs with three- and four-pole shaped bandpass response at 35 GHz. FSS structures are formed by arraying these modules in a periodic grid with an optimal cell size. The proposed concept and the design method are validated using numerical simulation (finite-element method), as well as experimental results. |
Social, economic and environmental capital : corporate citizenship in a new economy | Corporate Citizenship as a concept has been with us for many years now, but the 1990s saw a revitalisation of its main themes and issues, particularly around corporate social responsibility, business/community partnerships and social auditing/accountability. More recently, moves have been made, mostly in Europe, to reposition corporate citizenship more fully into new economics thinking. This paper examines this thinking and the effects this may have on the development of corporate citizenship as a new paradigm in the sustainability debate. |
Building Classifiers with Independency Constraints | In this paper we study the problem of classifier learning where the input data contains unjustified dependencies between some data attributes and the class label. Such cases arise for example when the training data is collected from different sources with different labeling criteria or when the data is generated by a biased decision process. When a classifier is trained directly on such data, these undesirable dependencies will carry over to the classifier’s predictions. In order to tackle this problem, we study the classification with independency constraints problem: find an accurate model for which the predictions are independent from a given binary attribute. We propose two solutions for this problem and present an empirical validation. |
Perceptually Optimized Image Rendering | We develop a framework for rendering photographic images by directly optimizing their perceptual similarity to the original visual scene. Specifically, over the set of all images that can be rendered on a given display, we minimize the normalized Laplacian pyramid distance (NLPD), a measure of perceptual dissimilarity that is derived from a simple model of the early stages of the human visual system. When rendering images acquired with a higher dynamic range than that of the display, we find that the optimization boosts the contrast of low-contrast features without introducing significant artifacts, yielding results of comparable visual quality to current state-of-the-art methods, but without manual intervention or parameter adjustment. We also demonstrate the effectiveness of the framework for a variety of other display constraints, including limitations on minimum luminance (black point), mean luminance (as a proxy for energy consumption), and quantized luminance levels (halftoning). We show that the method may generally be used to enhance details and contrast, and, in particular, can be used on images degraded by optical scattering (e.g., fog). Finally, we demonstrate the necessity of each of the NLPD components-an initial power function, a multiscale transform, and local contrast gain control-in achieving these results and we show that NLPD is competitive with the current state-of-the-art image quality metrics. |
An LDA-based Community Structure Discovery Approach for Large-Scale Social Networks | Community discovery has drawn significant research interest among researchers from many disciplines for its increasing application in multiple, disparate areas, including computer science, biology, social science and so on. This paper describes an LDA (latent Dirichlet allocation)-based hierarchical Bayesian algorithm, namely SSN-LDA (simple social network LDA). In SSN-LDA, communities are modeled as latent variables in the graphical model and defined as distributions over the social actor space. The advantage of SSN-LDA is that it only requires topological information as input. This model is evaluated on two research collaboration networks: CiteSeer and NanoSCI. The experimental results demonstrate that this approach is promising for discovering community structures in large-scale networks. |
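A sketch of the SSN-LDA idea from the abstract above: treat each social actor's neighbor list as a "document" over the actor vocabulary and fit ordinary LDA, so that topics play the role of communities (distributions over the social actor space). The toy graph is hypothetical, and sklearn's variational LDA stands in here for the paper's own inference machinery.

```python
# Communities as LDA topics over a graph's adjacency structure.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Tiny undirected graph with two obvious groups, {0,1,2} and {3,4,5},
# joined by a single bridge edge (2,3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
X = np.zeros((n, n))
for u, v in edges:
    X[u, v] += 1                 # "document" u contains "word" v
    X[v, u] += 1

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
membership = lda.transform(X)    # per-actor distribution over communities
print(membership.argmax(axis=1))  # hard community assignment per actor
```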
Camera Constraint-Free View-Based 3-D Object Retrieval | Recently, extensive research efforts have been dedicated to view-based methods for 3-D object retrieval due to the highly discriminative property of multiviews for 3-D object representation. However, most state-of-the-art approaches depend heavily on their own camera array settings for capturing views of 3-D objects. In order to move toward a general framework for 3-D object retrieval without the limitation of camera array restriction, a camera constraint-free view-based (CCFV) 3-D object retrieval algorithm is proposed in this paper. In this framework, each object is represented by a free set of views, which means that these views can be captured from any direction without camera constraint. For each query object, we first cluster all query views to generate the view clusters, which are then used to build the query models. For a more accurate 3-D object comparison, a positive matching model and a negative matching model are individually trained using positive and negative matched samples, respectively. The CCFV model is generated on the basis of the query Gaussian models by combining the positive matching model and the negative matching model. The CCFV removes the constraint of static camera array settings for view capturing and can be applied to any view-based 3-D object database. We conduct experiments on the National Taiwan University 3-D model database and the ETH 3-D object database. Experimental results show that the proposed scheme achieves better performance than state-of-the-art methods. |
Introduction and review | The papers from this section fall into four broad categories:
The first two papers in the section serve to illustrate the important role that halokinesis of the Zechstein salt played in controlling sedimentation in parts of the Central Graben. Smith et al. demonstrated that, even as early as the Triassic, salt movements were instrumental in controlling and focusing sedimentation in synforms developed on the surface of the salt. There also appears to be a salt-controlled ‘grain’ to the Jurassic pre-rift cover sequence.
This theme is developed further by Weston et al. in their use of analogue models to study the interaction between active faulting, sedimentation and salt movement. The objective of this ongoing research project is to improve our understanding of the factors controlling diapirism and its effects on the overburden sequence, particularly the geometry of beds close to the diapir where seismic data quality is generally poor.
Morgan and Cutts show that the Triassic of the Crawford Field is cut by a series of low-angle faults which appear to sole out towards the top of the Permian. Considerable block rotation has occurred, resulting in loss of section across the faults. This has serious implications for correlation work if, where present, such faults are not recognized. This, and other related papers on the East Irish Sea Basin, indicate. . . |
BlueTorrent: Cooperative Content Sharing for Bluetooth Users | People wish to enjoy their everyday lives in various ways, among which entertainment plays a major role. To improve everyday life through readier access to entertainment content, we propose BlueTorrent, a P2P file sharing application based on ubiquitous Bluetooth-enabled devices such as PDAs, cell phones, and smartphones. Using BlueTorrent, people can share audio/video content as they move about shopping malls, airports, subway stations, etc. BlueTorrent poses new challenges caused by limited bandwidth, short communication range, mobile users, and variable population density. A key ingredient is efficient peer discovery. This paper approaches the problem by analyzing the Bluetooth periodic inquiry mode and by finding the optimum inquiry/connection time settings. At the application layer, the BlueTorrent index/block dissemination protocol is then designed and analyzed. The entire system is integrated and implemented both in simulation and in an experimental testbed. Simulation and measurement results are used to evaluate and validate the performance of BlueTorrent in content sharing scenarios. |
Value of magnetic resonance imaging in the evaluation of sex-reassignment surgery in male-to-female transsexuals | Background: We investigated the value of magnetic resonance imaging (MRI) in the evaluation of sex-reassignment surgery in male-to-female transsexual patients. Methods: Ten male-to-female transsexual patients who underwent sex-reassignment surgery with inversion of combined penile and scrotal skin flaps for vaginoplasty were examined after surgery with MRI. Turbo spin-echo T2-weighted and spin-echo T1-weighted images were obtained in sagittal, coronal, and axial planes with a 1.5-T superconductive magnet. Images were acquired with and without an inflatable silicone vaginal tutor. The following parameters were evaluated: neovaginal depth, neovaginal inclination in the sagittal plane, presence of remnants of the corpus spongiosum and corpora cavernosa, and thickness of the rectovaginal septum. Results: The average neovaginal depth was 7.9 cm (range = 5–10 cm). The neovagina had a correct oblique inclination in the sagittal plane in four patients, no inclination in five, and an incorrect inclination in one. In seven patients, MRI showed remnants of the corpora cavernosa and/or of the corpus spongiosum; in three patients, no remnants were detected. The average thickness of the rectovaginal septum was 4 mm (range = 3–6 mm). Conclusion: MRI allows a detailed assessment of the pelvic anatomy after genital reconfiguration and provides information that can help the surgeon to adopt the most correct surgical approach. |
Teachers’ Use of Technology and Constructivism | Technology has changed the way we teach and the way we learn. Many learning theories can be used to apply and integrate this technology more effectively. There is a close relationship between technology and constructivism, the implementation of each benefiting the other. Constructivism holds that learning takes place in contexts, while technology refers to the designs and environments that engage learners. Recent efforts to integrate technology in the classroom have been made within a constructivist framework. This paper examines the definition of constructivism, the incorporation of technology into the classroom, successful technology integration, factors contributing to teachers’ use of technology, the role of technology in a constructivist classroom, teachers’ use of learning theories to enable more effective use of technology, learning with technology from a constructivist perspective, and constructivism as a framework for educational technology. The paper considers whether technology by itself can make the education process more effective or whether it needs an appropriate instructional theory to have a positive effect on the learner. Index Terms—Technology, Definition, Role, Constructivism, Benefits, Factors, Learning Theory |
Collaborative Consumption On Mobile Applications: A Study Of Multi-sided Digital Platform GoCatch | This paper examines the role of IT in developing collaborative consumption. We present a study of the multi-sided platform goCatch, which is widely recognized as a mobile application and digital disruptor in the Australian transport industry. From our investigation, we find that goCatch uses IT to create situational-based and object-based opportunities to enable collaborative consumption and, in turn, digital disruption of the incumbent industry. We also highlight the factors to consider in developing a mobile application that connects with customers and serves as a viable option for responding to competition. Such research is necessary in order to better understand how service providers extract business value from digital technologies to formulate new breakthrough strategies, design compelling new products and services, and transform management processes. Ongoing work will reveal how m-commerce service providers can extract business value from a collaborative consumption model. |
CS229 Problem Set #2 Solutions | Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/stanford/fall2015/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. (5) If you are an on-campus (non-SCPD) student, please print, fill out, and include a copy of the cover sheet (enclosed as the final page of this document), and include the cover sheet as the first page of your submission. SCPD students: If you are submitting on time without using late days, please submit your assignments through the SCPD office. Consult Piazza post 35 for details. Otherwise, please submit your assignments online as a single PDF file under 20 MB in size. If you have trouble submitting online, you can also email your submission to [email protected]. However, we strongly recommend using the website submission method, as it will provide confirmation of submission and also allow us to track and return your graded homework to you more easily. If you are scanning your document by cellphone, please check the Piazza forum for recommended cellphone scanning apps and best practices. 1. [15 points] Constructing kernels. In class, we saw that by choosing a kernel K(x, z) = φ(x)^T φ(z), we can implicitly map data to a high-dimensional space and have the SVM algorithm work in that space. One way to generate kernels is to explicitly define the mapping φ to a higher-dimensional space and then work out the corresponding K. However, in this question we are interested in the direct construction of kernels. I.e., suppose we have a function K(x, z) that we think gives an appropriate similarity measure for our learning problem, and we are considering plugging K into the SVM as the kernel function. However, for K(x, z) to be a valid kernel, it must correspond to an inner product in some higher-dimensional space resulting from some feature mapping φ. Mercer's theorem tells us that K(x, z) is a (Mercer) kernel if and only if, for any finite set {x^(1), ..., x^(m)}, the corresponding kernel (Gram) matrix is symmetric positive semidefinite. |
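A numerical check of that Mercer condition, offered as an illustrative sketch rather than a problem-set solution: build the Gram matrix of a candidate kernel on a finite sample and test that it is symmetric positive semidefinite.

```python
import numpy as np

def is_mercer_kernel(k, X, tol=1e-8):
    """Check the Mercer condition on a finite sample: the Gram matrix
    K_ij = k(x_i, x_j) must be symmetric positive semidefinite."""
    K = np.array([[k(a, b) for b in X] for a in X])
    return bool(np.allclose(K, K.T) and np.all(np.linalg.eigvalsh(K) >= -tol))

X = [np.random.randn(3) for _ in range(20)]
rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2))
print(is_mercer_kernel(rbf, X))                      # True: RBF is a valid kernel
print(is_mercer_kernel(lambda a, b: -rbf(a, b), X))  # False: its negation is not
```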
Pain Control Program improves family caregivers' knowledge of cancer pain management | Background: The majority of cancer treatment is provided in outpatient settings. Family caregivers' (FCs') knowledge and beliefs about pain and its management are critical components of effective care. Objective: This study's aim was to evaluate the efficacy of a psychoeducational intervention, compared to control, to increase FCs' knowledge of cancer pain management. Intervention/method: FCs of oncology outpatients were randomized together with the patients into the PRO-SELF© Pain Control Program (n=58) or a control group (n=54). FCs completed a demographic questionnaire and the Family Pain Questionnaire (FPQ) at the beginning and end of the study to assess their knowledge about pain and its management. The intervention consisted of nurse coaching, home visits, and phone calls that occurred over 6 weeks. Results: One hundred and twelve FCs (60% female) with a mean age of 63 years (SD 10.7) participated. Compared to FCs in the control group, FCs in the PRO-SELF© group had significantly higher knowledge scores on all of the single items on the FPQ, except for the item "cancer pain can be relieved", as well as for the total FPQ score. Conclusion: The use of a knowledge and attitude survey like the FPQ as part of a psychoeducational intervention provides an effective foundation for FC education about cancer pain management. Implications for practice: Oncology nurses can use FCs' responses to the FPQ to individualize teaching and spend more time on identified knowledge deficits. This individualized approach to FC education may save staff time and improve patient outcomes. |
Norepinephrine turnover is increased in suprabulbar subcortical brain regions and is related to whole-body sympathetic activity in human heart failure. | BACKGROUND
Although it is established that heightened sympathetic drive exists in congestive heart failure (CHF), the reflex processes by which this may occur and the sites in the central nervous system that may be responsible for mediating this process are not yet fully elucidated.
METHODS AND RESULTS
Eight patients with moderate to severe CHF and 8 healthy control subjects underwent simultaneous arterial and bilateral internal jugular venous blood sampling and cerebral venous blood pool scanning for anatomical determination of the origin of internal jugular venous blood flow. We estimated sympathetic nervous activity by measuring total body norepinephrine (NE) spillover using radiotracer methodology and determined brain NE turnover by measuring the internal jugular overflow of NE and its lipophilic metabolites, 3-methoxy-4-hydroxyphenylglycol and 3,4-dihydroxyphenylglycol. Suprabulbar subcortical turnover of NE was significantly greater in CHF patients than in the healthy group (2.77 +/- 0.75 versus 0.66 +/- 0.40 nmol/min, P<0.05). There was a significant positive correlation between suprabulbar subcortical turnover of NE and total body NE spillover (r=0.62, P=0.01).
CONCLUSIONS
This study, for the first time, demonstrates elevated suprabulbar subcortical noradrenergic activity in human CHF and identifies a positive correlation between this and the level of whole-body NE spillover. The findings suggest that the activation of noradrenergic neurons projecting rostrally from the brain stem mediates sympathetic nervous stimulation in CHF. |
Model Ensemble for Click Prediction in Bing Search Ads | Accurate estimation of the click-through rate (CTR) in sponsored ads significantly impacts the user search experience and businesses' revenue; even a 0.1% accuracy improvement can yield additional earnings in the hundreds of millions of dollars. CTR prediction is generally formulated as a supervised classification problem. In this paper, we share our experience and lessons learned in model ensemble design, along with our innovations. Specifically, we present 8 ensemble methods and evaluate them on our production data. Boosting neural networks with gradient boosting decision trees turns out to be the best. With larger training data, there is a nearly 0.9% AUC improvement in offline testing and significant click yield gains in online traffic. In addition, we share our experience and lessons learned in improving the quality of training. |
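A sketch of one plausible reading of "boosting neural networks with gradient boosting decision trees", with assumptions made explicit: synthetic data replaces the production ad logs, and the ensemble is a simple stacking arrangement in which the GBDT consumes the raw features plus the network's click score; the paper's exact ensembling recipe may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: a neural network produces a base click score.
nn = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0).fit(X_tr, y_tr)

# Stage 2: GBDT trains on raw features plus the NN score, correcting its errors.
aug_tr = np.c_[X_tr, nn.predict_proba(X_tr)[:, 1]]
aug_te = np.c_[X_te, nn.predict_proba(X_te)[:, 1]]
gbdt = GradientBoostingClassifier(random_state=0).fit(aug_tr, y_tr)

print("NN AUC:     ", roc_auc_score(y_te, nn.predict_proba(X_te)[:, 1]))
print("NN+GBDT AUC:", roc_auc_score(y_te, gbdt.predict_proba(aug_te)[:, 1]))
```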
Dictionary Learning | We describe methods for learning dictionaries that are appropriate for the representation of given classes of signals and multisensor data. We further show that dimensionality reduction based on dictionary representation can be extended to address specific tasks such as data analysis or classification when the learning includes a class-separability criterion in the objective function. The benefits of dictionary learning clearly show that a proper understanding of causes underlying the sensed world is key to task-specific representation of relevant information in high-dimensional data sets. |
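A minimal dictionary-learning sketch, assuming image patches as the signal class and scikit-learn's MiniBatchDictionaryLearning as a stand-in for the methods described; the class-separability objective mentioned above is not included.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

# Hypothetical signal class: 8x8 patches from a random stand-in image.
image = np.random.rand(64, 64)
patches = extract_patches_2d(image, (8, 8), max_patches=500, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)   # remove each patch's DC component

# Learn an overcomplete dictionary together with sparse codes.
dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0, random_state=0)
codes = dico.fit_transform(X)

print("dictionary atoms:", dico.components_.shape)            # (100, 64)
print("mean nonzeros per code:", (codes != 0).sum(axis=1).mean())
```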
Beware, Your Hands Reveal Your Secrets! | Research on attacks which exploit video-based side-channels to decode text typed on a smartphone has traditionally assumed that the adversary is able to leverage some information from the screen display (say, a reflection of the screen or a low resolution video of the content typed on the screen). This paper introduces a new breed of side-channel attack on the PIN entry process on a smartphone which relies entirely on the spatio-temporal dynamics of the hands during typing to decode the typed text. Implemented on a dataset of 200 videos of the PIN entry process on an HTC One phone, we show that the attack breaks an average of over 50% of the PINs on the first attempt and an average of over 85% of the PINs in ten attempts. Because the attack can be conducted in such a way as not to raise suspicion (i.e., the adversary does not have to direct the camera at the screen), we believe that it is very likely to be adopted by adversaries who seek to stealthily steal sensitive private information. As users conduct more and more of their computing transactions on mobile devices in the open, the paper calls for the community to take a closer look at the risks posed by the now ubiquitous camera-enabled devices. |
SCENARIO-BASED EVALUATION OF ENTERPRISE ARCHITECTURE - A top-down approach for chief information officer decision making | As the primary stakeholder for the Enterprise Architecture, the Chief Information Officer (CIO) is responsible for the evolution of the enterprise IT system. An important part of the CIO role is therefore to make decisions about strategic and complex IT matters. This paper presents a cost-effective and scenario-based approach for providing the CIO with an accurate basis for decision making. Scenarios are analyzed and compared against each other using a number of problem-specific, easily measured system properties identified in the literature. In order to test the usefulness of the approach, a case study has been carried out. A CIO needed guidance on how to assign functionality and data within four overlapping systems. The results are quantifiable and can be presented graphically, thus providing a cost-efficient and easily understood basis for decision making. The study shows that the scenario-based approach can make complex Enterprise Architecture decisions understandable for CIOs and other business-oriented stakeholders. |
Gold nanocages as multifunctional materials for nanomedicine | Featured by tunable localized surface plasmon resonance peaks in the near-infrared region and hollow interiors, Au nanocages represent a novel class of multifunctional nanomaterials that have gained considerable attention in recent years. This short review summarizes our recent work on the capabilities of Au nanocages in nanomedicine. We start with a brief description of the synthesis of Au nanocages and highlight our recent protocols for the scaled-up production of Au nanocages. We then use a number of examples to illustrate how Au nanocages can contribute to nanomedicine with respect to both diagnosis and therapy. |
The Development of Digital Library User Interface by Using Responsive Web Design and User Experience | A digital library exists to provide services to users, so user experience and adaptation to diverse display devices are the main considerations in building one. Because a digital library offers a large amount of information and is visited by many users, its content should be accessible anywhere, anytime, and from a variety of devices. This research focuses on designing a digital library interface that adapts to user experience and to a variety of devices. The study uses user experience testing to evaluate the digital library interface and Responsive Web Design to develop it. Confidence intervals are used to analyze the test data. Implementing user experience findings and Responsive Web Design in the proposed interface design can improve users' estimated success rate and task time. |
Video-Based Person Re-Identification With Accumulative Motion Context | Video-based person re-identification plays a central role in realistic security and video surveillance. In this paper, we propose a novel accumulative motion context (AMOC) network for addressing this important problem, which effectively exploits the long-range motion context for robustly identifying the same person under challenging conditions. Given a video sequence of the same or different persons, the proposed AMOC network jointly learns appearance representation and motion context from a collection of adjacent frames using a two-stream convolutional architecture. Then, AMOC accumulates clues from motion context by recurrent aggregation, allowing effective information flow among adjacent frames and capturing the dynamic gist of the persons. The architecture of AMOC is end-to-end trainable, and thus motion context can be adapted to complement appearance clues under unfavorable conditions (e.g., occlusions). Extensive experiments are conducted on three public benchmark data sets, i.e., the iLIDS-VID, PRID-2011, and MARS data sets, to investigate the performance of AMOC. The experimental results demonstrate that the proposed AMOC network significantly outperforms state-of-the-art methods for video-based re-identification, confirming the advantage of exploiting long-range motion context. |
Comparative Evaluation of Spin-Transfer-Torque and Magnetoelectric Random Access Memory | Spin-transfer torque random access memory (STT-RAM), as a promising nonvolatile memory technology, faces challenges of high write energy and low density. The recently developed magnetoelectric random access memory (MeRAM) enables the possibility of overcoming these challenges by the use of voltage-controlled magnetic anisotropy (VCMA) effect and achieves high density, fast speed, and low energy simultaneously. As both STT-RAM and MeRAM suffer from the reliability problem of write errors, we implement a fast Landau-Lifshitz-Gilbert equation-based simulator to capture their write error rate (WER) under process and temperature variation. We utilize a multi-write peripheral circuit to minimize WER and design reliable STT-RAM and MeRAM. With the same acceptable WER, MeRAM shows advantages of 83% faster write speed, 67.4% less write energy, 138% faster read speed, and 28.2% less read energy compared with STT-RAM. Benefiting from the VCMA effect, MeRAM also achieves twice the density of STT-RAM with a 32 nm technology node, and this density difference is expected to increase with technology scaling down. |
Accurate rough terrain estimation with space-carving kernels | Accurate terrain estimation is critical for autonomous offroad navigation. Reconstruction of a 3D surface allows rough and hilly ground to be represented, yielding faster driving and better planning and control. However, data from a 3D sensor samples the terrain unevenly, quickly becoming sparse at longer ranges and containing large voids because of occlusions and inclines. The proposed approach uses online kernel-based learning to estimate a continuous surface over the area of interest while providing upper and lower bounds on that surface. Unlike other approaches, visibility information is exploited to constrain the terrain surface and increase precision, and an efficient gradient-based optimization allows for realtime implementation. |
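A sketch of the general recipe (kernel-based regression producing a continuous surface with upper and lower bounds), under assumptions: a 1-D toy terrain profile and an off-the-shelf Gaussian-process regressor stand in for the paper's space-carving kernels and visibility constraints.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical sparse, unevenly sampled terrain heights from a range sensor.
rng = np.random.default_rng(0)
x_obs = np.sort(rng.uniform(0, 10, 25))[:, None]
z_obs = np.sin(x_obs[:, 0]) + 0.05 * rng.normal(size=25)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3))
gp.fit(x_obs, z_obs)

# Continuous surface estimate with bounds over the area of interest.
x_query = np.linspace(0, 10, 200)[:, None]
z_mean, z_std = gp.predict(x_query, return_std=True)
upper, lower = z_mean + 2 * z_std, z_mean - 2 * z_std
print(f"max predictive std (widest bound): {z_std.max():.3f}")
```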
Keyphrase Extraction Using Knowledge Graphs | Extracting keyphrases from documents automatically is an important and interesting task since keyphrases provide a quick summarization for documents. Although many efforts have been made on keyphrase extraction, most of the existing methods (the co-occurrence-based methods and the statistic-based methods) do not take semantics into full consideration. The co-occurrence-based methods heavily depend on the co-occurrence relations between two words in the input document, which may ignore many semantic relations. The statistic-based methods exploit an external text corpus to enrich the document, which inevitably introduces more unrelated relations. In this paper, we propose a novel approach to extract keyphrases using knowledge graphs, based on which we can detect the latent relations of two keyterms (i.e., noun words and named entities) without introducing much noise. Extensive experiments over real data show that our method outperforms the state-of-the-art methods, including the graph-based co-occurrence methods and statistic-based clustering methods. |
A multi-mode cavity filter with Jerusalem Cross structure resonator | In this paper, a novel multi-mode dielectric resonator with a Jerusalem Cross metal structure sandwiched in the middle is proposed. This resonator supports a pair of degenerate modes and another nearby mode. These three modes are used in a cylindrical cavity to generate two transmission zeros and one transmission pole, so a single-cavity multi-mode bandpass filter is designed. In order to excite the three modes, two ports are placed orthogonally, and two tuning screws are needed to provide capacitance effects similar to those of the port pins. Simulation results show that the cavity with the resonator successfully acts as a bandpass filter, and measurement results also prove this concept. |
Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems | Particle swarm optimization (PSO) is a population-based swarm intelligence algorithm that shares many similarities with evolutionary computation techniques. However, the PSO is driven by the simulation of a social psychological metaphor motivated by the collective behavior of birds and other social organisms instead of the survival of the fittest individual. Inspired by the classical PSO method and quantum mechanics theories, this work presents novel quantum-behaved PSO (QPSO) approaches using a mutation operator with a Gaussian probability distribution. The application of a Gaussian mutation operator instead of random sequences in QPSO is a powerful strategy to improve the QPSO performance in preventing premature convergence to local optima. In this paper, new combinations of QPSO and the Gaussian probability distribution are employed in well-studied continuous optimization problems of engineering design. Two case studies are described and evaluated in this work. Our results indicate that the Gaussian QPSO approaches handle such problems efficiently in terms of precision and convergence and, in most cases, outperform the results presented in the literature. |
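A compact sketch of a QPSO iteration with a Gaussian mutation operator in place of the usual uniform random sequences; this is one variant consistent with the description above, not a reproduction of the paper's exact algorithm, and the sphere function is only a placeholder objective.

```python
import numpy as np

def qpso_gaussian(f, dim, n=30, iters=200, beta=0.75, seed=0):
    """Minimize f with quantum-behaved PSO using Gaussian-distributed mutation."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    pbest, pcost = x.copy(), np.array([f(p) for p in x])
    for _ in range(iters):
        gbest = pbest[pcost.argmin()]
        mbest = pbest.mean(axis=0)                     # mean best position
        phi = rng.random((n, dim))
        p = phi * pbest + (1 - phi) * gbest            # local attractors
        u = np.abs(rng.normal(size=(n, dim))) + 1e-12  # Gaussian instead of uniform
        sign = np.where(rng.random((n, dim)) < 0.5, 1.0, -1.0)
        x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        cost = np.array([f(xi) for xi in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
    return pbest[pcost.argmin()], pcost.min()

sphere = lambda v: float(np.sum(v ** 2))
best, val = qpso_gaussian(sphere, dim=5)
print(val)   # should approach 0
```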
An Evaluation of Distributed Concurrency Control | Increasing transaction volumes have led to a resurgence of interest in distributed transaction processing. In particular, partitioning data across several servers can improve throughput by allowing servers to process transactions in parallel. But executing transactions across servers limits the scalability and performance of these systems. In this paper, we quantify the effects of distribution on concurrency control protocols in a distributed environment. We evaluate six classic and modern protocols in an in-memory distributed database evaluation framework called Deneva, providing an apples-to-apples comparison between each. Our results expose severe limitations of distributed transaction processing engines. Moreover, in our analysis, we identify several protocol-specific scalability bottlenecks. We conclude that to achieve truly scalable operation, distributed concurrency control solutions must seek a tighter coupling with either novel network hardware (in the local area) or applications (via data modeling and semantically-aware execution), or both. |
Emotion recognition from speech using global and local prosodic features | In this paper, global and local prosodic features extracted from sentences, words, and syllables are proposed for speech emotion or affect recognition. In this work, duration, pitch, and energy values are used to represent the prosodic information for recognizing emotions from speech. Global prosodic features represent gross statistics such as the mean, minimum, maximum, standard deviation, and slope of the prosodic contours. Local prosodic features represent the temporal dynamics of the prosody. In this work, global and local prosodic features are analyzed separately and in combination at different levels for the recognition of emotions. In this study, we have also explored words and syllables at different positions (initial, middle, and final) separately, to analyze their contribution toward the recognition of emotions. All the studies are carried out using a simulated Telugu emotion speech corpus (IITKGP-SESC). These results are compared with the results of the internationally known Berlin emotion speech corpus (Emo-DB). Support vector machines are used to develop the emotion recognition models. The results indicate that recognition performance using local prosodic features is better than that using global prosodic features. Words in the final position of sentences, and syllables in the final position of words, exhibit more emotion-discriminative information than words and syllables in other positions. |
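A small sketch of the feature extraction described above, with assumptions: a synthetic pitch contour stands in for real prosody, and duration and energy statistics would be computed analogously before feeding a support vector machine.

```python
import numpy as np

def global_prosodic_features(contour):
    """Gross statistics of a prosodic contour: mean, min, max, std, slope."""
    slope = np.polyfit(np.arange(len(contour)), contour, 1)[0]
    return np.array([contour.mean(), contour.min(),
                     contour.max(), contour.std(), slope])

def local_prosodic_features(contour, n_segments=3):
    """Temporal dynamics: the same statistics over initial/middle/final segments."""
    return np.concatenate([global_prosodic_features(s)
                           for s in np.array_split(contour, n_segments)])

pitch = 120 + 30 * np.sin(np.linspace(0, np.pi, 100))   # hypothetical F0 contour (Hz)
features = np.concatenate([global_prosodic_features(pitch),
                           local_prosodic_features(pitch)])
print(features.shape)   # (20,) -> input to an SVM emotion classifier
```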
Supporting Social Engagement for Young Audiences with Serious Games and Virtual Environments in Museums | Considering the shift of museums towards digital experiences that can satiate the interests of their young audiences, we suggest an integrated schema for socially engaging large visitor groups. As a means to present our position we propose a framework for audience involvement with complex educational material, combining serious games and virtual environments along with a theory of contextual learning in museums. We describe the research methodology for validating our framework, including the description of a testbed application and results from existing studies with children in schools, summer camps, and a museum. Such findings serve both as evidence for the applicability of our position and as a guidepost for the direction we should move to foster richer social engagement of young crowds. |
Analysis of participation in an online photo-sharing community: A multidimensional perspective | In recent years we have witnessed a significant growth of social-computing communities—online services in which users share information in various forms. As content contributions from participants are critical to the viability of these communities, it is important to understand what drives users to participate and share information with others in such settings. We extend previous literature on user contribution by studying the factors that are associated with various forms of participation in a large online photo-sharing community. Using survey and system data, we examine four different forms of participation and consider the differences between these forms. We build on theories of motivation to examine the relationship between users’ participation and their motivations with respect to their tenure in the community. Amongst our findings, we identify individual motivations (both extrinsic and intrinsic) that underpin user participation, and their effects on different forms of information sharing; we show that tenure in the community does affect participation, but that this effect depends on the type of participation activity. Finally, we demonstrate that tenure in the community has a weak moderating effect on a number of motivations with regard to their effect on participation. Directions for future research, as well as implications for theory and practice, are discussed. |
Visual Feature Attribution Using Wasserstein GANs | Attributing the pixels of an input image to a certain category is an important and well-studied problem in computer vision, with applications ranging from weakly supervised localisation to understanding hidden effects in the data. In recent years, approaches based on interpreting a previously trained neural network classifier have become the de facto state-of-the-art and are commonly used on medical as well as natural image datasets. In this paper, we discuss a limitation of these approaches which may lead to only a subset of the category specific features being detected. To address this problem we develop a novel feature attribution technique based on Wasserstein Generative Adversarial Networks (WGAN), which does not suffer from this limitation. We show that our proposed method performs substantially better than the state-of-the-art for visual attribution on a synthetic dataset and on real 3D neuroimaging data from patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD). For AD patients the method produces compellingly realistic disease effect maps which are very close to the observed effects. |
Multiple identities in decentralized Spain: the case of Catalonia | The persistence of a dual self-identification expressed by citizens in the Spanish Comunidades Autónomas (nationalities and regions) is one of the main features of centre-periphery relations in democratic Spain. This 'dual identity' or 'compound nationality' incorporates, in variable proportions individually or subjectively asserted, both state/national and ethnoterritorial identities with no apparent exclusion. It characterises the ambivalent and dynamic nature of spatial politics in decentralized Spain. A succinct review of the main developments in Spain's contemporary history is carried out in order to provide a background for the discussion of the various identities expressed by citizens in Catalonia. A segmentation analysis reviews the various forms of Catalan self-identification, among which 'duality' is to be underlined. |
Graphene-based polymer nanocomposites | The chapter gives general information about graphene, namely its structure, properties and methods of preparation, and highlights the methods for the preparation of graphene-based polymer nanocomposites. |
Flip-Rotate-Pooling Convolution and Split Dropout on Convolution Neural Networks for Image Classification | This paper presents a new version of Dropout called Split Dropout (sDropout) and rotational convolution techniques to improve CNNs’ performance on image classification. The widely used standard Dropout has the advantage of preventing deep neural networks from overfitting by randomly dropping units during training. Our sDropout randomly splits the data into two subsets and keeps both, rather than discarding one subset. We also introduce two rotational convolution techniques, i.e., rotate-pooling convolution (RPC) and flip-rotate-pooling convolution (FRPC), to boost CNNs’ robustness to rotation transformations. These two techniques encode rotation invariance into the network without adding extra parameters. Experimental evaluations on the ImageNet 2012 classification task demonstrate that sDropout not only enhances performance but also converges faster. Additionally, RPC and FRPC make CNNs more robust to rotation transformations. Overall, FRPC together with sDropout brings a 1.18% (model of Zeiler and Fergus [24], 10-view, top-1) accuracy increase on the ImageNet 2012 classification task compared to the original network. |
Instilling new habits: addressing implicit bias in healthcare professionals. | There appears to be a fundamental inconsistency between research which shows that some minority groups consistently receive lower quality healthcare and the literature indicating that healthcare workers appear to hold equality as a core personal value. Recent evidence using Implicit Association Tests suggests that these disparities in outcome may in part be due to social biases that are primarily unconscious. In some individuals the activation of these biases may be also facilitated by the high levels of cognitive load associated with clinical practice. However, a range of measures, such as counter-stereotypical stimuli and targeted experience with minority groups, have been identified as possible solutions in other fields and may be adapted for use within healthcare settings. We suggest that social bias should not be seen exclusively as a problem of conscious attitudes which need to be addressed through increased awareness. Instead the delivery of bias free healthcare should become a habit, developed through a continuous process of practice, feedback and reflection. |
Improving Distant Supervision for Information Extraction Using Label Propagation Through Lists | Because of polysemy, distant labeling for information extraction leads to noisy training data. We describe a procedure for reducing this noise by using label propagation on a graph in which the nodes are entity mentions, and mentions are coupled when they occur in coordinate list structures. We show that this labeling approach leads to good performance even when off-the-shelf classifiers are used on the distantly-labeled data. |
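A toy sketch of the propagation step, with an invented coupling graph (real graphs would couple entity mentions that co-occur in coordinate lists): distant labels are clamped at seed mentions and iteratively averaged across coupled neighbors.

```python
import numpy as np

def propagate_labels(W, y_init, seed_mask, iters=50):
    """Label propagation: W couples mentions, y_init holds distant labels,
    and seed_mask marks mentions whose labels stay clamped."""
    d_inv = 1.0 / np.maximum(W.sum(axis=1), 1e-12)
    y = y_init.astype(float).copy()
    for _ in range(iters):
        y = d_inv * (W @ y)               # average the neighbors' labels
        y[seed_mask] = y_init[seed_mask]  # clamp the distantly-labeled seeds
    return y

# Mentions 0-1-2 coupled in one coordinate list, mentions 3-4 in another.
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
y_init = np.array([1.0, 0, 0, 0, 0])    # one distant positive label
seed_mask = np.array([True, False, False, False, False])
print(propagate_labels(W, y_init, seed_mask))  # label spreads within the first list
```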
Expressing Emotions Through Color, Sound, and Vibration with an Appearance-Constrained Social Robot | Many researchers are now dedicating their efforts to studying interactive modalities such as facial expressions, natural language, and gestures. These modalities make communication between robots and individuals more natural. However, many robots currently in use are appearance-constrained and unable to perform facial expressions and gestures. In addition, although humanoid-oriented techniques are promising, they are time- and cost-consuming, which leads to many technical difficulties in most research studies. To increase interactive efficiency and decrease costs, we alternatively focus on three interaction modalities and their combinations, namely color, sound, and vibration. We conduct a structured study to evaluate the effects of the three modalities on a human's emotional perception of our simple-shaped robot "Maru." Our findings offer insights into human-robot affective interactions, which can be particularly useful for appearance-constrained social robots. The contribution of this work is not so much the explicit parameter settings but rather a deeper understanding of how to express emotions through the simple modalities of color, sound, and vibration, while providing a set of recommended expressions that HRI researchers and practitioners can readily employ. |
Automatic player labeling, tracking and field registration and trajectory mapping in broadcast soccer video | In this article, we present a method to perform automatic player trajectory mapping based on player detection, unsupervised labeling, efficient multi-object tracking, and playfield registration in broadcast soccer videos. The player detector determines the players' positions and scales by combining dominant-color-based background subtraction with a boosting detector using Haar features. We first learn the dominant color with an accumulated color histogram at the beginning of processing, then use the player detector to collect hundreds of player samples, and learn a player appearance codebook by unsupervised clustering. In a soccer game, a player can be labeled as one of four categories: one of the two teams, referee, or outlier. The learning capability enables the method to generalize well to different videos without any manual initialization. With the dominant color and player appearance model, we can locate and label each player. After that, we perform multi-object tracking using Markov Chain Monte Carlo (MCMC) data association to generate player trajectories. Several data-driven dynamics are proposed to improve the Markov chain's efficiency, such as label consistency, motion consistency, and track length. Finally, we extract key points, find the mapping from the image plane to the standard field model, and map players' positions and trajectories to the field. Extensive experimental results on FIFA World Cup 2006 videos demonstrate that the method reaches high detection and labeling precision, tracks reliably in scenes with player occlusion, moderate camera motion, and pose variation, and yields promising field registration results. |
Novel Color LBP Descriptors for Scene and Image Texture Classification | Four novel color Local Binary Pattern (LBP) descriptors are presented in this paper for scene image and image texture classification with applications to image search and retrieval. The oRGB-LBP descriptor is derived by concatenating the LBP features of the component images in the oRGB color space. The Color LBP Fusion (CLF) descriptor is constructed by integrating the LBP descriptors from different color spaces; the Color Grayscale LBP Fusion (CGLF) descriptor is derived by integrating the grayscale-LBP descriptor and the CLF descriptor; and the CGLF+PHOG descriptor is obtained by integrating the Pyramid of Histogram of Orientation Gradients (PHOG) and the CGLF descriptor. Feature extraction applies the Enhanced Fisher Model (EFM) and image classification is based on the nearest neighbor classification rule (EFM-NN). The proposed image descriptors and the feature extraction and classification methods are evaluated using three grand challenge databases and are shown to improve upon the classification performance of existing methods. |
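A sketch of the per-channel LBP idea underlying these descriptors, with assumptions: scikit-image's uniform LBP on a random stand-in image and plain histogram concatenation; the oRGB conversion, fusion across color spaces, PHOG, and the EFM step are omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def color_lbp_descriptor(rgb, P=8, R=1.0):
    """Compute uniform LBP per color component and concatenate the histograms."""
    hists = []
    for c in range(rgb.shape[2]):
        lbp = local_binary_pattern(rgb[:, :, c], P, R, method="uniform")
        h, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        hists.append(h)
    return np.concatenate(hists)

image = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)   # stand-in image
print(color_lbp_descriptor(image).shape)   # (30,): (P+2) bins x 3 channels
```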
CHARACTERIZATION AND ANTIBACTERIAL STUDY OF PUMPKIN SEED OIL (CUCURBITA PEPO) | The current research deals with the extraction, physicochemical, and antimicrobial parameters of pumpkin seed oil (Cucurbita pepo) of an arid zone variety of Pakistan. Pumpkin is a nutritive and unique plant commonly used as a vegetable all around the world. Its seeds and rinds are mostly thrown away after use; however, they are rich in proteins and fatty acids. In Pakistan, usage of pumpkin is not as frequent as other vegetables, and little importance is given to its seed. Therefore, the present study was designed to examine the physicochemical and antimicrobial properties of pumpkin seed oil (Cucurbita pepo). Results showed that the pumpkin seed oil had an acid value of 0.83667 (mg KOH/g oil), a saponification value of 194.606 (mg KOH/g oil), a peroxide value of 6.74 (meq O2/kg oil), an iodine value of 97.9766 (g of I2/100 g oil), and an ester value of 193.7640, which are within the range of the standard levels. Gas chromatography analysis showed the presence of five major free fatty acids (linoleic acid, oleic acid, palmitic acid, stearic acid, and linolenic acid), among which linoleic and oleic acids are the major ones. The oil also showed good antimicrobial activity against S. aureus, with a zone of inhibition of 15 mm. Therefore, it is feasible to be used as edible oil and for other purposes. |
Titanium dioxide and zinc oxide nanoparticles in sunscreens: focus on their safety and effectiveness. | Sunscreens are used to provide protection against adverse effects of ultraviolet (UV)B (290-320 nm) and UVA (320-400 nm) radiation. According to the United States Food and Drug Administration, the protection factor against UVA should be at least one-third of the overall sun protection factor. Titanium dioxide (TiO2) and zinc oxide (ZnO) minerals are frequently employed in sunscreens as inorganic physical sun blockers. As TiO2 is more effective in UVB and ZnO in the UVA range, the combination of these particles assures a broad-band UV protection. However, to solve the cosmetic drawback of these opaque sunscreens, microsized TiO2 and ZnO have been increasingly replaced by TiO2 and ZnO nanoparticles (NPs) (<100 nm). This review focuses on significant effects on the UV attenuation of sunscreens when microsized TiO2 and ZnO particles are replaced by NPs and evaluates physicochemical aspects that affect effectiveness and safety of NP sunscreens. With the use of TiO2 and ZnO NPs, the undesired opaqueness disappears but the required balance between UVA and UVB protection can be altered. Utilization of mixtures of micro- and nanosized ZnO dispersions and nanosized TiO2 particles may improve this situation. Skin exposure to NP-containing sunscreens leads to incorporation of TiO2 and ZnO NPs in the stratum corneum, which can alter specific NP attenuation properties due to particle-particle, particle-skin, and skin-particle-light physicochemical interactions. Both sunscreen NPs induce (photo)cyto- and genotoxicity and have been sporadically observed in viable skin layers especially in case of long-term exposures and ZnO. Photocatalytic effects, the highest for anatase TiO2, cannot be completely prevented by coating of the particles, but silica-based coatings are most effective. Caution should still be exercised when new sunscreens are developed and research that includes sunscreen NP stabilization, chronic exposures, and reduction of NPs' free-radical production should receive full attention. |
The influence of interactions between accommodation and convergence on the lag of accommodation. | Several models of myopia predict that growth of axial length is stimulated by blur. Accommodative lag has been suggested as an important source of blur in the development of myopia, and this study has modeled how cross-link interactions between accommodation and convergence might interact with uncorrected distance heterophoria and refractive error to influence accommodative lag. Accommodative lag was simulated with two models of interactions between accommodation and convergence (one with and one without adaptable tonic elements). Simulations of both models indicate that uncorrected hyperopia and esophoria increase the lag of accommodation, while uncorrected myopia and exophoria decrease the lag or introduce a lead of accommodation in response to the near (40 cm) stimulus. These effects were increased when the gain of either cross-link, accommodative convergence (AC/A) or convergence accommodation (CA/C), was increased within a moderate range of values while the other was fixed at a normal value (clamped condition). These effects were exaggerated when both the AC/A and CA/C ratios were increased (covaried condition), and the effects of cross-link gain were negated when an increase of one cross-link (e.g. AC/A) was accompanied by a reduction of the other cross-link (e.g. CA/C) (reciprocal condition). The inclusion of tonic adaptation in the model reduced steady-state errors of accommodation for all conditions except when the AC/A ratio was very high (2 MA/D). Combinations of cross-link interactions between accommodation and convergence that resemble either clamped or reciprocal patterns occur naturally in clinical populations. Simulations suggest that these two patterns of abnormal cross-link interactions could affect the progression of myopia differently. Adaptable tonic accommodation and tonic vergence could potentially reduce the progression of myopia by reducing the lag of accommodation. |
Game Theory for Cyber Security and Privacy | In this survey, we review the existing game-theoretic approaches for cyber security and privacy issues, categorizing their application into two classes, security and privacy. To show how game theory is utilized in cyberspace security and privacy, we select research regarding three main applications: cyber-physical security, communication security, and privacy. We present game models, features, and solutions of the selected works and describe their advantages and limitations from design to implementation of the defense mechanisms. We also identify some emerging trends and topics for future research. This survey not only demonstrates how to employ game-theoretic approaches to security and privacy but also encourages researchers to employ game theory to establish a comprehensive understanding of emerging security and privacy problems in cyberspace and potential solutions. |
Providers do not verify patient identity during computer order entry. | INTRODUCTION
Improving patient identification (ID), by using two identifiers, is a Joint Commission safety goal. Appropriate identifiers include name, date of birth (DOB), or medical record number (MRN).
OBJECTIVES
The objective was to determine how frequently providers verify patient ID during computerized provider order entry (CPOE).
METHODS
This was a prospective study using simulated scenarios with an eye-tracking device. Medical providers were asked to review 10 charts (scenarios), select the patient from a computer alphabetical list, and order tests. Two scenarios had embedded ID errors compared to the computer (incorrect DOB or misspelled last name), and a third had a potential error (second patient on alphabetical list with same last name). Providers were not aware the focus was patient ID. Verifying patient ID was defined as looking at name and either DOB or MRN on the computer.
RESULTS
Twenty-five of 25 providers (100%; 95% confidence interval [CI] = 86% to 100%) selected the correct patient when there was a second patient with the same last name. Two of 25 (8%; 95% CI = 1% to 26%) noted the DOB error; the remaining 23 ordered tests on an incorrect patient. One of 25 (4%, 95% CI = 0% to 20%) noted the last name error; 12 ordered tests on an incorrect patient. No participant (0%, 0/107; 95% CI = 0% to 3%) verified patient ID by looking at MRN prior to selecting a patient from the alphabetical list. Twenty-three percent (45/200; 95% CI = 17% to 29%) verified patient ID prior to ordering tests.
CONCLUSIONS
Medical providers often miss ID errors and infrequently verify patient ID with two identifiers during CPOE. |
RETRACTED ARTICLE: Neutropenia and invasive fungal infection in patients with hematological malignancies treated with chemotherapy: a multicenter, prospective, non-interventional study in China | In this study, we explored the relationship between neutropenia (absolute neutrophil count (ANC) <1,500/mm3) and invasive fungal infection (IFI) in Chinese patients who had hematological malignancies treated with chemotherapy. We conducted a multicenter, prospective, non-interventional study of consecutive patients with hematological malignancies undergoing chemotherapy in China and determined the clinical characteristics of patients who developed neutropenia and IFI. Of the 2,177 neutropenic patients, 88 (4.0%) were diagnosed with IFI. We found that a high risk of IFI (P < 0.05) is associated with male gender, non-remission of the primary disease, use of two or more broad-spectrum antibiotics, treatment with parenteral nutrition, presence of cardiovascular disease, history of IFI, and neutropenia. When the ANC was above 1,000, between 1,000 and 500, between 500 and 100, or below 100/mm3, the incidence of IFI was 0.5, 5.2, 3.9, and 4.7%, respectively (ANC > 1,000/mm3 versus the other groups, P < 0.001). When the ANC was below 1,000, 500, or 100/mm3 for 10 days or more, the incidence of IFI was 3.2% versus 6.1% (P = 0.0052), 3.5% versus 7.1% (P = 0.0021), and 3.1% versus 10.0% (P < 0.001), respectively. When the ANC was below 100/mm3, antifungal prophylaxis reduced the incidence of IFI (P < 0.05). The IFI-attributable mortality rate was 11.7%. In conclusion, in Chinese patients with hematological malignancies, severe and prolonged neutropenia increased the incidence of IFI; the incidence of IFI associated with neutropenia was reduced when antifungal prophylaxis was given; and IFI was associated with a significantly increased mortality rate in hematological malignancy patients with neutropenia. |
Effects of massage on physiological restoration, perceived recovery, and repeated sports performance. | BACKGROUND
Despite massage being widely used by athletes, little scientific evidence exists to confirm its efficacy in promoting physiological and psychological recovery after exercise, or its effects on performance.
AIM
To investigate the effect of massage on perceived recovery and blood lactate removal, and also to examine massage effects on repeated boxing performance.
METHODS
Eight amateur boxers completed two performances on a boxing ergometer on two occasions in a counterbalanced design. Boxers initially completed performance 1, after which they received a massage or passive rest intervention. Each boxer then gave perceived recovery ratings before completing a second performance, which was a repeated simulation of the first. Heart rates and blood lactate and glucose levels were also assessed before, during, and after all performances.
RESULTS
A repeated measures analysis of variance showed no significant group differences for either performance, although a main effect was found showing a decrement in punching force from performance 1 to performance 2 (p<0.05). A Wilcoxon matched pairs test showed that the massage intervention significantly increased perceptions of recovery (p<0.01) compared with the passive rest intervention. A doubly multivariate multiple analysis of variance showed no differences in blood lactate or glucose following massage or passive rest interventions, although the blood lactate concentration after the second performance was significantly higher following massage (p<0.05).
CONCLUSIONS
These findings provide some support for the psychological benefits of massage, but raise questions about the benefit of massage for physiological restoration and repeated sports performance. |
Advice to a Young Scientist | * Introduction * How Can I Tell if I Am Cut Out to Be a Scientific Research Worker? * What Shall I Do Research On? * How Can I Equip Myself to Be a Scientist or a Better One? * Sexism and Racism in Science * Aspects of Scientific Life and Manners * Of Younger and Older Scientists * Presentations * Experiment and Discovery * Prizes and Rewards * The Scientific Process * Scientific Meliorism Versus Scientific Messianism |
Topic-Informed Neural Machine Translation | In recent years, neural machine translation (NMT) has demonstrated state-of-the-art machine translation (MT) performance. It is a new approach to MT, which tries to learn a set of parameters to maximize the conditional probability of target sentences given source sentences. In this paper, we present a novel approach to improve the translation performance in NMT by conveying topic knowledge during translation. The proposed topic-informed NMT can increase the likelihood of selecting words from the same topic and domain for translation. Experimentally, we demonstrate that topic-informed NMT can achieve a 1.15 (3.3% relative) and 1.67 (5.4% relative) absolute improvement in BLEU score on the Chinese-to-English language pair using NIST 2004 and 2005 test sets, respectively, compared to NMT without topic information. |
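One common way to convey topic knowledge during decoding, stated as an assumed illustrative form rather than the paper's exact model, is to add a topic term to the output distribution so that words sharing the source sentence's topic receive higher scores:

```latex
% Assumed topic-informed output layer (illustrative, not the paper's exact model):
% s_t is the decoder state, z_x a topic representation of the source sentence x
% (e.g., its topic-model mixture), and W_o, W_z, b are learned parameters.
p(y_t \mid y_{<t}, x) = \operatorname{softmax}\!\left( W_o s_t + W_z z_x + b \right)_{y_t}
```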
Immersion Meta-Lenses at Visible Wavelengths for Nanoscale Imaging. | Immersion objectives can focus light into a spot smaller than what is achievable in free space, thereby enhancing the spatial resolution for various applications such as microscopy, spectroscopy, and lithography. Despite the availability of advanced lens polishing techniques, hand-polishing is still required to manufacture the front lens of a high-end immersion objective, which poses major constraints for lens design. This limits the shape of the front lens to spherical. Therefore, several other lenses need to be cascaded to correct for spherical aberration, resulting in significant challenges for miniaturization and adding design complexity for different immersion liquids. Here, by using metasurfaces, we demonstrate liquid immersion meta-lenses free of spherical aberration at various design wavelengths in the visible spectrum. We report water and oil immersion meta-lenses of various numerical apertures (NA) up to 1.1 and show that their measured focal spot sizes are diffraction-limited with Strehl ratios of approximately 0.9 at 532 nm. By integrating the oil immersion meta-lens (NA = 1.1) into a commercial scanning confocal microscope, we achieve an imaging spatial resolution of approximately 200 nm. These meta-lenses can be easily adapted to focus light through multilayers of different refractive indices and mass-produced using modern industrial manufacturing or nanoimprint techniques, leading to cost-effective high-end optics. |
Deep Learning with Domain Adaptation for Accelerated Projection Reconstruction MR | PURPOSE
The radial k-space trajectory is a well-established sampling trajectory used in conjunction with magnetic resonance imaging. However, the radial k-space trajectory requires a large number of radial lines for high-resolution reconstruction. Increasing the number of radial lines causes longer acquisition time, making it more difficult for routine clinical use. On the other hand, if we reduce the number of radial lines, streaking artifact patterns are unavoidable. To solve this problem, we propose a novel deep learning approach with domain adaptation to restore high-resolution MR images from under-sampled k-space data.
METHODS
The proposed deep network removes the streaking artifacts from the artifact corrupted images. To address the situation given the limited available data, we propose a domain adaptation scheme that employs a pre-trained network using a large number of X-ray computed tomography (CT) or synthesized radial MR datasets, which is then fine-tuned with only a few radial MR datasets.
RESULTS
The proposed method outperforms existing compressed sensing algorithms, such as the total variation and PR-FOCUSS methods. In addition, the calculation time is several orders of magnitude faster than that of the total variation and PR-FOCUSS methods. Moreover, we found that pre-training using CT or MR data from a similar organ is more important than pre-training using data from the same modality but a different organ.
CONCLUSION
We demonstrate the feasibility of domain adaptation when only a limited amount of MR data is available. The proposed method surpasses the existing compressed sensing algorithms in terms of image quality and computation time.
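A minimal sketch of the fine-tuning step this describes (the architecture, checkpoint path, and layer split are illustrative assumptions, not the paper's actual network): pre-train an artifact-removal CNN on plentiful CT or synthetic radial data, then adapt it with only a handful of MR examples.

```python
import torch
import torch.nn as nn

class ArtifactRemovalNet(nn.Module):
    """Tiny residual CNN: predicts the streaking artifact and subtracts it."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(          # early layers: generic streak features
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 1, 3, padding=1)  # predicts the artifact image

    def forward(self, x):
        return x - self.head(self.features(x))      # residual: input minus artifact

net = ArtifactRemovalNet()
# net.load_state_dict(torch.load("pretrained_on_ct.pt"))  # hypothetical CT checkpoint

for p in net.features.parameters():   # freeze generic layers, adapt only the head
    p.requires_grad = False

opt = torch.optim.Adam(net.head.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

few_mr_inputs = torch.randn(4, 1, 64, 64)   # stand-ins for undersampled MR slices
few_mr_targets = torch.randn(4, 1, 64, 64)  # stand-ins for fully sampled references
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(net(few_mr_inputs), few_mr_targets)
    loss.backward()
    opt.step()
```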
Cytokine biomarkers to predict antitumor responses to nivolumab suggested in a phase 2 study for advanced melanoma | Promising antitumor activities of nivolumab, a fully humanized IgG4 inhibitor antibody against the programmed death-1 protein, were suggested in previous phase 1 studies. The present phase 2, single-arm study (JAPIC-CTI #111681) evaluated the antitumor activities of nivolumab and explored its predictive correlates in advanced melanoma patients at 11 sites in Japan. Intravenous nivolumab 2 mg/kg was given repeatedly at 3-week intervals to 35 of 37 patients enrolled from December 2011 to May 2012 until they experienced unacceptable toxicity, disease progression, or complete response. The primary endpoint was the objective response rate. Serum levels of immune modulators were assessed at multiple time points. As of 21 October 2014, median response duration, median progression-free survival, and median overall survival were 463 days, 169 days, and 18.0 months, respectively. The overall response rate and 1- and 2-year survival rates were 28.6%, 54.3%, and 42.9%, respectively. Thirteen patients remained alive at the end of the observation period and no deaths were drug related. Grade 3-4 drug-related adverse events were observed in 31.4% of patients. Pretreatment serum interferon-γ and interleukin-6 and -10 levels were significantly higher in patients with objective tumor responses than in those with tumor progression. In conclusion, repeated i.v. nivolumab had potent and durable antitumor effects and a manageable safety profile in advanced melanoma patients, strongly suggesting the usefulness of nivolumab for advanced melanoma and of pretreatment serum cytokine profiles as correlates for predicting treatment efficacy.
General Chemistry with Qualitative Analysis | CONTENTS: The Foundations of Chemistry. Chemical Formulas and Composition Stoichiometry. Chemical Equations and Reactions Stoichiometry. Some Types of Chemical Reactions. The Structure of Atoms. Chemical Bonding. Molecular Structure and Covalent Bonding Theories. Molecular Orbitals in Chemical Bonding. Reactions in Aqueous Solutions I: Acids, Bases and Salts. Reactions in Aqueous Solutions II: Calculations. Gases and the Kinetic Molecular Theory. Liquids and Solids. Solutions. Chemical Thermodynamics. Chemical Kinetics. Chemical Equilibrium. Ionic Equilibria I: Acids and Bases. Ionic Equilibria II: Buffers and Titration Curves. Ionic Equilibria III: The Solubility Product Principle. Electrochemistry. Metals I: Metallurgy. Metals II: Properties and Reactions. Some Nonmetals and Metalloids. Co-ordination Compounds. Nuclear Chemistry. Organic Chemistry I: Formulas, Names and Properties. Organic Chemistry II: Molecular Geometry and Reactions. QUALITATIVE ANALYSIS: Metals in Qualitative Analysis. Introduction to Laboratory Work. Analysis of Cation Group I. Analysis of Cation Group II. Analysis of Cation Group III. Analysis of Cation Group IV. Analysis of Cation Group V. Ionic Equilibria in Qualitative Analysis
Inferring Binary Relation Schemas for Open Information Extraction | This paper presents a framework to model the semantic representation of binary relations produced by open information extraction systems. For each binary relation, we infer a set of preferred types on the two arguments simultaneously, and generate a ranked list of type pairs which we call schemas. All inferred types are drawn from the Freebase type taxonomy, which are human readable. Our system collects 171,168 binary relations from ReVerb, and is able to produce top-ranking relation schemas with a mean reciprocal rank of 0.337. |
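Since the headline number is a mean reciprocal rank (MRR) over ranked schema lists, here is a small self-contained illustration of how that metric is computed (the relations and gold labels below are invented for the example):

```python
def mean_reciprocal_rank(ranked_schema_lists, gold_schemas):
    """MRR over relations: for each relation, find the rank of the first
    correct type-pair schema in its ranked list, then average 1/rank.
    Relations whose gold schema never appears contribute 0.
    """
    total = 0.0
    for ranked, gold in zip(ranked_schema_lists, gold_schemas):
        for rank, schema in enumerate(ranked, start=1):
            if schema == gold:
                total += 1.0 / rank
                break
    return total / len(gold_schemas)

# Toy example: two relations with ranked (arg1_type, arg2_type) schemas.
ranked = [[("person", "location"), ("person", "organization")],
          [("organization", "person"), ("person", "person")]]
gold = [("person", "organization"), ("organization", "person")]
print(mean_reciprocal_rank(ranked, gold))  # (1/2 + 1/1) / 2 = 0.75
```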
Assessing youth participation in AA-related helping: validity of the Service to Others in Sobriety (SOS) questionnaire in an adolescent sample. | BACKGROUND AND OBJECTIVES
The positive outcomes derived from participation in Alcoholics Anonymous-related helping (AAH) found among adults have spurred the study of AAH among minors with addiction. AAH includes acts of good citizenship in AA, formal service positions, public outreach, and transmitting personal experience to a fellow sufferer. Addiction research with adolescents is hindered by the scarcity of validated assessments of 12-step activity among minors. This study provides psychometric findings for the "Service to Others in Sobriety (SOS)" questionnaire as completed by youths.
METHODS
Multi-informant data were collected prospectively from youth self-reports, clinician-rated assessments, biomarkers, and medical chart records for youths (N = 195) after residential treatment.
RESULTS
Few youths (7%) did not participate in any AAH during treatment. Results indicated that the SOS is a unidimensional scale with adequate psychometric properties, including inter-informant reliability (r = .5), internal consistency (alpha = .90), and convergent validity (rs = -.3 to .3). Programmatic AAH activities distinguished abstinent youths in a random half-sample, and this finding replicated in the other half-sample. An SOS cut-point of 40 indicated high AAH participation.
CONCLUSIONS AND SIGNIFICANCE
The SOS appears to be a valid measure of AAH, suggesting clinical utility for enhancing treatment and identifying service opportunities salient to sobriety. |
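The internal consistency figure quoted above (alpha = .90) is Cronbach's alpha; a short self-contained sketch of how it is computed from a respondents-by-items score matrix (toy data, not the study's):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.

    `item_scores`: respondents x items matrix. Alpha compares the sum of
    per-item variances to the variance of the total score:
    alpha = k/(k-1) * (1 - sum(var_item) / var_total).
    """
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: 5 respondents answering 4 questionnaire items.
scores = [[3, 4, 3, 4], [2, 2, 3, 2], [4, 4, 4, 5], [1, 2, 1, 2], [3, 3, 4, 3]]
print(round(cronbach_alpha(scores), 2))
```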
Progranulin and TDP-43: Mechanistic Links and Future Directions | Loss-of-function mutations in the multifunctional growth factor progranulin (GRN) cause frontotemporal lobar degeneration (FTLD) with TDP-43 protein accumulation. Nuclear TDP-43 protein, with key roles in RNA metabolism, is also aggregated in amyotrophic lateral sclerosis (ALS), suggesting that ALS and FTLD constitute a broad disease continuum. However, the fact that mutations in GRN are associated with FTLD, while mutations in TDP-43 cause a preferential loss of motor neurons resulting in the ALS end of the disease spectrum, suggests involvement of both cell-autonomous and non-autonomous mechanisms. Studies on animal models and in vitro systems have been instrumental in understanding the link between GRN and TDP-43 and their role in neurodegeneration. For instance, in mouse models, allelic deficiencies of Grn do not recapitulate the human pathology of TDP-43 brain accumulations, but embryonic neurons derived from these mice do show abnormal TDP-43 accumulation after additional cellular challenges, suggesting that the TDP-43 changes observed in GRN mutation carriers might also relate to stress. Recent results have shown that the dual action of GRN in growth modulation and inflammation could be due to its negative regulation of TNF-α signaling. In addition, GRN also interacts with sortilin and is endocytosed, thereby regulating its own levels and possibly also modulating the turnover of other proteins, including that of TDP-43. Accumulating evidence suggests that abnormal cellular aggregation of TDP-43 causes a possible gain of function, as also suggested by recently constructed mouse models of TDP-43 proteinopathy; however, it is implausible that sequestration of physiological TDP-43 within the cellular aggregates observed in patients would be innocuous for disease pathogenesis. This review discusses these data on the possible link between GRN and TDP-43, as well as mechanisms involved in TDP-43-mediated neurodegeneration. Continued multitiered efforts on genetic, cell biological, and animal modeling approaches will prove crucial in finding a cure for GRN-related diseases.
Constructing Bayesian Network by Integrating FMEA with FTA | Bayesian Networks have an advantage in dealing with uncertainty, but it is difficult to construct a scientifically sound Bayesian Network model in practical applications. To solve this problem, a novel method for constructing a Bayesian Network by integrating Failure Mode and Effect Analysis (FMEA) with Fault Tree Analysis (FTA) is proposed. Firstly, the structure matrix representations of FMEA, FTA, and Bayesian Networks are presented, together with a structure matrix integration algorithm. Then, an approach for constructing a Bayesian Network by obtaining node, structure, and parameter information from FMEA and FTA based on the structure matrix is put forward. Finally, an illustrative example is given to verify the feasibility of the method. This method simplifies the modeling process, improves modeling efficiency, and promotes the application of Bayesian Networks in system reliability and safety analysis.
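To make the FTA-to-BN step concrete, here is a tiny sketch of how a fault tree logic gate maps onto the conditional probability table of the corresponding network node (a standard textbook mapping, simplified here to two binary parents; this is not the paper's structure matrix algorithm):

```python
from itertools import product

def gate_to_cpt(gate):
    """Translate an FTA logic gate into the CPT of the corresponding
    Bayesian Network node (two binary parents for simplicity).

    An OR gate means the effect occurs if any cause occurs; an AND gate
    requires all causes. The resulting CPT is deterministic (0/1 entries).
    """
    op = any if gate == "OR" else all
    return {parents: 1.0 if op(parents) else 0.0
            for parents in product([False, True], repeat=2)}

# Example: a top event fed by an OR gate over two basic events.
cpt = gate_to_cpt("OR")
for parents, p in cpt.items():
    print(f"P(effect=True | causes={parents}) = {p}")
```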
An audio-vocal interface in echolocating horseshoe bats. | The control of vocalization depends significantly on auditory feedback in many species of mammals. Echolocating horseshoe bats, however, provide an excellent model system to study audio-vocal (AV) interactions. These bats can precisely control the frequency of their echolocation calls by monitoring the characteristics of the returning echo; they compensate for flight-induced Doppler shifts in the echo frequency by lowering the frequency of the subsequent vocalizations (Schnitzler, 1968; Schuller et al., 1974, 1975). The aim of this study was to investigate the neuronal mechanisms underlying this Doppler-shift compensation (DSC) behavior. For that purpose, the neuronal activity of single units was studied during spontaneous vocalizations of the bats and compared with responses to auditory stimuli such as playback vocalizations and artificially generated acoustic stimuli. The natural echolocation situation was simulated by triggering an acoustic stimulus on the bat's own vocalization and varying the time delay of this artificial "echo" relative to vocalization onset. Single-unit activity was observed before, during, and/or after the bat's vocalization as well as in response to auditory stimuli. However, the activity patterns associated with vocalization differed from those triggered by auditory stimuli even when the auditory stimuli were acoustically identical to the bat's vocalization. These neurons were called AV neurons. Their distribution was restricted to an area in the paralemniscal tegmentum of the midbrain. When the natural echolocation situation was simulated, the responses of AV neurons depended on the time delay between the onset of vocalization and the beginning of the simulated echo. This delay sensitivity disappeared completely when the act of vocalization was replaced by an auditory stimulus that mimicked acoustic self-stimulation during the emission of an echolocation call. The activity of paralemniscal neurons was correlated with all parameters of echolocation calls and echoes that are relevant in the context of DSC. These results suggest a model for the regulation of vocalization frequencies by inhibitory auditory feedback.
Automatic license plate detection and recognition using OpenCV | License plate recognition (LPR) has always been a challenging problem owing to numerous factors such as severe lighting conditions, complex backgrounds, unpredictable weather, low light, and more. This paper addresses these challenges through an OpenCV-based model that enhances the edge information of the license plate region, combined with improved text detection and recognition methods. LPR is a cutting-edge, next-generation system with imminent technological application in almost every field of the transportation industry. Keywords — Automatic number plate recognition, OpenCV
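A minimal classical OpenCV pipeline in the spirit of what the paper describes, using edge enhancement and contour analysis to locate a plate-like region (the file names and thresholds are illustrative assumptions; the paper's exact pipeline is not shown):

```python
import cv2

image = cv2.imread("car.jpg")            # hypothetical input image path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.bilateralFilter(gray, 11, 17, 17)   # smooth noise while keeping edges
edges = cv2.Canny(gray, 30, 200)               # enhance edge information

contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:10]

plate = None
for c in contours:
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:                 # four corners: plausible plate rectangle
        x, y, w, h = cv2.boundingRect(approx)
        plate = gray[y:y + h, x:x + w]
        break

if plate is not None:
    cv2.imwrite("plate.png", plate)      # the crop can then go to an OCR step
```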
Foundations of Statistical Natural Language Processing | Bell et al. (1990) and Witten and Bell (1991) introduce a number of smoothing algorithms with the goal of improving text compression. Their "Method C" is normally referred to as Witten-Bell smoothing and has been used for smoothing speech language models. The idea is to model the probability of a previously unseen event by estimating the probability of seeing such a new (previously unseen) event at each point as one proceeds through the training corpus. In particular, this probability is worked out relative to a certain history. So, to calculate the probability of seeing a new word after, say, "sat in", one calculates from the training data how often one saw a new word after "sat in", which is just the count of the number of word types seen following "sat in". It is thus an instance of generalized linear interpolation, where the probability mass given to new n-grams following a history h is T(h) / (N(h) + T(h)), with T(h) the number of distinct word types and N(h) the number of tokens observed after h.
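A compact sketch of Witten-Bell smoothing at the bigram level (illustrative; the book's formulation is more general, covering arbitrary n-gram histories and interpolation with lower-order estimates):

```python
from collections import Counter, defaultdict

def witten_bell(bigrams):
    """Witten-Bell smoothed P(w | h) for observed bigrams, reserving mass
    T(h) / (N(h) + T(h)) for words never seen after history h.
    """
    count = Counter(bigrams)
    n = defaultdict(int)      # N(h): tokens observed after h
    types = defaultdict(set)  # word types observed after h
    for (h, w), c in count.items():
        n[h] += c
        types[h].add(w)

    def prob(h, w):
        t = len(types[h])
        if (h, w) in count:                      # seen: discounted ML estimate
            return count[(h, w)] / (n[h] + t)
        return t / (n[h] + t)                    # total mass left for new words

    return prob

p = witten_bell([("sat", "on"), ("sat", "on"), ("sat", "down")])
print(p("sat", "on"))    # 2 / (3 + 2) = 0.4
print(p("sat", "up"))    # unseen: total new-word mass 2 / (3 + 2) = 0.4,
                         # to be shared among all unseen continuations
```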
Segmentation of blood vessels from red-free and fluorescein retinal images | The morphology of the retinal blood vessels can be an important indicator for diseases like diabetes, hypertension, and retinopathy of prematurity (ROP). Thus, the measurement of changes in the morphology of arterioles and venules can be of diagnostic value. Here we present a method to automatically segment retinal blood vessels based upon multiscale feature extraction. This method overcomes the problem of variations in contrast inherent in these images by using the first and second spatial derivatives of the intensity image, which give information about vessel topology. This approach also enables the detection of blood vessels of different widths, lengths, and orientations. The local maxima over scales of the magnitude of the gradient and the maximum principal curvature of the Hessian tensor are used in a multiple-pass region growing procedure. The growth progressively segments the blood vessels using feature information together with spatial information. The algorithm is tested on red-free and fluorescein retinal images, taken from two local and two public databases. Comparison with the first public database yields a 75.05% true positive rate (TPR) and a 4.38% false positive rate (FPR); for the second database, the values are 72.46% TPR and 3.45% FPR. Our results on both public databases are comparable in performance with those of other authors. However, we conclude that these values are not sensitive enough to evaluate the performance of vessel geometry detection. We therefore propose a new approach that uses measurements of vessel diameters and branching angles as a validation criterion to compare our segmented images with those hand-segmented from the public databases. Comparisons between the hand-segmented images from the public databases showed a large inter-subject variability in geometric values. A final evaluation compared vessel geometric values obtained from our segmented red-free and fluorescein image pairs, with the latter as the "ground truth". Our results demonstrate that the borders found by our method are less biased, follow the vessel border more consistently, and therefore yield more reliable geometric values.
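A single-scale sketch of the two per-pixel features the method builds on, gradient magnitude and the maximum principal curvature of the Hessian (illustrative only; the actual method takes local maxima over multiple scales and feeds them into a multiple-pass region growing procedure):

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

def vessel_features(image, sigma=2.0):
    """Per-pixel features for vessel detection at one scale: gradient
    magnitude and the maximum principal curvature of the Hessian.
    """
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    h_elems = hessian_matrix(image, sigma=sigma, order="rc")
    eig1, eig2 = hessian_matrix_eigvals(h_elems)   # eig1 >= eig2 everywhere
    return grad_mag, eig1                          # eig1: max principal curvature

img = np.random.rand(64, 64)   # stand-in for a red-free retinal image
grad_mag, curvature = vessel_features(img)
seeds = curvature > np.percentile(curvature, 95)   # candidate vessel pixels
```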
An analysis of the Steam community network evolution | The Steam community network is a large social network of players on the Steam gaming platform, with over 30 million users to date. In this paper we introduce an analysis of the Steam community network in 2011, looking at the characteristics of the user network and the connectivity graph. We then trace the evolution of the network and show how it has changed over the years. Last, we analyze the role of games and groups in the Steam community. This work is the first to analyze the Steam network and to provide a large-scale analysis of the characteristics of gaming platform communities.
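The kind of connectivity statistics such an analysis reports can be sketched with networkx (the toy edge list below stands in for the real friendship graph):

```python
import networkx as nx

g = nx.Graph()
g.add_edges_from([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5)])  # toy friendship graph

print("users:", g.number_of_nodes(), "friendships:", g.number_of_edges())
print("average degree:", 2 * g.number_of_edges() / g.number_of_nodes())
print("clustering coefficient:", nx.average_clustering(g))
components = sorted(nx.connected_components(g), key=len, reverse=True)
print("giant component size:", len(components[0]))
```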
A statistical aimbot detection method for online FPS games | First Person Shooter (FPS) is a popular genre in online gaming; unfortunately, not everyone plays fairly, and this hinders the growth of the industry. The aiming robot (aimbot) is a common cheating mechanism in this genre. It differs from many other common online bots in that a human operates alongside the bot, so the in-game data exhibit both human and bot-like behaviour. Aimbot users can aim much better than the average player. However, there are also many highly skilled players who can aim much better than the average player, and some of these players have in the past been banned from servers due to false accusations from their peers. It is therefore interesting to find out if and where the honest player's and the bot user's behaviour differ. In this paper we investigate the difference between the aiming abilities of aimbot users and honest human players. We introduce two novel features and conduct an experiment using a modified open-source FPS game. Our data show that there is a significant difference between the behaviours of honest players and aimbot users. We propose a voting scheme based on distribution matching to improve aimbot detection in FPS games, and achieve approximately 93% in both true positive and true negative rates with one of our features.
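A sketch of a distribution-matching vote of the kind described (the feature, reference data, and voting rule are illustrative assumptions, not the paper's exact features): each round of a player's aim-correction samples votes for whichever reference distribution is closer under a two-sample Kolmogorov-Smirnov statistic.

```python
import numpy as np
from scipy.stats import ks_2samp

def flag_aimbot(player_samples, honest_reference, bot_reference):
    """Distribution-matching vote: each round of per-shot feature samples
    (e.g., aim-angle change just before a hit) votes for the closer
    reference distribution; a majority of bot-like rounds flags the player.
    """
    votes = 0
    for sample in player_samples:
        d_honest = ks_2samp(sample, honest_reference).statistic
        d_bot = ks_2samp(sample, bot_reference).statistic
        votes += 1 if d_bot < d_honest else -1
    return votes > 0

rng = np.random.default_rng(0)
honest = rng.normal(8.0, 3.0, 500)      # honest players: larger, noisier corrections
bots = rng.normal(1.0, 0.5, 500)        # aimbots: tiny, precise corrections
suspect = [rng.normal(1.2, 0.6, 50) for _ in range(5)]
print(flag_aimbot(suspect, honest, bots))   # True
```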
DUTH at SemEval-2018 Task 2: Emoji Prediction in Tweets | This paper describes the approach that was developed for SemEval 2018 Task 2 (Multilingual Emoji Prediction) by the DUTH Team. First, we employed a combination of preprocessing techniques to reduce the noise of tweets and produce a number of features. Then, we built several n-gram features to represent combinations of words and emojis. Finally, we trained our system with a tuned LinearSVC classifier. Our approach ranked 18th amongst 48 teams on the leaderboard.
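A minimal sketch of this kind of setup with scikit-learn (the tweets and labels below are toy stand-ins; the team's actual preprocessing and hyperparameter tuning are not reproduced):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = ["i love this sunny day", "happy birthday to you", "so much love"]
emojis = ["sun", "cake", "heart"]          # stand-in labels for emoji classes

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # word unigram and bigram features
    LinearSVC(C=1.0),                      # the (here untuned) linear classifier
)
model.fit(tweets, emojis)
print(model.predict(["love this birthday"]))
```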
Three-dimensional menus: A survey and taxonomy | Various interaction techniques have been developed in the field of virtual and augmented reality. Whereas techniques for object selection, manipulation, travel, and wayfinding have already been covered in existing taxonomies in some detail, application control techniques have not yet been sufficiently considered. However, they are needed by almost every mixed reality application, e.g. for choosing from alternative objects or options. For this purpose a great variety of distinct three-dimensional (3D) menu selection techniques is available. This paper surveys existing 3D menus from the corpus of literature and classifies them according to various criteria. The taxonomy introduced here assists developers of interactive 3D applications to better evaluate their options when choosing, optimizing, and implementing a 3D menu technique. Since the taxonomy spans the design space for 3D menu solutions, it also aids researchers in identifying opportunities to improve or create novel virtual menu techniques.
Netbait: a Distributed Worm Detection Service | This paper presents Netbait, a planetary-scale service for distributed detection of Internet worms. Netbait allows users to pose queries that identify which machines on a given network have been compromised, based on the collective view of a geographically distributed set of machines. It is based on a distributed query processing architecture that evaluates queries expressed in a subset of SQL against a single logical database table. This single logical table is realized using a distributed set of relational databases, each populated by local intrusion detection systems running on Netbait server nodes. For speed, queries in Netbait are processed in parallel by distributing them over dynamically constructed query processing trees built over Tapestry, a decentralized object location and routing (DOLR) layer. For efficiency, query results are compressed using application-specific aggregation and compact encodings. We have implemented a prototype system based on a simplified version of the architecture and have deployed it on 90 nodes of the PlanetLab testbed at 42 sites spread across three continents. The system has been running continuously for over a month and has been collecting probe information from machines compromised by both the Code Red and Nimda worms. Early results based on this data are promising. First, we observe that by having multiple machines share probe information from infected machines, we can identify a substantially larger set of infected hosts than would be possible otherwise. Second, we also observe that with multiple viewpoints of the network, Netbait is able to identify compromised machines that would otherwise have been difficult to detect in cases where worms have an affinity for certain regions of the IP address space.
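The "single logical table" idea can be sketched locally with SQLite (the schema and data are illustrative assumptions, not Netbait's actual layout): each node's intrusion detection system populates a probes table, and user queries aggregate across reporting vantage points.

```python
import sqlite3

# The single logical table of probe reports, sketched with an in-memory DB.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE probes (
    src_ip TEXT, worm TEXT, reporter TEXT, seen_at TEXT)""")
db.executemany("INSERT INTO probes VALUES (?, ?, ?, ?)", [
    ("10.0.0.5", "CodeRed", "node-a", "2003-03-01"),
    ("10.0.0.5", "CodeRed", "node-b", "2003-03-01"),
    ("10.0.0.9", "Nimda",   "node-a", "2003-03-02"),
])

# A user query: which hosts on 10.0.0.0/24 look compromised, and how many
# independent vantage points reported each?
rows = db.execute("""
    SELECT src_ip, worm, COUNT(DISTINCT reporter) AS witnesses
    FROM probes WHERE src_ip LIKE '10.0.0.%'
    GROUP BY src_ip, worm""").fetchall()
print(rows)
```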