title | abstract |
---|---|
Differences in task demands influence the hemispheric lateralization and neural correlates of metaphor | This study investigated metaphor comprehension in the broader context of task-difference effects and manipulation of processing difficulty. We predicted that right hemisphere recruitment would show greater specificity to processing difficulty than to metaphor comprehension. Previous metaphor processing studies have established that the left inferior frontal gyrus strongly correlates with metaphor comprehension, but there has been controversy about whether right hemisphere (RH) involvement is specific to metaphor comprehension. Functional MRI data were recorded from healthy subjects who read novel metaphors, conventional metaphors, definition-like sentences, or literal sentences. We investigated metaphor processing in contexts where semantic judgment or imagery modulates linguistic judgment. Our findings support the position that the type of task, rather than figurative language processing per se, modulates the left inferior frontal gyrus (LIFG). RH involvement was influenced more by processing difficulty than by the novelty or figurativity of linguistic expressions. Our results suggest that conclusions about figurative language processing depend upon the effects of task type and processing difficulty on imaging results. |
Securing Virtual Machines from Anomalies Using Program-Behavior Analysis in Cloud Environment | Cloud computing is a key technology of today's cyber world, providing online provisioning of resources on demand and on a pay-per-use basis. Malware attacks such as viruses, worms and rootkits are among the threats to virtual machines (VMs) in a cloud environment. In this paper, we present a system call analysis approach to detect malware attacks that maliciously affect legitimate programs running in VMs and modify their behavior. Our approach, named 'Malicious System Call Sequence Detection' (MSCSD), is based on the analysis of short sequences of system calls (n-grams). MSCSD employs an efficient feature representation for system call patterns that improves the accuracy of attack detection and reduces storage cost as well as false positives. MSCSD applies machine learning (the C4.5 decision tree) over the collected n-gram patterns to learn the behavior of monitored programs and detect malicious system call patterns in the future. We have analyzed the performance of several other classifiers and compared our work with existing work on securing virtual machines in the cloud. A prototype implementation of the approach was evaluated on the UNM dataset, and the results are promising. |
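As a rough illustration of the MSCSD pipeline described above, the sketch below extracts sliding-window n-grams from system-call traces and trains a decision tree on their counts. This is a minimal sketch: the toy traces and labels are invented, and scikit-learn's CART tree (with the entropy criterion) stands in for C4.5, which scikit-learn does not provide.

```python
from collections import Counter

from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

def ngram_counts(trace, n=3):
    """Count sliding-window n-grams of system calls in one trace."""
    return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))

# Toy traces with illustrative labels: 0 = benign, 1 = malicious.
traces = [["open", "read", "write", "close"],
          ["open", "mmap", "exec", "close"]]
labels = [0, 1]

# Represent each trace as a sparse vector of n-gram counts.
vec = DictVectorizer()
X = vec.fit_transform([{" ".join(g): c for g, c in ngram_counts(t).items()}
                       for t in traces])

# Information-gain-based tree, the closest scikit-learn analogue of C4.5.
clf = DecisionTreeClassifier(criterion="entropy").fit(X, labels)
```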
Lp Norms and the Sinc Function | It's everywhere! It's everywhere! ... In this note we give elementary proofs of some of the striking asymptotic properties of the p-norm of the ubiquitous sinc function. Based on experimental evidence we conjecture some enticing further properties of the p-norm as a function of p. See, for example, http://www.carma.newcastle.edu.au/~jb616/oscillatory.pdf. |
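For concreteness, the quantity under study is the p-norm of the sinc function. A minimal statement, assuming the unnormalized convention sinc x = sin(x)/x (the integral converges for every p > 1):

$$
\operatorname{sinc}(x) = \frac{\sin x}{x}, \qquad
\|\operatorname{sinc}\|_{p} = \left( \int_{-\infty}^{\infty} \left| \frac{\sin x}{x} \right|^{p} dx \right)^{1/p}, \quad p > 1 .
$$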
Training Quantized Nets: A Deeper Understanding | Currently, deep neural networks are deployed on low-power portable devices by first training a full-precision model using powerful hardware, and then deriving a corresponding low-precision model for efficient inference on such systems. However, training models directly with coarsely quantized weights is a key step towards learning on embedded platforms that have limited computing resources, memory capacity, and power consumption. Numerous recent publications have studied methods for training quantized networks, but these studies have mostly been empirical. In this work, we investigate training methods for quantized neural networks from a theoretical viewpoint. We first explore accuracy guarantees for training methods under convexity assumptions. We then look at the behavior of these algorithms for non-convex problems, and show that training algorithms that exploit high-precision representations have an important greedy search phase that purely quantized training methods lack, which explains the difficulty of training using low-precision arithmetic. |
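The distinction drawn above can be seen on a toy problem. Below is a minimal sketch, assuming a BinaryConnect-style update in which gradients are computed with quantized weights but accumulated in a full-precision copy; a purely quantized method would round each update immediately and discard any step smaller than the quantization gap, which is exactly the greedy behavior the abstract attributes to low-precision training.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.sign(rng.normal(size=5))   # target weights are already 1-bit
y = X @ w_true

def quantize(w):
    return np.sign(w)                  # 1-bit quantizer

w_float = np.zeros(5)                  # high-precision shadow weights
for _ in range(500):
    w_q = quantize(w_float)            # quantized weights in forward/backward
    grad = X.T @ (X @ w_q - y) / len(y)
    w_float -= 0.1 * grad              # updates accumulate in full precision

print(quantize(w_float))               # recovers the signs of w_true
```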
Correspondence-free pose estimation for 3D objects from noisy depth data | Estimating the pose of objects from depth data is a problem of considerable practical importance for many vision applications. This paper presents an approach for accurate and efficient 3D pose estimation from noisy 2.5D depth images obtained from a consumer depth sensor. Initialized with a coarsely accurate pose, the proposed approach applies a hypothesize-and-test scheme that combines stochastic optimization and graphics-based rendering to refine the supplied initial pose, so that it accurately accounts for a sensed depth image. Pose refinement employs particle swarm optimization to minimize an objective function that quantifies the misalignment between the acquired depth image and a rendered one that is synthesized from a hypothesized pose with the aid of an object mesh model. No explicit correspondences between the depth data and the model need to be established, whereas pose hypothesis rendering and objective function evaluation are efficiently performed on the GPU. Extensive experimental results demonstrate the superior performance of the proposed approach compared to the ICP algorithm, which is typically used for pose refinement in depth images. Furthermore, the experiments indicate the graceful degradation of its performance to limited computational resources and its robustness to noisy and reduced polygon count models, attesting its suitability for use with automatically scanned object models and common graphics hardware. |
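To make the hypothesize-and-test loop concrete, here is a minimal sketch of the kind of misalignment objective that particle swarm optimization would minimize. The render_depth call named in the comment is a hypothetical stand-in for the GPU-based rendering of a pose hypothesis described above.

```python
import numpy as np

def misalignment(observed, rendered, invalid=0.0):
    """Sum of absolute depth differences over pixels valid in both images."""
    mask = (observed != invalid) & (rendered != invalid)
    return float(np.abs(observed[mask] - rendered[mask]).sum())

# Inside the PSO loop (render_depth is hypothetical):
#   score = misalignment(depth_image, render_depth(mesh_model, pose_hypothesis))
```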
An Exploration on Brain Computer Interface and Its Recent Trends | A detailed exploration of Brain Computer Interface (BCI) and its recent trends is presented in this paper. Work is being done to identify objects, images, videos and their color compositions. Efforts are under way in understanding speech, words, emotions, feelings and moods. When humans watch the surrounding environment, visual data is processed by the brain, and it is possible to reconstruct the same on the screen with appreciable accuracy by analyzing the physiological data. This data is acquired using a non-invasive technique such as electroencephalography (EEG) in BCI. The acquired signal is then translated to produce the image on the screen. This paper also lays out directions for future work. Keywords: BCI; EEG; brain image reconstruction. |
Gigantomastia due to retromammary lipoma: An aesthetic management. | INTRODUCTION
A "giant" lipoma is defined as a tumor with dimensions greater than 10 cm. Giant lipomas are rare, and giant breast lipomas are exceptionally uncommon; only six cases have been described in the world literature to date. Herein we describe a case of giant breast lipoma and discuss its surgical management.
CASE REPORT
A 43-year-old woman presented with left-sided unilateral gigantomastia. Clinical examination, radiology and histopathology diagnosed a lipoma. Excision of the tumor was planned, together with correction of the breast deformity by reduction mammoplasty using the McKissock technique. A tumor measuring 19 cm × 16 cm × 10 cm and weighing 1647 grams was removed. The nipple areola complex was set by infolding of the vertical pedicles, and the lateral and medial flaps were approximated to create the final breast contour. The patient is doing well on follow-up.
DISCUSSION
Giant lipomas are rare and, of these, giant breast lipomas are extremely uncommon. They can grow to immense proportions and cause significant aesthetic and functional problems. The treatment is excision, but reconstruction of the breast is almost always necessary to achieve a symmetric breast in terms of volume, shape, projection and nipple areola complex position compared to the normal opposite breast. A few authors have used various mammoplasty techniques for reconstruction of the breast after giant lipoma excision. Our case has the following unique features: (i) it is the third largest breast lipoma described in the literature to date, weighing 1647 grams; (ii) the McKissock technique has been used for parenchymal reshaping, which has not been previously described for giant breast lipoma.
CONCLUSION
This case demonstrates that reduction mammoplasty after giant lipoma removal is highly rewarding, resulting in a smaller breast that is aesthetically more pleasing, has better symmetry with the contralateral breast, and relieves the functional problems caused by the mass. |
Artificial bee colony programming for symbolic regression | The artificial bee colony algorithm, which simulates the intelligent foraging behavior of honey bee swarms, is one of the most popular swarm-based optimization algorithms. It was introduced in 2005 and has since been applied in several fields to solve different problems. In this paper, an artificial bee colony algorithm, called Artificial Bee Colony Programming (ABCP), is described for the first time as a new method for symbolic regression, a practically important problem. Symbolic regression is the process of obtaining a mathematical model from a finite sample of values of independent variables and associated values of dependent variables. In this work, a set of symbolic regression benchmark problems is solved using artificial bee colony programming and its performance is compared with the well-known program-evolution method, genetic programming. The simulation results indicate that the proposed method is feasible and robust on the considered test problems of symbolic regression. |
The Knowledge Economy and Urban Economic Growth | In this paper we contribute to the longstanding discussion on the role of knowledge in economic growth in a spatial context. We observe that in adopting the European policy strategy towards a competitive knowledge economy, The Netherlands is – as are most European countries – mainly oriented towards industrial, technological factors. The policy focus is on R&D-specialized regions in spatial economic strategies. We place the knowledge economy in a broader perspective. Based on the knowledge economy literature, we add complementary indicators: the successful introduction of new products and services to the market ('innovation') and indicators of the skills of employees ('knowledge workers'). Using econometric analysis, we relate the three factors 'R&D', 'innovation' and 'knowledge workers' to regional economic growth. We conclude that the factors 'innovation' and 'knowledge workers' are more profoundly related to urban employment and productivity growth than the R&D factor. Urban researchers and policymakers should therefore take all three knowledge factors into account when determining the economic potential of cities. |
Pharmacogenetic analysis of the mGlu2/3 agonist LY2140023 monohydrate in the treatment of schizophrenia | The goal of this study was to identify genetic markers associated with LY2140023 monohydrate response in patients with schizophrenia. Variants in eight candidate genes related to the mechanism of action of LY2140023 or olanzapine were investigated in a genetic cohort collected from two clinical trials. Results from this genetic analysis indicate that 23 single nucleotide polymorphisms (SNPs) were associated with a change in Positive and Negative Syndrome Scale total score in response to LY2140023 at 28 days (P<0.01; false discovery rate <0.2). Sixteen of these SNPs were located in the serotonin 2A receptor (HTR2A). Bioinformatic analyses identified a putative antisense nested gene in intron 2 of HTR2A in the region of the genetic markers associated with LY2140023 response. These data suggest a genetic association exists between SNPs in several genes, such as HTR2A, and response to LY2140023 treatment. Additional clinical trials are needed to establish replication of these results. |
56 Gb/s PAM-4 optical receiver frontend in an advanced FinFET process | This paper presents a 56 Gb/s PAM-4 optical receiver analog front end that consists of a three-stage inverter-based TIA with resistive feedback in the first and third stages. An adaptively-tuned continuous-time linear equalizer (CTLE) is cascaded after the TIA for improved sensitivity and bandwidth. The overall gain is controlled by an automatic gain control (AGC) circuit to prevent large input optical power from saturating the TIA and distorting the PAM-4 signals. The receiver front end is designed in an advanced FinFET technology and achieves an overall gain of 68 dBΩ with a 22 GHz bandwidth. The simulated input-referred rms current noise is 2.86 μA. Total chip power is 6.3 mW from a 0.83 V supply, and the chip active area is 150 μm × 100 μm. |
Evolutionary Ensemble for Stock Prediction | We propose a genetic ensemble of recurrent neural networks as a stock prediction model. The genetic algorithm tunes the neural networks in a two-dimensional, parallel framework. The ensemble makes buy and sell decisions more conservative. It showed notable improvement on average over not only the buy-and-hold strategy but also other traditional ensemble approaches. |
Soft robot for gait rehabilitation of spinalized rodents | Soft actuators made of highly elastic polymers allow novel robotic system designs, yet application-specific soft robotic systems are rarely reported. Taking notice of the characteristics of soft pneumatic actuators (SPAs), such as high customizability and low inherent stiffness, we report in this work the use of soft pneumatic actuators for a biomedical application: the development of a soft robot for rodents, aimed at providing physical assistance during gait rehabilitation of a spinalized animal. The design requirements for performing this unconventional task are introduced. Customized soft actuators, soft joints and soft couplings for the robot are presented. A live animal experiment was performed to evaluate and show the potential of SPAs for use in current and future biomedical applications. |
Prediction of tool life in end milling of hardened steel AISI D2 | Most published research on the development of tool life models in machining of hardened steels has concerned the turning process, whilst the milling process has received little attention due to its complexity. Thus, the aim of the present study is to develop a tool life model for end milling of hardened steel AISI D2 using a PVD TiAlN coated carbide cutting tool. The hardness of the AISI D2 steel lies within the range of 56-58 HRC. The independent variables, or primary machining parameters, selected for this experiment were cutting speed, feed, and depth of cut. First- and second-order models were developed using Response Surface Methodology (RSM), with experiments conducted within specified ranges of the parameters, and the tool life equations were developed as predictive models with the aid of the statistical design-of-experiments software Design-Expert version 6.0. Analysis of variance (ANOVA) indicated that both models are valid for predicting the tool life of parts machined under the specified conditions, with an average prediction error of less than 10%. |
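For reference, a second-order RSM tool life model in three machining parameters has the generic form below. This is the standard template such studies fit, not the coefficients reported in the paper, and the logarithmic coding of the response is an assumption of the sketch:

$$
\ln T = b_0 + \sum_{i=1}^{3} b_i x_i + \sum_{i=1}^{3} b_{ii} x_i^{2} + \sum_{i<j} b_{ij} x_i x_j ,
$$

where \(T\) is tool life and \(x_1, x_2, x_3\) encode cutting speed, feed, and depth of cut; the first-order model keeps only the linear terms.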
Religion and Mental Health: The Case of American Muslims | Muslims have lived in America for centuries and their numbers are increasing like those of any other ethnic or religious group living in America. There is a growing awareness among mental health professionals of how to deal with the mental health issues of American minorities, but little, if any, research is available on American Muslims. American living presents unique challenges to Muslims who adhere to their Islamic faith. The nature of the Islamic faith and the concept of mental health in Islam are presented in this paper, as well as the stressors that lead to mental health problems among Muslims. The article also covers the response of Muslim communities to such challenges and the prescriptions given in Islam for positive mental health. Recommendations are outlined in the hope of initiating relevant research that would address the psychological needs of this largely neglected minority. |
Relaxed Lasso | The Lasso is an attractive regularisation method for high-dimensional regression. It combines variable selection with an efficient computational procedure. However, the rate of convergence of the Lasso is slow for some sparse high-dimensional data, where the number of predictor variables grows fast with the number of observations. Moreover, many noise variables are selected if the estimator is chosen by cross-validation. It is shown that the conflicting demands of an efficient computational procedure and fast convergence rates of the ℓ2-loss can be reconciled by a two-stage procedure, termed the relaxed Lasso. For orthogonal designs, the relaxed Lasso provides a continuum of solutions that includes both soft- and hard-thresholding of estimators. The relaxed Lasso solutions include all regular Lasso solutions, and computing all relaxed Lasso solutions is often as expensive as computing all regular Lasso solutions. Theoretical and numerical results demonstrate that the relaxed Lasso produces sparser models with equal or lower prediction loss than the regular Lasso estimator for high-dimensional data. |
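A minimal sketch of the two-stage idea, assuming the simplest form of relaxation (refitting the selected variables with a weaker penalty); this illustrates the procedure, not the paper's exact estimator or its tuning of the relaxation parameter:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
beta = np.zeros(50)
beta[:3] = [3.0, -2.0, 1.5]                 # sparse truth
y = X @ beta + rng.normal(scale=0.5, size=100)

# Stage 1: ordinary Lasso performs variable selection.
stage1 = Lasso(alpha=0.5).fit(X, y)
support = np.flatnonzero(stage1.coef_)

# Stage 2: refit on the selected variables with a smaller penalty
# (the relaxation), shrinking the selected coefficients less.
stage2 = Lasso(alpha=0.05).fit(X[:, support], y)

coef = np.zeros(50)
coef[support] = stage2.coef_
```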
Multi-Cell Multiuser Massive MIMO Networks: User Capacity Analysis and Pilot Design | We propose a novel pilot sequence design to mitigate pilot contamination in multi-cell multiuser massive multiple-input multiple-output networks. Our proposed design generates pilot sequences for the multi-cell network and devises power allocation at base stations (BSs) for downlink transmission. The pilot sequences, together with the power allocation, ensure that the user capacity of the network is achieved and the pre-defined signal-to-interference-plus-noise ratio (SINR) requirements of all users are met. To realize our design, we first derive new closed-form expressions for the user capacity and the user capacity region. Built upon these expressions, we then develop a new algorithm to obtain the required pilot sequences and power allocation. We further determine the minimum number of antennas required at the BSs to achieve certain SINR requirements for all users. Numerical results are presented to corroborate our analysis and to examine the impact of key parameters, such as the pilot sequence length and the total number of users, on network performance. A pivotal conclusion is that our design achieves a larger user capacity region than existing designs and needs fewer antennas at the BS to fulfill the pre-defined SINR requirements of all users in the network. |
Mining actionlet ensemble for action recognition with depth cameras | Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities of dealing with this problem but also present some unique challenges. The depth maps captured by the depth cameras are very noisy and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variations in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both the human motion and the human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system. The experimental evaluations show that the proposed approach achieves superior performance to the state of the art algorithms. |
Think-aloud protocols: a comparison of three think-aloud protocols for use in testing data-dissemination web sites for usability | We describe an empirical, between-subjects study on the use of think-aloud protocols in usability testing of a federal data-dissemination Web site. This double-blind study used three different types of think-aloud protocols: a traditional protocol, a speech-communication protocol, and a coaching protocol. A silent condition served as the control. Eighty participants were recruited and randomly pre-assigned to one of four conditions. Accuracy and efficiency measures were collected, and participants rated their subjective satisfaction with the site. Results show that accuracy is significantly higher in the coaching condition than in the other conditions. The traditional protocol and the speech-communication protocol are not statistically different from each other with regard to accuracy. Participants in the coaching condition are more satisfied with the Web site than participants in the traditional or speech-communication condition. In addition, there are no significant differences with respect to efficiency (time-on-task). This paper concludes with recommendations for usability practitioners. |
Robust Multilingual Part-of-Speech Tagging via Adversarial Training | Adversarial training (AT) is a powerful regularization method for neural networks, aiming to achieve robustness to input perturbations. Yet, the specific effects of the robustness obtained from AT are still unclear in the context of natural language processing. In this paper, we propose and analyze a neural POS tagging model that exploits AT. In our experiments on the Penn Treebank WSJ corpus and the Universal Dependencies (UD) dataset (27 languages), we find that AT not only improves the overall tagging accuracy, but also 1) effectively prevents overfitting in low-resource languages and 2) boosts tagging accuracy for rare/unseen words. We also demonstrate that 3) the improved tagging performance from AT contributes to the downstream task of dependency parsing, and that 4) AT helps the model learn cleaner word representations. 5) The proposed AT model is generally effective in different sequence labeling tasks. These positive results motivate further use of AT for natural language tasks. |
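The core of AT in this setting is the fast-gradient perturbation. A minimal statement, assuming the standard formulation used for text models, where the perturbation is applied to the (normalized) word embeddings x:

$$
r_{\mathrm{adv}} = \epsilon \, \frac{g}{\lVert g \rVert_{2}}, \qquad g = \nabla_{x} L(\theta; x, y),
$$

and the model is trained on the clean loss \(L(\theta; x, y)\) plus the adversarial loss \(L(\theta; x + r_{\mathrm{adv}}, y)\).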
Makescape: my diary of landscape | Late October 2004 – Mid January 2005: As an architect engaged in postgraduate research in a department of geography where I am surrounded by physical geographers, historical geographers and cultural geographers, I have spent a large portion of the past few months submerged in various writings on the term 'landscape'. Covering as broad a scope as possible, my reading has led me into many fields of exploration. From art critics to academic geographers, from historians to landscape architects, it appears that each group has a different take on the meaning, position, and approach to landscape. The deeper and deeper I have gone into these areas, the more muddied my understandings have become. It feels as though my own once-clear understandings of what landscape is, or could be, are becoming lost in a mire amongst these competing meanings. Landscape: A cultural construction? A social process? A physical thing? |
AUTHENTICATION OF PAPER CURRENCY AND CONVERSION INTO LOWER DENOMINATIONS | Over the past few years, as a result of great technological advances in color printing, duplicating and scanning, counterfeiting problems have become more and more serious. In the past, only a printing house had the ability to make counterfeit paper currency, but today it is possible for any person to print counterfeit bank notes simply by using a computer and a laser printer at home. Therefore, the issue of efficiently distinguishing counterfeit banknotes from genuine ones via automatic machines has become more and more important. There is a need to design a system that recognizes paper currency notes at high speed and in less time. |
Japanese-to-English Machine Translation Using Recurrent Neural Networks | Neural network machine translation systems have recently demonstrated encouraging results. We examine the performance of a recently proposed recurrent neural network model for machine translation on the task of Japanese-to-English translation. We observe that with relatively little training the model performs very well on a small hand-designed parallel corpus, and adapts to grammatical complexity with ease, given a small vocabulary. The success of this model on a small corpus warrants more investigation of its performance on a larger corpus. |
Automatic optimisation of parallel linear algebra routines in systems with variable load | The architecture of an automatically tuned linear algebra library proposed in previous works is extended in order to adapt itself to platforms where both the CPU load and the network traffic vary. During the installation process in a system, the linear algebra routines will be tuned automatically to the system conditions: hardware characteristics and basic libraries used in the linear algebra routines. At run-time the parameters that define the system characteristics are adjusted to the actual load of the platform. The design methodology is analysed with a block LU factorisation. Variants for sequential and parallel versions of this routine on a logical rectangular mesh of processors are considered. The behavior of the algorithm is studied with message-passing, using MPI on a cluster of PCs. The experiments show that the configurable parameters of the linear algebra routines can be adjusted during the run-time process despite the variability of the environment. |
Aggressive behavior and reproductive physiology in females of the social cichlid fish Cichlasoma dimerus | The South American cichlid fish Cichlasoma dimerus is a freshwater species that presents social hierarchies, a highly organized breeding activity, biparental care and a high frequency of spawning. Spawning is followed by a period of parental care (about 20 days in aquarium conditions) during which the cooperative pair takes care of the eggs, both by fanning them and by removing dead ones. The different spawning events in the reproductive period were classified as female reproductive stages, which can be subdivided into four phases according to the offspring's degree of development: (1) female with prespawning activity (day 0), (2) female with eggs (day 1 after fertilization), (3) female with hatched larvae (day 3 after fertilization) and (4) female with swimming larvae (FSL, day 8 after fertilization). In Perciform species, gonadotropin-releasing hormone type-3 (GnRH3) neurons are associated with the olfactory bulbs, acting as a potent neuromodulator of reproductive behaviors in males. The aim of this study is to characterize the GnRH3 neuronal system in females of C. dimerus in relation to aggressive behavior and reproductive physiology during different phases of the reproductive period. Females with prespawning activity were the most aggressive ones, showing GnRH3 neurons with larger nuclear and somatic areas and higher optical density than the others. They also presented the highest levels of plasma androgens and estradiol and maximum gonadosomatic indexes. These results provide information about the regulation and functioning of the hypothalamus-pituitary-gonads axis during reproduction in a species with highly organized breeding activity. |
Entropy-based active learning for object recognition | Most methods for learning object categories require large amounts of labeled training data. However, obtaining such data can be a difficult and time-consuming endeavor. We have developed a novel, entropy-based "active learning" approach which makes significant progress towards this problem. The main idea is to sequentially acquire labeled data by presenting an oracle (the user) with unlabeled images that will be particularly informative when labeled. Active learning adaptively prioritizes the order in which the training examples are acquired, which, as shown by our experiments, can significantly reduce the overall number of training examples required to reach near-optimal performance. At first glance this may seem counter-intuitive: how can the algorithm know whether a group of unlabeled images will be informative, when, by definition, there is no label directly associated with any of the images? Our approach is based on choosing an image to label that maximizes the expected amount of information we gain about the set of unlabeled images. The technique is demonstrated in several contexts, including improving the efficiency of Web image-search queries and open-world visual learning by an autonomous agent. Experiments on a large set of 140 visual object categories taken directly from text-based Web image searches show that our technique can provide large improvements (up to a 10× reduction in the number of training examples needed) over baseline techniques. |
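A minimal sketch of the simplest entropy criterion: query the unlabeled example whose predicted class distribution has maximum entropy. Note this is a simpler rule than the paper's objective, which maximizes the expected information gained about the whole set of unlabeled images.

```python
import numpy as np

def pick_query(proba):
    """proba: (n_unlabeled, n_classes) array of predicted class probabilities.

    Returns the index of the most uncertain (maximum-entropy) example."""
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    return int(entropy.argmax())

# Usage with any probabilistic classifier (hypothetical names):
#   idx = pick_query(model.predict_proba(X_unlabeled))
#   the oracle labels X_unlabeled[idx]; retrain; repeat.
```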
Reading Scene Text with Attention Convolutional Sequence Modeling | Reading text in the wild is a challenging task in the field of computer vision. Existing approaches mainly adopt Connectionist Temporal Classification (CTC) or attention models based on Recurrent Neural Networks (RNNs), which are computationally expensive and hard to train. In this paper, we present an end-to-end attention convolutional network for scene text recognition. Firstly, instead of an RNN, we adopt stacked convolutional layers to effectively capture the contextual dependencies of the input sequence, characterized by lower computational complexity and easier parallel computation. Compared to the chain structure of recurrent networks, the Convolutional Neural Network (CNN) provides a natural way to capture long-term dependencies between elements and is 9 times faster than a Bidirectional Long Short-Term Memory (BLSTM). Furthermore, in order to enhance the representation of foreground text and suppress background noise, we incorporate residual attention modules into a small densely connected network to improve the discriminability of CNN features. We validate the performance of our approach on standard benchmarks, including Street View Text, IIIT5K and the ICDAR datasets. The resulting state-of-the-art or highly competitive performance and efficiency demonstrate the superiority of the proposed approach. |
Coffee Silverskin Extract Protects against Accelerated Aging Caused by Oxidative Agents | Nowadays, coffee beans are almost exclusively used for the preparation of the beverage. The sustainability of coffee production can be achieved by introducing new applications for the valorization of coffee by-products. Coffee silverskin is the by-product generated during roasting, and because of its powerful antioxidant capacity, coffee silverskin aqueous extract (CSE) may be used for other applications, such as antiaging cosmetics and dermaceutics. This study aims to contribute to the coffee sector's sustainability through the application of CSE to preserve skin health. Preclinical data regarding the antiaging properties of CSE employing human keratinocytes and Caenorhabditis elegans were collected during the present study. Accelerated aging was induced by tert-butyl hydroperoxide (t-BOOH) in HaCaT cells and by ultraviolet radiation C (UVC) in C. elegans. Results suggest that the tested concentrations of coffee extracts were not cytotoxic, and CSE at 1 mg/mL conferred resistance on skin cells when oxidative damage was induced by t-BOOH. On the other hand, nematodes treated with CSE (1 mg/mL) showed significantly increased longevity compared to those cultured on a standard diet. In conclusion, our results support the antiaging properties of CSE and its great potential for improving skin health, due to its antioxidant character associated with phenols, among other bioactive compounds present in the botanical material. |
An Efficient Active Learning Framework for New Relation Types | Supervised training of models for semantic relation extraction has yielded good performance, but at substantial cost for the annotation of large training corpora. Active learning strategies can greatly reduce this annotation cost. We present an efficient active learning framework that starts from a better balance between positive and negative samples, and boosts training efficiency by interleaving self-training and co-testing. We also study the reduction of annotation cost achieved by enforcing argument type constraints. Experiments show a substantial speed-up by comparison to the previous state-of-the-art pure co-testing active learning framework. We obtain reasonable performance with only 150 labels for individual ACE 2004 relation types. |
Measuring Semantic Similarity in the Taxonomy of WordNet | This paper presents a new model to measure semantic similarity in the taxonomy of WordNet, using edge-counting techniques. We weigh up our model against a benchmark set by human similarity judgment, and achieve a much improved result compared with other methods: the correlation with average human judgment on a standard 28 word pair dataset is 0.921, which is better than anything reported in the literature and also significantly better than average individual human judgments. As this set has been effectively used for algorithm selection and tuning, we also cross-validate on an independent 37 word pair test set (0.876) and present results for the full 65 word pair superset (0.897). |
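For comparison, NLTK exposes the classic edge-counting baseline over the WordNet taxonomy. The snippet below is that baseline, not the weighted model proposed in the paper:

```python
# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

car = wn.synset("car.n.01")
bus = wn.synset("bus.n.01")

# path_similarity = 1 / (1 + shortest_path_length) in the taxonomy.
print(car.path_similarity(bus))
```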
Remote photoplethysmography application to the analysis of time-frequency changes of human heart rate variability | In this article we present the possibilities of using remote photoplethysmography (rPPG, or imaging PPG) technology to estimate time-frequency changes of human heart rate variability. We propose improvements to the algorithm presented in our recent study. The modification allows excluding skin areas with highly variable levels of lighting, thus reducing the noise level and increasing the duration of signal suitable for processing. Twenty healthy volunteers (males and females) aged from 20 to 25 took part in this investigation. The blood volume pulse rate estimated from the rPPG rhythmogram and the cardiac pulse rate estimated from the electrocardiogram are compared. The results showed that the very low frequency hemodynamic oscillations of blood volume pulse rate estimated from the rPPG rhythmogram in the [0.003-0.04] Hz frequency band can be used to monitor functional changes in a human. |
Estimation and Forecast of Wind Power Generation by FTDNN and NARX-net based models for Energy Management Purpose in Smart Grids | This paper focuses on the prediction and forecast of climate time series by artificial neural networks, which is particularly useful for planning and management of the power grid. An appropriate prediction and forecast of climate variables, indeed, improves the overall efficiency and performance of renewable power plants connected to the power grid. On this basis, the application of suitable Artificial Neural Networks (ANNs) to the field of wind power generation is proposed. In particular, two dynamic recurrent ANNs, i.e., the Focused Time-Delay Neural Network (FTDNN) and the Nonlinear AutoRegressive network with eXogenous inputs (NARX), are used to develop a model for the estimation and forecast of daily wind speed. The results, applied to a turbine model, allow the produced power to be calculated for energy management and planning purposes in smart grids. |
Vaccine-like immunity against malaria by repeated causal-prophylactic treatment of liver-stage Plasmodium parasites. | Liver-stage development of Plasmodium parasites represents a dramatic expansion phase for the malarial parasite between vector transmission and onset of the pathogenic blood-stage cycle. Here, we report that repeated causal-prophylactic primaquine treatment of liver-stage Plasmodium parasites in rodents elicits vaccine-like protective immunity against sporozoite-induced malaria. This regimen differs fundamentally from those involving radiation- or genetically attenuated parasites, in which long-lasting immune responses are dependent on persistence of metabolically active parasites. Pharmacological inhibition of liver-stage parasites in the rodent malaria model offers a potential fast track toward development of a vaccine that targets parasites in preerythrocytic stages. |
Effects of acute systemic administration of cannabidiol on sleep-wake cycle in rats. | UNLABELLED
Cannabidiol (CBD) is one of the main components of Cannabis sativa and has a wide spectrum of action, including effects in the sleep-wake cycle.
OBJECTIVE
The objective of this paper is to assess the effects on sleep of acute systemic administration of CBD.
METHOD
Adult male Wistar rats were randomly distributed into four groups that received intraperitoneal injections of CBD 2.5 mg/kg, CBD 10 mg/kg, CBD 40 mg/kg or vehicle (n = 7 animals/group). Sleep recordings were made during light and dark periods for four days: two days of baseline recording, one day of drug administration (test), and one day after drug (post-test).
RESULTS
During the light period of the test day, the total percentage of sleep significantly increased in the groups treated with 10 and 40 mg/kg of CBD compared to vehicle. REM sleep latency increased in the group injected with CBD 40 mg/kg and was significantly decreased with the dose of 10 mg/kg on the post-test day. There was an increase in the time of SWS in the group treated with CBD 40 mg/kg, although this result did not reach statistical significance.
CONCLUSION
The systemic acute administration of CBD appears to increase total sleep time, in addition to increasing sleep latency in the light period of the day of administration. |
P-MOD: Secure Privilege-Based Multilevel Organizational Data-Sharing in Cloud Computing | Cloud computing has changed the way enterprises store, access and share data. Data is constantly being uploaded to the cloud and shared within organizations built on hierarchies of many different individuals who are given certain data access privileges. With more data storage needs turning over to the cloud, finding a secure and efficient data access structure has become a major research issue. With different access privileges, individuals with more privileges (at higher levels of the hierarchy) are granted access to more sensitive data than those with fewer privileges (at lower levels of the hierarchy). In this paper, a Privilege-based Multilevel Organizational Data-sharing scheme (P-MOD) is proposed that incorporates a privilege-based access structure into an attribute-based encryption mechanism to handle these concerns. Each level of the privilege-based access structure is affiliated with an access policy that is uniquely defined by specific attributes. Data is then encrypted under each access policy at every level to grant access to specific data users based on their data access privileges. An individual ranked at a certain level can decrypt the ciphertext (at that specific level) if and only if that individual owns a correct set of attributes that can satisfy the access policy of that level. The user may also decrypt the ciphertexts at levels below the user's own level. Security analysis shows that P-MOD is secure against adaptively chosen plaintext attacks under the decisional bilinear Diffie-Hellman (DBDH) assumption. The comprehensive performance analysis demonstrates that P-MOD is more efficient in computational complexity and storage space than existing schemes for secure data sharing within an organization. |
I Like It... I Like It Not: Evaluating User Ratings Noise in Recommender Systems | Recent growing interest in predicting and influencing consumer behavior has generated a parallel increase in research efforts on Recommender Systems. Many of the state-of-the-art Recommender Systems algorithms rely on obtaining user ratings in order to later predict unknown ratings. An underlying assumption in this approach is that the user ratings can be treated as ground truth of the user's taste. However, users are inconsistent in giving their feedback, thus introducing an unknown amount of noise that challenges the validity of this assumption. In this paper, we tackle the problem of analyzing and characterizing the noise in user feedback through ratings of movies. We present a user study aimed at quantifying the noise in user ratings that is due to inconsistencies. We measure RMSE values that range from 0.557 to 0.8156. We also analyze how factors such as item sorting and time of rating affect this noise. |
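For concreteness, rating noise of this kind can be quantified as the RMSE between two rating passes by the same users over the same items; a minimal sketch with invented ratings:

```python
import numpy as np

def rating_noise_rmse(pass1, pass2):
    """pass1, pass2: aligned arrays of one user's ratings given at two times."""
    pass1 = np.asarray(pass1, dtype=float)
    pass2 = np.asarray(pass2, dtype=float)
    return float(np.sqrt(np.mean((pass1 - pass2) ** 2)))

print(rating_noise_rmse([4, 3, 5, 2], [4, 2, 5, 3]))  # ~0.707
```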
Semi-Supervised Learning for Relation Extraction | This paper proposes a semi-supervised learning method for relation extraction. Given a small amount of labeled data and a large amount of unlabeled data, it first bootstraps a moderate number of weighted support vectors via SVM through a co-training procedure with random feature projection and then applies a label propagation (LP) algorithm via the bootstrapped support vectors. Evaluation on the ACE RDC 2003 corpus shows that our method outperforms the normal LP algorithm via all the available labeled data without SVM bootstrapping. Moreover, our method can largely reduce the computational burden. This suggests that our proposed method can integrate the advantages of both SVM bootstrapping and label propagation. |
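A minimal sketch of the propagation stage using scikit-learn's LabelSpreading as a stand-in; the paper's pipeline additionally bootstraps weighted support vectors via SVM co-training before propagating, which is omitted here, and the two-cluster data is invented for illustration:

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)),   # cluster for class 0
               rng.normal(4, 1, (50, 2))])  # cluster for class 1

y = np.full(100, -1)     # -1 marks unlabeled examples
y[0], y[50] = 0, 1       # one seed label per class

model = LabelSpreading(kernel="rbf").fit(X, y)
print(model.transduction_[:5], model.transduction_[50:55])
```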
Relationship between risk and intention to purchase in an online context: role of gender and product category | Multiple studies have attempted to explain the online shopping behaviour of consumers in both the Information Systems (IS) and Marketing literatures. However, given the widening gap between the actual and expected increase in Internet-enabled or web-based consumer purchase transactions, the need to investigate the underlying factors of online purchase behaviour assumes increased significance. Also, the gap between consumers' actual purchase behaviour on the Internet and that explained by existing research points to the possibility of unexplained control variables influencing consumers' online shopping behaviour. Building on past research, our study incorporates gender and product category as two control variables and, unlike prior studies, takes an integrative perspective by examining the interactional role of gender and product category in online shopping behaviour. Our results show that the relationship between perceived risk and intention to purchase is moderated by the interaction of gender and product category. One major finding of this study, that perceived usefulness mediates the relationship between perceived risk and intention to purchase, has significant theoretical implications for the technology acceptance model in the Internet context. Our study also indicates perceived usefulness to be the primary determinant of online purchase behaviour and points to a likely nonsignificant role of perceived ease of use in influencing purchase intention. We discuss these results and provide implications for both theory and research. |
Spatio-Temporally Consistent Correspondence for Dense Dynamic Scene Modeling | Problem: dense 3D reconstruction of a dynamic foreground subject from a pair of unsynchronized videos with unknown temporal overlap. Challenges: (1) how to identify the temporal overlap in terms of the estimated dynamic geometry; (2) how to robustly estimate geometry without knowledge of the temporal overlap. Key ideas: (1) define the cardinality of the maximal set of locally rigid feature tracks as a measure of the spatio-temporal consistency of a pair of video sub-sequences (a local rigidity test); (2) develop a closed-loop track correspondence refinement process to find the maximal set of rigid tracks. Contributions: (1) we exploit the correlation between temporal alignment errors and geometric estimation errors; (2) we provide a joint solution to the geometry estimation and temporal video alignment problems; (3) a model-free (i.e., data-driven) framework with wide applicability. |
Private Equity Minority Investments in Large Family Firms: What Influences the Attitude of Family Firm Owners? | This paper extends research in the field of private equity investments in family firms. It contributes to the literature by fundamentally analyzing the decision criteria of family firm owners for using minority investments of private equity investors. This type of financing might be of great interest to family firms, as the family firm owner is able to secure majority ownership and control over the family business. Likewise, minority investments might be attractive for private equity investors, as they are mostly not leveraged and therefore independent from capital market turbulences. Using data from 21 case studies, we identify challenges induced by the family or the business that lead to the phenomenon of private equity minority investments in family firms. We find that perceived benefits and drawbacks of private equity investments are influenced by business and family characteristics. Based on pecking-order theory, resource-based view and the strategy paradigm, propositions as well as a conceptual framework are developed. |
An Integrated Biometric-Based Security Framework Using Wavelet-Domain HMM in Wireless Body Area Networks (WBAN) | In this paper, we propose an integrated biometric-based security framework for wireless body area networks, which takes advantage of biometric features shared by body sensors deployed at different positions on a person's body. The data communications among these sensors are secured via the proposed authentication and selective encryption schemes, which require only low computational power and few resources (e.g., battery and bandwidth). Specifically, a wavelet-domain Hidden Markov Model (HMM) classification is utilized, accounting for the non-Gaussian statistics of ECG signals, for accurate authentication. In addition, biometric information such as ECG parameters is selected as the biometric key for encryption in the framework. Our experimental results demonstrate that the proposed approach achieves more accurate authentication performance without extra requirements for key distribution and strict time synchronization. |
Biologic Approaches for the Treatment of Partial Tears of the Anterior Cruciate Ligament | BACKGROUND
Anterior cruciate ligament reconstruction (ACLR) has been established as the gold standard for treatment of complete ruptures of the anterior cruciate ligament (ACL) in active, symptomatic individuals. In contrast, treatment of partial tears of the ACL remains controversial. Biologically augmented ACL-repair techniques are expanding in an attempt to regenerate and improve healing and outcomes of both the native ACL and the reconstructed graft tissue.
PURPOSE
To review the biologic treatment options for partial tears of the ACL.
STUDY DESIGN
Review.
METHODS
A literature review was performed that included searches of PubMed, Medline, and Cochrane databases using the following keywords: partial tear of the ACL, ACL repair, bone marrow concentrate, growth factors/healing enhancement, platelet-rich plasma (PRP), stem cell therapy.
RESULTS
The use of novel biologic ACL repair techniques, including growth factors, PRP, stem cells, and bioscaffolds, have been reported to result in promising preclinical and short-term clinical outcomes.
CONCLUSION
The potential benefits of these biological augmentation approaches for partial ACL tears are improved healing, better proprioception, and a faster return to sport and activities of daily living when compared with standard reconstruction procedures. However, long-term studies with larger cohorts of patients and with technique validation are necessary to assess the real effect of these approaches. |
MediConceptNet: An Affinity Score Based Medical Concept Network | In healthcare, information extraction is essential for building automatic domain-specific applications. Medical concepts and their semantic identification play an important role in developing a network for visualizing medical concepts and their relations. The challenge arises because available medical corpora are only in unstructured or semi-structured forms. In the present paper, to overcome this challenge and construct a structured corpus, we apply a domain-specific lexicon, namely WordNet of Medical Event. Medical concepts assigned by this lexicon and their affinity score, polarity score, sense, and semantic features assist in identifying conceptual and sentiment relations from the corpus. The lexicon and these features provide essential support for analyzing an unstructured corpus and representing it as a structured corpus, which we term MediConceptNet: the medical concepts are connected with each other through the concerned features. A previously suggested network for the same purpose, e.g., SemNet, is based only on semantic and affinity features. The semantic relations of the concepts can be determined in three distinct ranges: 0 for no relation, between 0 and 1 for partial relations, and 1 for a full relation. To evaluate the data of MediConceptNet, we apply an agreement analysis based on Cohen's kappa coefficient and achieve a 0.66 agreement score over the comparative statistics of two medical practitioners working as manual annotators. |
Relaxation, reduction in angry articulated thoughts, and improvements in borderline hypertension and heart rate | An intensive 7-week relaxation therapy was evaluated in a sample of unmedicated borderline hypertensive men. All subjects were provided state-of-the-art medical information regarding changes known to affect hypertension favorably, e.g., lower salt intake and regular exercise. In addition, relaxation subjects were trained in muscle relaxation that entailed audiotaped home practice. As predicted, relaxation combined with hygiene lowered blood pressure more than did hygiene alone. Neither treatment favorably affected a paper-and-pencil measure of anger but relaxation did lower anger-hostility on a new cognitive assessment procedure, Articulated Thoughts in Simulated Situations (ATSS). Moreover, ATSS anger-hostility reduction was correlated with blood pressure or heart rate reductions, for all subjects and especially for those in the Relaxation condition. This represents the first clinically demonstrated link between change in a cognitive variable and change in cardiovascular activity. Finally, results were especially strong in subjects high in norepinephrine, suggesting its importance in essential hypertension. |
Educational Environment for Training Future Engineers for Enterprises of Mining and Metallurgical Complex | The article describes a new approach to the training of bachelors in the field of “Power Engineering and Electrical Engineering”, future engineers of the mining and metallurgical complex, as well as the features of business education, the purpose of which is to train engineers for a particular enterprise. An approach of deep immersion in the field of future professional activity is applied. The necessary results can be achieved by using modern information technologies in the educational process, such as 3D modeling, virtual reality, 3D simulators, and cloud technologies. In combination with educational tasks, these technologies form an educational environment that can develop engineering thinking and prepare graduates to solve urgent production problems without additional adaptation at the enterprise. |
Intravenous tranexamic acid and intraoperative visualization during functional endoscopic sinus surgery: a double-blind randomized controlled trial. | BACKGROUND
Bleeding during endoscopic sinus surgery (ESS) can hinder surgical progress and may be associated with increased complications. Tranexamic acid is an antifibrinolytic that is known to reduce operative bleeding. The current study was designed to assess the effect of adjunctive intravenous tranexamic acid on intraoperative bleeding and the quality of the surgical field during ESS.
METHODS
Double-blind, randomized, controlled trial. Patients undergoing ESS for the primary diagnosis of chronic rhinosinusitis with or without polyposis were included. Sample size calculation based on a clinically relevant difference in the Wormald surgical field score yielded a sample of 28. In addition to standard measures to minimize blood loss, study patients received intravenous tranexamic acid with control patients receiving intravenous normal saline. Outcome measures included the Wormald grading scale to assess the intraoperative surgical field and estimated blood loss based on suction container contents with irrigation fluid subtracted.
RESULTS
Twenty-eight patients (median age, 45 years; range, 23-80 years) were included in the study. Diagnoses included chronic rhinosinusitis without polyposis (n = 5) and chronic rhinosinusitis with polyposis (n = 23). The use of tranexamic acid was not associated with a statistically significant decrease in estimated blood loss (201 vs 231 mL; p = 0.60) or Wormald grading scale score (5.84 vs 5.80; p = 0.93). There were no adverse events or complications during the study.
CONCLUSION
Adjunctive intravenous tranexamic acid does not appear to result in a clinically meaningful reduction in blood loss or improve visualization of the surgical field during ESS. |
Personality traits and concern for privacy: an empirical study in the context of location-based services | Received: 10 March 2008 Revised: 31 May 2008 2nd Revision: 27 July 2008 Accepted: 11 August 2008 Abstract For more than a century, concern for privacy (CFP) has co-evolved with advances in information technology. The CFP refers to the anxious sense of interest that a person has because of various types of threats to the person’s state of being free from intrusion. Research studies have validated this concept and identified its consequences. For example, research has shown that the CFP can have a negative influence on the adoption of information technology; but little is known about factors likely to influence such concern. This paper attempts to fill that gap. Because privacy is said to be a part of a more general ‘right to one’s personality’, we consider the so-called ‘Big Five’ personality traits (agreeableness, extraversion, emotional stability, openness to experience, and conscientiousness) as factors that can influence privacy concerns. Protection motivation theory helps us to explain this influence in the context of an emerging pervasive technology: location-based services. Using a survey-based approach, we find that agreeableness, conscientiousness, and openness to experience each affect the CFP. These results have implications for the adoption, the design, and the marketing of highly personalized new technologies. European Journal of Information Systems (2008) 17, 387–402. doi:10.1057/ejis.2008.29 |
Superpixel-Based Graphical Model for Remote Sensing Image Mapping | Object-oriented remote sensing image classification is becoming more and more popular because it can integrate spatial information from neighboring regions of different shapes and sizes into the classification procedure to improve the mapping accuracy. However, object identification itself is difficult and challenging. Superpixels, which are groups of spatially connected similar pixels, have the scale between the pixel level and the object level and can be generated from oversegmentation. In this paper, we establish a new classification framework using a superpixel-based graphical model. Superpixels instead of pixels are applied as the basic unit to the graphical model to capture the contextual information and the spatial dependence between the superpixels. The advantage of this treatment is that it makes the classification less sensitive to noise and segmentation scale. The contribution of this paper is the application of a graphical model to remote sensing image semantic segmentation. It is threefold. 1) Gradient fusion is applied to multispectral images before the watershed segmentation algorithm is used for superpixel generation. 2) A probabilistic fusion method is designed to derive node potential in the superpixel-based graphical model to address the problem of insufficient training samples at the superpixel level. 3) A boundary penalty between the superpixels is introduced in the edge potential evaluation. Experiments on three real data sets were conducted. The results show that the proposed method performs better than the related state-of-the-art methods tested. |
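As an illustration of the pixel-to-superpixel step, the sketch below uses SLIC from scikit-image purely as a stand-in; the paper itself generates superpixels by gradient fusion followed by watershed segmentation, and the random image is invented for illustration:

```python
import numpy as np
from skimage.segmentation import slic

image = np.random.rand(128, 128, 3)     # stand-in for a fused multispectral image
segments = slic(image, n_segments=200, compactness=10.0)  # oversegmentation

# 'segments' is a label map assigning each pixel a superpixel id; these
# regions become the nodes of the graphical model.
print(np.unique(segments).size, "superpixels")
```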
A Survey of Augmented Reality Technologies, Applications and Limitations | We are on the verge of ubiquitously adopting Augmented Reality (AR) technologies to enhance our perception and help us see, hear, and feel our environments in new and enriched ways. AR will support us in fields such as education, maintenance, design and reconnaissance, to name but a few. This paper describes the field of AR, including a brief definition and development history, the enabling technologies and their characteristics. It surveys the state of the art by reviewing some recent applications of AR technology as well as some known limitations regarding human factors in the use of AR systems that developers will need to overcome. INTRODUCTION: Imagine a technology with which you could see more than others see, hear more than others hear, and perhaps even touch, smell and taste things that others cannot. What if we had technology to perceive completely computational elements and objects within our real world experience, entire creatures and structures even, that help us in our daily activities, while interacting almost unconsciously through mere gestures and speech? With such technology, mechanics could see instructions on what to do next when repairing an unknown piece of equipment, surgeons could see ultrasound scans of organs while performing surgery on them, firefighters could see building layouts to avoid otherwise invisible hazards, soldiers could see positions of enemy snipers spotted by unmanned reconnaissance aircraft, and we could read reviews for each restaurant in the street we're walking in, or battle 10-foot tall aliens on the way to work [57]. Augmented reality (AR) is this technology to create a "next generation, reality-based interface" [77] and is moving from laboratories around the world into various industries and consumer markets. AR supplements the real world with virtual (computer-generated) objects that appear to coexist in the same space as the real world. AR was recognised as an emerging technology of 2007 [79], and with today's smart phones and AR browsers we are starting to embrace this very new and exciting kind of human-computer interaction. In the taxonomy of Milgram and Kishino [107] (Fig. 1), AR is one part of the general area of mixed reality. Both virtual environments (or virtual reality) and augmented virtuality, in which real objects are added to virtual ones, replace the surrounding environment by a virtual one. In contrast, AR provides local virtuality. When considering not just artificiality but also user transportation, Benford et al. [28] … |
Engineering response to the dual risk of natural hazards and global change | The research programs on natural hazards mitigation and on global change supported by the National Science Foundation are briefly described. The potential relationships between the two types of phenomena, natural and man-made, are discussed, together with a summary of possible future directions in engineering research. |
Obstacles in Moving to Agile Software Development methods; at a Glance | It is less than a decade since agile software development (SD) methods were introduced, and they have steadily gained popularity. The values defined in these methods and their outcomes have motivated many software producers to adopt them. Since migration from traditional software development methods to agile methods is growing rapidly, company managers should be aware of the problems, hindrances and challenges they may face during the agile transformation process. This study focuses on the challenges companies may face and that managers need to address. Recent studies have reported such challenges in four main categories: organization and management, people, process and tools. |
Streaming Architecture for Large-Scale Quantized Neural Networks on an FPGA-Based Dataflow Platform | Deep neural networks (DNNs) are used by different applications that are executed on a range of computer architectures, from IoT devices to supercomputers. The footprint of these networks is huge, as are their computational and communication needs. In order to ease the pressure on resources, research indicates that in many cases a low-precision representation (1-2 bits per parameter) of weights and other parameters can achieve similar accuracy while requiring fewer resources. Using quantized values enables the use of FPGAs to run NNs, since FPGAs are well suited to these primitives; e.g., FPGAs provide efficient support for bitwise operations and can work with arbitrary-precision representations of numbers. This paper presents a new streaming architecture for running QNNs on FPGAs. The proposed architecture scales out better than alternatives, allowing us to take advantage of systems with multiple FPGAs. We also included support for skip connections, which are used in state-of-the-art NNs, and showed that our architecture allows those connections to be added almost for free. All this allowed us to implement an 18-layer ResNet for 224×224 image classification, achieving 57.5% top-1 accuracy. In addition, we implemented a full-sized quantized AlexNet. In contrast to previous works, we use 2-bit activations instead of 1-bit ones, which improves AlexNet's top-1 accuracy from 41.8% to 51.03% on ImageNet classification. Both AlexNet and ResNet can handle 1000-class real-time classification on an FPGA. Our implementation of ResNet-18 consumes 5× less power and is 4× slower on ImageNet when compared to the same NN on the latest Nvidia GPUs. Smaller NNs that fit on a single FPGA run faster than on GPUs on small (32×32) inputs, while consuming up to 20× less energy and power. |
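The 2-bit activation scheme contrasted with 1-bit activations can be illustrated generically. A minimal sketch, assuming a uniform quantizer over activations already bounded to [0, 1]; the paper's exact quantization function is not reproduced here.

```python
import numpy as np

def quantize_activations(x, bits=2):
    # Map activations onto 2**bits evenly spaced levels in [0, 1].
    # With bits=1 this degenerates to binary activations; bits=2 gives
    # the four levels that recover much of AlexNet's lost accuracy.
    levels = 2 ** bits - 1
    x = np.clip(x, 0.0, 1.0)
    return np.round(x * levels) / levels

# Example: four distinct values survive 2-bit quantization.
print(quantize_activations(np.array([0.05, 0.3, 0.6, 0.9])))
# prints approximately [0.  0.333  0.667  1.]
```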
Comparison of Risk Assessment for a Nuclear Power Plant Construction Project Based on Analytic Hierarchy Process and Fuzzy Analytic Hierarchy Process | Recently, plant construction throughout the world, including nuclear power plant construction, has grown significantly. The scale of Korea’s nuclear power plant construction in particular, has increased gradually since it won a contract for a nuclear power plant construction project in the United Arab Emirates in 2009. However, time and monetary resources have been lost in some nuclear power plant construction sites due to lack of risk management ability. The need to prevent losses at nuclear power plant construction sites has become more urgent because it demands professional skills and large-scale resources. Therefore, in this study, the Analytic Hierarchy Process (AHP) and Fuzzy Analytic Hierarchy Process (FAHP) were applied in order to make comparisons between decision-making methods, to assess the potential risks at nuclear power plant construction sites. To suggest the appropriate choice between two decision-making methods, a survey was carried out. From the results, the importance and the priority of 24 risk factors, classified by process, cost, safety, and quality, were analyzed. The FAHP was identified as a suitable method for risk assessment of nuclear power plant construction, compared with risk assessment using the AHP. These risk factors will be able to serve as baseline data for risk management in nuclear power plant construction projects. |
Corpus Specific Stop Words to Improve the Textual Analysis in Scientometrics | With the availability of vast collections of research articles on the internet, textual analysis is an increasingly important technique in scientometric analysis. While the context in which it is used and the specific algorithms implemented may vary, typically any textual analysis exercise involves intensive pre-processing of the input text, which includes removing topically uninteresting terms (stop words). In this paper we argue that corpus-specific stop words, which take into account the specificities of a collection of texts, improve textual analysis in scientometrics. We describe two relatively simple techniques to generate corpus-specific stop words: stop word lists following a Poisson distribution and keyword-adjacency stop word lists. In a case study to extract keywords from scientific abstracts of research projects funded by the European Research Council in the domain of life sciences, we show that a combination of those techniques gives better recall values than standard stop words or either of the two techniques alone. The method we propose can be implemented to obtain stop word lists automatically by using author-provided keywords for a set of abstracts. The stop word lists generated can be updated easily by adding new texts to the training corpus. |
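The Poisson-based list rests on one more step of reasoning worth making explicit: if a word's total count were scattered at random over the N documents, the chance that a given document contains it is 1 − e^(−λ) with λ = count/N, so words whose observed document frequency tracks that prediction behave like topic-neutral function words. A minimal sketch under assumptions (whitespace tokenizer, an arbitrary tolerance cutoff), not the authors' exact procedure:

```python
import math
from collections import Counter

def poisson_stop_words(docs, tolerance=0.15):
    # Flag words whose observed document frequency is close to the
    # Poisson prediction 1 - exp(-lambda), i.e. words spread over the
    # corpus as if at random.
    n_docs = len(docs)
    term_counts, doc_freq = Counter(), Counter()
    for doc in docs:
        tokens = doc.lower().split()
        term_counts.update(tokens)
        doc_freq.update(set(tokens))
    stops = []
    for word, total in term_counts.items():
        expected_df = 1.0 - math.exp(-total / n_docs)
        observed_df = doc_freq[word] / n_docs
        if abs(observed_df - expected_df) <= tolerance * expected_df:
            stops.append(word)
    return stops
```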
Big Data: Principles and best practices of scalable realtime data systems | |
Exposure to violent video games increases automatic aggressiveness. | The effects of exposure to violent video games on automatic associations with the self were investigated in a sample of 121 students. Playing the violent video game Doom led participants to associate themselves with aggressive traits and actions on the Implicit Association Test. In addition, self-reported prior exposure to violent video games predicted automatic aggressive self-concept, above and beyond self-reported aggression. Results suggest that playing violent video games can lead to the automatic learning of aggressive self-views. |
Polyethylene glycol 4000 for treatment of functional constipation in children. | OBJECTIVES
The aim of the study was to evaluate the effectiveness and safety of 2 different polyethylene glycol (PEG) doses for the maintenance treatment of functional constipation in children.
METHODS
Children with functional constipation according to the Rome III criteria were randomly assigned to receive PEG 4000 at a dose of either 0.7 g/kg (high-dose group; n = 45) or 0.3 g/kg (low-dose group; n = 47) for 6 weeks. Adjustment of the therapy was recommended in the event of <3 bowel movements (BM) per week or ≥3 BM per day. The primary outcome measure was treatment success, defined as ≥3 BM per week with no fecal soiling during the last week of the intervention.
RESULTS
A total of 90 of 92 randomized children, with a mean age of 3.7 ± 2.1 years, completed the study. In the analysis based on allocated treatment, treatment success was similar in both groups (relative risk 0.9, 95% confidence interval 0.78-1.03). Compared with the high-dose PEG group, the low-dose PEG group had an increased need for therapy adjustment of borderline significance (relative risk 2.0, 95% confidence interval 1.0-4.2), an increased risk of painful defecation, a lower number of stools per week, and lower parental satisfaction. Adverse events were similar in both groups.
CONCLUSIONS
Both tested doses of PEG were equally safe and effective in achieving treatment success in children with functional constipation. |
A culturally and linguistically responsive vocabulary approach for young Latino dual language learners. | PURPOSE
This study examined the role of the language of vocabulary instruction in promoting English vocabulary in preschool Latino dual language learners (DLLs). The authors compared the effectiveness of delivering a single evidence-informed vocabulary approach using English as the language of vocabulary instruction (English culturally responsive [ECR]) versus using a bilingual modality that strategically combined Spanish and English (culturally and linguistically responsive [CLR]).
METHOD
Forty-two DLL Spanish-speaking preschoolers were randomly assigned to the ECR group (n=22) or CLR group (n=20). Thirty English words were presented during small-group shared readings in their preschools 3 times a week for 5 weeks. Multilevel models were used to examine group differences in postinstruction scores on 2 Spanish and 2 English vocabulary assessments at instruction end and follow-up.
RESULTS
Children receiving instruction in the CLR bilingual modality had significantly higher posttest scores (than those receiving the ECR English-only instruction) on Spanish and English vocabulary assessments at instruction end and on the Spanish vocabulary assessment at follow-up, even after controlling for preinstruction scores.
CONCLUSIONS
The results provide additional evidence of the benefits of strategically combining the first and second language to promote English and Spanish vocabulary development in this population. Future directions for research and clinical applications are discussed. |
Post-bond Testing of the Silicon Interposer and Micro-bumps in 2.5D ICs | 2.5D integration is emerging as a precursor to stacked 3D ICs. Since the silicon interposer and micro-bumps in 2.5D integration can suffer from fabrication and assembly defects, post-bond testing is necessary for product qualification. This paper proposes and evaluates an interposer test architecture based on extensions to the IEEE 1149.1 standard. The proposed method enables access to interconnects inside the interposer by probing the C4 bumps. It provides an effective test method for opens, shorts, and interconnect delay faults in the interposer. Moreover, micro-bumps can be tested through test paths that include dies on the interposer. HSPICE simulation results show that a large range of defects can be detected, diagnosed, and characterized using the proposed approach. |
Breast cancer risk factors and mammographic breast density in women over age 70 | Breast density is a strong risk factor for breast cancer, but little is known about factors associated with breast density in women over 70. Percent breast density, sex hormone levels and breast cancer risk factor data were obtained on 239 women ages 70–92 recruited from 1986 to 1988 in the United States. Multivariable linear regression was used to develop a model to describe factors associated with percent density. Median (range) percent density among women was 23.7% (0–85%). Body mass index (β= −0.345, p<0.001 adjusted for age and parity) and parity (β= −0.277, p<0.001 adjusted for age and BMI) were significantly and inversely associated with percent breast density. After adjusting for parity and BMI, age was not associated with breast density (β=0.05, p=0.45). Parous women had lower percent density than nulliparous women (23.7 versus 34.7%, p=0.005). Women who had undergone surgical menopause had greater breast density than those who had had a natural menopause (33.4 versus 24.8%, p=0.048), as did women who were not current smokers (26.0 versus 17.3% for smokers, p=0.02). Breast density was not associated with age at menarche, age at menopause, age at first birth, breastfeeding, estrogen levels or androgen levels. In a multivariable model, 24% of the variance in percent breast density was explained by BMI (β= −0.35), parity (β=−0.29), surgical menopause (β=0.13) and current smoking (β= −0.12). Factors associated with breast density in older, post-menopausal women differ from traditional breast cancer risk factors and from factors associated with breast density in pre-menopausal and younger post-menopausal women. |
An Overview on XML Semantic Disambiguation from Unstructured Text to Semi-Structured Data: Background, Applications, and Ongoing Challenges | Over the last two decades, XML has gained momentum as the standard for web information management and complex data representation. Also, collaboratively built semi-structured information resources, such as Wikipedia, have become prevalent on the Web and can be inherently encoded in XML. Yet most methods for processing XML and semi-structured information handle mainly the syntactic properties of the data, while ignoring the semantics involved. To devise more intelligent applications, one needs to augment syntactic features with machine-readable semantic meaning. This can be achieved through the computational identification of the meaning of data in context, also known as automated semantic analysis and disambiguation, which is nowadays one of the main challenges at the core of the Semantic Web. This survey paper provides a concise and comprehensive review of the methods related to XML-based semi-structured semantic analysis and disambiguation. It consists of four logical parts. First, we briefly cover traditional word sense disambiguation methods for processing flat textual data. Second, we describe and categorize disambiguation techniques developed and extended to handle semi-structured and XML data. Third, we describe current and potential application scenarios that can benefit from XML semantic analysis, including: data clustering and semantic-aware indexing, data integration and selective dissemination, semantic-aware and temporal querying, web and mobile services matching and composition, blog and social semantic network analysis, and ontology learning. Fourth, we describe and discuss ongoing challenges and future directions, including: the quantification of semantic ambiguity, expanding XML disambiguation context, combining structure and content, using collaborative/social information sources, integrating explicit and implicit semantic analysis, emphasizing user involvement, and reducing computational complexity. |
A method for automatic stock trading combining technical analysis and nearest neighbor classification | In this paper we propose and analyze a novel method for automatic stock trading which combines technical analysis and nearest neighbor classification. Our first and foremost objective is to study the feasibility of the practical use of an intelligent prediction system based exclusively on the history of daily stock closing prices and volumes. To this end we propose a technique that combines a nearest neighbor classifier with some well-known tools of technical analysis, namely stop loss, stop gain and an RSI filter. For assessing the potential use of the proposed method in practice, we compared the results obtained with the results that would be obtained by adopting a buy-and-hold strategy. The key performance measure in this comparison was profitability. The proposed method was shown to generate considerably higher profits than buy-and-hold for most of the companies, with few buy operations generated, consequently minimizing the risk of market exposure. |
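The building blocks named in the abstract (a nearest-neighbor predictor over recent price windows, gated by an RSI filter) are standard and can be sketched compactly. This is an illustrative reading, not the paper's code: the window length, k, and the RSI entry threshold of 30 are common defaults assumed here, and the stop-loss/stop-gain exits would wrap around the entry rule shown.

```python
import numpy as np

def rsi(closes, period=14):
    # Wilder's relative strength index over the last `period` price moves.
    deltas = np.diff(closes)[-period:]
    gains = np.clip(deltas, 0, None).mean()
    losses = -np.clip(deltas, None, 0).mean()
    if losses == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + gains / losses)

def knn_next_return(returns, window=10, k=5):
    # Average the outcome that followed the k historical windows most
    # similar (Euclidean distance) to the most recent window.
    patterns = np.lib.stride_tricks.sliding_window_view(returns, window + 1)
    current = returns[-window:]
    dists = np.linalg.norm(patterns[:, :window] - current, axis=1)
    nearest = np.argsort(dists)[:k]
    return patterns[nearest, -1].mean()

def buy_signal(closes, returns):
    # Enter only when the predictor is positive and RSI says "oversold".
    return knn_next_return(returns) > 0 and rsi(closes) < 30
```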
The Use of a Bayesian Neural Network Model for Classification Tasks | This thesis deals with a Bayesian neural network model. The focus is on how to use the model for automatic classification, i.e. on how to train the neural network to classify objects from some domain, given a database of labeled examples from the domain. The original Bayesian neural network is a one-layer network implementing a naive Bayesian classifier. It is based on the assumption that different attributes of the objects appear independent of each other. This work has been aimed at extending the original Bayesian neural network model, mainly focusing on three different aspects. First the model is extended to a multi-layer network, to relax the independence requirement. This is done by introducing a hidden layer of complex columns, groups of units which take input from the same set of input attributes. Two different types of complex column structures in the hidden layer are studied and compared. An information theoretic measure is used to decide which input attributes to consider together in complex columns. Also used are ideas from Bayesian statistics, as a means to estimate the probabilities from data which are required to set up the weights and biases in the neural network. The use of uncertain evidence and continuous valued attributes in the Bayesian neural network are also treated. Both things require the network to handle graded inputs, i.e. probability distributions over some discrete attributes given as input. Continuous valued attributes can then be handled by using mixture models. In effect, each mixture model converts a set of continuous valued inputs to a discrete number of probabilities for the component densities in the mixture model. Finally a query-reply system based on the Bayesian neural network is described. It constitutes a kind of expert system shell on top of the network. Rather than requiring all attributes to be given at once, the system can ask for the attributes relevant for the classification. Information theory is used to select the attributes to ask for. The system also offers an explanatory mechanism, which can give simple explanations of the state of the network, in terms of which inputs mean the most for the outputs. These extensions to the Bayesian neural network model are evaluated on a set of different databases, both realistic and synthetic, and the classification results are compared to those of various other classification methods on the same databases. The conclusion is that the Bayesian neural network model compares favorably to other methods for classification. In this work much inspiration has been taken from various branches of machine learning. The goal has been to combine the different ideas into one consistent and useful neural network model. A main theme throughout is to utilize independencies between attributes, to decrease the number of free parameters, and thus to increase the generalization capability of the method. Significant contributions are the method used to combine the outputs from mixture models over different subspaces of the domain, and the use of Bayesian estimation of parameters in the expectation maximization method during training of the mixture models. |
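The starting point, a one-layer network implementing a naive Bayesian classifier, has a compact concrete form: in log space, naive Bayes is a linear layer whose weights are log conditional probabilities and whose biases are log class priors. A minimal sketch under assumptions (one-hot discrete attributes, Laplace smoothing), not the thesis implementation:

```python
import numpy as np

def naive_bayes_layer(counts, alpha=1.0):
    # counts: (n_classes, n_features) co-occurrence counts of one-hot
    # attribute values with each class. Returns (W, b) such that
    # argmax(W @ x + b) is the naive Bayes decision for input x.
    class_totals = counts.sum(axis=1, keepdims=True)
    W = np.log((counts + alpha) / (class_totals + alpha * counts.shape[1]))
    b = np.log(class_totals.ravel() / class_totals.sum())
    return W, b

def classify(x, W, b):
    # A single linear pass: the "one-layer network" view of naive Bayes.
    return int(np.argmax(W @ x + b))
```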
TCP accelerator for DVB-RCS SATCOM dynamic bandwidth environment with HAIPE | A high assurance IP encryption (HAIPE) compliant protocol accelerator is proposed for military networks consisting of red (classified) networks and black (unclassified) networks. The boundary between the red and black sides is assumed to be protected by a HAIPE device. However, IP layer encryption introduces challenges for bandwidth-on-demand satellite communication. The problems experienced by the transmission control protocol (TCP) over satellites are well understood: while standard modems (on the black side) employ a TCP performance enhancing proxy (PEP), which has been shown to work well, the HAIPE encryption of TCP headers renders the onboard modem's PEP ineffective. This is attributed to the fact that under the bandwidth-on-demand environment, the PEP must use traditional TCP mechanisms such as slow start to probe for the available bandwidth of the link (which eliminates the usefulness of the PEP). Most implementations recommend disabling the PEP when a HAIPE device is used. In this paper, we propose a novel solution, namely the broadband HAIPE-embeddable satellite communications terminal (BHeST), which utilizes dynamic network performance enhancement algorithms for high-latency bandwidth-on-demand satellite links protected by HAIPE. By moving the PEP into the red network and exploiting the explicit congestion notification bypass mechanism allowed by the latest HAIPE standard, we have been able to regain the PEP's desired network enhancement that was lost due to HAIPE encryption (even though the idea of deploying a PEP at the modem side is not new). Our BHeST solution employs digital video broadcasting-return channel via satellite (DVB-RCS), an open standard, as a means of providing bandwidth-on-demand satellite links. Another issue we address is the estimation of the current satellite bandwidth allocated to a remote terminal, which is not available in DVB-RCS. Simulation results show that the improvement of our solution over FIX PEP is significant and can reach up to 100%. The improvement over original TCP is even greater (up to 500% for certain configurations). |
Learning supervised scoring ensemble for emotion recognition in the wild | State-of-the-art approaches for the previous emotion recognition in the wild challenges are usually built on prevailing Convolutional Neural Networks (CNNs). Although there is clear evidence that CNNs with increased depth or width can usually bring improved prediction accuracy, existing top approaches provide supervision only at the output feature layer, resulting in insufficient training of deep CNN models. In this paper, we present a new learning method named Supervised Scoring Ensemble (SSE) for advancing this challenge with deep CNNs. We first extend the idea of recent deep supervision to the emotion recognition problem. Benefiting from adding supervision not only to deep layers but also to intermediate and shallow layers, the training of deep CNNs is greatly eased. Second, we present a new fusion structure in which class-wise scoring activations at diverse complementary feature layers are concatenated and further used as the inputs for second-level supervision, acting as a deep feature ensemble within a single CNN architecture. We show that our proposed learning method brings large accuracy gains over diverse backbone networks consistently. On this year's audio-video based emotion recognition task, the average recognition rate of our best submission is 60.34%, setting a new record over all existing results. |
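The two ideas, per-layer supervision plus a second-level loss on concatenated class-wise scores, can be sketched generically in PyTorch. This is a schematic of the described structure under assumptions (global-average-pooled linear heads, plain summed cross-entropy), not the authors' exact SSE architecture.

```python
import torch
import torch.nn as nn

class ScoringHead(nn.Module):
    # A small head turning an intermediate feature map into class-wise
    # scores, so supervision reaches shallow and intermediate layers too.
    def __init__(self, channels, n_classes):
        super().__init__()
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, n_classes))

    def forward(self, feat):
        return self.head(feat)

def sse_loss(layer_scores, fuse, target, ce=nn.CrossEntropyLoss()):
    # layer_scores: list of (batch, n_classes) tensors from shallow,
    # intermediate and deep heads; `fuse` maps their concatenation to
    # final scores, providing the second-level supervision.
    fused = fuse(torch.cat(layer_scores, dim=1))
    return sum(ce(s, target) for s in layer_scores) + ce(fused, target)
```

Here `fuse` could simply be `nn.Linear(n_heads * n_classes, n_classes)`; the point is that the concatenated scores act as a feature ensemble inside one network.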
Danmaku vs. Forum Comments: Understanding User Participation and Knowledge Sharing in Online Videos | Danmaku is a new video comment feature that is gaining popularity. Unlike typical forum comments that are displayed with user names below videos, danmaku comments are overlaid on the screen of videos without showing users' information. Prior work studied forum comments and danmaku separately, and little work compared how these two features were used. We collected 38,399 danmaku comments and 16,414 forum comments posted in 2017 on 30 popular videos on Bilibili.com. We examined the usage of these two features in terms of user participation, language used, and ways of sharing knowledge. We found that more users posted danmaku comments, and they also posted these more frequently than forum comments. Even though, in total, more negative language was used in danmaku comments than in forum comments, active users appeared to post more positive comments in danmaku. There was no such correlation in forum comments. It is interesting to find that danmaku and forum comments enabled knowledge sharing in a complementary manner, where danmaku comments involved more explicit knowledge sharing and forum comments exhibited more tacit knowledge sharing. We discuss design implications to promote social interactions for online video systems. |
Price forecasting using wavelet transform and LSE based mixed model in Australian electricity market | Purpose – Price forecasting is essential for risk management in deregulated electricity markets. The purpose of this paper is to propose a hybrid technique using the wavelet transform (WT) and multiple linear regression (MLR) to forecast the price profile in electricity markets. Design/methodology/approach – The price series is highly volatile and non-stationary in nature. In this work, the complete price series is first decomposed into 48 separate half-hourly series, and these series are then categorized into different segments for price forecasting. For some segments, WT-based MLR is applied, and for the other segments, a simple MLR model is applied. The model is general in nature and has been implemented for day-ahead price forecasting in the National Electricity Market (NEM) of Australia. Participants can use the technique practically, since it predicts prices well before the submission of bids. Findings – The forecasting performance of the proposed WT and MLR based mixed model has been compared with three other models: an analytical model, a MLR model and an artificial neural network (ANN) based model. The proposed model was found to be better. Performance evaluation for different wavelets was carried out, and it was observed that for improving forecasting accuracy using the WT, the Daubechies wavelet of order two gives the best performance. Originality/value – Forecasting accuracy improvement of an established technique by incorporating time domain and wavelet domain variables of the same time series into one set has been implemented in this work. The paper also attempts to explain how non-stationarity can be removed from a non-stationary time series by applying the WT after appropriate statistical investigation. Moreover, real-time electricity markets are highly unpredictable and yet under-investigated; the model has been applied to the NEM for the same reason. |
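The decompose-model-recombine pattern described above can be sketched under assumptions. Because the wavelet transform is linear, the per-band reconstructions sum back to the original series, so one-step forecasts of the bands can simply be added; the Daubechies-2 choice follows the abstract, while the lag count and decomposition level below are illustrative, not the paper's settings.

```python
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

def wavelet_mlr_forecast(prices, lags=48, wavelet="db2", level=2):
    # One-step-ahead forecast: fit a linear autoregression on each
    # additive wavelet sub-series and sum the per-band predictions.
    prices = np.asarray(prices, dtype=float)
    coeffs = pywt.wavedec(prices, wavelet, level=level)
    forecast = 0.0
    for i in range(len(coeffs)):
        # Reconstruct band i alone by zeroing every other band.
        parts = [c if j == i else np.zeros_like(c)
                 for j, c in enumerate(coeffs)]
        band = pywt.waverec(parts, wavelet)[: len(prices)]
        X = np.array([band[t:t + lags] for t in range(len(band) - lags)])
        model = LinearRegression().fit(X, band[lags:])
        forecast += model.predict(band[-lags:].reshape(1, -1))[0]
    return forecast
```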
The Social Dynamics of Pair Programming | This paper presents data from a four month ethnographic study of professional pair programmers from two software development teams. Contrary to the current conception of pair programmers, the pairs in this study did not hew to the separate roles of "driver" and "navigator". Instead, the observed programmers moved together through different phases of the task, considering and discussing issues at the same strategic "range" or level of abstraction and in largely the same role. This form of interaction was reinforced by frequent switches in keyboard control during pairing and the use of dual keyboards. The distribution of expertise among the members of a pair had a strong influence on the tenor of pair programming interaction. Keyboard control had a consistent secondary effect on decision-making within the pair. These findings have implications for software development managers and practitioners as well as for the design of software development tools. |
New material design for liquid crystals and composites by magneto-processing | We have tried to form a variety of oriented structures in liquid crystalline materials and composites by magneto-processing. Using photo-curable liquid crystal (LC), homogeneous, homeotropic, and bend-oriented structures were fixed in the films. Moreover, UV light was irradiated onto the LC monomers through a photomask under the magnetic field. The pattern was successfully recorded in the film by molecular orientation. Low-viscosity branched LC molecules, LCs based on calix[4]resorcinarene and dendrimer, synthesized in this study could be highly aligned under the magnetic fields. The smectic structures of these LC materials were demonstrated from X-ray diffraction results of the magneto-oriented samples. Carbon nanotubes (CNTs) were aligned parallel to the field direction in polycarbonate (PC). Moreover, magneto-oriented CNTs remarkably enhanced the recrystallization of the PC during annealing. |
Performance comparison of multi-label learning algorithms on clinical data for chronic diseases | We are motivated by the issue of classifying diseases of chronically ill patients to assist physicians in their everyday work. Our goal is to provide a performance comparison of state-of-the-art multi-label learning algorithms for the analysis of multivariate sequential clinical data from medical records of patients affected by chronic diseases. As a matter of fact, the multi-label learning approach appears to be a good candidate for modeling overlapped medical conditions, specific to chronically ill patients. With the availability of such a comparison study, the evaluation of new algorithms should be enhanced. For the method, we chose a summary statistics approach for the processing of the sequential clinical data, so that the extracted features maintain an interpretable link to their corresponding medical records. The publicly available MIMIC-II dataset, which contains more than 19,000 patients with chronic diseases, is used in this study. For the comparison we selected the following multi-label algorithms: ML-kNN, AdaBoostMH, binary relevance, classifier chains, HOMER and RAkEL. Regarding the results, binary relevance approaches, despite their elementary design and their independence assumption concerning the chronic illnesses, perform optimally in most scenarios, in particular for the detection of relevant diseases. In addition, binary relevance approaches scale up to large datasets and are easy to learn. However, the RAkEL algorithm, despite its scalability problems when confronted with large datasets, performs well in the scenario consisting of ranking the labels according to the dominant disease of the patient. |
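Binary relevance, the strongest performer here for detecting relevant diseases, is also the simplest of the compared methods to state: train one independent binary classifier per chronic condition. A minimal scikit-learn-style sketch, with logistic regression as an assumed base learner:

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

class BinaryRelevance:
    # One independent binary model per label; label correlations are
    # deliberately ignored, as in the approach compared above.
    def __init__(self, base=None):
        self.base = base or LogisticRegression(max_iter=1000)
        self.models = []

    def fit(self, X, Y):
        # Y: (n_samples, n_labels) binary indicator matrix.
        self.models = [clone(self.base).fit(X, Y[:, j])
                       for j in range(Y.shape[1])]
        return self

    def predict(self, X):
        return np.column_stack([m.predict(X) for m in self.models])
```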
Understanding patterns and processes in models of trophic cascades | Climate fluctuations and human exploitation are causing global changes in nutrient enrichment of terrestrial and aquatic ecosystems and declining abundances of apex predators. The resulting trophic cascades have had profound effects on food webs, leading to significant economic and societal consequences. However, the strength of cascades (that is, the extent to which a disturbance is diminished as it propagates through a food web) varies widely between ecosystems, and there is no formal theory as to why this should be so. Some food chain models reproduce cascade effects seen in nature, but to what extent is this dependent on their formulation? We show that inclusion of processes represented mathematically as density-dependent regulation of either consumer uptake or mortality rates is necessary for the generation of realistic 'top-down' cascades in simple food chain models. Realistically modelled 'bottom-up' cascades, caused by changing nutrient input, are also dependent on the inclusion of density dependence, but especially on mortality regulation as a caricature of, e.g. disease and parasite dynamics or intraguild predation. We show that our conclusions, based on simple food chains, transfer to a more complex marine food web model in which cascades are induced by varying river nutrient inputs or fish harvesting rates. |
DeepMon: Mobile GPU-based Deep Learning Framework for Continuous Vision Applications | The rapid emergence of head-mounted devices such as the Microsoft HoloLens enables a wide variety of continuous vision applications. Such applications often adopt deep-learning algorithms such as CNNs and RNNs to extract rich contextual information from first-person-view video streams. Despite their high accuracy, the use of deep learning algorithms on mobile devices raises critical challenges, i.e., high processing latency and power consumption. In this paper, we propose DeepMon, a mobile deep learning inference system that runs a variety of deep learning inferences purely on a mobile device in a fast and energy-efficient manner. For this, we designed a suite of optimization techniques to efficiently offload convolutional layers to mobile GPUs and accelerate processing; note that convolutional layers are the common performance bottleneck of many deep learning models. Our experimental results show that DeepMon can classify an image over the VGG-VeryDeep-16 deep learning model in 644 ms on a Samsung Galaxy S7, taking an important step towards continuous vision without imposing any privacy concerns or networking cost. |
Reduction in the need for operation after conservative treatment of osteoarthritis of the first carpometacarpal joint: a seven year prospective study. | The effect of occupational therapy for patients awaiting surgery for isolated osteoarthritis of the carpometacarpal joint of the thumb was assessed. Thirty-three patients awaiting joint replacement because of pain were randomised into three groups. One group was treated with technical accessories, two other groups had in addition one of two types of splints, and all patients received extensive advice on how to accommodate activities of daily living. They all had an initial seven months' trial on this regimen at which time 23/33 (70%) no longer required an operation. During the following seven years four patients died, but only two of the remaining 19 patients wanted an operation. We therefore recommend that patients with osteoarthritis of the carpometacarpal joint of the thumb are offered a similar programme in addition to access to accessories and splints preoperatively. |
Survey on Opinion Mining and Summarization of User Reviews on Web | A large amount of user-generated data is present on the web as blogs, reviews, tweets, comments, etc. This data involves users' opinions, views, attitudes and sentiments towards particular products, topics, events, news, etc. Opinion mining (sentiment analysis) is the process of finding users' opinions from user-generated content. Opinion summarization is useful in feedback analysis, business decision making and recommendation systems. In recent years opinion mining has been one of the popular topics in text mining and natural language processing. This paper presents methods for opinion extraction, classification, and summarization. It also explains the different approaches, methods and techniques used in the process of opinion mining and summarization, along with a comparative study of these methods. |
Predicting elections from social media: a three-country, three-method comparative study | This study introduces and evaluates the robustness of different volumetric, sentiment, and social network approaches to predict the elections in three Asian countries (Malaysia, India, and Pakistan) from Twitter posts. We find that the predictive power of social media performs well for India and Pakistan but is not effective for Malaysia. Overall, we find that it is useful to consider the recency of Twitter posts while using them to predict a real outcome, such as an election result. Sentiment information mined using machine learning models was the most accurate predictor of election outcomes. Social network information is stable despite sudden surges in political discussions, e.g. around elections-related news events. Methods combining sentiment and volume information, or sentiment and social network information, are effective at predicting smaller vote shares, e.g. vote shares of independent candidates and regional parties. We conclude with a detailed discussion of the caveats of social media analysis for predicting real-world outcomes and recommendations for future work. |
Simplified Morasses with Linear Limits | §1. In a recent series of papers Kanamori ([4], [5], and [6]) defines generalizations of several combinatorial principles known to follow from the existence of morasses. Kanamori proves the consistency of his generalizations by forcing arguments which come close to satisfying the hypotheses of the Martin's Axiom-type characterizations of morasses developed independently by Shelah and Stanley [9] and the author [12]. A similar "almost application" of morasses appears in [11], in which Todorcevic uses forcing to prove the consistency of the existence of Kurepa trees with no Aronszajn or Cantor subtrees. In all cases the attempted proofs using morasses fail for the same reason: the partial orders involved do not have strong enough closure properties. In an attempt to solve this problem Shelah and Stanley strengthened their characterization of morasses to allow applications to what they called "good canonical limit" partial orders. However, for rather subtle reasons even this strengthened forcing axiom is not good enough for the proposed applications. The problem this time is that Shelah and Stanley's "weak commutativity of Lim and restriction" requirement (see [9, 3.9(iv)]) is not satisfied. Furthermore, there is reason to believe that an ordinary morass is just not good enough for these applications, since in L morasses exist at all regular uncountable cardinals, but even a weak form of Todorcevic's conclusion cannot hold at ineffable cardinals (see the end of §4). A possible solution to this problem is suggested by the fact that FLK is equivalent to a forcing axiom which applies to partial orders satisfying precisely the kind of weak closure conditions involved in the examples described above (see [13]). What is needed to make the proposed morass applications work is something which will do for morass constructions what Lb. does for ordinary transfinite recursion constructions. In this paper we show how extra structure can be built into a morass to accomplish this goal. All of our results are stated in terms of the simplification of morasses introduced in [14]. For the convenience of the reader we list below the relevant definitions and theorems concerning simplified morasses. For the motivation behind the definitions and proofs of the theorems we refer the reader to [14]. |
Classifying the Semantic Relations in Noun Compounds via a Domain-Specific Lexical Hierarchy | We are developing corpus-based techniques for identifying semantic relations at an intermediate level of description (more specific than those used in case frames, but more general than those used in traditional knowledge representation systems). In this paper we describe a classification algorithm for identifying relationships between two-word noun compounds. We find that a very simple approach using a machine learning algorithm and a domain-specific lexical hierarchy successfully generalizes from training instances, performing better on previously unseen words than a baseline consisting of training on the words themselves. |
Association of body mass with pulmonary function in the Childhood Asthma Management Program (CAMP). | BACKGROUND
While increases in body mass index (BMI) have been associated with the incidence and prevalence of asthma, the mechanisms behind this association are unclear.
METHODS
We hypothesised that BMI would be independently associated with measures of asthma severity in a population of children with mild to moderate asthma enrolled in the Childhood Asthma Management Program (CAMP). A multivariable baseline cross sectional analysis of BMI with our outcomes of interest was performed.
RESULTS
BMI was generally not associated with symptoms, nor was it associated with atopy. While BMI was positively associated with the methacholine concentration that causes a 20% fall in forced expiratory volume in 1 second (PC20FEV1), this association did not persist after adjustment for FEV1. Increasing BMI was associated with increasing FEV1 (beta = 0.006 l, 95% CI (0.001 to 0.01)) and forced vital capacity (FVC) (beta = 0.012 l, 95% CI (0.007 to 0.017)). However, decrements in the FEV1/FVC ratio were noted with increasing BMI (beta = -0.242, 95% CI (-0.118 to -0.366)). Thus, an increase in BMI of 5 units was associated with a decrease in FEV1/FVC of over 1%.
CONCLUSIONS
Although the association of FEV1 and FVC with BMI did not support our initial hypothesis, the decrease noted in the FEV1/FVC ratio has potential relevance in the relationship between BMI and asthma severity. |
Geometry-Contrastive Generative Adversarial Network for Facial Expression Synthesis | In this paper, we propose a geometry-contrastive generative adversarial network (GC-GAN) for generating facial expression images conditioned on geometry information. Specifically, given an input face and a target expression designated by a set of facial landmarks, an identity-preserving face can be generated guided by the target expression. In order to embed facial geometry onto a semantic manifold, we incorporate contrastive learning into conditional GANs. Experimental results demonstrate that the manifold is sensitive to changes of facial geometry both globally and locally. Benefiting from the semantic manifold, dynamic smooth transitions between different facial expressions are exhibited via geometry interpolation. Furthermore, our method can also be applied to facial expression transfer even when there are big differences in face shape between target faces and driving faces. |
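The contrastive component grafted onto the conditional GAN can be illustrated with the standard pairwise contrastive loss, which pulls embeddings of matching facial geometries together and pushes mismatched pairs at least a margin apart. A sketch of that standard loss, assumed here as the flavor used; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, same_geometry, margin=1.0):
    # same_geometry: 1.0 for pairs with matching landmark sets, else 0.0.
    d = F.pairwise_distance(emb_a, emb_b)
    pull = same_geometry * d.pow(2)                       # attract matches
    push = (1.0 - same_geometry) * F.relu(margin - d).pow(2)  # repel others
    return (pull + push).mean()
```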
The Language of Mathematics | Mohan Ganesalingam. The Language of Mathematics: A Linguistic and Philosophical Investigation. FoLLI Publications on Logic, Language and Information. Springer, 2013. ISBN: 978-3-642-37011-3 (hbk); 978-3-642-37012-0 (ebook). Pp. xx + 260. This ambitious book sets out to provide a linguistic analysis of the language used in written mathematics, both textual and symbolic. It is a revised version of the author’s Cambridge Ph.D. thesis, a worthy recipient of FoLLI’s E. W. Beth dissertation award for 2011. Mohan Ganesalingam is a linguist with a Ph.D. in computer science, and his work combines insights from these disciplines with a substantial grasp of mathematics. However, there is much in the book that should interest philosophers of mathematics. Firstly, Ganesalingam’s project leads him to confront some significant issues in the foundations of mathematics, for which he proposes a response that is, in part, novel. Secondly, and perhaps more importantly, he demonstrates something which is often discussed but seldom attempted: he shows how his account of mathematics can be applied to a significant body of actual mathematical practice. The book is very clearly structured. Chapter 1 begins with a defence of Ganesalingam’s methodological presuppositions. Critically, and in distinction from earlier projects of more modest scope, notably the work of Aarne Ranta (Ranta, 1997), he insists on sufficient breadth to encompass all of pure mathematics and on what he calls ‘full adaptivity’, that any mathematical content be extracted from the text under analysis, and not baked into the analytic system (p. 3). The latter constraint prevents him from, for example, building set theory into his linguistic model. Although his account is intended to provide an analysis of the content of ‘rigorous, careful textbooks’ he confines it to what he calls the ‘formal mode’ of the language found therein: the statements exclusively concerning mathematical objects and mathematical properties (p. 7). Characteristically, such textbooks also contain much that is in an informal mode—remarks about the context, or value, or interest, of the mathematical results, say—but, as Ganesalingam notes, analysis of these comments would require a full analysis of natural language (p. 8). Conversely, one of the attractions mathematics in the formal mode holds for the linguist is its comparative simplicity. In Chapter 2, Ganesalingam surveys the problem that he has set himself, identifying some of the distinctive linguistic features of mathematics that his analysis, or any comparable rival, should account for. These include the interdependency of text and symbols, the extensive use of stipulative definition to expand the language as it is being used, and some idiomatic features of symbolic notation that lack counterparts in natural language. In particular, Ganesalingam observes that the syntax of mathematics is type-dependent: for example, an expression such as α → (β)n may be syntactically well-defined only if α and β are ordinals and |
John Dewey on Happiness: Going Against the Grain of Contemporary Thought | Dewey's theory of happiness goes against the grain of much contemporary psychological and popular thought by identifying the highest form of human happiness with moral behavior. Such happiness, according to Dewey, avoids being at the mercy of circumstances because it is independent of the pleasures and successes we take from experience and, instead, is dependent upon the disposition we bring to experience. It accompanies a disposition characterized by an abiding interest in objects in which all can share, one founded upon a dynamic inner harmony and evolving adjustment to the world. The marks of such an expansive disposition are "stability of character, braveness of soul, and equanimity of soul," and the key to the development of these traits is what Dewey calls "ethical love." We conclude with consideration of three potential criticisms of Dewey's view of happiness and possible Deweyan rejoinders. |
Getting at Systemic Risk via an Agent-Based Model of the Housing Market † | |
Technology and the Myth of ‘Natural Man’ | The main suggestions and objections raised by Don Ihde and Charles Lenay to my ‘Technology and the body: the (im)possibilities of re-embodiment’ are summarized and discussed. On the one hand, I agree that we should pay more attention to whole body experience and to further resisting Cartesian assumptions in the field of cognitive neuroscience and philosophy of cognition. On the other hand, I explain that my account in no way presupposes the myth of ‘natural man’ or of a natural, delineated body from before the fall into technology. |
Verifiable Computation in Smart Contracts | Recent developments in blockchain technologies have seen the popularisation of smart contracts, that is, distributed programs which store their state on a blockchain. Usage of these programs has practical restrictions, however, including both hard and soft limits on the number of calculations they can perform. This project aims to provide a proof-of-concept implementation of how techniques from verifiable computation may be used to reduce the practical costs of using smart contracts, enabling them to be used in a wider range of applications. Verifiable computation is typically seen from the perspective of a user wishing to outsource computation to a third party, but wanting guarantees that the party provides the correct results. In the case of smart contracts, this scenario is flipped, with the user wishing to perform the computation themselves, and guarantee the correctness to the rest of the network. This project provides a functioning means of utilising verifiable computation in an Ethereum fork, and determines the challenges in extending this into a practical, widely used technology. |
Performance Study of Class-E Power Amplifier With a Shunt Inductor at Subnominal Condition | This paper presents analytical expressions for the class-E power amplifier with a shunt inductor for satisfying the subnominal condition and 50% duty ratio. The subnominal condition means that only the zero-current switching condition (ZCS) is achieved, though the nominal conditions mean that both the ZCS and zero-current derivative switching (ZCDS) are satisfied. The design values for achieving the subnominal condition are expressed as a function of the phase shift between the input and output voltages. The class-E amplifier with subnominal condition increases one design degree of freedom compared with that with the nominal conditions. Because of the increase in the design degree of freedom, one more relationship can be specified as a design specification. It is shown analytically that the dc-supply voltage and the current are always proportional to the amplitude of the output voltage and the current as a function of the phase shift. Additionally, the output power capability is affected by the phase shift, and the peak switch voltage has influence on the phase shift as well. This paper gives a circuit design example based on our proposed design expression by specifying the peak switch voltage instead of the ZCDS condition. The measurement and PSpice simulation results agree with the analytical expressions quantitatively, which show the validity of our analytical expressions. |
Mechanisms for the intracellular manipulation of organelles by conventional electroporation. | Conventional electroporation (EP) changes both the conductance and molecular permeability of the plasma membrane (PM) of cells and is a standard method for delivering both biologically active and probe molecules of a wide range of sizes into cells. However, the underlying mechanisms at the molecular and cellular levels remain controversial. Here we introduce a mathematical cell model that contains representative organelles (nucleus, endoplasmic reticulum, mitochondria) and includes a dynamic EP model, which describes formation, expansion, contraction, and destruction for the plasma and all organelle membranes. We show that conventional EP provides transient electrical pathways into the cell, sufficient to create significant intracellular fields. This emerging intracellular electrical field is a secondary effect due to EP and can cause transmembrane voltages at the organelles, which are large enough and long enough to gate organelle channels, and even sufficient, at some field strengths, for the poration of organelle membranes. This suggests an alternative to nanosecond pulsed electric fields for intracellular manipulations. |
Field-Programmable Analog Arrays Enable Analog Signal Processing Education | In a laboratory environment, the practicality and scope of experiments is often constrained by time and financial resources. In the digital hardware design arena, the development of programmable logic devices, such as field-programmable gate arrays (FPGAs), has greatly enhanced the student's ability to design and synthesize complete systems within a short period of time and at a reasonable cost. Unfortunately, analog circuit design and signal processing have not enjoyed similar advances. However, new advances in field-programmable analog arrays (FPAAs) have created many new opportunities in analog circuit design and signal processing education. This paper will investigate the usefulness of these FPAAs as viable pedagogical tools. It will also explore the new methodologies in analog signal processing education that are available when FPAAs are brought into the classroom. |
Completely medial versus hybrid medial approach for laparoscopic complete mesocolic excision in right hemicolon cancer | To explore the feasibility of two operational approaches for laparoscopic complete mesocolic excision (CME) for right hemicolon cancer. This prospective randomized controlled trial included patients admitted to a Shanghai minimally invasive surgical center to receive laparoscopic CME from September 2011 to January 2013, randomized into two groups: hybrid medial approach (HMA) and completely medial approach (CMA). The feasibility and strategies of the two techniques were studied and compared. Furthermore, the operation time and vessel-related complications were designed to be the primary end points, and other operational findings, including the classification of the surgical plane and postoperative recovery, were designed to be the secondary end points for this study. After screening, 50 cases were allocated to the HMA group and 49 to the CMA group. Within the HMA group, there were 48 cases graded with mesocolic plane and 2 with intramesocolic plane. For the CMA group, there were 42 cases graded with mesocolic plane and 7 with intramesocolic plane. The differences between the two were insignificant, as was the number of lymph nodes retrieved. The mean ± standard deviation total operation time for the CMA group was 128.3 ± 36.4 min, which was significantly shorter than that for the HMA group, 142.6 ± 34.8 min. For the CMA group, the time involved in central vessel ligation and the laparoscopic procedure was 58.5 ± 14.1 and 81.2 ± 23.5 min, respectively, both shorter than in the HMA group. The vessel-related complication rate was significantly higher in the HMA group. Laparoscopic CME via the completely medial approach is technically feasible after precise identification of the surgical planes and spaces of the right hemicolon. The procedure has a shorter operation time and fewer vessel-related complications. |
ZVS phase shift full bridge converter with separated primary winding | Generally, additional leakage inductance and two clamp diodes are adopted in the conventional phase shift full bridge (PSFB) converter to reduce the voltage stress of the secondary rectifier diodes and extend the range of zero voltage switching (ZVS) operation. However, the core and copper losses caused by the additional leakage inductor can be high enough to decrease the overall efficiency of the DC/DC converter. Therefore, a new ZVS PSFB converter with a separated primary winding (SPW) is proposed. The proposed converter implements both the transformer and the additional leakage inductor on the same ferrite core by separating the primary winding. Using this method, the leakage inductance is controlled by the winding ratio of the SPW. Moreover, by using this integrated magnetic component with a single core, size and core loss can be greatly reduced, which results in the improvement of the efficiency and power density of the DC/DC converter. The operational principle and analysis of the proposed converter are presented and verified by a 1.2 kW prototype. |
Frankly, We Do Give a Damn: The Relationship Between Profanity and Honesty. | There are two conflicting perspectives regarding the relationship between profanity and dishonesty. These two forms of norm-violating behavior share common causes and are often considered to be positively related. On the other hand, however, profanity is often used to express one's genuine feelings and could therefore be negatively related to dishonesty. In three studies, we explored the relationship between profanity and honesty. We examined profanity and honesty first with profanity behavior and lying on a scale in the lab (Study 1; N = 276), then with a linguistic analysis of real-life social interactions on Facebook (Study 2; N = 73,789), and finally with profanity and integrity indexes for the aggregate level of U.S. states (Study 3; N = 50 states). We found a consistent positive relationship between profanity and honesty; profanity was associated with less lying and deception at the individual level and with higher integrity at the society level. |
Distinct Representations of Cognitive and Motivational Signals in Midbrain Dopamine Neurons | Dopamine is essential to cognitive functions. However, despite abundant studies demonstrating that dopamine neuron activity is related to reinforcement and motivation, little is known about what signals dopamine neurons convey to promote cognitive processing. We therefore examined dopamine neuron activity in monkeys performing a delayed matching-to-sample task that required working memory and visual search. We found that dopamine neurons responded to task events associated with cognitive operations. A subset of dopamine neurons were activated by visual stimuli if the monkey had to store the stimuli in working memory. These neurons were located dorsolaterally in the substantia nigra pars compacta, whereas ventromedial dopamine neurons, some in the ventral tegmental area, represented reward prediction signals. Furthermore, dopamine neurons monitored visual search performance, becoming active when the monkey made an internal judgment that the search was successfully completed. Our findings suggest an anatomical gradient of dopamine signals along the dorsolateral-ventromedial axis of the ventral midbrain. |
UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem | In the stochastic multi-armed bandit problem we consider a modification of the UCB algorithm of Auer et al. [4]. For this modified algorithm we give an improved bound on the regret with respect to the optimal reward. While for the original UCB algorithm the regret in K-armed bandits after T trials is bounded by const · K log(T)/Δ, where Δ measures the distance between a suboptimal arm and the optimal arm, for the modified UCB algorithm we show an upper bound on the regret of const · K log(TΔ²)/Δ.
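For concreteness, a minimal sketch of the classic UCB1 index policy of Auer et al. that the paper modifies; the elimination-based modification and its sharper confidence intervals are not reproduced here, and the toy arm means are invented:

```python
import math
import random

def ucb1(pull, K, T):
    """Classic UCB1: play each arm once, then repeatedly pull the arm with the
    highest empirical mean plus confidence radius sqrt(2 ln t / n_i)."""
    counts = [0] * K
    sums = [0.0] * K
    for t in range(1, T + 1):
        if t <= K:
            arm = t - 1  # initialization: try every arm once
        else:
            arm = max(range(K),
                      key=lambda i: sums[i] / counts[i]
                                    + math.sqrt(2 * math.log(t) / counts[i]))
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
    return counts

# Toy example: Bernoulli arms with hypothetical means; arm 0 is optimal.
means = [0.6, 0.5, 0.4]
counts = ucb1(lambda i: 1.0 if random.random() < means[i] else 0.0,
              K=len(means), T=10_000)
print("pulls per arm:", counts)  # the optimal arm should dominate
```

The index rule pulls each suboptimal arm roughly O(log T / Δ²) times, which is where the K log(T)/Δ regret of the original algorithm comes from; the paper's modification tightens the log T factor to log(TΔ²).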
Entrepreneurship education: A strength-based approach to substance use and suicide prevention for American Indian adolescents. | American Indian (AI) adolescents suffer the largest disparities in substance use and suicide. Prevailing prevention models focus primarily on risk and use deficit-based approaches. Substance use and suicide prevention research both call for positive youth development frameworks that are strength based and target change at the individual and community levels. Entrepreneurship education is an innovative approach that addresses the gap in available programs. This paper describes the development and evaluation of a youth entrepreneurship education program created in partnership with one AI community. We detail the curriculum, process evaluation results, and the randomized controlled trial evaluating its efficacy for increasing protective factors. Lessons learned may be applicable to other AI communities. |
A Computational Model of "Active Vision" for Visual Search in Human-Computer Interaction | Tim Halverson is a cognitive scientist with an interest in human-computer interaction, cognitive modeling, eye movements, and fatigue; he is a post-doctoral research associate in the Performance and Learning Model Team of the Air Force Research Laboratory. Anthony J. Hornof is a computer scientist with an interest in human-computer interaction, cognitive modeling, visual search, and eye tracking; he is an Associate Professor in the Department of Computer and Information Science at the University of Oregon. |
When accuracy hurts, and when it helps: a test of the empathic accuracy model in marital interactions. | This study tested predictions from W. Ickes and J. A. Simpson's (1997, 2001) empathic accuracy model. Married couples were videotaped as they tried to resolve a problem in their marriage. Both spouses then viewed a videotape of the interaction, recorded the thoughts and feelings they had at specific time points, and tried to infer their partner's thoughts and feelings. Consistent with the model, when the partner's thoughts and feelings were relationship-threatening (as rated by both the partners and by trained observers), greater empathic accuracy on the part of the perceiver was associated with pre-to-posttest declines in the perceiver's feelings of subjective closeness. The reverse was true when the partner's thoughts and feelings were nonthreatening. Exploratory analyses revealed that these effects were partially mediated through observer ratings of the degree to which partners tried to avoid the discussion issue. |
Does turgor limit growth in tall trees? | The gravitational component of water potential contributes a standing 0.01 MPa m⁻¹ to the xylem tension gradient in plants. In tall trees, this contribution can significantly reduce the water potential near the tree tops. The turgor of cells in buds and leaves is expected to decrease in direct proportion with leaf water potential along a height gradient unless osmotic adjustment occurs. The pressure–volume technique was used to characterize height-dependent variation in leaf tissue water relations and shoot growth characteristics in young and old Douglas-fir trees to determine the extent to which growth limitation with increasing height may be linked to the influence of the gravitational water potential gradient on leaf turgor. Values of leaf water potential (Ψ_l), bulk osmotic potential at full and zero turgor, and other key tissue water relations characteristics were estimated on foliage obtained at 13.5 m near the tops of young (approximately 25-year-old) trees and at 34.7, 44.2 and 55.6 m in the crowns of old-growth (approximately 450-year-old) trees during portions of three consecutive growing seasons. The sampling periods coincided with bud swelling, expansion and maturation of new foliage. Vertical gradients of Ψ_l and pressure–volume analyses indicated that turgor decreased with increasing height, particularly during the late spring when vegetative buds began to swell. Vertical trends in branch elongation, leaf dimensions and leaf mass per area were consistent with increasing turgor limitation on shoot growth with increasing height. During the late spring (May), no osmotic adjustment to compensate for the gravitational gradient of Ψ_l was observed. By July, osmotic adjustment had occurred, but it was not sufficient to fully compensate for the vertical gradient of Ψ_l. In tall trees, the gravitational component of Ψ_l is superimposed on phenologically driven changes in leaf water relations characteristics, imposing potential constraints on turgor that may be indistinguishable from those associated with soil water deficits. Key words: Pseudotsuga menziesii; Douglas-fir; gravitational component of water potential; height growth; osmotic adjustment; pressure–volume curve; turgor maintenance. INTRODUCTION A great deal of attention has recently been focused on the mechanisms responsible for reduced growth in trees as they age and increase in height (Friend 1993; Yoder et al. 1994; Ryan & Yoder 1997; Magnani, Mencuccini & Grace 2000; McDowell et al. 2002). Much of this research has addressed the hydraulic limitation hypothesis, which proposes that reduced growth in ageing and tall trees may be linked to reductions in leaf-specific hydraulic conductance, which in turn lead to reductions in stomatal conductance and therefore photosynthesis (Ryan & Yoder 1997). In addition to strictly hydraulic constraints that may require stomatal limitation of transpiration to regulate leaf water potential (Ψ_l) within a relatively narrow range, the gravitational component of Ψ influences Ψ_l whether or not transpiration is occurring. In the absence of transpiration, the gravitational component of water potential should result in a standing xylem tension gradient of 0.01 MPa per metre increase in height (Scholander et al. 1965; Hellkvist, Richards & Jarvis 1974; Zimmermann 1983; Bauerle et al. 1999). Thus, at the top of a non-transpiring 60-m-tall tree, leaf water potential should be at least 0.6 MPa more negative than foliage at ground level.
When transpiration occurs, frictional resistance will lower leaf water potential even further (Hellkvist et al. 1974; Connor, Legge & Turner 1977; Bauerle et al. 1999). The turgor of cells in leaves and buds is expected to decrease in direct proportion with the leaf water potential if the osmotic potential of these cells remains constant. A variety of growth-related processes, including cell formation, expansion and metabolism, are dependent on turgor pressure and cell volume (Boyer 1968; Hsiao et al. 1976; Gould & Measures 1977; Ray 1987). However, osmotic adjustment, the active accumulation of symplastic solutes, could maintain turgor and cell volume and may serve to sustain growth and photosynthetic gas exchange with increasing tree height. Nevertheless, compensatory osmotic adjustment may only partially offset reductions in growth with increasing height if it results in a substantial amount of resources being diverted from leaf expansion towards production and accumulation of osmolytes for turgor and volume maintenance. Osmotic adjustment has been well documented as an adaptation to drought and salinity stress (Hsiao et al. 1976; Osonubi & Davies 1978; Turner & Jones 1980; Morgan 1984; McNulty 1985; Heuer & Plaut 1989; Pereira & Pallardy 1989; Ranney, Bassuk & Whitlow 1991; Rieger 1995). However, we know of no studies that have attempted to evaluate the effects of the gravitational component of water potential on the osmotic, morphological and growth characteristics of foliage along a height gradient in tall trees. The objective of this study therefore was to characterize height-dependent variation in leaf tissue water relations and morphological characteristics in young and old Douglas-fir trees to determine the extent to which shoot growth limitation with increasing height may be linked to the influence of the gravitational water potential gradient on leaf turgor. The pressure–volume technique was employed at various phenological stages during three consecutive growing seasons to assess water relations characteristics of foliage sampled along a height gradient from approximately 14 to 57 m. |
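The arithmetic of the gravitational constraint sketched above is easy to make concrete: with the standing 0.01 MPa m⁻¹ gradient and a height-independent osmotic potential (i.e., no osmotic adjustment), turgor at each sampling height is P = Ψ_l − Ψ_π. A minimal sketch; the osmotic potential below is an illustrative assumption, not a value from the paper's pressure–volume data:

```python
# Turgor pressure P = psi_leaf - psi_osmotic, with leaf water potential lowered
# by the standing gravitational gradient of 0.01 MPa per metre of height.
# psi_osmotic is a hypothetical, height-independent value for illustration.
GRAV_GRADIENT = -0.01   # MPa per metre of height
psi_osmotic = -2.0      # MPa, assumed constant (no osmotic adjustment)

for height_m in (13.5, 34.7, 44.2, 55.6):        # sampling heights from the study
    psi_leaf = GRAV_GRADIENT * height_m          # non-transpiring tree, 0 MPa at ground
    turgor = psi_leaf - psi_osmotic
    print(f"{height_m:5.1f} m: psi_leaf = {psi_leaf:+.2f} MPa, turgor = {turgor:.2f} MPa")
```

Under these assumptions turgor falls by about 0.4 MPa between the lowest and highest sampling points, purely from gravity, which is the effect the study set out to separate from soil water deficits.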
A comparison of B2B e-service solutions | The Internet is evolving not only to provide information and e-commerce transactions, but also to act as the platform through which services are delivered to businesses and customers. These electronic services, or e-services, could become a key part of the value provided by many businesses [2, 5, 10]. At the core of this evolution is Extensible Markup Language (XML), which has emerged as the foundation of all architectures suggested for such services. XML simplifies the exchange of information by letting users define their own syntax on top of the underlying technologies of the Internet. However, while organizations can define XML syntax to solve their specific problems, the multitude of syntaxes (schemas) creates incompatibility with schemas developed by others. This is one of the reasons why major organizations are creating business-to-business (B2B) XML framework standards to enable interoperability. To overcome these problems, efforts are underway to develop standards for e-services, including eCo by a consortium led by CommerceNet; RosettaNet by a consortium that includes IBM, Microsoft, EDS, SAP, Cisco Systems, Compaq and Intel; BizTalk by Microsoft; e-speak by Hewlett-Packard (HP); and several others. Since these B2B interoperability standards are likely to be very important in the way businesses interact with each other in the future, an overview of these standardization efforts is certain to be of considerable importance to the IS community. This article describes the components of e-services (core modules of platforms for linking Internet-based service providers) and compares popular B2B e-commerce frameworks based on their support for e-service components. |
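To make the schema-incompatibility problem concrete, consider the same purchase order encoded in two self-defined XML vocabularies: a consumer written against one cannot read the other without an agreed mapping, which is exactly what the frameworks above aim to standardize. A minimal sketch with invented element names:

```python
import xml.etree.ElementTree as ET

# The same order expressed in two self-defined, incompatible vocabularies.
org_a = "<Order><Item sku='X1' qty='2'/></Order>"
org_b = "<PurchaseOrder><LineItem product='X1' quantity='2'/></PurchaseOrder>"

def read_order_a(doc: str):
    """Consumer written against organization A's schema only."""
    root = ET.fromstring(doc)
    item = root.find("Item")
    return (item.get("sku"), int(item.get("qty"))) if item is not None else None

print(read_order_a(org_a))  # ('X1', 2)
print(read_order_a(org_b))  # None -- same information, unreadable without a mapping
```

Both documents are well-formed XML carrying identical information; interoperability fails at the vocabulary level, not the syntax level, which is why shared B2B framework standards are needed.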
Social Stigma and Self-Esteem: The Self-Protective Properties of Stigma | Although several psychological theories predict that members of stigmatized groups should have low global self-esteem, empirical research typically does not support this prediction. It is proposed here that this discrepancy may be explained by considering the ways in which membership in a stigmatized group may protect the self-concept. It is proposed that members of stigmatized groups may (a) attribute negative feedback to prejudice against their group, (b) compare their outcomes with those of the ingroup, rather than with the relatively advantaged outgroup, and (c) selectively devalue those dimensions on which their group fares poorly and value those dimensions on which their group excels. Evidence for each of these processes and their consequences for self-esteem and motivation is reviewed. Factors that moderate the use of these strategies and implications of this analysis for the treatment of stigmas are also discussed. |
Efficacy of non-nucleoside reverse transcriptase inhibitor-based highly active antiretroviral therapy in Thai HIV-infected children aged two years or less. | Twenty-six Thai HIV-infected children aged 2 years or less were prospectively enrolled to receive non-nucleoside reverse transcriptase inhibitor-based highly active antiretroviral therapy (HAART). Twenty-two children (85%) had World Health Organization clinical stage 3 or 4 disease. The median baseline CD4 cell percentage and plasma HIV RNA were 17% and 5.9 log10 copies/mL, respectively. The median age at HAART initiation was 9.8 months (range, 1.5-24.0). One child died. The mean CD4 cell percentages at 24, 48, and 96 weeks of treatment were 26%, 31%, and 37%, respectively. The proportions of children with virologic suppression (<400 copies/mL) at weeks 24 and 48 were 14/26 (54%) and 19/26 (73%), respectively. Non-nucleoside reverse transcriptase inhibitor-based HAART is safe and effective for HIV-infected young children in a resource-limited setting. |
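For readers less used to log-scale viral loads, the reported figures convert straightforwardly; a minimal worked example using only the numbers above:

```python
import math

# Convert the reported log-scale viral load to absolute copies/mL and express
# the suppression threshold on the same log scale.
baseline_log10 = 5.9                            # median baseline plasma HIV RNA
baseline_copies = 10 ** baseline_log10          # ~794,000 copies/mL
threshold_copies = 400                          # virologic suppression cut-off
threshold_log10 = math.log10(threshold_copies)  # ~2.6

print(f"baseline: {baseline_copies:,.0f} copies/mL")
print(f"suppression requires a drop of >= {baseline_log10 - threshold_log10:.1f} log10")
```

So the suppressed children achieved at least a ~3.3 log10 (roughly 2,000-fold) reduction in viral load from the median baseline.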
Women, Wives and Land Rights in Africa: Situating Gender Beyond the Household in the Debate Over Land Policy and Changing Tenure Systems | The debate over land reform in Africa is embedded in evolutionary models, in which it is assumed that landholding systems are evolving into individualized systems of ownership with greater market integration. This process is seen to be occurring even without state protection of private land rights through titling. Gender as an analytical category is excluded from evolutionary models; women are accommodated only in their dependent position as the wives of landholders in idealized ‘households’. This paper argues that gender relations are central to the organization and transformation of landholding systems. Women have faced different forms of tenure insecurity, both as wives and in their relations with wider kin, as landholding systems have been integrated into wider markets. These cannot be addressed while evolutionary models dominate the policy debate. The paper draws out these arguments from the experience of tenure reform in Tanzania and asks how policy-makers might address these issues differently. |