title | abstract
---|---
The HUMOSIM Ergonomics Framework: A New Approach to Digital Human Simulation for Ergonomic Analysis | The potential of digital human modeling to improve the design of products and workspaces has been limited by the time-consuming manual manipulation of figures that is required to perform simulations. Moreover, the inaccuracies in posture and motion that result from manual procedures compromise the fidelity of the resulting analyses. This paper presents a new approach to the control of human figure models and the analysis of simulated tasks. The new methods are embodied in an algorithmic framework developed in the Human Motion Simulation (HUMOSIM) laboratory at the University of Michigan. The framework consists of an interconnected, hierarchical set of posture and motion modules that control aspects of human behavior, such as gaze or upper-extremity motion. Analysis modules, addressing issues such as shoulder stress and balance, are integrated into the framework. The framework encompasses many individual innovations in motion simulation algorithms, but the primary innovation is in the development of a comprehensive system for motion simulation and ergonomic analysis that is specifically designed to be independent of any particular human modeling system. The modules are developed as lightweight algorithms based on closed-form equations and simple numerical methods that can be communicated in written form and implemented in any computer language. The modules are independent of any particular figure model structure, requiring only basic forward-kinematics control and public-domain numerical algorithms. Key aspects of the module algorithms are “behavior-based,” meaning that the large amount of redundancy in the human kinematic linkage is resolved using empirical models based on laboratory data. The implementation of the HUMOSIM framework in human figure models will allow much faster and more accurate simulation of human interactions with products and workspaces using high-level, task-based control. INTRODUCTION Digital human figure models (DHM) are now widely used for ergonomic analysis of products and workplaces. In many organizations, DHM software is a tool of first resort for answering questions relating to physical interaction between people and objects. Yet any objective appraisal of the technology would conclude that the current reality of DHM software capability is far from the promise of a “digital human” that can interact realistically with products and environments. This paper is focused on efforts to improve the ability of DHM software to simulate physical posture and motion. Nearly every other aspect of DHM functionality also warrants improvement, including body shape representation, strength simulation, and cognitive function, but posture and motion are critical to the primary applications of DHM to the assessment of physical tasks. Posture simulation is as old as computerized manikins, because the manikin must be postured before an analysis can be conducted. Important early work was performed by Ryan for the U.S. Navy (Ryan 1970). Porter et al. (1993) summarized applications of digital human models in vehicle ergonomics during the early years of personal computers, at which time few of the current commercial DHM software tools were in use. Chaffin (2001) presented case studies of the expanding use of DHM for both product and workplace design and assessment. 
As evidence of the importance of posture and motion simulation, dozens of papers in the SAE literature and in other forums have presented a wide variety of methods for simulating human postures and motions, including multiple-regression (Snyder et al. 1972); analytic and numerical inverse kinematics (Jung et al. 1995; Tolani et al. 2000); optimization-based inverse kinematics (Wang and Verriest 1998); differential inverse kinematics (Zhang and Chaffin 2000); functional regression on stretch-pivot parameters (Faraway 2000); scaling, warping, and blending of motion-capture data (Park et al. 2002; Faraway 2003; Monnier et al. 2003; Park et al. 2004; Dufour and Wang 2005); and many others. |
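To make the "behavior-based" idea concrete, here is a minimal Python sketch of one hypothetical posture module in the spirit the abstract describes: a redundant planar reach in which the extra torso degree of freedom is set by a simple empirical rule (standing in for the paper's laboratory-data regressions) and the two arm joints are solved with closed-form equations. Segment lengths, coefficients, and names are illustrative assumptions, not HUMOSIM's actual models.

```python
import numpy as np

def torso_pitch(reach_dist, arm_len):
    """Hypothetical empirical model: torso pitch as a piecewise-linear
    function of normalized reach distance (coefficients are illustrative)."""
    r = np.clip(reach_dist / arm_len, 0.0, 1.5)
    return np.deg2rad(-5.0 + 25.0 * max(0.0, r - 0.8))  # lean only on far reaches

def posture_module(target, torso=1.0, upper=0.35, fore=0.30):
    """Resolve a 3-DOF planar reach: behavior-based torso pitch plus
    closed-form two-link inverse kinematics for shoulder and elbow."""
    tx, ty = target
    dist = np.hypot(tx, ty - torso)                  # rough hand-to-target reach
    q0 = torso_pitch(dist, upper + fore)             # redundant DOF, empirical
    sx, sy = torso * np.sin(q0), torso * np.cos(q0)  # shoulder after torso pitch
    dx, dy = tx - sx, ty - sy
    d = np.hypot(dx, dy)
    if d > upper + fore:
        raise ValueError("target out of reach")
    cos_e = (d**2 - upper**2 - fore**2) / (2 * upper * fore)  # law of cosines
    q2 = np.arccos(np.clip(cos_e, -1.0, 1.0))                 # elbow flexion
    q1 = np.arctan2(dy, dx) - np.arctan2(fore * np.sin(q2),
                                         upper + fore * np.cos(q2))
    return q0, q1, q2                                # joint angles in radians

print(posture_module((0.5, 1.2)))
```

The structural point is the one the paper emphasizes: such a module needs only forward-kinematic quantities and closed-form math, so it could be re-implemented inside any figure-model system.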
Activating visual energy: The MA circle and the art of Sándor Bortnyik | The article examines how energetically and biomechanically based aesthetics, reception theories, and political ideas could inform, or inspire, leftist visual and poetic representation and choice of style in Hungarian modernism during the 1918–19 political upheavals, as a response to modern technology's effects. In connection with these views, the article also reconsiders the political ambiguities of Bortnyik's art in relation to Hungarian anarchism. |
Optical Flow-based 3D Human Motion Estimation from Monocular Video | This paper presents a method to estimate 3D human pose and body shape from monocular videos. While recent approaches infer the 3D pose from silhouettes and landmarks, we exploit properties of optical flow to temporally constrain the reconstructed motion. We estimate human motion by minimizing the difference between computed flow fields and the output of our novel flow renderer. Using just a single semi-automatic initialization step, we are able to reconstruct monocular sequences without joint annotation. Our test scenarios demonstrate that optical flow effectively regularizes the under-constrained problem of human shape and motion estimation from monocular video. Fig. 1 (caption): we compute the optical flow between two consecutive frames and match it to an optical flow field estimated by our proposed optical flow renderer; from left to right: input frame, color-coded observed flow, estimated flow, resulting pose. |
Office automation project: a research perspective | This paper attempts to place some perspective on the research and developments going on in office automation. It describes the functions which can be assisted by computers, and indicates where more research may be needed. A brief description of the Office Automation Project at the Wharton School is provided. The systems being developed include word processing, electronic mail, decision aiding technology, and integration with various databases. This effort is compared with some of the other, complementary research projects in office automation under way around the country. |
Project risk evaluation using a fuzzy analytic hierarchy process: An application to information technology projects | Projects are critical to the realization of the performing organization's strategies. Each project carries some degree of risk, and it is necessary to be aware of these risks and to develop the responses needed to reach the desired level of project success. Because project risks are multidimensional, they must be evaluated using multi-attribute decision-making methods. The aim of this article is to provide an analytic tool for evaluating project risks under incomplete and vague information. The fuzzy analytic hierarchy process (AHP), as a suitable and practical way of evaluating project risks based on the heuristic knowledge of experts, is used to evaluate the riskiness of an information technology (IT) project of a Turkish firm. The means of the triangular fuzzy numbers produced by the IT experts for each comparison are successfully used in the pairwise comparison matrices. |
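As a rough illustration of the workflow the abstract describes (triangular fuzzy judgments from several experts, averaged per comparison, then used in a pairwise matrix), here is a sketch assuming centroid defuzzification and geometric-mean prioritization; the paper's exact prioritization steps may differ, and all judgment values here are invented.

```python
import numpy as np

# Triangular fuzzy judgments (l, m, u) from two hypothetical experts comparing
# three IT-project risks; reciprocal entries use 1/(l, m, u) = (1/u, 1/m, 1/l).
one = (1, 1, 1)
e1 = [[one, (2, 3, 4), (4, 5, 6)],
      [(1/4, 1/3, 1/2), one, (1, 2, 3)],
      [(1/6, 1/5, 1/4), (1/3, 1/2, 1), one]]
e2 = [[one, (1, 2, 3), (3, 4, 5)],
      [(1/3, 1/2, 1), one, (2, 3, 4)],
      [(1/5, 1/4, 1/3), (1/4, 1/3, 1/2), one]]

A = (np.array(e1) + np.array(e2)) / 2     # element-wise mean of the fuzzy numbers
crisp = A.mean(axis=2)                    # centroid defuzzification: (l+m+u)/3
w = np.prod(crisp, axis=1) ** (1 / 3)     # geometric-mean prioritization
w /= w.sum()
print("risk weights:", np.round(w, 3))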
Optimization of standard cell based detailed placement for 16 nm FinFET process | FinFET transistors have great advantages over traditional planar MOSFET transistors in high performance and low power applications. Major foundries are adopting the FinFET technology for CMOS semiconductor device fabrication in the 16 nm technology node and beyond. Edge device degradation is among the major challenges for the FinFET process. To avoid such degradation, dummy gates are needed on device edges, and the dummy gates have to be tied to power rails in order not to introduce unconnected parasitic transistors. This requires that each dummy gate must abut at least one source node after standard cell placement. If the drain nodes at two adjacent cell boundaries abut each other, additional source nodes must be inserted in between for dummy gate power tying, which costs more placement area. Usually there is some flexibility during detailed placement to horizontally flip the cells or switch the positions of adjacent cells, which has little impact on the global placement objectives, such as timing conditions and net congestion. This paper proposes a detailed placement optimization strategy for the standard cell based designs. By flipping a subset of cells in a standard cell row and switching pairs of adjacent cells, the number of drain to drain abutments between adjacent cell boundaries can be optimally minimized, which saves additional source node insertion and reduces the length of the standard cell row. In addition, the proposed graph model can be easily modified to consider more complicated design rules. The experimental results show that the optimization of 100k cells is completed within 0.1 second, verifying the efficiency of the proposed algorithm. |
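A minimal sketch of the flip-only core of such an optimization: each cell exposes a source ('S') or drain ('D') node at its left and right boundary, flipping a cell swaps them, and a left-to-right dynamic program chooses orientations that minimize drain-to-drain abutments. Switching adjacent cell pairs and the full design-rule graph model of the paper are omitted here.

```python
def min_dd_abutments(cells):
    """cells: list of (left_type, right_type) boundary nodes, 'S' or 'D';
    flipping a cell swaps the pair. Returns the minimum number of
    drain-to-drain abutments over all flip assignments (DP over cells)."""
    INF = float("inf")
    # dp[t] = best cost so far with the current cell's right boundary of type t
    dp = {cells[0][1]: 0, cells[0][0]: 0}               # first cell: both orientations
    for left, right in cells[1:]:
        ndp = {}
        for orient in ((left, right), (right, left)):   # unflipped / flipped
            for prev_t, cost in dp.items():
                c = cost + (prev_t == "D" and orient[0] == "D")
                if c < ndp.get(orient[1], INF):
                    ndp[orient[1]] = c
        dp = ndp
    return min(dp.values())

# Three cells; flipping the first avoids every drain-to-drain abutment.
print(min_dd_abutments([("S", "D"), ("D", "S"), ("D", "D")]))  # -> 0
```

Because the DP state is just the exposed right-boundary type, the run time is linear in the number of cells, which is consistent with the sub-second runtimes the abstract reports for 100k cells.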
Attention training and the threat bias: An ERP study | Anxiety is characterized by exaggerated attention to threat. Several studies suggest that this threat bias plays a causal role in the development and maintenance of anxiety disorders. Furthermore, although the threat bias can be reduced in anxious individuals and induced in non-anxious individuals, the attentional mechanisms underlying these changes remain unclear. To address this issue, 49 non-anxious adults were randomly assigned to either attentional training toward or training away from threat using a modified version of the dot probe task. Behavioral measures of attentional biases were also generated pre- and post-training using the dot probe task. Event-related potentials (ERPs) were generated to threat and non-threat face pairs and probes during pre- and post-training assessments. Effects of training on behavioral measures of the threat bias were significant, but only for those participants showing pre-training biases. Attention training also influenced early spatial attention, as measured by post-training P1 amplitudes to cues. Results illustrate the importance of taking pre-training attention biases in non-anxious individuals into account when evaluating the effects of attention training and tracking physiological changes in attention following training. |
Experimental Verifications of the Casimir Attractive Force between Solid Bodies | A brief review of the recent experimental verifications of the Casimir force between extended bodies is presented. With modern techniques, it now appears feasible to test the force law with 1% precision; I will address the issues relating to the interpretation of experiments at this level of accuracy. |
Sampling of Graph Signals With Successive Local Aggregations | A new scheme to sample signals defined on the nodes of a graph is proposed. The underlying assumption is that such signals admit a sparse representation in a frequency domain related to the structure of the graph, which is captured by the so-called graph-shift operator. Instead of using the value of the signal observed at a subset of nodes to recover the signal in the entire graph, the sampling scheme proposed here uses as input observations taken at a single node. The observations correspond to sequential applications of the graph-shift operator, which are linear combinations of the information gathered by the neighbors of the node. When the graph corresponds to a directed cycle (which is the support of time-varying signals), our method is equivalent to classical sampling in the time domain. When the graph is more general, we show that the Vandermonde structure of the sampling matrix, critical when sampling time-varying signals, is preserved. Sampling and interpolation are analyzed first in the absence of noise, and then noise is considered. We then study the recovery of the sampled signal when the specific set of frequencies that is active is not known. Moreover, we present a more general sampling scheme under which both our aggregation approach and the alternative approach of sampling a graph signal by observing its value at a subset of nodes can be viewed as particular cases. Numerical experiments illustrating the results on both synthetic and real-world graphs close the paper. |
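The aggregation scheme is easy to reproduce numerically. The sketch below assumes a directed cycle, a K-bandlimited signal, and observations at a single node: K successive aggregations y_k = (S^k x)_u form a Vandermonde-structured system in the active frequency coefficients, which is then inverted.

```python
import numpy as np

np.random.seed(0)
N, K = 8, 3                              # graph size, number of active frequencies
S = np.roll(np.eye(N), 1, axis=1)        # graph-shift operator of a directed cycle
lam, V = np.linalg.eig(S)                # graph Fourier basis: eigenvectors of S

xf = np.zeros(N, dtype=complex)
xf[:K] = np.random.randn(K) + 1j * np.random.randn(K)
x = V @ xf                               # K-bandlimited graph signal

u = 0                                    # the single observation node
y = np.array([np.linalg.matrix_power(S, k)[u] @ x for k in range(K)])

# y_k = sum_i V[u, i] * lam_i**k * xf_i, so the system matrix is Vandermonde-like.
B = np.vstack([(lam**k * V[u])[:K] for k in range(K)])
xf_hat = np.linalg.solve(B, y)
print(np.allclose(V[:, :K] @ xf_hat, x))   # True: signal recovered from one node
```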
Hybrid differential evolution based on fuzzy C-means clustering | In this paper, we propose a hybrid Differential Evolution (DE) algorithm based on the fuzzy C-means clustering algorithm, referred to as FCDE. The fuzzy C-means clustering algorithm is incorporated into DE to utilize the information of the population efficiently, and hence it can generate good solutions and enhance the performance of the original DE. In addition, the population-based algorithm generator is adopted to efficiently update the population with the clustering offspring. In order to test the performance of our approach, 13 high-dimensional benchmark functions of diverse complexities are employed. The results show that our approach is effective and efficient. Compared with other state-of-the-art DE approaches, our approach performs better, or at least comparably, in terms of the quality of the final solutions and the reduction of the number of fitness function evaluations (NFFEs). |
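A much-simplified sketch of the hybrid: standard DE/rand/1/bin generation plus a periodic fuzzy C-means step whose cluster centers are injected as extra offspring when they beat the worst individual. The schedule, parameters, and the paper's population-based algorithm generator are all simplified assumptions here.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sum(x**2, axis=-1)                  # sphere benchmark function

def fcm_centers(X, c=3, m=2.0, iters=10):
    """Plain fuzzy C-means over the population X; returns c cluster centers."""
    U = rng.random((len(X), c))
    U /= U.sum(1, keepdims=True)
    for _ in range(iters):
        C = (U**m).T @ X / (U**m).sum(0)[:, None]    # membership-weighted centers
        d = np.linalg.norm(X[:, None] - C[None], axis=2) + 1e-12
        w = d ** (-2.0 / (m - 1.0))
        U = w / w.sum(1, keepdims=True)              # membership update
    return C

NP, D, F, CR = 20, 5, 0.5, 0.9
pop = rng.uniform(-5, 5, (NP, D))
fit = f(pop)
for gen in range(100):
    for i in range(NP):
        a, b, c = pop[rng.choice(NP, 3, replace=False)]
        mutant = a + F * (b - c)                     # DE/rand/1 mutation
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True                # binomial crossover
        trial = np.where(cross, mutant, pop[i])
        if f(trial) < fit[i]:                        # greedy selection
            pop[i], fit[i] = trial, f(trial)
    if gen % 10 == 0:                                # clustering offspring step
        for center in fcm_centers(pop):
            worst = fit.argmax()
            if f(center) < fit[worst]:
                pop[worst], fit[worst] = center, f(center)
print("best fitness:", fit.min())
```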
A data mining approach for database intrusion detection | In this paper we propose a data mining approach for detecting malicious transactions in a database system. Our approach concentrates on mining data dependencies among data items in the database. A data dependency miner is designed to mine data correlations from the database log. Transactions not compliant with the mined data dependencies are identified as malicious. The experiments illustrate that the proposed method works effectively for detecting malicious transactions, provided certain data dependencies exist in the database. |
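The abstract does not spell out the dependency format, so the sketch below assumes one toy rule type, "a write of item x is normally preceded by a read of item y," mined with a confidence threshold from an ordered operation log and then used to flag non-compliant transactions.

```python
from collections import Counter

# Toy log: each transaction is an ordered list of ('r'|'w', item) operations.
log = [
    [("r", "balance"), ("w", "balance")],
    [("r", "balance"), ("w", "balance")],
    [("r", "rate"), ("r", "balance"), ("w", "balance")],
]

def mine_read_before_write(log, min_conf=0.9):
    """Return rules {(x, y): conf}: writes of x were preceded by reads of y."""
    writes, support = Counter(), Counter()
    for txn in log:
        seen_reads = set()
        for op, item in txn:
            if op == "r":
                seen_reads.add(item)
            else:
                writes[item] += 1
                for y in seen_reads:
                    support[(item, y)] += 1
    return {r: support[r] / writes[r[0]]
            for r in support if support[r] / writes[r[0]] >= min_conf}

rules = mine_read_before_write(log)      # {('balance', 'balance'): 1.0}

def is_malicious(txn, rules):
    """Flag a transaction that writes x without the reads the rules require."""
    seen = set()
    for op, item in txn:
        if op == "w" and any(x == item and y not in seen for x, y in rules):
            return True
        if op == "r":
            seen.add(item)
    return False

print(is_malicious([("w", "balance")], rules))   # True: write without prior read
```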
BIM for facilities management: evaluating BIM standards in asset register creation and service life | Operation and Maintenance (O&M) costs in buildings represent a significant part of the total building life cycle cost. However, project delivery methods in the Architectural, Engineering and Construction (AEC) industry are often focused on the capital delivery stage and associated costs ranging from planning, through design, to construction and handover. Open data standards such as the Industry Foundation Classes (IFC) and specifications such as the Construction Operations Building information exchange (COBie) provide the capability to capture Facilities Management (FM) data requirements in a structured manner from the early stages of project development. We aim to investigate how and whether IFC and COBie can deliver the data and information about assets required by facility managers within a whole life cycle perspective. We focus on specific use cases including the creation of asset registers and service life planning. However, the methodology adopted can be generalised and applied to any other FM use case. The results show that IFC, COBie and the tested supporting tools exhibited some shortcomings in delivering some of the data entities, types and parameters required for the selected FM use cases. We discuss these shortcomings and propose them as areas for improvement to domain researchers, standardisation bodies and technology providers. Finally, we encourage domain researchers to adopt the proposed methodology and conduct further FM use cases. |
ADVISOR 2.0: A Second-Generation Advanced Vehicle Simulator for Systems Analysis | The National Renewable Energy Laboratory has recently publicly released its second-generation advanced vehicle simulator called ADVISOR 2.0. This software program was initially developed four years ago, and after several years of in-house usage and evolution, this powerful tool is now available to the public through a new vehicle systems analysis World Wide Web page. ADVISOR has been applied to many different systems analysis problems, such as helping to develop the SAE J1711 test procedure for hybrid vehicles and helping to evaluate new technologies as part of the Partnership for a New Generation of Vehicles (PNGV) technology selection process. The model has been and will continue to be benchmarked and validated with other models and with real vehicle test data. After two months of being available on the Web, more than 100 users have downloaded ADVISOR. ADVISOR 2.0 has many new features, including an easy-to-use graphical user interface, a detailed exhaust aftertreatment thermal model, and complete browser-based documentation. Future work will include adding to the library of components available in ADVISOR, including optimization functionality, and linking with a more detailed fuel cell model. |
Feature Weight Tuning for Recursive Neural Networks | This paper addresses how a recursive neural network model can automatically leave out useless information and emphasize important evidence, in other words, perform "weight tuning" for higher-level representation acquisition. We propose two models, Weighted Neural Network (WNN) and Binary-Expectation Neural Network (BENN), which automatically control how much one specific unit contributes to the higher-level representation. The proposed models can be viewed as incorporating a more powerful compositional function for embedding acquisition in recursive neural networks. Experimental results demonstrate significant improvement over standard neural models. |
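A forward-pass sketch of the gating idea, under the assumption that a scalar sigmoid gate decides how much each child contributes before composition; the actual WNN and BENN parameterizations in the paper are richer than this.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                       # embedding dimensionality
W = rng.standard_normal((d, 2 * d)) * 0.1   # composition matrix
v = rng.standard_normal(2 * d) * 0.1        # gate scorer (illustrative assumption)

def compose(left, right):
    """Gated composition: down-weight one child, emphasize the other."""
    g = 1.0 / (1.0 + np.exp(-v @ np.concatenate([left, right])))  # sigmoid gate
    child = np.concatenate([g * left, (1.0 - g) * right])
    return np.tanh(W @ child)

the, cat, sat = (rng.standard_normal(d) for _ in range(3))
print(compose(compose(the, cat), sat))      # embedding for "((the cat) sat)"
```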
Acted vs. natural frustration and delight: Many people smile in natural frustration | This work is part of research to build a system to combine facial and prosodic information to recognize commonly occurring user states such as delight and frustration. We create two experimental situations to elicit two emotional states: the first involves recalling situations while expressing either delight or frustration; the second experiment tries to elicit these states directly through a frustrating experience and through a delightful video. We find two significant differences in the nature of the acted vs. natural occurrences of expressions. First, the acted ones are much easier for the computer to recognize. Second, in 90% of the acted cases, participants did not smile when frustrated, whereas in 90% of the natural cases, participants smiled during the frustrating interaction, despite self-reporting significant frustration with the experience. This paper begins to explore the differences in the patterns of smiling that are seen under natural frustration and delight conditions, to see if there might be something measurably different about the smiles in these two cases, which could ultimately improve the performance of classifiers applied to natural expressions. |
Social media analytics and research test-bed (SMART dashboard) | We developed a social media analytics and research testbed (SMART) dashboard for monitoring Twitter messages and tracking the diffusion of information in different cities. The SMART dashboard is an online geo-targeted search and analytics tool, including an automatic data processing procedure, that helps researchers to 1) search tweets in different cities; 2) filter noise (such as removing redundant retweets and using machine learning methods to improve precision); 3) analyze social media data from a spatiotemporal perspective; and 4) visualize social media data in various ways (such as weekly and monthly trends, top URLs, top retweets, top mentions, or top hashtags). By monitoring social messages in geo-targeted cities, we hope that the SMART dashboard can assist researchers in investigating and monitoring various topics, such as flu outbreaks, drug abuse, and Ebola epidemics, at the municipal level. |
Direct slicing of STEP based NURBS models for layered manufacturing | Direct slicing of CAD models to generate process planning instructions for solid freeform fabrication may overcome inherent disadvantages of using stereolithography format in terms of the process accuracy, ease of file management, and incorporation of multiple materials. This paper will present the results of our development of a direct slicing algorithm for layered freeform fabrication. The direct slicing algorithm was based on a neutral, international standard (ISO 10303) STEP-formatted non-uniform rational B-spline (NURBS) geometric representation and is intended to be independent of any commercial CAD software. The following aspects of the development effort will be presented: (1) determination of optimal build direction based upon STEP-based NURBS models; (2) adaptive subdivision of NURBS data for geometric refinement; and (3) ray-casting slice generation into sets of raster patterns. The development also provides for multi-material slicing and will provide an effective tool in heterogeneous slicing processes. |
Motion Graph for Character Animation: Design Considerations | Animating human characters has become an active research area in computer graphics. It is important for the development of virtual environment applications such as computer games and virtual reality. One popular method for animating characters is the motion graph. Since the motion graph is the main focus of this research, we review preliminary work on motion graphs and discuss their main components, such as distance metrics and motion transitions. These two components will be taken into consideration during the development of the motion graph. In this paper, we also present a general framework and the future plan of this study. |
Novel Compact Circularly Polarized Square Microstrip Antenna | A novel compact circular-polarization (CP) operation of the square microstrip antenna with four slits and a pair of truncated corners is proposed and investigated. Experimental results show that the proposed compact CP design can have an antenna-size reduction of about 36% as compared to the conventional corner-truncated square microstrip antenna at a given operating frequency. Also, the required size of the truncated corners for CP operation is much greater than that for the conventional CP design using a simple square microstrip patch, providing a relaxed manufacturing tolerance for the proposed compact CP design. Details of the experimental results are presented and discussed. |
The strengths and difficulties questionnaire: A pilot study on the validity of the self-report version | The self-report version of the Strengths and Difficulties Questionnaire (SDQ) was administered to two samples of 11–16 year olds: 83 young people in the community and 116 young people attending a mental health clinic. The questionnaire discriminated satisfactorily between the two samples. For example, the clinic mean for the total difficulties score was 1.4 standard deviations above the community mean, with clinic cases being over six times more likely to have a score in the abnormal range. The correlations between self-report SDQ scores and teacher or parent-rated SDQ scores compared favourably with the average cross-informant correlations in previous studies of a range of measures. The self-report SDQ appears promising and warrants further evaluation. |
Continuity Editing for 3D Animation | We describe an optimization-based approach for automatically creating well-edited movies from a 3D animation. While previous work has mostly focused on the problem of placing cameras to produce nice-looking views of the action, the problem of cutting and pasting shots from all available cameras has never been addressed extensively. In this paper, we review the main causes of editing errors in literature and propose an editing model relying on a minimization of such errors. We make a plausible semi-Markov assumption, resulting in a dynamic programming solution which is computationally efficient. We also show that our method can generate movies with different editing rhythms and validate the results through a user study. Combined with state-of-the-art cinematography, our approach therefore promises to significantly extend the expressiveness and naturalness of virtual movie-making. |
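The semi-Markov assumption turns editing into a standard dynamic program over (camera, shot-duration) segments. The sketch below reduces the paper's editing-error terms to a toy per-frame view cost plus a flat cut penalty, with bounded shot durations standing in for rhythm control; all costs are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, C = 12, 3                        # frames, cameras
quality = rng.random((C, T))        # toy per-frame view cost (lower is better)
CUT, DUR = 0.4, (2, 6)              # cut penalty, min/max shot length

INF = float("inf")
# best[t][c] = (cost, backpointer) of an edit covering frames [0, t)
# whose last shot uses camera c.
best = [{} for _ in range(T + 1)]
best[0] = {None: (0.0, None)}
for t in range(1, T + 1):
    for c in range(C):
        for d in range(DUR[0], DUR[1] + 1):
            s = t - d
            if s < 0 or not best[s]:
                continue
            seg = quality[c, s:t].sum()
            for pc, (cost, _) in best[s].items():
                total = cost + seg + (CUT if pc is not None and pc != c else 0)
                if total < best[t].get(c, (INF,))[0]:
                    best[t][c] = (total, (s, pc))

c = min(best[T], key=lambda k: best[T][k][0])   # backtrack the optimal cut list
t, shots = T, []
while t > 0:
    _, (s, pc) = best[t][c]
    shots.append((c, s, t))
    t, c = s, pc
print(list(reversed(shots)))        # [(camera, start_frame, end_frame), ...]
```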
Results of a randomized study of medical and surgical management of angina pectoris | Of 686 male adults randomly allocated to 2 treatment groups in the 3 years 1972, 1973, and 1974, there were 354 assigned to the medical (control) group and 332 assigned to medical plus surgical treatment (in 95% of cases, to saphenous vein aortocoronary bypass procedure alone). There were no statistically significant differences between the 2 groups in any of 35 different clinical, electrocardiographic and arteriographic characteristics at baseline, except for serum cholesterol, so the 2 groups were judged to be comparable. Twelve percent of patients were found to have significant disease of the left main coronary artery, in almost all instances in addition to disease of other vessels. For this subgroup, surgery was associated with increased survival (at 4 years, p<0.001). For the remaining 88% of patients, and for the 6 subgroups into which they could be categorized on the basis of extent of disease and status of left ventricular function, no significant differences in survival could as yet be identified in these preliminary analyses. Another important finding was that inclusion of patients with left main coronary artery disease in the analyses of subgroups with 3-vessel disease gave results for survival favoring surgically randomized patients, to a statistically significant degree. If patients with left main coronary artery disease were excluded, the difference in survival between medically and surgically treated patients was no longer statistically significant. Medically treated patients had better survival rates than earlier reports in the literature had indicated. Data on morbidity and quality of life are undergoing continuing analysis. |
Silicon-Filled Rectangular Waveguides and Frequency Scanning Antennas for mm-Wave Integrated Systems | We present a technology for the manufacturing of silicon-filled integrated waveguides enabling the realization of low-loss high-performance millimeter-wave passive components and high gain array antennas, thus facilitating the realization of highly integrated millimeter-wave systems. The proposed technology employs deep reactive-ion-etching (DRIE) techniques with aluminum metallization steps to integrate rectangular waveguides with high geometrical accuracy and continuous metallic side walls. Measurement results of integrated rectangular waveguides are reported exhibiting losses of 0.15 dB/ λg at 105 GHz. Moreover, ultra-wideband coplanar to waveguide transitions with 0.6 dB insertion loss at 105 GHz and return loss better than 15 dB from 80 to 110 GHz are described and characterized. The design, integration and measured performance of a frequency scanning slotted-waveguide array antenna is reported, achieving a measured beam steering capability of 82 ° within a band of 23 GHz and a half-power beam-width (HPBW) of 8.5 ° at 96 GHz. Finally, to showcase the capability of this technology to facilitate low-cost mm-wave system level integration, a frequency modulated continuous wave (FMCW) transmit-receive IC for imaging radar applications is flip-chip mounted directly on the integrated array and experimentally characterized. |
Adding interactive interface to E-Government systems using AIML based chatterbots | This paper proposes a technique that utilizes the power of chatterbots to serve as interactive support systems for enterprise applications that aim to address a huge audience. The need for support systems arises from the inability of a computer-illiterate audience to use the services offered by an enterprise application. Setting up customer support centers works well for small and medium-sized businesses, but for mass applications (here, e-governance systems) the audience approaches a country's entire population, and setting up a support center that can handle such a load is impractical. This paper proposes a solution: using AIML based chatterbots to implement an Artificial Support Entity (ASE) for such applications. |
Thematic development for measuring cohesion and coherence between sentences in English paragraph | Writing is the skill of setting down coherent words on paper and composing a text. There are several criteria for good writing; two of them are cohesion and coherence. Research on cohesion and coherence in writing has been conducted using Centering Theory and the Entity Transition Value method. However, the results can still be improved. Therefore, in this research we used a Thematic Development approach, which focuses on the use of Theme and Rheme, as a method to analyze the coherence level of a paragraph, combined with CogNIAC rules and Centering Theory to analyze its cohesion. Besides improving on the results of previous methods, this research aims to help users evaluate and assess their written text. To achieve these objectives, the proposed method is compared with previous works as well as with human judges. Based on the experiment, the proposed method yields an average result of 91%, which is nearly equivalent to the human judges' 92%. Thematic Development also yields better results than Centering Theory (29%) and Entity Transition Value (0%), given the same data set of beginner and intermediate simple writing. |
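To make the Theme-Rheme idea concrete, here is a toy coherence scorer. Assumptions: the theme is approximated as the content words before the first verb, and a sentence coheres with its predecessor when its theme overlaps the previous theme (constant progression) or the previous rheme (linear progression); the paper's pipeline additionally applies CogNIAC rules and Centering Theory for cohesion.

```python
STOP = {"the", "a", "an"}                       # ignore articles when matching
VERBS = {"is", "are", "was", "were", "has", "have", "sells", "bakes"}  # toy list

def theme_rheme(sentence):
    """Split a sentence at its first verb into (theme, rheme) word sets."""
    words = [w for w in sentence.lower().rstrip(".").split() if w not in STOP]
    for i, w in enumerate(words):
        if w in VERBS:
            return set(words[:i]), set(words[i + 1:])
    return set(words[:1]), set(words[1:])       # fallback: first word is theme

def coherence(paragraph):
    """Fraction of sentences whose theme picks up a prior theme or rheme."""
    pairs = [theme_rheme(s) for s in paragraph]
    hits = sum(1 for (pt, pr), (t, _) in zip(pairs, pairs[1:])
               if t & pt or t & pr)
    return hits / max(1, len(pairs) - 1)

para = ["The shop sells fresh bread.",
        "The bread is baked each morning.",
        "The morning is busiest."]
print(coherence(para))                          # 1.0: fully chained progression
```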
Artificial evil and the foundation of computer ethics | Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents in cyberspace, a new class of interesting and important examples of hybrid evil has come to light. In this paper, it is called artificial evil (AE) and a case is made for considering it to complement ME and NE to produce a more adequate taxonomy. By isolating the features that have led to the appearance of AE, cyberspace is characterised as a self-contained environment that forms the essential component in any foundation of the emerging field of Computer Ethics (CE). It is argued that this goes some way towards providing a methodological explanation of why cyberspace is central to so many of CE's concerns; and it is shown how notions of good and evil can be formulated in cyberspace. Of considerable interest is how the propensity for an agent's action to be morally good or evil can be determined even in the absence of biologically sentient participants and thus allows artificial agents not only to perpetrate evil (and for that matter good) but conversely to `receive' or `suffer from' it. The thesis defended is that the notion of entropy structure, which encapsulates human value judgement concerning cyberspace in a formal mathematical definition, is sufficient to achieve this purpose and, moreover, that the concept of AE can be determined formally, by mathematical methods. A consequence of this approach is that the debate on whether CE should be considered unique, and hence developed as a Macroethics, may be viewed, constructively, in an alternative manner. The case is made that whilst CE issues are not uncontroversially unique, they are sufficiently novel to render inadequate the approach of standard Macroethics such as Utilitarianism and Deontologism and hence to prompt the search for a robust ethical theory that can deal with them successfully. The name Information Ethics (IE) is proposed for that theory. It is argued that the uniqueness of IE is justified by its being non-biologically biased and patient-oriented: IE is an Environmental Macroethics based on the concept of data entity rather than life. It follows that the novelty of CE issues such as AE can be appreciated properly because IE provides a new perspective (though not vice versa). In light of the discussion provided in this paper, it is concluded that Computer Ethics is worthy of independent study because it requires its own application-specific knowledge and is capable of supporting a methodological foundation, Information Ethics. |
Stylohyoid complex syndrome: a new diagnostic classification. | OBJECTIVE
To describe stylohyoid complex syndrome (SHCS) as a new diagnostic classification of all lateral neck and/or facial pain conditions resulting from an elongated styloid process, ossified stylohyoid ligament, or elongated hyoid bone. All of these pathologic conditions result in tension and reduced distensibility of the stylohyoid complex (SHC), with resultant irritation of the surrounding cervical structures with movement of the complex.
DESIGN
A retrospective medical chart review was performed to identify a cohort of patients who underwent surgical intervention for lateral neck and/or facial pain due to pathologic SHCS. Follow-up time of greater than 1 year is reported in 5 of 7 patients.
SETTING
Tertiary, academic referral center.
PATIENTS
Patients included were those given a diagnosis of SHCS who underwent surgical intervention from June 2006 through September 2009. There were 7 patients, 5 of whom were female. The age range was 38 to 53 years at time of presentation (mean age, 45.3 years). Common presenting complaints were lateral neck and oropharyngeal pain exacerbated by tongue and head movements.
INTERVENTION
The pathologic areas were surgically addressed through transoral or cervical approaches.
MAIN OUTCOME MEASURE
Symptoms following surgical intervention.
RESULTS
Seven patients (8 sides) were identified as having SHCS. Computed tomographic findings included elongated styloid processes (3 sides), ossified stylohyoid ligaments (2 sides), and elongated hyoid bones (3 sides). Computed tomographic scan, frequently with volume-rendered 3-dimensional reconstructions, identified the pathologic condition. All patients experienced clinically significant relief of presenting symptoms following surgical intervention.
CONCLUSIONS
Stylohyoid complex syndrome includes all lateral neck and/or facial pain conditions resulting from an elongated styloid process, ossified stylohyoid ligament, or elongated hyoid bone. Surgical intervention directed at any pathologic point to disrupt this complex relieves tension and offers patients relief of symptoms. |
Polysaccharides and proteoglycans in calcium carbonate-based biomineralization. | Biomineralization is a widespread phenomenon in nature leading to the formation of a variety of solid inorganic structures by living organisms, such as intracellular crystals in prokaryotes, exoskeletons in protozoa, algae, and invertebrates, spicules and lenses, bone, teeth, statoliths, and otoliths, eggshells, plant mineral structures, and also pathological biominerals such as gall stones, kidney stones, and oyster pearls. These biologically produced biominerals are inorganic-organic hybrid composites formed by self-assembled bottom up processes under mild conditions, showing interesting properties, controlled hierarchical structures, and remodeling or repair mechanisms which still remain to be developed into a practical engineering process. Therefore, the formation of biominerals provides a unique guide for the design of materials, especially those that need to be fabricated at ambient temperatures. In biominerals, the small amount of organic component not only reinforces the mechanical properties of the resulting composite but also exerts a crucial control on the mineralization process, contributing to the determination of the size, crystal morphology, specific crystallographic orientation, and superb properties of the particles formed. Therefore, biological routes of structuring biominerals are becoming valuable approaches for novel materials synthesis. Although several principles are applicable to the majority of the biominerals, herein we will focus on the role of polysaccharide polymers in calcium carbonate-based biominerals. As a general principle, the assembly of these biominerals consists of a four-stage process. It starts with the fabrication of a hydrophobic solid organic substrate or scaffolding onto which nucleation of the crystalline phase takes place closely associated with some polyanionic macromolecules. Crystal growth is then controlled by the addition of gel-structuring polyanionic macromolecules, and finally mineralization arrest is accompanied by the secretion of a new inert scaffolding of the same type or the deposition of other hydrophobic inhibitory macromolecules. Currently, a large number of proteins have been described which are involved in the control of biomineralization. These proteins are usually highly negatively charged and contain carboxylate, sulfate, or phosphate as functional groups, which may bind Ca ions and could control crystal nucleation and growth by lowering the interfacial energy between the crystal and the macromolecular substrate. However, the precise mechanism involved in controlling crystal nucleation, growth, and morphology is far from being understood. Combinatorial biology techniques have been recently developed for testing the ability of randomly generated peptides to bind different substrates or ions, thus allowing a correlation between peptide structure and ion binding affinity. However, the main focus is on the role of the backbone structure of the polymer due to the primary structure of the protein, because the synthetic technology does not allow the formation of post-translational modifications, such as sulfation and phosphorylation, which do occur in the eukaryotic cell. 
Even so, the occurrence of negatively charged groups in macromolecules involved in biomineralization, mainly derived from acidic amino acids, has inspired many researchers to produce synthetic polymers having such groups in order to control the size, orientation, phase, and morphology of inorganic crystals. However, since Abolins-Krogis’ work, a slow but increasing interest has been developed to explore the role of polysaccharides in biomineralization, despite the fact that their involvement in biomineralization seems to appear very early in evolution. There is no single type of polysaccharide associated with biominerals, but such polysaccharides are mainly hydroxylated, carboxylated, or sulfated or contain a mixture of these functional moieties. |
Role of Ryanodine Receptor Subtypes in Initiation and Formation of Calcium Sparks in Arterial Smooth Muscle: Comparison with Striated Muscle | Calcium sparks represent local, rapid, and transient calcium release events from a cluster of ryanodine receptors (RyRs) in the sarcoplasmic reticulum. In arterial smooth muscle cells (SMCs), calcium sparks activate calcium-dependent potassium channels, causing a decrease in the global intracellular [Ca2+], and oppose vasoconstriction. This is in contrast to cardiac and skeletal muscle, where spatial and temporal summation of calcium sparks leads to global increases in intracellular [Ca2+] and myocyte contraction. We summarize the present data on local RyR calcium signaling in arterial SMCs in comparison to striated muscle and muscle-specific differences in coupling between L-type calcium channels and RyRs. Accordingly, arterial SMC Ca(v)1.2 L-type channels regulate intracellular calcium store content, which in turn modulates calcium efflux through RyRs. Downregulation of RyR2 up to a certain degree is compensated by increased SR calcium content to normalize calcium sparks. This indirect coupling between Ca(v)1.2 and RyR in arterial SMCs is opposite to striated muscle, where triggering of calcium sparks is controlled by rapid and direct cross-talk between Ca(v)1.1/Ca(v)1.2 L-type channels and RyRs. We discuss the role of RyR isoforms in initiation and formation of calcium sparks in SMCs and their possible molecular binding partners and regulators, which differ compared to striated muscle. |
SCAN: Structure Correcting Adversarial Network for Chest X-rays Organ Segmentation | Chest X-ray (CXR) is one of the most commonly prescribed medical imaging procedures, often with over 2–10x more scans than other imaging modalities such as MRI, CT scan, and PET scans. These voluminous CXR scans place significant workloads on radiologists and medical practitioners. Organ segmentation is a crucial step to obtain effective computer-aided detection on CXR. In this work, we propose Structure Correcting Adversarial Network (SCAN) to segment lung fields and the heart in CXR images. SCAN incorporates a critic network to impose on the convolutional segmentation network the structural regularities emerging from human physiology. During training, the critic network learns to discriminate the ground truth organ annotations from the masks synthesized by the segmentation network. Through this adversarial process the critic network learns the higher order structures and guides the segmentation model to achieve realistic segmentation outcomes. Extensive experiments show that our method produces highly accurate and natural segmentation. Using only the very limited training data available, our model reaches human-level performance without relying on any existing trained model or dataset. Our method also generalizes well to CXR images from a different patient population and disease profiles, surpassing the current state-of-the-art. |
Planar square quadrifilar spiral antenna for mobile RFID reader | A compact planar square quadrifilar spiral antenna (QSA) for a passive UHF radio frequency identification (RFID) reader is proposed and experimentally investigated. The presented RFID reader antenna consists of four spiral antennas fed by a microstrip feed network, and a 4-port antenna matching technique for the QSA is used to improve radiation efficiency. Experimental results show that the proposed antenna has a peak gain of 0.06 dBic, an axial ratio under 3 dB in the desired frequency band, and a 3-dB beamwidth of 124° with good right-hand circular polarization (RHCP) characteristics. |
Assessment of chronic pain coping strategies | Introduction. We adapted the Coping Strategies Questionnaire (CSQ), the most widely used measure in its field, developed by Rosenstiel and Keefe in 1983, for the Spanish population. Method. The sample comprised 205 participants from primary health care and pain clinics. More than half suffered from migraine or chronic tension-type headache; the rest from fibromyalgia, low back pain, arthrosis or arthritis. Results. Factor analyses explained 59% of the total variance, with an 8-factor structure that converged into a 2-factor structure. In the 8-factor solution the novelty was the separation of mental from non-mental distraction strategies, and of religious from non-religious hope strategies. In the 2-factor solution the novelty was the grouping of strategies according to the efficacy of the coping. All the CSQ factors showed internal consistency and construct validity. Thus, maladaptive coping strategies were related to negative, anxious and depressed self-talk, to a lack of perceived control and self-efficacy, and to many pain behaviors; the opposite held for adaptive coping strategies. In addition, the diagnosis of pain was related to the utilization and effectiveness of coping strategies. Conclusions. The CSQ is shown to be a reliable and valid measure of coping strategies for chronic pain in the Spanish population, once again showing the difference between theoretical and empirical factor structures. |
Gender, Entrepreneurial Self-Efficacy, and Entrepreneurial Career Intentions: Implications for Entrepreneurship Education | The relationships between gender, entrepreneurial self-efficacy, and entrepreneurial intentions were examined for two sample groups: adolescents and adult master of business administration (MBA) students. Similar gender effects on entrepreneurial self-efficacy are shown for both groups and support earlier research on the relationship between self-efficacy and career intentions. Additionally, the effects of entrepreneurship education in MBA programs on entrepreneurial self-efficacy proved stronger for women than for men. Implications for educators and policy makers were discussed, and areas for future research outlined. Introduction Women play a substantial role in entrepreneurship throughout the world. In advanced market economies, women own 25% of all businesses, and the number of women-owned businesses in Africa, Asia, Eastern Europe, and Latin America is increasing rapidly (Estes, 1999; Jalbert, 2000). In the United States alone, the 6.7 million privately held majority women-owned businesses account for $1.19 trillion in sales and employ 9.8 million people. Moreover, the growth rate of women-owned businesses is impressive (Women-Owned Businesses, 2004). Between 1997 and 2004, employment in women-owned businesses increased by 39% compared to 12% nationally, and revenues rose by 46% compared to 34% among all privately held U.S. businesses. These data reinforce the value of studying women's entrepreneurship, and likely account for the increased attention being paid to this area by scholars and educators. However, current trends mask the fact that men continue to be more active in entrepreneurship than women worldwide. Recent data suggest that the largest gaps occur in middle-income nations, where men are 75% more likely than women to be active entrepreneurs, compared with 33% in high-income countries and 41% in low-income countries (Minnitti, Arenius, & Langowitz, 2005). In order to more fully capture the talents of women in new venture creation in the future, a vibrant "pipeline" of potential entrepreneurs is required. However, previous research has shown that this pipeline of women may be weak. Adult men in the United States are twice as likely as women to be in the process of starting a new business (Reynolds, Carter, Gartner, Greene, & Cox, 2002). Furthermore, research on the career interests of teens, the potential entrepreneurs of the next generation, has revealed significantly less interest among girls than among boys in entrepreneurial careers (Kourilsky & Walstad, 1998; Marlino & Wilson, 2003). Many factors undoubtedly contribute to the disparity between men and women in entrepreneurial career interests and behaviors. One factor in particular, entrepreneurial self-efficacy, or the self-confidence that one … |
Study on technology system of self-healing control in smart distribution grid | The smart distribution grid is an important part of the smart grid, connecting the main network with the user-oriented supply. As an "immune system", self-healing is the most important feature of the smart grid. The major problem addressed by self-healing control is uninterrupted power supply: real-time monitoring of network operation, prediction of the power grid's state, and timely detection, rapid diagnosis and elimination of hidden faults, with little or no human intervention. First, the paper describes the major problems solved by self-healing control in the smart distribution grid and their functions. Then, it analyzes the structure and technology components of self-healing control in the smart distribution grid, comprising the base layer, the support layer and the application layer. The base layer is composed of the power grid and its equipment, and is the foundation for the smart grid and self-healing control. The support layer is composed of data and communication: a high-speed, bi-directional, real-time and integrated communications system is the basis for efficient, reliable and secure power transmission and use, the basis of the intelligent distribution network, and a key step toward self-prevention and self-recovery in the distribution grid. The application layer is composed of monitoring, assessment, pre-warning/analysis, decision making, control and restoration; the six modules are interconnected and mutually constraining, and together they are the primary means of self-prevention and self-recovery in the distribution grid. Through research and analysis of the relationships and technical composition of the six modules in the application layer, the paper divides the running states of a smart distribution grid with self-healing capability into five states (normal, warning, critical, emergency and recovery) and defines the characteristics and relationships of each state. By investigating and applying self-healing control, a smart distribution grid can promptly detect occurring or imminent failures and implement appropriate corrective actions, so that they do not affect the normal supply or their effects are minimized. Power supply reliability is improved markedly and outage time is reduced significantly. Especially in extreme weather conditions, the distribution grid can give full play to its self-prevention and self-recovery capability, giving priority to protecting people's lives and providing electricity to the greatest extent possible. |
Evaluation of Multiple-F0 Estimation and Tracking Systems | Multi-pitch estimation of sources in music is an ongoing research area that has a wealth of applications in music information retrieval systems. This paper presents the systematic evaluations of over a dozen competing methods and algorithms for extracting the fundamental frequencies of pitched sound sources in polyphonic music. The evaluations were carried out as part of the Music Information Retrieval Evaluation eXchange (MIREX) over the course of two years, from 2007 to 2008. The generation of the dataset and its corresponding ground-truth, the methods by which systems can be evaluated, and the evaluation results of the different systems are presented and discussed. |
Women's orgasm. | An orgasm in the human female is a variable, transient peak sensation of intense pleasure, creating an altered state of consciousness, usually with an initiation accompanied by involuntary, rhythmic contractions of the pelvic striated circumvaginal musculature, often with concomitant uterine and anal contractions, and myotonia that resolves the sexually induced vasocongestion and myotonia, generally with an induction of well-being and contentment. Women's orgasms can be induced by erotic stimulation of a variety of genital and nongenital sites. As of yet, no definitive explanations for what triggers orgasm have emerged. Studies of brain imaging indicate increased activation at orgasm, compared to pre-orgasm, in the paraventricular nucleus of the hypothalamus, periaqueductal gray of the midbrain, hippocampus, and the cerebellum. Psychosocial factors commonly discussed in relation to female orgasmic ability include age, education, social class, religion, personality, and relationship issues. Findings from surveys and clinical reports suggest that orgasm problems are the second most frequently reported sexual problems in women. Cognitive-behavioral therapy for anorgasmia focuses on promoting changes in attitudes and sexually relevant thoughts, decreasing anxiety, and increasing orgasmic ability and satisfaction. To date there are no pharmacological agents proven to be beneficial beyond placebo in enhancing orgasmic function in women. |
Aesthetic Surgery of the Face: Anatomy of the aging face | The traditional approach to assessing the face is to consider the face in thirds (upper, middle, and lower thirds). While useful, this approach limits conceptualization, as it is not based on the function of the face. From a functional perspective, the face has an anterior aspect and a lateral aspect. The anterior face is highly evolved beyond the basic survival needs, specifically for communication and facial expression. In contrast, the lateral face predominantly covers the structures of mastication. A vertical line descending from the lateral orbital rim is the approximate division between the anterior and lateral zones of the face. Internally, a series of facial retaining ligaments are strategically located along this line to demarcate the anterior from the lateral face (Fig. 6.1). The mimetic muscles of the face are located in the superficial fascia of the anterior face, mostly around the eyes and the mouth. This highly mobile area of the face is designed to allow fine movement and is prone to develop laxity with aging. In contrast, the lateral face is relatively immobile as it overlies the structures to do with mastication, the temporalis, masseter, the parotid gland and its duct, all located deep to the deep fascia. The only superficial muscle in the lateral face is the platysma in the lower third, which extends to the level of the oral commissure. Importantly, the soft tissues of the anterior face are subdivided into two parts: that which overlies the skeleton, and the larger part that comprises the highly specialized sphincters overlying the bony cavities. Where the soft tissues overlie the orbital and oral cavities they are modified, as there is no deep fascia. |
Wireless system for remote monitoring of oxygen saturation and heart rate | This paper describes the realization of a wireless oxygen saturation and heart rate system for patient monitoring in a limited area. The proposed system will allow the automatic remote monitoring in hospitals, at home, at work, in real time, of persons with chronic illness, of elderly people, and of those having high medical risk. The system can be used for long-time continuous patient monitoring, as medical assistance of a chronic condition, as part of a diagnostic procedure, or recovery from an acute event. The blood oxygen saturation level (SpO2) and heart rate (HR) are continuously measured using commercially available pulse oximeters and then transferred to a central monitoring station via a wireless sensor network (WSN). The central monitoring station runs a patient monitor application that receives the SpO2 and HR from WSN, processes these values and activates the alarms when the results exceed the preset limits. A user-friendly Graphical User Interface was developed for the patient monitor application to display the received measurements from all monitored patients. A prototype of the system has been developed, implemented and tested. |
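On the receiving side, the central station's alarm logic reduces to a threshold check on each (SpO2, HR) sample. A minimal sketch, with the packet format and limit values as illustrative assumptions rather than the paper's protocol:

```python
# Preset alarm limits (illustrative values, not clinical recommendations).
LIMITS = {"spo2_min": 90, "hr_min": 50, "hr_max": 120}

def check_vitals(patient_id, spo2, hr, limits=LIMITS):
    """Return alarm messages for one sample received from the WSN."""
    alarms = []
    if spo2 < limits["spo2_min"]:
        alarms.append(f"patient {patient_id}: low SpO2 {spo2}%")
    if not limits["hr_min"] <= hr <= limits["hr_max"]:
        alarms.append(f"patient {patient_id}: heart rate {hr} bpm out of range")
    return alarms

for packet in [("P-03", 96, 72), ("P-07", 88, 131)]:   # samples from two patients
    for alarm in check_vitals(*packet):
        print("ALARM:", alarm)
```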
Supervised Opinion Aspect Extraction by Exploiting Past Extraction Results | One of the key tasks of sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. In this work, we focus on using supervised sequence labeling as the base approach to performing the task. Although several extraction methods using sequence labeling methods such as Conditional Random Fields (CRF) and Hidden Markov Models (HMM) have been proposed, we show that this supervised approach can be significantly improved by exploiting the idea of concept sharing across multiple domains. For example, "screen" is an aspect of iPhone, but iPhone is not the only product with a screen; many electronic devices have screens too. When "screen" appears in a review of a new domain (or product), it is likely to be an aspect too. Knowing this information enables us to do much better extraction in the new domain. This paper proposes a novel extraction method exploiting this idea in the context of supervised sequence labeling. Experimental results show that it produces markedly better results than without using the past information. |
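The cross-domain cue can be expressed as one extra feature for the sequence labeler: membership of the current token in a lexicon of aspects extracted in past domains. The sketch below assumes plain per-token feature dictionaries (the format consumed by CRF toolkits such as sklearn-crfsuite); the paper's actual feature set and model are much richer.

```python
past_aspects = {"screen", "battery", "keyboard"}   # mined from earlier domains

def token_features(tokens, i, lexicon=past_aspects):
    """Features for token i; the lexicon flag encodes shared-concept knowledge."""
    w = tokens[i].lower()
    return {
        "word": w,
        "is_title": tokens[i].istitle(),
        "prev": tokens[i - 1].lower() if i else "<s>",
        "in_past_aspect_lexicon": w in lexicon,    # the cross-domain feature
    }

sent = "The screen of this tablet is gorgeous".split()
print([token_features(sent, i)["in_past_aspect_lexicon"] for i in range(len(sent))])
```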
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization | We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent. Our approach, Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say, logits for ‘dog’ or even a caption) flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad-CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are more faithful to the underlying model, and (d) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show even non-attention based models can localize inputs. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. Our code is available at https://github.com/ramprs/grad-cam/ along with a demo on CloudCV [2] and video at youtu.be/COjUB9Izk6E. |
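For orientation, a minimal PyTorch sketch of the Grad-CAM computation as commonly formulated: the gradients of the target score are global-average-pooled into per-channel weights, and the weighted feature maps are summed and passed through a ReLU. The layer choice and input preprocessing are illustrative assumptions:

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights=None).eval()   # load pretrained weights in practice
feats, grads = {}, {}
conv = model.features[28]                   # last conv layer of VGG-16
conv.register_forward_hook(lambda m, i, o: feats.update(a=o))
conv.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)             # stand-in for a preprocessed image
model(x)[0].max().backward()                # backprop the top class score

alpha = grads["a"].mean(dim=(2, 3), keepdim=True)   # GAP of gradients
cam = F.relu((alpha * feats["a"]).sum(dim=1))       # weighted sum + ReLU
cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear")[0]
```

In practice the resulting map is normalized and overlaid on the input image to visualize the regions supporting the prediction.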
An integrated collision prediction and avoidance scheme for mobile robots in non-stationary environments | A formulation that makes possible the integration of the collision prediction and avoidance stages for mobile robots moving in general terrains containing moving obstacles is presented. A dynamic model of the mobile robot and the dynamic constraints are derived. Collision avoidance is guaranteed if the distance between the robot and a moving obstacle remains nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. A feedback control is developed, and local asymptotic stability is proved if the velocity of the moving obstacle is bounded. Simulation results verify the value of the proposed strategy. |
The Agency Cost Paradigm: The Good, the Bad, and the Ugly | In the “managerialist” world that preceded our present world—the shareholder value world—some corporate managers could, and did, help themselves when they should have been doing their jobs. They were bad agents, using their positions to get unwarranted leisure and unwarranted perquisites at the expense of their principals, whether the principals were seen as the corporation, its shareholders, or both. The modern agency cost paradigm has focused the attention of courts, directors, and scholars on this problem, in part by conceptualizing the duty of corporate managers as maximizing shareholder value. This paradigm has had a variety of effects: some good, some bad, and some ugly. As for the good, the agency cost paradigm focused on this problem of managerial enrichment, emphasizing to the bad agents a message that they should not be working for themselves, and set about looking for a solution. It provided a simple, clear benchmark that may quickly indicate when managers are performing badly. What about the bad? The pathologies of a laser-like focus on readily demonstrable—some would say short-term—shareholder value have become clear. Recent examples include takeovers and other transactions in which the principal motivations include reductions in research and development costs and tax savings through relocation to other jurisdictions. |
CLAM: Connection-less, Lightweight, and Multiway Communication Support for Distributed Computing | A number of factors motivate and favor the implementation of communication protocols in user-space. There is a particularly strong motivation for the provision of scalable, multiway and connectionless transport for distributed computing, multimedia, and conferencing applications. This is also true of high speed networking, where it is beneficial to keep the OS kernel out of the critical path in communication. User-space protocol implementations may hold the key to optimal functionality and performance. We describe the Connectionless, Lightweight and Multiway (CLAM) communications system, which provides efficient and scalable user-space support for distributed applications requiring multiple protocols. The system supports heterogeneous networked applications with irregular or asynchronous communication patterns and multimodal data. We focus on motivating and describing the CLAM architecture and present some experimental results that evaluate a specific protocol module within this architecture. |
Scanning micromirrors: An overview | An overview of the current state of the art in scanning micromirror technology for switching, imaging, and beam steering applications is presented. The requirements that drive the design and fabrication technology are covered. Electrostatic, electromagnetic, and magnetic actuation techniques are discussed, as well as the motivation for moving from parallel-plate to comb-drive configurations for large-diameter (mm-range) scanners. The suitability of surface micromachining, bulk micromachining, and silicon-on-insulator (SOI) micromachining technology is presented in the context of the length scale and performance for given scanner applications. |
Discovery and validation of potential urinary biomarkers for bladder cancer diagnosis using a pseudotargeted GC-MS metabolomics method | Bladder cancer (BC) is the second most prevalent malignancy in the urinary system and is associated with significant mortality; thus, there is an urgent need for novel noninvasive diagnostic biomarkers. A urinary pseudotargeted method based on gas chromatography-mass spectrometry was developed and validated for a BC metabolomics study. The method exhibited good repeatability, intraday and interday precision, linearity and metabolome coverage. A total of 76 differential metabolites were defined in the discovery sample set, 58 of which were verified using an independent validation urine set. The verified differential metabolites revealed that energy metabolism, anabolic metabolism and cell redox states were disordered in BC. Based on a binary logistic regression analysis, a four-biomarker panel was defined for the diagnosis of BC. The area under the receiver operating characteristic curve was 0.885 with 88.0% sensitivity and 85.7% specificity in the discovery set and 0.804 with 78.0% sensitivity and 70.3% specificity in the validation set. The combinatorial biomarker panel was also useful for the early diagnosis of BC. This approach can be used to discriminate non-muscle invasive and low-grade BCs from healthy controls with satisfactory sensitivity and specificity. The results show that the developed urinary metabolomics method can be employed to effectively screen noninvasive biomarkers. |
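A hedged sketch of the panel-building step as described: binary logistic regression over four candidate markers, evaluated by the area under the ROC curve. The data below are random placeholders, not the study's metabolite intensities:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))        # 4 candidate urinary metabolites (toy)
y = rng.integers(0, 2, size=100)     # 1 = bladder cancer, 0 = healthy control

panel = LogisticRegression().fit(X, y)        # the four-biomarker panel model
scores = panel.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, scores))       # paper reports 0.885 (discovery)
```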
Visual Attention driven by Convolutional Features | The understanding of where humans look in a scene is a problem of great interest in visual perception and computer vision. When eye-tracking devices are not a viable option, models of human attention can be used to predict fixations. In this paper we make two contributions. First, we present a model of visual attention that is based simply on deep convolutional neural networks trained for object classification tasks; a method for visualizing saliency maps is defined and evaluated on a saliency prediction task. Second, we integrate the information of these maps with a bottom-up differential model of eye movements to simulate visual attention scanpaths. Results on saliency prediction and scores of similarity with human scanpaths demonstrate the effectiveness of this model. |
Effects of innovation on employment in Latin America | This study examines the impact of process and product innovation on employment growth across four Latin American countries (Argentina, Chile, Costa Rica, and Uruguay) using micro data from innovation surveys. Specifically, we relate employment growth to process innovations and to the growth of sales due separately to innovative and unchanged products. Results show that compensation effects are prevalent, and the introduction of new products is associated with employment growth at the firm level. Specifically, we find that for manufacturing firms as a whole, the introduction of process innovations affects employment growth only in the case of Chile. At the same time, we observe no evidence of displacement effects due to the introduction of product innovations. In fact, the observed compensation effects resulting from the introduction of new products imply, in turn, employment growth even when the replacement of old products is taken into account. |
DeepSim: deep learning code functional similarity | Measuring code similarity is fundamental for many software engineering tasks, e.g., code search, refactoring and reuse. However, most existing techniques focus on code syntactical similarity only, while measuring code functional similarity remains a challenging problem. In this paper, we propose a novel approach that encodes code control flow and data flow into a semantic matrix in which each element is a high-dimensional sparse binary feature vector, and we design a new deep learning model that measures code functional similarity based on this representation. By concatenating hidden representations learned from a code pair, this new model transforms the problem of detecting functionally similar code into binary classification, which can effectively learn patterns between functionally similar code with very different syntax.
We have implemented our approach, DeepSim, for Java programs and evaluated its recall, precision and time performance on two large datasets of functionally similar code. The experimental results show that DeepSim significantly outperforms existing state-of-the-art techniques, such as DECKARD, RtvNN, CDLH, and two baseline deep neural networks models. |
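A simplified sketch of the classification stage described above: encode each function's pooled semantic-matrix features, concatenate the pair's hidden representations, and classify the pair as functionally similar or not. All dimensions are assumptions; DeepSim's actual architecture is more elaborate:

```python
import torch
import torch.nn as nn

class PairClassifier(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * hidden, 32), nn.ReLU(),
                                  nn.Linear(32, 2))  # similar / not similar

    def forward(self, a, b):
        ha, hb = self.encoder(a), self.encoder(b)
        return self.head(torch.cat([ha, hb], dim=-1))  # concatenated pair

model = PairClassifier()
a = torch.randn(8, 128)   # pooled semantic-matrix features, code A (toy)
b = torch.randn(8, 128)   # pooled semantic-matrix features, code B (toy)
logits = model(a, b)      # train with nn.CrossEntropyLoss on pair labels
```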
Modeling Event Importance for Ranking Daily News Events | We deal with the problem of ranking news events on a daily basis for large news corpora, an essential building block for news aggregation. News ranking has been addressed in the literature before, but with individual news articles as the unit of ranking. However, estimating event importance accurately requires models to quantify current-day event importance as well as its significance in the historical context. Consequently, in this paper we show that a cluster of news articles representing an event is a better unit of ranking, as it provides an improved estimation of popularity, source diversity and authority cues. In addition, events facilitate quantifying their historical significance by linking them with long-running topics and recent chains of events. Our main contribution in this paper is to provide effective models for improved news event ranking.
To this end, we propose novel event mining and feature generation approaches for improving estimates of event importance. Finally, we conduct an extensive evaluation of our approaches on two large real-world news corpora, each of which spans more than a year with a large volume of up to tens of thousands of daily news articles. Our evaluations are large-scale and based on a clean, human-curated ground truth from the Wikipedia Current Events Portal. Experimental comparison with a state-of-the-art news ranking technique based on language models demonstrates the effectiveness of our approach. |
Customer Churn Prediction Based on SVM-RFE | As markets become increasingly saturated, churn prediction and management have become of great concern to many industries. A company wishing to retain its customers needs to be able to predict those who are likely to churn and make those customers the focus of customer retention efforts. Today, customer data is characterized by large sample sizes, high dimensionality and substantial noise. In response to the limitations of existing feature selection methods in churn prediction, we introduce and experimentally evaluate the support vector machine recursive feature elimination (SVM-RFE) attribute selection algorithm. It can identify the key attributes of customer churn, rule out related and redundant attributes, and reduce the dimensionality of the data. More importantly, this algorithm is coupled to the classification learning algorithm that follows it, so it can be better integrated into churn prediction. The empirical evaluation results suggest that the proposed feature selection algorithm extracts fewer key attributes and exhibits more satisfactory predictive effectiveness than three comparable attribute selection algorithms. |
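A minimal scikit-learn sketch of SVM-RFE as described: recursively eliminate the attributes with the smallest linear-SVM weights. The dataset and the number of retained attributes are placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Stand-in for a high-dimensional, noisy churn dataset.
X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           random_state=0)

# RFE repeatedly fits the linear SVM and drops the lowest-weight attribute.
selector = RFE(SVC(kernel="linear"), n_features_to_select=8, step=1)
selector.fit(X, y)
print("kept attribute indices:",
      [i for i, keep in enumerate(selector.support_) if keep])
```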
Efficacy and Safety of Infliximab Therapy and Predictors of Response in Korean Patients with Crohn's Disease: A Nationwide, Multicenter Study | PURPOSE
Infliximab is currently used for the treatment of active Crohn's disease (CD). We aimed to assess the efficacy and safety of infliximab therapy and to determine the predictors of response in Korean patients with CD.
MATERIALS AND METHODS
A total of 317 patients who received at least one infliximab infusion for active luminal CD (n=198), fistulizing CD (n=86), or both (n=33) were retrospectively reviewed at 29 Korean referral centers. Clinical outcomes of induction and maintenance therapy with infliximab, predictors of response, and adverse events were evaluated.
RESULTS
In patients with luminal CD, the rates of clinical response and remission at week 14 were 89.2% and 60.0%, respectively. Male gender and isolated colonic disease were associated with higher remission rates at week 14. In week-14 responders, the probabilities of sustained response and remission were 96.2% and 93.3% at week 30 and 88.0% and 77.0% at week 54, respectively. In patients with fistulizing CD, clinical response and remission were observed in 85.0% and 56.2% of patients, respectively, at week 14. In week-14 responders, the probabilities of sustained response and remission were 94.0% and 97.1%, respectively, at both week 30 and week 54. Thirty-nine patients (12.3%) experienced adverse events related to infliximab. Serious adverse events developed in 19 (6.0%) patients including seven cases of active pulmonary tuberculosis.
CONCLUSION
Infliximab induction and maintenance therapy are effective and well tolerated in Korean patients with luminal and fistulizing CD. However, clinicians must be aware of the risk of rare yet critical adverse events. |
Multiplexed protein detection by proximity ligation for cancer biomarker validation | We present a proximity ligation–based multiplexed protein detection procedure in which several selected proteins can be detected via unique nucleic-acid identifiers and subsequently quantified by real-time PCR. The assay requires a 1-μl sample, has low-femtomolar sensitivity as well as five-log linear range and allows for modular multiplexing without cross-reactivity. The procedure can use a single polyclonal antibody batch for each target protein, simplifying affinity-reagent creation for new biomarker candidates. |
A placebo-controlled investigation of synaesthesia-like experiences under LSD | The induction of synaesthesia in non-synaesthetes has the potential to illuminate the mechanisms that contribute to the development of this condition and the shaping of its phenomenology. Previous research suggests that lysergic acid diethylamide (LSD) reliably induces synaesthesia-like experiences in non-synaesthetes. However, these studies suffer from a number of methodological limitations including lack of a placebo control and the absence of rigorous measures used to test established criteria for genuine synaesthesia. Here we report a pilot study that aimed to circumvent these limitations. We conducted a within-groups placebo-controlled investigation of the impact of LSD on colour experiences in response to standardized graphemes and sounds and the consistency and specificity of grapheme- and sound-colour associations. Participants reported more spontaneous synaesthesia-like experiences under LSD, relative to placebo, but did not differ across conditions in colour experiences in response to inducers, consistency of stimulus-colour associations, or in inducer specificity. Further analyses suggest that individual differences in a number of these effects were associated with the propensity to experience states of absorption in one's daily life. Although preliminary, the present study suggests that LSD-induced synaesthesia-like experiences do not exhibit consistency or inducer-specificity and thus do not meet two widely established criteria for genuine synaesthesia. |
Personalized Recommender Systems Integrating Social Tags and Item Taxonomy | The social tags in web 2.0 are becoming another important information source to profile users' interests and preferences to make personalized recommendations. To solve the problem of low information sharing caused by the free-style vocabulary of tags and the long tails of the distribution of tags and items, this paper proposes an approach to integrate the social tags given by users and the item taxonomy with standard vocabulary and hierarchical structure provided by experts to make personalized recommendations. The experimental results show that the proposed approach can effectively improve the information sharing and recommendation accuracy. |
Efficient deep models for monocular road segmentation | This paper addresses the problem of road scene segmentation in conventional RGB images by exploiting recent advances in semantic segmentation via convolutional neural networks (CNNs). Segmentation networks are very large and do not currently run at interactive frame rates. To make this technique applicable to robotics, we propose several architecture refinements that provide the best trade-off between segmentation quality and runtime. This is achieved by a new mapping between classes and filters at the expansion side of the network. The network is trained end-to-end and yields precise road/lane predictions at the original input resolution in roughly 50 ms. Compared to the state of the art, the network achieves top accuracies on the KITTI dataset for road and lane segmentation while providing a 20× speed-up. We demonstrate that the improved efficiency is not merely due to the simplicity of the road segmentation task: also on segmentation datasets with larger scene complexity, accuracy does not suffer from the large speed-up. |
Advanced System-on-Package (SOP) Multilayer Architectures for RF/Wireless Systems up to Millimeter-Wave Frequency Bands | This paper reviews the development of advanced System-on-Package (SOP) architectures for compact and low-cost RF/wireless systems. A compact stacked patch antenna adopting soft-and-hard surface structures and cavity resonator filters using an inter-resonance coupling mechanism for V-band applications are presented. A novel ultra-compact 3D integration technology is proposed and utilized for the implementation of a Ku-band VCO module. The high-Q-factor inductors fabricated on the Liquid Crystal Polymer based multilayer substrate demonstrate superior performance compared with conventional organic packages. |
A 10.4 mW Electrical Impedance Tomography SoC for Portable Real-Time Lung Ventilation Monitoring System | An electrical impedance tomography (EIT) SoC is proposed for a portable real-time lung ventilation monitoring system. The proposed EIT SoC is integrated into a belt-type fabric system with 32 electrodes and can show dynamic images of lung ventilation on mobile devices. To obtain high-fidelity images, a T-switch is adopted for off-isolation between electrodes of more than 60 dB, and I/Q signal generation and demodulation can obtain both the real and imaginary parts of the images. For real-time imaging, an on-chip fast demodulation scheme is proposed, which also reduces the speed requirements of the ADC for low power consumption. The proposed EIT SoC of 5.0 mm × 5.0 mm is fabricated in 0.18 μm CMOS technology and consumes only 10.4 mW with a 1.8 V supply. As a result, EIT images were reconstructed with 97.3% accuracy, and real-time lung images at up to 20 frames/s can be displayed on mobile devices. |
Classes in Courage. | A Massachusetts school focuses on a character trait rarely spoken of in an academic context: courage. |
Software-Based Resolver-to-Digital Conversion Using a DSP | A simple and cost-effective software-based resolver-to-digital converter using a digital signal processor is presented. The proposed method incorporates software generation of the resolver carrier using a digital filter for synchronous demodulation of the resolver outputs in such a way that there is a substantial savings on hardware like the costly carrier oscillator and associated digital and analog circuits for amplitude demodulators. In addition, because the method does not cause any time delay, the dynamics of the servo control using the scheme are not affected. Furthermore, the method enables the determination of the angle for a complete 360° shaft rotation with reasonable accuracy using a lookup table that contains entries of only up to 45°. Computer simulations and experimental results demonstrate the effectiveness and applicability of the proposed scheme. |
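A rough numerical sketch of the scheme's core idea: synchronously demodulate the amplitude-modulated resolver outputs against a software-generated carrier, then recover the shaft angle. For brevity, atan2 replaces the paper's 45° lookup-table method, and the sample and carrier rates are assumptions:

```python
import numpy as np

fs, fc = 100_000, 5_000              # sample rate, carrier frequency (assumed)
t = np.arange(0, 0.01, 1 / fs)       # 10 ms of samples = 50 carrier cycles
theta = np.deg2rad(73.0)             # true shaft angle
carrier = np.sin(2 * np.pi * fc * t) # software-generated carrier

s_sin = np.sin(theta) * carrier      # resolver sine output (amplitude-modulated)
s_cos = np.cos(theta) * carrier      # resolver cosine output

# Synchronous demodulation: multiply by the carrier and average (low-pass).
a_sin = 2 * np.mean(s_sin * carrier)
a_cos = 2 * np.mean(s_cos * carrier)
print(np.rad2deg(np.arctan2(a_sin, a_cos)))   # ~73 degrees
```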
Influence of Various Polymorphic Variants of Cytochrome P450 Oxidoreductase (POR) on Drug Metabolic Activity of CYP3A4 and CYP2B6 | Cytochrome P450 oxidoreductase (POR) is known as the sole electron donor in the metabolism of drugs by cytochrome P450 (CYP) enzymes in humans. However, little is known about the effect of polymorphic variants of POR on the drug metabolic activities of CYP3A4 and CYP2B6. To better understand the mechanism by which CYP activity is affected by polymorphic variants of POR, six full-length mutants of POR (Y181D, A287P, K49N, A115V, S244C and G413S) were designed and co-expressed with CYP3A4 and CYP2B6 in baculovirus-Sf9 insect cells to determine their kinetic parameters. Surprisingly, the mutants Y181D and A287P in POR completely inhibited CYP3A4 activity with testosterone, while the catalytic activity of CYP2B6 with bupropion was reduced to approximately 70% of wild-type activity by the Y181D and A287P mutations. In addition, the mutant K49N of POR increased the CLint (Vmax/Km) of CYP3A4 by more than 31% relative to wild-type, while it reduced the catalytic efficiency of CYP2B6 to 74% of wild-type. Moreover, the CLint values of CYP3A4 with the POR mutants A115V and G413S were increased by up to 36% and 65% relative to wild-type, respectively; however, these two mutants had no appreciable effect on the activities of CYP2B6. In conclusion, the extent to which the catalytic activities of the CYPs were altered depended not only on the specific POR mutation but also on the isoform of the CYP redox partner. We therefore propose that patients carrying POR mutations be carefully monitored for CYP3A4 and CYP2B6 activity with respect to prescribed medication. |
Large scale visual recommendations from street fashion images | We describe a completely automated large scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose two classes of data driven models, the Deterministic Fashion Recommenders (DFR) and Stochastic Fashion Recommenders (SFR), for solving this problem. We analyze the relative merits and pitfalls of these algorithms through extensive experimentation on a large-scale data set and baseline them against existing ideas from color science. We also illustrate key fashion insights learned through these experiments and show how they can be employed to design better recommendation systems. The industrial applicability of the proposed models is in the context of mobile fashion shopping. Finally, we also outline a large-scale annotated data set of fashion images (Fashion-136K) that can be exploited for future research in data driven visual fashion. |
Economic analysis of the project of warehouse centralization in the paper production company | Modern business conditions require constant rationalization of all activities and processes that occur in the logistics system. One of the preconditions for ensuring competitiveness in the market is managing one's own performance. This paper presents research relating to a warehouse centralization project in a paper production company. Currently, each production facility has its own warehouse, which the decomposition analysis showed to be a poor solution. Any such project requires certain investment funds, in this case over half a million EUR, because this is a large and complex logistics company that employs about one thousand workers. The focus of this paper is an economic analysis of the warehouse centralization project. The new centralized system gives better results in comparison with the current decentralized system. Considering the savings realized by switching to a centralized warehouse system and the required investment funds, the repayment period is slightly less than five years, which is a relatively short period. |
A Correspondence Between Random Neural Networks and Statistical Field Theory | A number of recent papers have provided evidence that practical design questions about neural networks may be tackled theoretically by studying the behavior of random networks. However, until now the tools available for analyzing random neural networks have been relatively ad hoc. In this work, we show that the distribution of pre-activations in random neural networks can be exactly mapped onto lattice models in statistical physics. We argue that several previous investigations of stochastic networks actually studied a particular factorial approximation to the full lattice model. For random linear networks and random rectified linear networks we show that the corresponding lattice models in the wide network limit may be systematically approximated by a Gaussian distribution with covariance between the layers of the network. In each case, the approximate distribution can be diagonalized by Fourier transformation. We show that this approximation accurately describes the results of numerical simulations of wide random neural networks. Finally, we demonstrate that in each case the large scale behavior of the random networks can be approximated by an effective field theory. |
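A quick numerical check of the stated result for random linear networks, with width, depth, and weight scaling chosen arbitrarily: pre-activations in the wide limit should look Gaussian, so the excess kurtosis of a unit's pre-activation distribution should be near zero:

```python
import numpy as np

width, depth, n_inputs = 2000, 5, 1000
rng = np.random.default_rng(0)
x = rng.normal(size=(n_inputs, width))            # random inputs
for _ in range(depth):
    W = rng.normal(scale=1 / np.sqrt(width), size=(width, width))
    x = x @ W                                     # linear network, no nonlinearity

z = x[:, 0]                                       # one pre-activation unit
excess_kurt = (((z - z.mean()) / z.std()) ** 4).mean() - 3
print(f"mean {z.mean():.3f}  var {z.var():.3f}  excess kurtosis {excess_kurt:.3f}")
# Near-zero excess kurtosis is consistent with the Gaussian approximation.
```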
Bradykinin contributes to the vasodilator effects of chronic angiotensin-converting enzyme inhibition in patients with heart failure. | BACKGROUND
Bradykinin, an endogenous vasodilator peptide, is metabolized by ACE. The aims of the present study were to determine the doses of B9340, a bradykinin receptor antagonist, that inhibit vasodilatation to exogenous bradykinin and to assess the contribution of bradykinin to the maintenance of basal vascular tone in patients with heart failure receiving chronic ACE inhibitor therapy.
METHODS AND RESULTS
Forearm blood flow (FBF) was measured using bilateral venous occlusion plethysmography. On three occasions in a double-blind randomized manner, 8 healthy volunteers received intrabrachial infusions of placebo or B9340 (at 4.5 and 13.5 nmol/min). On each occasion, placebo or B9340 was coinfused with bradykinin (30 to 3000 pmol/min) and substance P (4 to 16 pmol/min). B9340 caused no change in basal FBF but produced dose-dependent inhibition of the vasodilatation to bradykinin (P<0.001) but not substance P. The effects of bradykinin antagonism were studied in 17 patients with NYHA grade II through IV heart failure maintained on chronic ACE inhibitor therapy. Incremental doses of B9340, but not HOE-140, produced a dose-dependent vasoconstriction (P=0.01). After withdrawal of ACE inhibitor therapy, B9340 produced no significant change in forearm blood flow. After reinstitution of therapy, B9340 again resulted in vasoconstriction (P<0.03).
CONCLUSIONS
B9340 is a potent and selective inhibitor of bradykinin-induced vasodilatation. Bradykinin does not contribute to the maintenance of basal peripheral arteriolar tone in healthy humans or patients with heart failure but contributes to the vasodilatation associated with chronic ACE inhibitor therapy in patients with heart failure via the B(1) receptor. |
Multilabel random walker image segmentation using prior models | The recently introduced random walker segmentation algorithm by Grady and Funka-Lea (2004) has been shown to have desirable theoretical properties and to perform well on a wide variety of images in practice. However, this algorithm requires user-specified labels and produces a segmentation where each segment is connected to a labeled pixel. We show that incorporation of a nonparametric probability density model allows for an extended random walker algorithm that can locate disconnected objects and does not require user-specified labels. Finally, we show that this formulation leads to a deep connection with the popular graph cuts method by Boykov et al. (2001) and Wu and Leahy (1993). |
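For context, a usage sketch of the baseline algorithm being extended, via scikit-image's implementation; note it still requires the user-specified seed labels that the paper's density-prior extension aims to remove:

```python
import numpy as np
from skimage.segmentation import random_walker

# Toy image: a bright square on a noisy dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
img += 0.2 * np.random.default_rng(0).normal(size=img.shape)

labels = np.zeros(img.shape, dtype=int)   # 0 = unlabeled pixels
labels[4, 4] = 1                          # background seed
labels[32, 32] = 2                        # object seed

seg = random_walker(img, labels, beta=100)   # label map, values in {1, 2}
```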
UK guidelines for the management of soft tissue sarcomas | Soft tissue sarcomas (STS) are rare tumours arising in mesenchymal tissues, and can occur almost anywhere in the body. Their rarity, and the heterogeneity of subtype and location, means that developing evidence-based guidelines is complicated by the limitations of the data available. However, this makes it more important that STS are managed by teams expert in such cases, to ensure consistent and optimal treatment, as well as recruitment to clinical trials and the ongoing accumulation of further data and knowledge. The development of appropriate guidance, by an experienced panel referring to the evidence available, is therefore a useful foundation on which to build progress in the field. These guidelines are an update of the previous version published in 2010 (Grimer et al. in Sarcoma 2010:506182, 2010). The original guidelines were drawn up following a consensus meeting of UK sarcoma specialists convened under the auspices of the British Sarcoma Group (BSG) and were intended to provide a framework for the multidisciplinary care of patients with soft tissue sarcomas. This current version has been updated and amended with reference to other European and US guidance. There are specific recommendations for the management of selected subtypes of disease including retroperitoneal and uterine sarcomas, as well as aggressive fibromatosis (desmoid tumours) and other borderline tumours commonly managed by sarcoma services. An important aim in sarcoma management is early diagnosis and prompt referral. In the UK, any patient with a suspected soft tissue sarcoma should be referred to one of the specialist regional soft tissue sarcoma services, to be managed by a specialist sarcoma multidisciplinary team. Once the diagnosis has been confirmed using appropriate imaging, plus a biopsy, the main modality of management is usually surgical excision performed by a specialist surgeon. In tumours at higher risk of recurrence or metastasis, pre- or post-operative radiotherapy should be considered. Systemic anti-cancer therapy (SACT) may be utilized in some cases where the histological subtype is considered more sensitive to systemic treatment. Regular follow-up is recommended to assess local control, development of metastatic disease, and any late-effects of treatment. For local recurrence, and more rarely in selected cases of metastatic disease, surgical resection would be considered. Treatment for metastases may include radiotherapy, or systemic therapy guided by the sarcoma subtype. In some cases, symptom control and palliative care support alone will be appropriate. |
BusyBody: creating and fielding personalized models of the cost of interruption | Interest has been growing in opportunities to build and deploy statistical models that can infer a computer user's current interruptability from computer activity and relevant contextual information. We describe a system that intermittently asks users to assess their perceived interruptability during a training phase and that builds decision-theoretic models with the ability to predict the cost of interrupting the user. The models are used at run-time to compute the expected cost of interruptions, providing a mediator for incoming notifications, based on a consideration of a user's current and recent history of computer activity, meeting status, location, time of day, and whether a conversation is detected. |
Convolutional Neural Networks for Fashion Classification and Object Detection | Fashion classification encompasses the identification of clothing items in an image. The field has applications in social media, e-commerce, and criminal law. In our work, we focus on four tasks within the fashion classification umbrella: (1) multiclass classification of clothing type; (2) clothing attribute classification; (3) clothing retrieval of nearest neighbors; and (4) clothing object detection. We report accuracy measurements for clothing style classification (50.2%) and clothing attribute classification (74.5%) that outperform baselines in the literature for the associated datasets. We additionally report promising qualitative results for our clothing retrieval and clothing object detection tasks. |
P2P Lending Platform Risk Observing Method Based on Short-Time Multi-Source Regression Algorithm | Peer-to-Peer (P2P) lending is a popular form of lending in the contemporary Internet finance field. Compared with traditional bank lending, annual risk evaluation is no longer applicable to P2P platforms because of their short life cycles and large numbers of transaction records. This paper presents a method to dynamically evaluate the operational risk of P2P platforms based on a short-time multi-source regression algorithm. Dynamic time windows are used to split up the lending records, and a linear regression method is used to quantify the dynamic risk index of P2P platforms. The experimental results show that the proposed method can reflect the visible operational situation of the platforms and give investors dynamic risk assessments and effective alerts about the platforms. |
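A hedged sketch of the windowed-regression idea: split a platform's transaction series into short time windows and fit a linear trend per window as a dynamic risk signal. The window length and the slope-to-risk mapping are assumptions, not the paper's exact algorithm:

```python
import numpy as np

# Toy daily lending volume series for one platform.
volume = np.random.default_rng(1).normal(100, 10, size=365).cumsum()

window = 30                    # days per dynamic time window (assumed)
risk_index = []
for start in range(0, len(volume) - window, window):
    seg = volume[start:start + window]
    slope = np.polyfit(np.arange(window), seg, deg=1)[0]  # per-window trend
    risk_index.append(-slope)  # falling volume -> higher risk (assumed mapping)

print(risk_index[:3])
```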
Large Scale Semi-Supervised Object Detection Using Visual and Semantic Knowledge Transfer | Deep CNN-based object detection systems have achieved remarkable success on several large-scale object detection benchmarks. However, training such detectors requires a large number of labeled bounding boxes, which are more difficult to obtain than image-level annotations. Previous work addresses this issue by transforming image-level classifiers into object detectors. This is done by modeling the differences between the two on categories with both image-level and bounding box annotations, and transferring this information to convert classifiers to detectors for categories without bounding box annotations. We improve this previous work by incorporating knowledge about object similarities from visual and semantic domains during the transfer process. The intuition behind our proposed method is that visually and semantically similar categories should exhibit more common transferable properties than dissimilar categories: e.g., a better cat detector results from transferring the differences between a dog classifier and a dog detector than from transferring those of the violin class. Experimental results on the challenging ILSVRC2013 detection dataset demonstrate that each of our proposed object similarity based knowledge transfer methods outperforms the baseline methods. We found strong evidence that visual similarity and semantic relatedness are complementary for the task, and when combined notably improve detection, achieving state-of-the-art detection performance in a semi-supervised setting. |
Learning Features by Watching Objects Move | This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as pseudo ground truth to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed pretext tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce. |
Engineering Sustainable Blockchain Applications | Blockchain technology has attracted attention as an emerging paradigm for business collaboration. Blockchain's consensus mechanisms allow partners to cooperate in a business network. However, many applications reported in the literature present merely a proof of concept from an engineering perspective. An industrialization of blockchain requires an engineering framework that assures the sustainability of the application and, in particular, of its network partnerships; that is, each participant has to act as an active peer in the network rather than being a mere consumer with a wallet for participation in the blockchain. This paper presents the skeleton of such an engineering framework, starting with an ideation of partnerships and collaboration patterns to clarify the incentives for participation, via business model design for sustainable network operations, towards the selection of an implementation platform for the re-engineered business processes. Moreover, an initial version of an interactive tool for community-oriented capturing of know-how about the characteristics of blockchain platforms is presented. |
Identification of Invariant Sensorimotor Structures as a Prerequisite for the Discovery of Objects | Perceiving the surrounding environment in terms of objects is useful for any general purpose intelligent agent. In this paper, we investigate a fundamental mechanism making object perception possible, namely the identification of spatio-temporally invariant structures in the sensorimotor experience of an agent. We take inspiration from the Sensorimotor Contingencies Theory to define a computational model of this mechanism through a sensorimotor, unsupervised and predictive approach. Our model is based on processing the unsupervised interaction of an artificial agent with its environment. We show how spatio-temporally invariant structures in the environment induce regularities in the sensorimotor experience of an agent, and how this agent, while building a predictive model of its sensorimotor experience, can capture them as densely connected subgraphs in a graph of sensory states connected by motor commands. Our approach is focused on elementary mechanisms, and is illustrated with a set of simple experiments in which an agent interacts with an environment. We show how the agent can build an internal model of moving but spatio-temporally invariant structures by performing a Spectral Clustering of the graph modeling its overall sensorimotor experiences. We systematically examine properties of the model, shedding light more globally on the specificities of the paradigm with respect to methods based on the supervised processing of collections of static images. |
Trust Evaluation in Online Social Networks Using Generalized Network Flow | In online social networks (OSNs), to evaluate trust from one user to another indirectly connected user, the trust evidence in the trusted paths (i.e., paths built through intermediate trustful users) should be carefully treated. Some paths may overlap with each other, leading to a unique challenge of path dependence, i.e., how to aggregate the trust values of multiple dependent trusted paths. OSNs bear the characteristic of high clustering, which makes the path dependence phenomenon common. Another challenge is trust decay through propagation, i.e., how to propagate trust along a trusted path, considering the possible decay in each node. We analyze the similarity between trust propagation and network flow, and convert a trust evaluation task with path dependence and trust decay into a generalized network flow problem. We propose a modified flow-based trust evaluation scheme GFTrust, in which we address path dependence using network flow, and model trust decay with the leakage associated with each node. Experimental results, with the real social network data sets of Epinions and Advogato, demonstrate that GFTrust can predict trust in OSNs with a high accuracy, and verify its preferable properties. |
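A toy sketch of the flow view of trust on an acyclic graph: flow is propagated along weighted trust edges while each intermediate node leaks a fraction of it, modeling trust decay. This simplification omits GFTrust's generalized-flow machinery and its handling of dependent paths; the graph, weights, and leakage values are all illustrative:

```python
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([("s", "a", 0.9), ("a", "t", 0.8),
                           ("s", "b", 0.6), ("b", "t", 0.7)])
leak = {"a": 0.1, "b": 0.3}   # fraction of incoming flow lost at each node

def trust(u, target, flow=1.0):
    """Propagate flow from u toward target on an acyclic trust graph."""
    if u == target:
        return flow
    total = 0.0
    for _, v, w in G.out_edges(u, data="weight"):
        passed = flow * w * (1 - leak.get(v, 0.0))  # edge trust + node leakage
        total += trust(v, target, passed)
    return min(total, 1.0)                          # trust values capped at 1

print(trust("s", "t"))   # aggregated trust from s to t over both paths
```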
Manifold differential evolution (MDE): a global optimization method for geodesic centroidal voronoi tessellations on meshes | Computing centroidal Voronoi tessellations (CVT) has many applications in computer graphics. The existing methods, such as the Lloyd algorithm and the quasi-Newton solver, are efficient and easy to implement; however, they compute only the local optimal solutions due to the highly non-linear nature of the CVT energy. This paper presents a novel method, called manifold differential evolution (MDE), for computing globally optimal geodesic CVT energy on triangle meshes. Formulating the mutation operator using discrete geodesics, MDE naturally extends the powerful differential evolution framework from Euclidean spaces to manifold domains. Under mild assumptions, we show that MDE has a provable probabilistic convergence to the global optimum. Experiments on a wide range of 3D models show that MDE consistently outperforms the existing methods by producing results with lower energy. Thanks to its intrinsic and global nature, MDE is insensitive to initialization and mesh tessellation. Moreover, it is able to handle multiply-connected Voronoi cells, which are challenging to the existing geodesic CVT methods. |
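For orientation, the Euclidean differential evolution that MDE generalizes to manifold domains, via SciPy's stock implementation. The toy objective stands in for the geodesic CVT energy, and the mutation here is ordinary vector arithmetic rather than the discrete geodesics MDE uses:

```python
from scipy.optimize import differential_evolution

def energy(x):
    # Toy stand-in for the CVT energy; any non-convex objective works here.
    return (x[0] - 1) ** 2 + (x[1] + 2) ** 2

res = differential_evolution(energy, bounds=[(-5, 5), (-5, 5)], seed=0)
print(res.x)   # ~[1, -2], the global optimum of the toy objective
```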
Evidence of distinct profiles of Posttraumatic Stress Disorder (PTSD) and Complex Posttraumatic Stress Disorder (CPTSD) based on the new ICD-11 Trauma Questionnaire (ICD-TQ). | BACKGROUND
The WHO International Classification of Diseases, 11th version (ICD-11), has proposed two related diagnoses following exposure to traumatic events: Posttraumatic Stress Disorder (PTSD) and Complex PTSD (CPTSD). We set out to explore whether the newly developed ICD-11 Trauma Questionnaire (ICD-TQ) can distinguish between classes of individuals according to the PTSD and CPTSD symptom profiles as per ICD-11 proposals based on latent class analysis. We also hypothesized that the CPTSD class would report more frequent and a greater number of different types of childhood trauma as well as higher levels of functional impairment.
METHODS
Participants in this study were a sample of individuals who were referred for psychological therapy to a National Health Service (NHS) trauma centre in Scotland (N=193). Participants completed the ICD-TQ as well as measures of life events and functioning.
RESULTS
Overall, results indicate that using the newly developed ICD-TQ, two subgroups of treatment-seeking individuals could be empirically distinguished based on different patterns of symptom endorsement: a small group high in PTSD symptoms only and a larger group high in CPTSD symptoms. In addition, CPTSD was more strongly associated with more frequent and a greater accumulation of different types of childhood traumatic experiences and with greater functional impairment.
LIMITATIONS
The sample predominantly consisted of people who had experienced childhood psychological trauma or had been multiply traumatised in childhood and adulthood.
CONCLUSIONS
CPTSD is highly prevalent in treatment seeking populations who have been multiply traumatised in childhood and adulthood and appropriate interventions should now be developed to aid recovery from this debilitating condition. |
Indications for Early Postoperative Intraperitoneal Chemotherapy of Advanced Gastric Cancer: Results of a Prospective Randomized Trial | Previous analysis of this prospective randomized trial and meta-analysis of published randomized trials of adjuvant intraperitoneal chemotherapy demonstrated improved survival in patients with advanced gastric cancer. Simple criteria applicable at the time of surgery for patient selection were sought in this analysis. From 1990 to 1995 a series of 248 patients with biopsy-proven gastric cancer were randomized intraoperatively to receive early postoperative intraperitoneal mitomycin C and 5-fluorouracil (125 patients) versus surgery only (123 patients). Gastric resection plus early postoperative intraperitoneal chemotherapy showed improved overall survival compared to surgery only (54% and 38%, respectively; p= 0.0278). There were statistically significant differences in patients with stage III (57% and 23%, respectively; p= 0.0024) and in those with stage IV (28% and 5%, respectively; p= 0.0098) gastric cancer. The improvement in survival rate was statistically significant for the subgroup of patients with gross serosal invasion (52% and 25%, respectively; p= 0.0004) and patients with lymph node metastasis (46% and 22%, respectively; p= 0.0027). The surgeons' impression about lymph node status was unreliable, but assessment of serosal invasion was accurate in 80% of cases. Gross serosal invasion with or without frozen section evaluation of lymph nodes can be used as the major selection criteria for early postoperative intraperitoneal chemotherapy of advanced gastric cancer. |
Advance directives in the cardiac care unit. | BACKGROUND
Despite effective therapies, mortality for many cardiovascular diseases remains higher than for many cancers and is difficult to predict. Guidelines recommend discussing advance directives (AD), including living wills and durable powers of attorney, with heart failure patients. The Patient Self-Determination Act mandates such discussions with all hospitalized patients. Little data are available on AD prevalence in patients with serious cardiac disease.
METHODS
Patients admitted to a cardiac care unit (CCU) were surveyed regarding demographics, medical history, prevalence of AD, and interest in obtaining more information about AD. Histories of life-threatening cardiac diagnoses were tabulated. Prevalence of AD and interest in obtaining more information about AD were obtained via chart review from patients on an oncology (ONC) floor at the same hospital.
RESULTS
One hundred twelve CCU (average age 58 ± 16 years, 47 women) and 105 ONC (average age 58 ± 14 years, 32 women) patients were enrolled. Prevalence of AD was not different between CCU and ONC patients (26% vs 31%, P = .37). Among CCU patients with prior hospitalizations but no AD, 21 of 64 did not recall being asked about AD. Cardiac care unit patients with heart failure and pulmonary hypertension were more likely to report being asked about AD in the past (39 of 54, P = .03 and 7 of 9, P = .008, respectively), but only heart failure patients were more likely to want more information about AD (P = .005). Of patients without AD, 83% from CCU and 18% from ONC wanted more information on AD (P < .001).
CONCLUSIONS
Prevalence of AD in the CCU was low, and many patients did not recall prior AD discussions. The CCU patients without AD were more likely to want information about AD than the ONC patients. A renewed emphasis on AD discussions with cardiovascular patients is needed and would be welcomed. Advance directives should be emphasized in cardiovascular training programs. |
Effects of gemfibrozil or simvastatin on apolipoprotein-B-containing lipoproteins, apolipoprotein-CIII and lipoprotein(a) in familial combined hyperlipidaemia. | BACKGROUND
Familial combined hyperlipidaemia (FCH), characterized by elevated very-low-density lipoprotein (VLDL) and/or low-density lipoprotein (LDL), is associated with an increased prevalence of premature cardiovascular disease. Therefore, lipid-lowering is frequently indicated.
METHODS
We evaluated in a parallel, double-blind randomized fashion the effect of gemfibrozil (1200 mg/day) (n = 40) or simvastatin (20 mg/day) (n = 41) on lipids, apolipoprotein-B (apo-B)-containing lipoproteins, apo-CIII and lipoprotein(a) [Lp(a)], in 81 well-defined FCH patients.
RESULTS
While both drugs lowered plasma cholesterol and triglyceride levels, gemfibrozil lowered plasma triglycerides more effectively by reduction of triglycerides in VLDL and LDL, whereas simvastatin was more effective in its reduction of total plasma cholesterol by exclusively decreasing LDL cholesterol. High-density lipoprotein (HDL) increased to an equal extent on both therapies. Total serum apo-B levels were reduced with both drugs; however, gemfibrozil decreased apo-B only in VLDL + IDL, whereas simvastatin decreased apo-B in both VLDL + IDL and LDL. In keeping with a more effective reduction of VLDL particles, a more pronounced reduction of apo-CIII also was observed after gemfibrozil, which correlated with the reduction in plasma triglycerides. Baseline concentrations of Lp(a) showed a wide range in both treatment groups. Median Lp(a) levels increased after simvastatin, but were not affected by gemfibrozil.
CONCLUSION
Both therapies exhibited their specific effects, although none of the drugs alone completely normalized the lipid profiles of these patients with FCH. Therefore, the choice of treatment should be based on the most elevated lipoprotein fraction, and in some cases a combination of the two drugs may be indicated. |
Multiple Access Gaussian Channels with Arbitrary Inputs: Optimal Precoding and Power Allocation | In this paper, we derive new closed-form expressions for the gradient of the mutual information with respect to arbitrary parameters of the two-user multiple access channel (MAC). The derived relations generalize the fundamental relation between the derivative of the mutual information and the minimum mean squared error (MMSE) to multiuser setups. We prove that the derivative of the mutual information with respect to the signal-to-noise ratio (SNR) is equal to the MMSE plus a covariance induced by the interference, quantified by a term with respect to the cross-correlation of the multiuser input estimates, the channels, and the precoding matrices. We also derive new relations for the gradient of the conditional and non-conditional mutual information with respect to the MMSE. Capitalizing on the new fundamental relations, we investigate the linear precoding and power allocation policies that maximize the mutual information for two-user MAC Gaussian channels with arbitrary input distributions. We show that the optimal design of linear precoders may satisfy a fixed-point equation as a function of the channel and the input constellation under specific setups. We show also that the non-mutual interference in a multiuser setup introduces a term to the gradient of the mutual information which plays a fundamental role in the design of optimal transmission strategies, particularly the optimal precoding and power allocation, and explains the losses in the data rates. Therefore, we provide a novel interpretation of the interference with respect to the channel, power, and input estimates of the main user and the interferer. |
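For reference, the single-user relation being generalized here is the I-MMSE identity of Guo, Shamai, and Verdú; per the abstract, the two-user MAC version adds an interference term involving the cross-correlation of the users' input estimates:

```latex
% Single-user I-MMSE relation (Guo, Shamai, Verdu, 2005): the derivative of
% the mutual information with respect to snr equals the MMSE of estimating
% the input X from the noisy output. The paper extends this to the two-user
% MAC, where an additional cross-correlation (interference) term appears.
\frac{\mathrm{d}}{\mathrm{d}\,\mathrm{snr}}\, I\!\left(X;\ \sqrt{\mathrm{snr}}\,X + N\right) \;=\; \mathrm{mmse}(\mathrm{snr})
```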
A novel algorithm for color constancy | Color constancy is the skill by which it is possible to tell the color of an object even under a colored light. I interpret the color of an object as its color under a fixed canonical light, rather than as a surface reflectance function. This leads to an analysis that shows two distinct sets of circumstances under which color constancy is possible. In this framework, color constancy requires estimating the illuminant under which the image was taken. The estimate is then used to choose one of a set of linear maps, which is applied to the image to yield a color descriptor at each point. This set of maps is computed in advance. The illuminant can be estimated using image measurements alone, because, given a number of weak assumptions detailed in the text, the color of the illuminant is constrained by the colors observed in the image. This constraint arises from the fact that surfaces can reflect no more light than is cast on them. For example, if one observes a patch that excites the red receptor strongly, the illuminant cannot have been deep blue. Two algorithms are possible using this constraint, corresponding to different assumptions about the world. The first algorithm, Crule, will work for any surface reflectance. Crule corresponds to a form of coefficient rule, but obtains the coefficients by using constraints on illuminant color. The set of illuminants for which Crule will be successful depends strongly on the choice of photoreceptors: for narrowband photoreceptors, Crule will work in an unrestricted world. The second algorithm, Mwext, requires that both surface reflectances and illuminants be chosen from finite dimensional spaces; but under these restrictive conditions it can recover a large number of parameters in the illuminant, and is not an attractive model of human color constancy. Crule has been tested on real images of Mondriaans, and works well. I show results for Crule and for the Retinex algorithm of Land (Land 1971; Land 1983; Land 1985) operating on a number of real images. The experimental work shows that for good constancy, a color constancy system will need to adjust the gain of the receptors it employs in a fashion analogous to adaptation in humans. |
Partial Materialized Views | Early access to partial query results is highly desirable during exploration of massive data sets. However, it is challenging to provide transactionally consistent, immediate partial results without significantly increasing queries' execution time. To address this problem, this paper proposes a partial materialized view (PMV) method to cache some of the most frequently accessed results rather than all the possible results. Compared to traditional materialized views, the proposed PMVs do not require maintenance during insertion into base relations, and have much smaller storage and maintenance overhead. Upon the arrival of a query, the RDBMS first searches the PMV and returns to the user the cached partial results. Since a large portion of the PMV is cached in memory, this usually finishes within a millisecond. Then the RDBMS continues to execute the query to find the remaining results. The efficiency of our PMV method is evaluated through a simulation study, a theoretical analysis, and an initial implementation in PostgreSQL. |
Kava in the treatment of generalized anxiety disorder: a double-blind, randomized, placebo-controlled study. | Kava (Piper methysticum) is a plant-based medicine, which has been previously shown to reduce anxiety. To date, however, no placebo-controlled trial assessing kava in the treatment of generalized anxiety disorder (GAD) has been completed. A total of 75 participants with GAD and no comorbid mood disorder were enrolled in a 6-week double-blind trial of an aqueous extract of kava (120/240 mg of kavalactones per day depending on response) versus placebo. γ-Aminobutyric acid (GABA) and noradrenaline transporter polymorphisms were also analyzed as potential pharmacogenetic markers of response. Reduction in anxiety was measured using the Hamilton Anxiety Rating Scale (HAMA) as the primary outcome. Intention-to-treat analysis was performed on 58 participants who met inclusion criteria after an initial 1 week placebo run-in phase. Results revealed a significant reduction in anxiety for the kava group compared with the placebo group with a moderate effect size (P = 0.046, Cohen d = 0.62). Among participants with moderate to severe Diagnostic and Statistical Manual of Mental Disorders-diagnosed GAD, this effect was larger (P = 0.02; d = 0.82). At conclusion of the controlled phase, 26% of the kava group were classified as remitted (HAMA ≤ 7) compared with 6% of the placebo group (P = 0.04). Within the kava group, GABA transporter polymorphisms rs2601126 (P = 0.021) and rs2697153 (P = 0.046) were associated with HAMA reduction. Kava was well tolerated, and aside from more headaches reported in the kava group (P = 0.05), no other significant differences between groups occurred for any other adverse effects, nor for liver function tests. Standardized kava may be a moderately effective short-term option for the treatment of GAD. Furthermore, specific GABA transporter polymorphisms appear to potentially modify anxiolytic response to kava. |
Is multibeam IMRT better than standard treatment for patients with left-sided breast cancer? | PURPOSE
When treatment intent is to include breast and internal mammary lymph nodes (IMNs) in the clinical target volume (CTV), a significant volume of the heart may receive radiation, which may result in late morbidity. The value of conformal intensity-modulated radiation therapy (IMRT) to avoid heart dose was studied.
METHODS AND MATERIALS
Breast, IMNs, and normal tissues were contoured for 30 consecutive patients previously treated with RT after lumpectomy for left-sided breast cancer. Eleven-beam, conformal, inverse-planned IMRT plans were developed and compared with best standard plans. Conformity Index (CI), Homogeneity Index (HI), and doses to normal tissues were compared.
RESULTS
Intensity-modulated RT significantly improved (two-sided paired t test) HI (0.95 vs. 0.74), CI (0.91 vs. 0.48), the volume of the heart receiving more than 30 Gy (V30-heart) (1.7% vs. 12.5%), and the volume of the left lung receiving more than 20 Gy (V20-left lung) (17.1% vs. 26.6%), all p < 0.001. The mean Healthy Tissue Volume (HTV = CT set - PTV) dose was similar between IMRT and best standard plans (6.0 and 6.9 Gy, respectively), but IMRT increased the volume of normal tissues receiving low-dose RT: V5-right lung (13.7% vs. 2.0%), V5-right breast (29.2% vs. 7.9%), and V5-HTV (31.7% vs. 23.6%), all p < 0.001. IMRT plans were generated in less than 60 min and treatment was delivered in approximately 20 min, suggesting that this technique is clinically applicable.
CONCLUSIONS
IMRT significantly improved conformity and homogeneity for plans in which the breast + IMNs were in the CTV. Heart and lung volumes receiving high doses were decreased, but more healthy tissue received low doses. A simple algorithm based on the amount of heart included in the standard plan showed limited ability to predict the benefit from IMRT.
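For readers unfamiliar with the dose-volume notation in this abstract (V30-heart, V20-left lung, V5-HTV): Vx is the percentage of a structure's volume receiving more than x Gy. A minimal sketch, assuming a voxelized dose grid and a boolean organ mask:

```python
import numpy as np

def v_x(dose_gy, structure_mask, threshold_gy):
    """Percent of a structure's volume receiving more than threshold_gy.

    dose_gy: 3-D array of planned dose per voxel (Gy).
    structure_mask: boolean array of the same shape selecting the organ.
    """
    organ_dose = dose_gy[structure_mask]
    return 100.0 * np.mean(organ_dose > threshold_gy)

# e.g. v_x(dose, heart_mask, 30.0) -> the V30-heart value reported above
```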
Uncertainty Shocks are Aggregate Demand Shocks | We show that to capture the empirical effects of uncertainty on the unemployment rate, it is crucial to study the interactions between search frictions and nominal rigidities. Our argument is guided by empirical evidence showing that an increase in uncertainty leads to a large increase in unemployment and a significant decline in inflation, suggesting that uncertainty partly operates via an aggregate demand channel. To understand the mechanism through which uncertainty generates these macroeconomic effects, we incorporate search frictions and nominal rigidities in a DSGE model. We show that an option-value channel that arises from search frictions interacts with a demand channel that arises from nominal rigidities, and such interactions magnify the effects of uncertainty to generate roughly 60 percent of the observed increase in unemployment following an uncertainty shock.
Prerequisites between learning objects: Automatic extraction based on a machine learning approach | One standing problem in the area of web-based e-learning is how to support instructional designers in effectively and efficiently retrieving learning materials appropriate for their educational purposes. Learning materials can be retrieved from structured repositories, such as repositories of Learning Objects and Massive Open Online Courses; they can also come from unstructured sources, such as web hypertext pages. Platforms for distance education often implement algorithms for recommending specific educational resources and personalized learning paths to students. But choosing and sequencing adequate learning materials to build adaptive courses can prove to be quite a challenging task. In particular, establishing the prerequisite relationships among learning objects, in terms of prior requirements that must be understood and completed before making use of subsequent contents, is a crucial step for faculty, instructional designers, or automated systems whose goal is to adapt existing learning objects for delivery in new distance courses. Nevertheless, this information is often missing. In this paper, an innovative machine learning-based approach for the identification of prerequisites between text-based resources is proposed. A feature selection methodology allows us to consider the attributes that are most relevant to the predictive modeling problem. These features are extracted from both the input material and weak taxonomies available on the web. The input data undergoes natural language processing that makes finding patterns of interest easier for the applied automated analysis. Finally, the prerequisite identification is cast as a binary statistical classification task. The accuracy of the approach is validated by means of experimental evaluations on real online courses covering different subjects.
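A minimal sketch of that final step, casting prerequisite identification as binary classification with a feature-selection stage. The feature matrix, labels, selector, and classifier below are placeholders; the paper's actual features (from the input material and web taxonomies) and its chosen model are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# X: one row per ordered pair of learning objects (A, B); features might
# encode concept overlap, taxonomy depth, etc. y: 1 if A is a prerequisite
# of B, else 0. Random data stands in for the paper's extracted features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))
y = rng.integers(0, 2, size=500)

model = make_pipeline(
    SelectKBest(f_classif, k=15),        # keep the most relevant attributes
    RandomForestClassifier(n_estimators=200, random_state=0),
)
print(cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())
```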
Facial Expression Recognition Using Enhanced Deep 3D Convolutional Neural Networks | Deep Neural Networks (DNNs) have been shown to outperform traditional methods in various visual recognition tasks, including Facial Expression Recognition (FER). In spite of efforts made to improve the accuracy of FER systems using DNNs, existing methods are still not generalizable enough for practical applications. This paper proposes a 3D Convolutional Neural Network method for FER in videos. This new network architecture consists of 3D Inception-ResNet layers followed by an LSTM unit that together extract the spatial relations within facial images as well as the temporal relations between different frames in the video. Facial landmark points are also used as inputs to the network, which emphasizes the importance of facial components over facial regions that may not contribute significantly to generating facial expressions. The proposed method is evaluated on four publicly available databases in subject-independent and cross-database tasks and outperforms state-of-the-art methods.
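A skeletal PyTorch sketch of the pattern this abstract describes: 3-D convolutions extract spatial structure within frames while an LSTM models relations across frames. The layer sizes are arbitrary, and the paper's 3D Inception-ResNet blocks and facial-landmark inputs are omitted.

```python
import torch
import torch.nn as nn

class Video3DConvLSTM(nn.Module):
    """Minimal stand-in for the 3D-conv + LSTM pattern described above."""

    def __init__(self, num_classes=7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),              # pool space, keep time
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),   # (T, 1, 1) per channel
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, clips):                      # clips: (B, 3, T, H, W)
        feats = self.conv(clips)                   # (B, 32, T, 1, 1)
        feats = feats.flatten(2).transpose(1, 2)   # (B, T, 32)
        _, (h, _) = self.lstm(feats)               # last hidden state
        return self.head(h[-1])                    # logits per expression

logits = Video3DConvLSTM()(torch.randn(2, 3, 8, 64, 64))  # (2, 7)
```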
The slot-coupled hemispherical dielectric resonator antenna with a parasitic patch: applications to the circularly polarized antenna and wide-band antenna | The aperture-coupled hemispherical dielectric resonator antenna (DRA) with a parasitic patch is studied rigorously. Using the Green's function approach, integral equations for the unknown patch and slot currents are formulated and solved using the method of moments. The theory is used to design a circularly polarized (CP) DRA and a wide-band linearly polarized (LP) DRA. In the former, the CP frequency and axial ratio (AR) can easily be controlled by the patch location and patch size, respectively, with the impedance matched by varying the slot length and microstrip stub length. Importantly, the AR is not affected when the input impedance is tuned, and the CP design is therefore greatly facilitated. For the wide-band LP antenna, a maximum bandwidth of 22% can be obtained, which is much wider than the previous bandwidth of 7.5% with no parasitic patch. Finally, the frequency-tuning characteristics of the proposed antenna are discussed. Since the parasitic patch can be applied to any DRA, the method will find applications in practical DRA designs.
Genes, Mind, and Culture: The Coevolutionary Process | Contents: The Next Synthesis: 25 Years of Genes, Mind, and Culture; The Primary Epigenetic Rules; The Secondary Epigenetic Rules; Gene-Culture Translation; The Gene-Culture Adaptive Landscape; The Coevolutionary Circuit; The Biogeography of the Mind; Gene-Culture Coevolution and Social Theory.
Stroke mismatch volume with the use of ABC/2 is equivalent to planimetric stroke mismatch volume. | BACKGROUND AND PURPOSE
In the clinical setting, there is a need to perform mismatch measurements quickly and easily on the MR imaging scanner to determine the specific amount of treatable penumbra. The objective of this study was to quantify the agreement of the ABC/2 method with the established planimetric method.
MATERIALS AND METHODS
Patients (n = 193) were selected from the NINDS Natural History Stroke Registry if they 1) were treated with standard intravenous rtPA, 2) had pretreatment MR imaging with evaluable DWI and PWI, and 3) had an acute ischemic stroke lesion. A rater placed linear diameters to measure the largest DWI and MTT lesion areas in 3 perpendicular axes (A, B, and C) and then used the ABC/2 formula to calculate lesion volumes. A separate rater measured the planimetric volumes. Multiple mismatch thresholds were used, including MTT volume - DWI volume ≥ 50 mL versus ≥ 60 mL, and (MTT volume - DWI volume)/MTT volume ≥ 20% versus MTT/DWI = 1.8.
RESULTS
Compared with the planimetric method, the ABC/2 method had high sensitivity (0.91), specificity (0.90), accuracy (0.91), PPV (0.90), and NPV (0.91) to quantify mismatch by use of the ≥50 mL definition. The Spearman correlation coefficients were 0.846 and 0.876, respectively, for the DWI and MTT measurements. The inter-rater Bland-Altman plots demonstrated 95%, 95%, and 97% agreement for the DWI, MTT, and mismatch measurements.
CONCLUSIONS
The ABC/2 method is highly reliable and accurate for quantifying the specific amount of MR imaging-determined mismatch and therefore is a potential tool to quickly calculate a treatable mismatch pattern. |
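The ABC/2 formula estimates lesion volume as half the product of the three perpendicular diameters (an ellipsoid approximation). A worked sketch of the mismatch classification under the ≥ 50 mL definition, with hypothetical diameters that are not from the study:

```python
def abc2_volume_ml(a_cm, b_cm, c_cm):
    """Ellipsoid volume estimate from three perpendicular diameters (cm).

    1 cm^3 equals 1 mL, so the result is in millilitres."""
    return a_cm * b_cm * c_cm / 2.0

# Hypothetical lesion diameters:
dwi = abc2_volume_ml(3.0, 2.5, 2.0)   # 7.5 mL diffusion (core) lesion
mtt = abc2_volume_ml(6.0, 5.0, 4.5)   # 67.5 mL perfusion (MTT) lesion
mismatch = mtt - dwi                  # 60.0 mL
print(mismatch >= 50.0)               # True: treatable mismatch pattern
```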
Will Online Chat Help Alleviate Mood Loneliness? | The present study examines the relationship between social Internet use and loneliness and reviews the studies on this topic from both the social psychology and computer-mediated communication literatures, as a response to the call for interdisciplinary research from scholars in these two areas. Two hundred thirty-four people participated in both a survey testing trait loneliness and a 5-condition (face-to-face chatting, instant message chatting, watching video, writing assignments, and "do nothing") experiment. Participants reported an increase in mood loneliness after chatting online. The level of mood loneliness after online chat was higher than that after face-to-face communication. For people with high trait loneliness, the increase in mood loneliness in the computer-mediated communication condition was significantly higher than in the face-to-face communication condition. The author hopes to help clarify the mixed findings in the previous literature on social Internet use and reminds communication researchers of the need to explore the constructs included in "psychological well-being" in terms of their nature, mechanisms, causes, consequences, and, furthermore, how they relate to communication.
The association of self-esteem, depression and body satisfaction with obesity among Turkish adolescents | BACKGROUND
The purpose of this study was to determine the prevalence of overweight and obesity and to examine the effects of actual weight status, perceived weight status and body satisfaction on self-esteem and depression in a high school population in Turkey.
METHODS
A cross-sectional survey of 2101 tenth-grade Turkish adolescents aged 15-18 was conducted. Body mass index (BMI) was calculated from measured weight and height. Overweight and obesity were defined using the age- and gender-specific BMI cut-off points of the International Obesity Task Force. Self-esteem was measured using the Rosenberg Self-Esteem Scale, and depression was measured using the Children's Depression Inventory. Logistic regression analysis was used to examine relationships among the variables.
RESULTS
Based on BMI cut-off points, 9.0% of the students were overweight and 1.1% were obese. Logistic regression analysis indicated that (1) being male and being from a higher socio-economic level were important in the prediction of overweight based on BMI; (2) being female and being from a higher socio-economic level were important in the prediction of perceived overweight; (3) being female was important in the prediction of body dissatisfaction; and (4) body dissatisfaction was related to low self-esteem and depression, perceived overweight was related only to low self-esteem, but actual overweight was not related to low self-esteem or depression in adolescents.
CONCLUSION
The results of this study suggest that school-based adolescents in urban Turkey have a lower risk of overweight and obesity than adolescents in developed countries. The findings also suggest that the psychological well-being of adolescents is more strongly related to body satisfaction than to actual or perceived weight status.
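For reference, BMI is weight in kilograms divided by the square of height in meters. A minimal sketch follows; note that the IOTF cut-offs used in the study are age- and gender-specific tables, so the numeric cut-offs below are placeholders only.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def weight_status(bmi_value, overweight_cutoff, obese_cutoff):
    """Classify against IOTF-style cut-offs; the caller must supply the
    cut-offs for the correct age and gender stratum."""
    if bmi_value >= obese_cutoff:
        return "obese"
    if bmi_value >= overweight_cutoff:
        return "overweight"
    return "not overweight"

# Illustrative cut-offs for one stratum (placeholders, not IOTF values):
print(weight_status(bmi(72.0, 1.70), overweight_cutoff=23.9, obese_cutoff=28.9))
```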
A Survey of FPGA Based Neural Network Accelerator | Recent research on neural networks has shown significant advantages in machine learning over traditional algorithms based on handcrafted features and models. Neural networks are now widely adopted in domains such as image, speech, and video recognition. But the high computation and storage complexity of neural network inference poses great difficulty for its application. CPU platforms are hard-pressed to offer enough computation capacity. GPU platforms are the first choice for neural network processing because of their high computation capacity and easy-to-use development frameworks. On the other hand, FPGA-based neural network inference accelerators are becoming a research topic. With specifically designed hardware, FPGA is the next possible solution to surpass GPU in speed and energy efficiency. Various FPGA-based accelerator designs have been proposed with software and hardware optimization techniques to achieve high speed and energy efficiency. In this paper, we give an overview of previous work on neural network inference accelerators based on FPGA and summarize the main techniques used. An investigation from software to hardware, from circuit level to system level, is carried out to give a complete analysis of FPGA-based neural network inference accelerator design and to serve as a guide for future work.
Artificial Intelligence as Structural Estimation: Economic Interpretations of Deep Blue, Bonanza, and AlphaGo | Artificial intelligence (AI) has achieved superhuman performance in a growing number of tasks, but understanding and explaining AI remain challenging. This paper clarifies the connections between the machine-learning algorithms used to develop AIs and the econometrics of dynamic structural models through case studies of three famous game AIs. Chess-playing Deep Blue is a calibrated value function, whereas shogi-playing Bonanza is an estimated value function via Rust's (1987) nested fixed-point method. AlphaGo's supervised-learning policy network is a deep neural network implementation of Hotz and Miller's (1993) conditional choice probability estimation; its reinforcement-learning value network is equivalent to Hotz, Miller, Sanders, and Smith's (1994) conditional choice simulation method. Relaxing these AIs' implicit econometric assumptions would improve their structural interpretability. Keywords: Artificial intelligence, Conditional choice probability, Deep neural network, Dynamic game, Dynamic structural model, Simulation estimator. JEL classifications: A12, C45, C57, C63, C73.
3LC: Lightweight and Effective Traffic Compression for Distributed Machine Learning | The performance and efficiency of distributed machine learning (ML) depend significantly on how long it takes for nodes to exchange state changes. Overly aggressive attempts to reduce communication often sacrifice final model accuracy and necessitate additional ML techniques to compensate for this loss, limiting their generality. Some attempts to reduce communication incur high computation overhead, which makes their performance benefits visible only over slow networks. We present 3LC, a lossy compression scheme for state-change traffic that strikes a balance between multiple goals: traffic reduction, accuracy, computation overhead, and generality. It combines three new techniques (3-value quantization with sparsity multiplication, quartic encoding, and zero-run encoding) to leverage the strengths of quantization and sparsification techniques while avoiding their drawbacks. It achieves a data compression ratio of up to 39–107×, almost unchanged test accuracy for trained models, and high compression speed. Distributed ML frameworks can employ 3LC without modifications to existing ML algorithms. Our experiments show that 3LC reduces the wall-clock training time of ResNet-110-based image classifiers for CIFAR-10 on a 10-GPU cluster by up to 16–23× compared to TensorFlow's baseline design.
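A minimal sketch of the 3-value quantization idea named above: each gradient entry is mapped to {-m, 0, +m} with stochastic rounding so the scheme is unbiased in expectation. 3LC's sparsity multiplication, quartic encoding, and zero-run encoding stages are not reproduced; this shows only the quantization core, as I understand it.

```python
import numpy as np

def three_value_quantize(grad):
    """Quantize a gradient tensor to {-m, 0, +m}, where m = max |grad|.

    Entries are rounded stochastically in proportion to their magnitude,
    so dequantization is unbiased in expectation.
    """
    m = float(np.abs(grad).max())
    if m == 0.0:
        return np.zeros_like(grad, dtype=np.int8), 0.0
    p = np.abs(grad) / m                      # probability of a nonzero
    keep = np.random.random(grad.shape) < p
    q = np.sign(grad) * keep                  # values in {-1, 0, +1}
    return q.astype(np.int8), m               # transmit q plus the scale m

def dequantize(q, m):
    """Reconstruct the (lossy) gradient from the 3-value code."""
    return q.astype(np.float32) * m
```

The zero-heavy int8 tensor is what the paper's byte-level encodings would then compress before transmission.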
Effect of Text Message, Phone Call, and In-Person Appointment Reminders on Uptake of Repeat HIV Testing among Outpatients Screened for Acute HIV Infection in Kenya: A Randomized Controlled Trial | BACKGROUND
Following HIV-1 acquisition, many individuals develop an acute retroviral syndrome and a majority seek care. Available antibody testing cannot detect an acute HIV infection, but repeat testing after 2-4 weeks may detect seroconversion. We assessed the effect of appointment reminders on attendance for repeat HIV testing.
METHODS
We enrolled 18-29-year-old patients evaluated for acute HIV infection at five sites in Coastal Kenya in a randomized controlled trial (ClinicalTrials.gov NCT01876199). Participants were allocated 1:1 to either a standard appointment (a dated appointment card) or an enhanced appointment (a dated appointment card plus SMS and phone-call reminders, or in-person reminders for participants without a phone). The primary outcome was visit attendance, i.e., the proportion of participants attending the repeat test visit. Factors associated with attendance were examined by bivariable and multivariable logistic regression.
PRINCIPAL FINDINGS
Between April and July 2013, 410 participants were randomized. Attendance was 41% (85/207) in the standard group and 59% (117/199) in the enhanced group, for a relative risk of 1.4 [95% Confidence Interval, CI, 1.2-1.7]. Higher attendance was independently associated with older age, study site, and report of transactional sex in the past month. Lower attendance was associated with reporting multiple partners in the past two months.
CONCLUSIONS
Appointment reminders delivered by SMS, phone call, and in person increased the uptake of repeat HIV testing by 40% relative to a dated appointment card alone. This low-cost intervention could facilitate detection of acute HIV infections and uptake of recommended repeat testing.
TRIAL REGISTRATION
Clinicaltrials.gov NCT01876199. |
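The reported effect can be reproduced from the attendance counts: 117/199 ≈ 59% versus 85/207 ≈ 41% gives a risk ratio of about 1.43, i.e., the 40% relative increase stated in the conclusions. A sketch using the standard log-method confidence interval (the trial's exact CI method may differ):

```python
import math

def relative_risk(events1, n1, events0, n0):
    """Risk ratio with an approximate 95% CI via the log method."""
    r1, r0 = events1 / n1, events0 / n0
    rr = r1 / r0
    se = math.sqrt(1 / events1 - 1 / n1 + 1 / events0 - 1 / n0)
    lo, hi = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
    return rr, lo, hi

# Enhanced: 117/199 attended; standard: 85/207.
print(relative_risk(117, 199, 85, 207))  # approx (1.43, 1.17, 1.75)
```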
Abstractive Text Summarization with Quasi-Recurrent Neural Networks | Peter Adelson, Sho Arora, and Jeff Hara (Department of Computer Science, Stanford University).
Compact Dual-Band Microstrip Antenna for IEEE 802.11a WLAN Application | A compact dual-band rectangular microstrip antenna (RMSA) is realized from two different single-slotted, single-band rectangular microstrip antennas with a slotted ground plane. Each open-ended slot in the single-slotted antenna is responsible for generating a wide impedance band, which is shifted to lower frequencies by the effect of the ground slot. The length and position of each open-ended slot are varied to operate the antenna in a suitable resonant band (5.15-5.35 and 5.725-5.825 GHz). The proposed antenna meets the impedance bandwidth required for dual-band IEEE 802.11a WLAN application (5.125-5.395 and 5.725-5.985 GHz). The dimensions of the antenna (12 × 8 × 1.5875 mm3) show an average compactness of about 53.73% with respect to a conventional unslotted rectangular microstrip patch antenna.
Deep Learning-Based Document Modeling for Personality Detection from Text | This article presents a deep learning-based method for determining the author's personality type from text: given a text, the presence or absence of each of the Big Five traits is detected in the author's psychological profile. For each of the five traits, the authors train a separate binary classifier, with identical architecture, based on a novel document modeling technique. Namely, the classifier is implemented as a specially designed deep convolutional neural network, with injection of document-level Mairesse features, extracted directly from the text, into an inner layer. The first layers of the network treat each sentence of the text separately; the sentences are then aggregated into a document vector. Filtering out emotionally neutral input sentences improved performance. This method outperformed the state of the art for all five traits, and the implementation is freely available for research purposes.
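A skeletal PyTorch sketch of the per-trait classifier pattern described above: a convolutional text encoder whose pooled document vector is concatenated with document-level (Mairesse-style) features before a binary output. Sizes and names are illustrative assumptions; the paper's sentence-wise architecture and exact feature set are not reproduced.

```python
import torch
import torch.nn as nn

class TraitClassifier(nn.Module):
    """One binary classifier per Big Five trait (sketch of the pattern)."""

    def __init__(self, vocab_size=20000, emb_dim=64, n_doc_feats=84):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1)
        # Document-level features (e.g. Mairesse-style counts) are
        # injected alongside the convolutional document vector.
        self.head = nn.Linear(128 + n_doc_feats, 1)

    def forward(self, token_ids, doc_feats):
        x = self.emb(token_ids).transpose(1, 2)          # (B, emb, L)
        x = torch.relu(self.conv(x)).max(dim=2).values   # pool over words
        return self.head(torch.cat([x, doc_feats], dim=1))  # logit per doc

# Identical architecture, one model per trait, as the abstract describes:
models = {trait: TraitClassifier() for trait in ["O", "C", "E", "A", "N"]}
```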
Report of Recommendations: The Annapolis Coalition Conference on Behavioral Health Work Force Competencies | In May 2004, the Annapolis Coalition on Behavioral Health Workforce Education convened a national meeting on the identification and assessment of competencies. The Conference on Behavioral Health Workforce Competencies brought leading consumer and family advocates together with other experts on competencies from diverse disciplines and specialties in the fields of both mental health care and substance use disorders treatment. Aided by experts on competency development in business and medicine, conference participants have generated 10 consensus recommendations to guide the future development of workforce competencies in behavioral health. This article outlines those recommendations. A collaborative effort to identify a set of core or common competencies is envisioned as a key strategy for advancing behavioral health education, training, and other workforce development initiatives. |