uuid | title | abstract |
---|---|---|
4,700 | An ART-based construction of RBF networks | Radial basis function (RBF) networks are widely used for modeling a function from given input-output patterns. However, two difficulties are involved with traditional RBF (TRBF) networks: The initial configuration of an RBF network needs to be determined by a trial-and-error method, and the performance suffers when the desired output has abrupt changes or constant values in certain intervals. We propose a novel approach to overcome these difficulties. New kernel functions are used for hidden nodes, and the number of nodes is determined automatically by an adaptive resonance theory (ART)-like algorithm. Parameters and weights are initialized appropriately, and then tuned and adjusted by the gradient-descent method to improve the performance of the network. Experimental results have shown that the RBF networks constructed by our method have a smaller number of nodes, a faster learning speed, and a smaller approximation error than the networks produced by other methods. |
4,701 | Adaptation of the classical end-point ITS-PCR for the diagnosis of avian trichomonosis to a real-time PCR reveals Bonelli's eagle as a new host for Trichomonas gypaetinii | Avian trichomonosis is a parasitic disease caused mainly by Trichomonas gallinae and other Trichomonas species. It can be asymptomatic, or it can produce a necrotic lesion in the upper digestive tract and spread to other organs, causing the death of the infected birds. In this study, we aimed to evaluate an adapted real-time PCR method for the diagnosis of different genotypes and species of avian oropharyngeal trichomonads. Fifty-six samples from the oropharynx of Bonelli's eagles (Aquila fasciata) obtained between 2018 and 2019 were analyzed using the real-time PCR and the end-point PCR, both targeting the trichomonad ITS region, and the results were compared by a coefficient of agreement. All positive samples were sequenced. The analysis showed a higher detection percentage for the real-time ITS PCR than for the end-point ITS PCR (64.3% vs 55.4%) and a good agreement value (Kappa = 0.816). The melting temperature for the resulting real-time PCR amplicons of avian trichomonads was 83.45 ± 0.72 °C. Genotypes A, D, and III were found among the sequences. Moreover, Trichomonas gypaetinii, a common species in scavenger birds, is reported for the first time in Bonelli's eagles. |
4,702 | Robust PCA Unrolling Network for Super-Resolution Vessel Extraction in X-Ray Coronary Angiography | Although robust PCA has been increasingly adopted to extract vessels from X-ray coronary angiography (XCA) images, challenging problems such as inefficient vessel-sparsity modelling, noisy and dynamic background artefacts, and high computational cost still remain unsolved. Therefore, we propose a novel robust PCA unrolling network with sparse feature selection for super-resolution XCA vessel imaging. Being embedded within a patch-wise spatiotemporal super-resolution framework that is built upon a pooling layer and a convolutional long short-term memory network, the proposed network can not only gradually prune complex vessel-like artefacts and noisy backgrounds in XCA during network training but also iteratively learn and select the high-level spatiotemporal semantic information of moving contrast agents flowing in the XCA-imaged vessels. The experimental results show that the proposed method significantly outperforms state-of-the-art methods, especially in the imaging of the vessel network and its distal vessels, by restoring the intensity and geometry profiles of heterogeneous vessels against complex and dynamic backgrounds. The source code is available at https://github.com/Binjie-Qin/RPCA-UNet |
4,703 | A Resource-Optimized VLSI Implementation of a Patient-Specific Seizure Detection Algorithm on a Custom-Made Wireless Device for Ambulatory Epilepsy Diagnostics | A patient-specific epilepsy diagnostic solution in the form of a wireless wearable ambulatory device is presented. First, the design, VLSI implementation, and experimental validation of a resource-optimized machine learning algorithm for epilepsy seizure detection are described. Next, the development of a mini-PCB that integrates a low-power wireless data transceiver and a programmable processor for hosting the seizure detection algorithm is discussed. The algorithm uses only EEG signals from the frontal lobe electrodes while yielding a seizure detection sensitivity and specificity competitive with standard full EEG systems. The experimental validation of the algorithm's VLSI implementation proves the possibility of conducting accurate seizure detection using quickly mountable dry-electrode headsets without the need for uncomfortable/painful through-hair electrodes or adhesive gels. Details of the design and optimization of the algorithm, the VLSI implementation, and the mini-PCB development are presented, and resource optimization techniques are discussed. The optimized implementation is uploaded on a low-power Microsemi Igloo FPGA, requires 1237 logic elements, consumes 110 $\mu$W dynamic power, and yields a minimum detection latency of 10.2 $\mu$s. The measurement results from the FPGA implementation on data from 23 patients (198 seizures in total) show a seizure detection sensitivity and specificity of 92.5% and 80.1%, respectively. Comparison to the state of the art is presented from the system integration, VLSI implementation, and wireless communication perspectives. |
4,704 | Deep Bayesian Hashing With Center Prior for Multi-Modal Neuroimage Retrieval | Multi-modal neuroimage retrieval has greatly facilitated the efficiency and accuracy of decision making in clinical practice by providing physicians with previous cases (with visually similar neuroimages) and corresponding treatment records. However, existing methods for image retrieval usually fail when applied directly to multi-modal neuroimage databases, since neuroimages generally have smaller inter-class variation and larger inter-modal discrepancy compared to natural images. To this end, we propose a deep Bayesian hash learning framework, called CenterHash, which can map multi-modal data into a shared Hamming space and learn discriminative hash codes from imbalanced multi-modal neuroimages. The key idea to tackle the small inter-class variation and large inter-modal discrepancy is to learn a common center representation for similar neuroimages from different modalities and encourage hash codes to be explicitly close to their corresponding center representations. Specifically, we measure the similarity between hash codes and their corresponding center representations and treat it as a center prior in the proposed Bayesian learning framework. A weighted contrastive likelihood loss function is also developed to facilitate hash learning from imbalanced neuroimage pairs. Comprehensive empirical evidence shows that our method can generate effective hash codes and yield state-of-the-art performance in cross-modal retrieval on three multi-modal neuroimage datasets. |
4,705 | Perceived no-reference image quality measurement for chromatic aberration | Today there is a need for no-reference (NR) objective perceived image quality measurement techniques, as conducting subjective experiments and making reference images available are very difficult tasks. Very few NR perceived image quality measurement algorithms are available for color distortions like chromatic aberration (CA), color quantization with dither, and color saturation. We propose NR image quality assessment (NR-IQA) algorithms for images distorted with CA. CA is mostly observed in images taken with digital cameras that combine high sensor resolution with inexpensive lenses. We compared our metric's performance with two state-of-the-art NR blur techniques, one full-reference IQA technique, and three general-purpose NR-IQA techniques, although they are not tailored for CA. We used the CA dataset in the TID-2013 color image database to evaluate performance. The proposed algorithms give comparable performance to state-of-the-art techniques in terms of performance parameters and outperform them in terms of monotonicity and computational complexity. We have also discovered that the proposed CA algorithm best predicts perceived image quality of images distorted with realistic CA. (C) 2016 SPIE and IS&T |
4,706 | Building Resilience: An Art-Food Hub to Connect Local Communities | Resilience thinking is an appropriate framework when assessing the transitional potential of complex urban systems. The transformation of abandoned spaces into local hubs attracting new and innovative activities and events promotes a socioeconomic renaissance in urban communities, by stimulating adaptation to change, enhancing local resilience and strengthening urban-rural links. Under the conceptual umbrella of resilience thinking, the present study illustrates the outcomes of an integrated program of research-action aimed at urban regeneration in a medium-sized, economically disadvantaged city in Southern Italy (Battipaglia, Campania). The transformation of an abandoned building into an 'Art-Food Hub'-a multi-purpose and creative cultural space-based on resilience thinking was the specific case analyzed in our study. Appropriate stakeholders were identified and involved in a series of field activities and workshops, with the final objective of informing a comprehensive strategy strengthening awareness to change and capacity building. More specifically, stakeholder involvement was carried out with two aims: first, to make stakeholders active participants in co-designing a Strategic Urban Planning Document for Battipaglia and, second, to evaluate to what extent the proposed initiative contributes to building local resilience. By explicitly considering cross-scale drivers of community resilience, the results of this study show how the concept of resilience can be practically applied to policy formulation and implementation. |
4,707 | Foot/sandal prints and ovaloids in the rock art assemblage of Ramat Matred, the Negev Desert, Israel | By studying the prints from Ramat Matred, their place within the different engraving phases, and their relation to other motifs, abstracts, zoomorphs, and inscriptions, several observations can be made regarding their date and possible meaning. Foot and sandal prints are not among the most common motifs found in Negev rock art. When the data are examined globally, foot/sandal prints account for around 1-2% of the images displayed. These prints present a rich variety of forms, with many found clustered on a single panel. At Ramat Matred, prints appear roughly ten times more often than at any other rock art site in the Negev, with some 208 examples. These engravings are stylized; some represent feet, others sandals, though most are rather simplified elongated ovaloids with little other detail. Prints appear either singly or as a pair, with the feet/sandals set side-by-side or 'stepping' forward. Looking at the 'Islamic' rock art phase, the relationship between the foot/sandal prints and the formalized Arabic inscriptions clarifies the cultural distinction and the change in religious concepts and traditions that occurred during the early historic period. (C) 2016 Elsevier Ltd. All rights reserved. |
4,708 | Understanding the influence of fluctuated low-temperature combined with high-humidity thawing on gelling properties of pork myofibrillar proteins | The present study further investigated the effects of fluctuated low-temperature combined with high-humidity thawing (FLHT) on the gelling properties of pork myofibrillar proteins (MPs). Results showed that compared with refrigerator thawing (RT) and low-temperature combined with high-humidity thawing (LHT), FLHT effectively reduced the protein aggregation and maintained the relative stability by decreasing the variation in the turbidity and absolute ζ-potential value. The rheological results confirmed its improved elastic gel network. Meanwhile, FLHT samples exhibited markedly higher WHC with lower cooking loss (P < 0.05). The whiteness and strength of MPs gel were significantly higher in the FLHT group (P < 0.05). Moreover, there was no difference in textural properties between FLHT samples and fresh meat (FS) (P > 0.05), due to its homogeneous and compact microstructure. Therefore, FLHT plays an essential role in holding a superior gel quality and a compact structure, thereby evidencing its potential application in meat thawing. |
4,709 | Molecular regulators of human blastocyst development and hatching: Their significance in implantation and pregnancy outcome | In humans, blastocyst hatching and implantation are two sequential, critically linked, and rate-limiting events for a prospective pregnancy. These events are regulated by embryo- and endometrium-derived molecular factors, which include hormones, growth factors, cytokines, immunomodulators, cell adhesion molecules, and proteases. Blastocysts with poor viability fail to hatch and implant, leading to low live birth rates and contributing substantially to infertility. Here, the analysis of embryo-derived biomarkers plays a key role in assessing the biological viability of blastocysts that are capable of implantation and a prospective pregnancy. Thus far, the embryo-derived biomarkers examined are mostly immunomodulators thought to be associated with blastocyst development, implantation, and the progression of pregnancy leading to live births. There is an urgent need to develop a quantitative and reliable non-invasive approach that aids embryo selection for elective single embryo transfer and minimizes recurrent pregnancy loss and multiple pregnancies. In this article, we provide a comprehensive review of our current knowledge and understanding of potential embryo-derived molecular regulators, that is, biomarkers, of human blastocyst development, hatching, and implantation. We discuss their potential implications for the assessment of blastocyst implantation potential and pregnancy outcome, in terms of live births, in humans. |
4,710 | Advanced nano biosensors for rapid detection of zoonotic bacteria | An infectious disease that is transmitted from animals to humans, and vice versa, is called a zoonosis. Bacterial zoonotic diseases can re-emerge after they have been eradicated or controlled and are among the world's major health problems, inflicting a tremendous burden on healthcare systems. The first step in countering such illnesses is early and precise detection of bacterial pathogens, which helps prevent subsequent losses due to their infections. Although conventional methods for diagnosing pathogens, including culture-based, polymerase chain reaction-based, and immunological techniques, have their advantages, they also have drawbacks: for example, they take a long time to provide results and require laborious work, expensive materials, and special equipment under certain conditions. Consequently, there is a growing tendency to introduce simple, innovative, quick, accurate, and low-cost detection methods to effectively characterize the causative agents of infectious diseases. Biosensors, therefore, seem to be among the most promising novel diagnostic tools for this aim. They are effective and reliable elements with high sensitivity and specificity, whose usability can be further improved in medical diagnostic systems when empowered by nanoparticles. In the present review, recent advances in the development of several bio- and nano-biosensors for rapid detection of zoonotic bacteria are discussed in detail. |
4,711 | Linearized proximal alternating minimization algorithm for motion deblurring by nonlocal regularization | Non-blind motion deblurring problems are highly ill-posed and so it is quite difficult to find the original sharp and clean image. To handle ill-posedness of the motion deblurring problem, we use nonlocal total variation (abbreviated as TV) regularization approaches. Nonlocal TV can restore periodic textures and local geometric information better than local TV. But, since nonlocal TV requires weighted difference between pixels in the whole image, it demands much more computational resources than local TV. By using the linearization of the fidelity term and the proximal function, our proposed algorithm does not require any inversion of blurring operator and nonlocal operator. Therefore, the proposed algorithm is very efficient for motion deblurring problems. We compare the numerical performance of our proposed algorithm with that of several state-of-the-art algorithms for deblurring problems. Our numerical results show that the proposed method is faster and more robust than state-of-the-art algorithms on motion deblurring problems. (C) 2010 Elsevier Ltd. All rights reserved. |
4,712 | Assessment of hindlimb motor recovery after severe thoracic spinal cord injury in rats: classification of CatWalk XT® gait analysis parameters | Assessment of locomotion recovery in preclinical studies of experimental spinal cord injury remains challenging. We studied the CatWalk XT® gait analysis for evaluating hindlimb functional recovery in a widely used and clinically relevant thoracic contusion/compression spinal cord injury model in rats. Rats were randomly assigned to either a T9 spinal cord injury or a sham laminectomy. Locomotion recovery was assessed using the Basso, Beattie, and Bresnahan open field rating scale and the CatWalk XT® gait analysis. To determine the potential bias from weight changes, corrected hindlimb (H) values (divided by the unaffected forelimb (F) values) were calculated. Six weeks after injury, cyst formation, astrogliosis, and the deposition of chondroitin sulfate glycosaminoglycans were assessed by immunohistochemistry staining. Compared with baseline, a significant spontaneous recovery could be observed in the CatWalk XT® parameters max intensity, mean intensity, max intensity at %, and max contact mean intensity from 4 weeks after injury onwards. Of note, corrected values (H/F) of CatWalk XT® parameters showed significantly less vulnerability to weight changes than absolute values, specifically in static parameters. The corrected CatWalk XT® parameters were positively correlated with the Basso, Beattie, and Bresnahan rating scale scores, cyst formation, and the immunointensity of astrogliosis and chondroitin sulfate glycosaminoglycan deposition. The CatWalk XT® gait analysis, and especially its static parameters, therefore seems highly useful for assessing spontaneous recovery of hindlimb function after severe thoracic spinal cord injury. Because many CatWalk XT® parameters of the hindlimbs appear to be affected by body weight changes, using their corrected values may be a valuable option to mitigate this dependency. |
4,713 | Trends in food consumption according to the degree of food processing among the UK population over 11 years | Although ultra-processed foods represent more than half of the total energy consumed by the UK population, little is known about trends in food consumption by degree of food processing. We evaluated trends in the dietary share of foods categorised according to the NOVA classification in a historical series (2008-2019) among the UK population. Data were acquired from the NDNS, a survey that collects diet information through a 4-d food record. We used adjusted linear regression to estimate the dietary participation of the NOVA groups and evaluated the linear trends over the years. From 2008 to 2019, we observed a significant increase in the energy share of culinary ingredients (from 3·7 to 4·9 % of the total energy consumed; P-trend = 0·001), especially for butter and oils, and a reduction of processed foods (from 9·6 to 8·6 %; P-trend = 0·002), especially for beer and wine. Unprocessed or minimally processed foods (≅30 %, P-trend = 0·505) and ultra-processed foods (≅56 %, P-trend = 0·580) presented no significant change. However, changes in the consumption of some subgroups are noteworthy, such as the reduction in the energy share of red meat, sausages and other reconstituted meat products, as well as the increase of fruits, ready meals, breakfast cereals, cookies, pastries, buns and cakes. Regarding socio-demographic characteristics, no interaction was observed with the trend of the four NOVA groups. In summary, from 2008 to 2019 a significant increase in culinary ingredients and a reduction in processed foods were observed. Furthermore, the study sheds light on the high share of ultra-processed foods in the contemporary British diet. |
4,714 | The Fast Heuristic Algorithms and Post-Processing Techniques to Design Large and Low-Cost Communication Networks | It is challenging to design large and low-cost communication networks. In this paper, we formulate this challenge as the prize-collecting Steiner Tree Problem (PCSTP). The objective is to minimize the costs of transmission routes and the disconnected monetary or informational profits. Initially, we note that the PCSTP is MAX SNP-hard. Then, we propose some post-processing techniques to improve suboptimal solutions to PCSTP. Based on these techniques, we propose two fast heuristic algorithms: the first one is a quasilinear time heuristic algorithm that is faster and consumes less memory than other algorithms; and the second one is an improvement of a state-of-the-art polynomial time heuristic algorithm that can find high-quality solutions at a speed that is only inferior to the first one. We demonstrate the competitiveness of our heuristic algorithms by comparing them with the state-of-the-art ones on the largest existing benchmark instances (169 800 vertices and 338 551 edges). Moreover, we generate new instances that are even larger (1 000 000 vertices and 10 000 000 edges) to further demonstrate their advantages in large networks. The state-of-the-art algorithms are too slow to find high-quality solutions for instances of this size, whereas our new heuristic algorithms can do this in around 6 to 45s on a personal computer. Ultimately, we apply our post-processing techniques to update the best-known solution for a notoriously difficult benchmark instance to show that they can improve near-optimal solutions to PCSTP. In conclusion, we demonstrate the usefulness of our heuristic algorithms and post-processing techniques for designing large and low-cost communication networks. |
4,715 | Image Denoising Using Mixtures of Projected Gaussian Scale Mixtures | We propose a new statistical model for image restoration in which neighborhoods of wavelet subbands are modeled by a discrete mixture of linear projected Gaussian Scale Mixtures (MPGSM). In each projection, a lower-dimensional approximation of the local neighborhood is obtained, thereby modeling the strongest correlations in that neighborhood. The model is a generalization of the recently developed Mixture of GSM (MGSM) model, which offers a significant improvement both in PSNR and visually compared with current state-of-the-art wavelet techniques. However, its computation cost is very high, which hampers its use for practical purposes. We present a fast EM algorithm that takes advantage of the projection bases to speed up the algorithm. The results show that, when projecting on a fixed data-independent basis, computational advantages can be obtained with only a limited loss of PSNR with respect to the BLS-GSM denoising method, while data-dependent bases of Principal Components offer a higher denoising performance, both visually and in PSNR, compared with current wavelet-based state-of-the-art denoising methods. |
4,716 | Specular Reflections Removal for Endoscopic Image Sequences With Adaptive-RPCA Decomposition | Specular reflections (i.e., highlights) always exist in endoscopic images, and they can severely disturb surgeons' observation and judgment. In an augmented reality (AR)-based surgery navigation system, highlights may also lead to the failure of feature extraction or registration. In this paper, we propose an adaptive robust principal component analysis (Adaptive-RPCA) method to remove the specular reflections in endoscopic image sequences. It iteratively optimizes the sparse-part parameter during RPCA decomposition. In this new approach, we first adaptively detect the highlight image based on pixels. With the proposed distance metric algorithm, the method then automatically measures the similarity distance between the sparse result image and the detected highlight image. Finally, the low-rank and sparse results are obtained by enforcing the similarity distance between the two types of images to fall within a certain range. Our method has been verified on multiple different types of endoscopic image sequences from minimally invasive surgery (MIS). The experiments and clinical blind tests demonstrate that the new Adaptive-RPCA method can obtain the optimal sparse decomposition parameters directly and can generate robust highlight removal results. Compared with state-of-the-art approaches, the proposed method not only achieves better highlight removal results but can also adaptively process image sequences. |
4,717 | Cytoplasmic G Protein-Coupled Estrogen Receptor 1 as a Prognostic Indicator of Breast Cancer: A Meta-Analysis | Purpose: To determine whether G protein-coupled estrogen receptor 1 (GPER1) is a suitable biomarker to predict the treatment outcome of breast cancer (BC). Methods: A meta-analysis of the literature was performed to clarify the correlation between GPER1 protein expression and BC outcome. The relationship between GPER1 mRNA expression and survival was analyzed using Breast Cancer Gene-Expression Miner (bc-GenExMiner) v4.6 software. Results: Six studies involving 2697 patients were included in the meta-analysis. Four studies reported the correlation between GPER1 protein expression and relapse-free survival (RFS) and 4 others reported the impact of GPER1 protein expression on overall survival (OS). The results showed that high GPER1 protein expression was not associated with RFS (hazard ratio [HR] = 1.58; 95% confidence interval [CI] = 0.71-3.48; P = .26) or OS (HR = 1.18; 95% CI = 0.64-2.18; P = .60). Subgroup analysis suggested that nuclear expression of GPER1 was not associated with OS (HR = 0.91; 95% CI = 0.77-1.08; P = .30), but high expression of cytoplasmic GPER1 was significantly associated with longer OS (HR = 0.69; 95% CI = 0.55-0.86; P = .001). Furthermore, the association of GPER1 mRNA and OS of BC patients was analyzed using bc-GenExMiner v4.6. Two data sets involving 4016 patients were included in the analysis. The targeted prognostic analysis results showed that high mRNA expression of GPER1 was predictive of better OS in BC patients (HR = 0.71; 95% CI = 0.59-0.86; P = .0005), which was remarkably similar to the result of cytoplasmic GPER1. Further subgroup analysis demonstrated that high mRNA expression of GPER1 was predictive of better OS in estrogen receptor (ER)-positive, but not ER-negative or triple-negative BC patients. Conclusions: High mRNA and cytoplasmic protein expression of GPER1 were predictive of better OS of BC patients. |
4,718 | An optimized sono-heterogeneous Fenton degradation of olive-oil mill wastewater organic matter by a new magnetic glutaraldehyde-crosslinked developed cellulose | The present study highlights the olive mill wastewater (OMW) treatment characteristics of a sono-heterogeneous Fenton process using a newly designed [GTA-(PDA-g-DAC)@Fe3O4] catalyst, characterized by Fourier transform infrared (FTIR) spectroscopy, X-ray diffraction (XRD), thermogravimetric analysis (TGA), magnetic property measurements, and point of zero charge (pHpzc) analysis. A preliminary removal study showed that significant degradation efficiency (75%) occurred when combining the magnetic synthesized catalyst [GTA-(PDA-g-DAC)@Fe3O4] ([catalyst] = 2 g/L) with US/H2O2 while maintaining an ultrasonic (US) power of 500 W L-1. The values obtained were 13% with US only, 18% with H2O2/US, 28% with US/Fe3O4, and 35% with US/Fe3O4/H2O2. The catalytic findings showed that [GTA-(PDA-g-DAC)@Fe3O4] exhibited good properties for the degradation of OMW compounds. Coupling the sonocatalytic process with extra oxidant addition resulted in substantial degradation levels. For instance, the combined effect of the optimized degradation parameters (H2O2 10 mM, [GTA-(PDA-g-DAC)@Fe3O4] nanocomposites 2.5 g/L, pH 3, and T 35 °C for 70 min) resulted in an almost complete mineralization of the aqueous OMW solution, followed by significant decolorization. Oxidation results exhibited efficient degradation rates: the total phenolic compounds (TPC), total amino compounds (TAC), and chemical oxygen demand (COD) oxidation rates were 89.88%, 92.75%, and 95.66%, respectively, following the optimized sono-heterogeneous catalytic Fenton process. The prepared magnetic catalyst exhibited good stability during repeated cycles. The gathered findings give evidence that the sono-heterogeneous catalytic Fenton process is a promising treatment technology for OMW effluents. |
4,719 | Magnetic diffusion: Scalability, reliability, and QoS of data dissemination mechanisms for wireless sensor networks | Envisioning a new generation of sensor network applications in healthcare and workplace safety, we seek mechanisms that provide timely and reliable transmissions of mission-critical data. Inspired by the physics of magnetism, we propose a simple diffusion-based data dissemination mechanism, referred to as magnetic diffusion (MD). In this scheme, the data sink, functioning like a magnet, propagates the magnetic charge to set up the magnetic field. Under the influence of the magnetic field, the sensor data, functioning like metallic nails, are attracted towards the sink. We compare MD to the state-of-the-art mechanisms and find that MD: (1) performs the best in timely delivery of data, (2) achieves high data reliability in the presence of network dynamics, and yet (3) works as energy-efficiently as the state of the art. These results suggest that MD is an effective data dissemination solution for mission-critical applications. (c) 2006 Elsevier B.V. All rights reserved. |
4,720 | A Practical Data Classification Framework for Scalable and High Performance Chip-Multiprocessors | State-of-the-art chip multiprocessor (CMP) proposals emphasize general optimizations designed to deliver computing power for many types of applications. This approach potentially misses significant performance improvements that leverage application-specific characteristics such as data access behavior. In this paper, we demonstrate how scalable and high-performance parallel systems can be built by classifying data accesses into different categories and treating them differently. We develop a novel compiler-based approach to speculatively detect a data classification termed practically private, which we demonstrate is ubiquitous in a wide range of parallel applications. Leveraging this classification provides efficient solutions to mitigate data access latency and coherence overhead in today's many-core architectures. While the proposed data classification scheme can be applied to many micro-architectural constructs including the TLB, coherence directory, and interconnect, we demonstrate its potential through an efficient cache coherence design. Specifically, we show that the compiler-assisted mechanism reduces coherence traffic by an average of 46% and achieves up to 12%, 8%, and 5% performance improvement over shared, private, and state-of-the-art NUCA-based caching, respectively, depending on the scenario. |
4,721 | Open Innovation Readiness Assessment within Students in Poland: Investigating State-of-the-Art and Challenges | In light of Poland's innovation performance level being below 70% of the EU average, open innovation can be a key path for innovation capacity increase. This paper explores the readiness of students in Poland for open innovation (OI). The study is based on a survey of a sample of 500 students using the Computer-Assisted Web Interview research technique. The main aim of this paper is to investigate Polish students' attitude to open innovation-in particular in terms of social product development, crowdsourcing, crowdfunding, and the sharing economy-to assess the state-of-the-art and identify challenges. Students are selected as the target group because they are open-minded, eager to use new solutions, and will soon enter the business sector to either become the staff of companies or set up their own startups or SMEs. However, the study shows that Polish students, if they use the OI-based platforms at all, use them passively. The key barriers identified within this study are a lack of knowledge about the open innovation paradigm, its elements and opportunities, and an issue of trust. Therefore, a change of mindset, the adjustment of universities' curricula, and the development of open innovation culture are critical. |
4,722 | PROTON plus : A Placement and Routing Tool for 3D Optical Networks-on-Chip with a Single Optical Layer | Optical Networks-on-Chip (ONoCs) are a promising technology to overcome the bottleneck of low bandwidth of electronic Networks-on-Chip. Recent research discusses power and performance benefits of ONoCs based on their system-level design, while layout effects are typically overlooked. As a consequence, laser power requirements are inaccurately computed from the logic scheme but do not consider the layout. In this article, we propose PROTON+, a fast tool for placement and routing of 3D ONoCs minimizing the total laser power. Using our tool, the required laser power of the system can be decreased by up to 94% compared to a state-of-the-art manually designed layout. In addition, with the help of our tool, we study the physical design space of ONoC topologies. For this purpose, topology synthesis methods (e.g., global connectivity and network partitioning) as well as different objective function weights are analyzed in order to minimize the maximum insertion loss and ultimately the system's laser power consumption. For the first time, we study optimal positions of memory controllers. A comparison of our algorithm to a state-of-the-art placer for electronic circuits shows the need for a different set of tools custom-tailored for the particular requirements of optical interconnects. |
4,723 | Real-Time Radio Technology and Modulation Classification via an LSTM Auto-Encoder | Identification of the type of communication technology and/or modulation scheme based on detected radio signal are challenging problems encountered in a variety of applications including spectrum allocation and radio interference mitigation. They are rendered difficult due to a growing number of emitter types and varied effects of real-world channels upon the radio signal. Existing spectrum monitoring techniques are capable of acquiring massive amounts of radio and real-time spectrum data using compact sensors deployed in a variety of settings. However, state-of-the-art methods that use such data to classify emitter types and detect communication schemes struggle to achieve required levels of accuracy at a computational efficiency that would allow their implementation on low-cost computational platforms. In this paper, we present a learning framework based on an LSTM denoising auto-encoder designed to automatically extract stable and robust features from noisy radio signals, and infer modulation or technology type using the learned features. The algorithm utilizes a compact neural network architecture readily implemented on a low-cost computational platform while exceeding state-of-the-art accuracy. Results on realistic synthetic as well as over-the-air radio data demonstrate that the proposed framework reliably and efficiently classifies received radio signals, often demonstrating superior performance compared to state-of-the-art methods. |
4,724 | Sampling biases obscure the early diversification of the largest living vertebrate group | Extant ray-finned fishes (Actinopterygii) dominate marine and freshwater environments, yet spatio-temporal diversity dynamics following their origin in the Palaeozoic are poorly understood. Previous studies investigate face-value patterns of richness, with only qualitative assessment of biases acting on the Palaeozoic actinopterygian fossil record. Here, we investigate palaeogeographic trends, reconstruct local richness and apply richness estimation techniques to a recently assembled occurrence database for Palaeozoic ray-finned fishes. We identify substantial fossil record biases, such as geographical bias in sampling centred around Europe and North America. Similarly, estimates of diversity are skewed by extreme unevenness in the occurrence distributions, reflecting historical biases in sampling and taxonomic practices, to the extent that evenness has an overriding effect on diversity estimates. Other than a genuine rise in diversity in the Tournaisian following the end-Devonian mass extinction, diversity estimates for Palaeozoic actinopterygians appear to lack biological signal, are heavily biased and are highly dependent on sampling. Increased sampling of poorly represented regions and expanding sampling beyond the literature to include museum collection data will be critical in obtaining accurate estimates of Palaeozoic actinopterygian diversity. In conjunction, applying diversity estimation techniques to well-sampled regional subsets of the 'global' dataset may identify accurate local diversity trends. |
4,725 | Design of a Host Interface Logic for GC-Free SSDs | Garbage collection (GC) and resource contention on I/O buses (channels) are among the critical bottlenecks in solid-state drives (SSDs) that cannot be easily hidden. Most existing I/O scheduling algorithms in the host interface logic (HIL) of state-of-the-art SSDs are oblivious to such low-level performance bottlenecks in SSDs. As a result, SSDs may violate quality of service (QoS) requirements by not being able to meet the deadlines of I/O requests. In this paper, we propose a novel host interface I/O scheduler that is both GC aware and QoS aware. The proposed scheduler redistributes the GC overheads across noncritical I/O requests and reduces channel resource contention. Our experiments with workloads from various application domains revealed that the proposed client-level SSD scheduler reduces the standard deviation for latency by 52.5% and the worst-case latency by 86.6%, compared to the state-of-the-art I/O schedulers used for the HIL. In addition, for I/O requests smaller than a superpage, the proposed scheduler avoids channel resource conflicts and reduces latency by 29.2% in comparison to the state-of-the-art I/O schedulers. Furthermore, we present an extension of the proposed I/O scheduler for enterprise SSDs based on the NVMe protocol. |
4,726 | Spintronic Integrate-Fire-Reset Neuron with Stochasticity for Neuromorphic Computing | Spintronics has been recently extended to neuromorphic computing because of its energy efficiency and scalability. However, a biorealistic spintronic neuron with probabilistic "spiking" and a spontaneous reset functionality has not been demonstrated yet. Here, we propose a biorealistic spintronic neuron device based on the heavy metal (HM)/ferromagnet (FM)/antiferromagnet (AFM) spin-orbit torque (SOT) heterostructure. The spintronic neuron can autoreset itself after firing due to the exchange bias of the AFM. The firing process is inherently stochastic because of the competition between the SOT and AFM pinning effects. We also implement a restricted Boltzmann machine (RBM) and stochastic integration multilayer perceptron (SI-MLP) using our proposed neuron. Despite the bit-width limitation, the proposed spintronic model can achieve an accuracy of 97.38% in pattern recognition, which is even higher than the baseline accuracy (96.47%). Our results offer a spintronic device solution to emulate biologically realistic spiking neurons. |
4,727 | Graph-Based Region and Boundary Aggregation for Biomedical Image Segmentation | Segmentation is a fundamental task in biomedical image analysis. Unlike the existing region-based dense pixel classification methods or boundary-based polygon regression methods, we build a novel graph neural network (GNN) based deep learning framework with multiple graph reasoning modules to explicitly leverage both region and boundary features in an end-to-end manner. The mechanism extracts discriminative region and boundary features, referred to as initialized region and boundary node embeddings, using a proposed Attention Enhancement Module (AEM). The weighted links between cross-domain nodes (region and boundary feature domains) in each graph are defined in a data-dependent way, which retains both global and local cross-node relationships. The iterative message aggregation and node update mechanism can enhance the interaction between each graph reasoning module's global semantic information and local spatial characteristics. Our model, in particular, is capable of concurrently addressing region and boundary feature reasoning and aggregation at several different feature levels due to the proposed multi-level feature node embeddings in different parallel graph reasoning modules. Experiments on two types of challenging datasets demonstrate that our method outperforms state-of-the-art approaches for segmentation of polyps in colonoscopy images and of the optic disc and optic cup in colour fundus images. The trained models will be made available at: https://github.com/smallmax00/Graph_Region_Boudnary |
4,728 | Locally Affine Invariant Descriptors for Shape Matching and Retrieval | This work proposes novel locally affine invariant descriptors for shape representation. The descriptors are theoretically simple and solid, derived from the matrix theories. They can be used for matching and retrieval of shapes under affine transformation, articulated motion or nonrigid deformation. Comparisons of the work with the state-of-the-art shape descriptors are performed based on synthetic and some well-known databases. The experiments validate that the proposed descriptors achieve higher retrieval accuracy and have faster running speed than most of other approaches. |
4,729 | Rinderidine and oblongifolidine, new pyrrolizidine alkaloids from Rindera oblongifolia M. Popov, and their absolute configurations | The alkaloid composition of Rindera oblongifolia was studied, from which the pyrrolizidine alkaloids echinatine and trachelanthamine N-oxide, as well as two new quaternary salts, namely rinderidine and oblongifolidine, were isolated. The structures of the isolated new alkaloids were elucidated by NMR spectroscopy. The absolute configurations of lindelofine, trachelanthamine N-oxide, rinderidine and oblongifolidine were established by single-crystal X-ray diffraction as 1R,4R,8R,2'S,3'R; 1R,4S,8S,2'S,3'R; 4R,7S,8R,2'S,3'S; and 4R,7S,8R,2'S,3'S (7''S,8''R), respectively. Both new pyrrolizidine alkaloids showed no cytotoxicity against four cancer cell lines: HeLa, HEp-2, HBL-100 and CCRF-CEM. |
4,730 | Making Polypharmacy Safer for Children with Medical Complexity | A 14-year-old patient who has severe cerebral palsy and seizures, requested that his parents speak to his pediatrician about a medication to help with sleep. He already uses 13 other medications, including anticonvulsants, analgesics, and respiratory medications, and 5 additional as needed (PRN) medications (Figure 1). He has a vagal nerve stimulator and a gastrostomy tube. His parents had researched several sleep medications, and they were interested in discussing trazodone therapy for his sleep issues. Clinicians who prescribe medications to children with medical complexity (CMC) frequently must consider the question: How does one safely prescribe a patient like ours a new medication, like the sleep medication trazodone, amidst an already complex background of polypharmacy? |
4,731 | Antithrombotic therapy for durable left ventricular assist devices - current strategies and future directions | Left ventricular assist devices (LVADs) improve survival and quality of life for patients with advanced heart failure but are associated with high rates of thromboembolic and hemorrhagic complications. Antithrombotic therapy is required following LVAD implantation, though practices vary. Identifying a therapeutic strategy that minimizes the risks of thromboembolic and hemorrhagic complications is critical to optimizing patient outcomes and is an area of active investigation. This paper reviews strategies for initiating and maintaining antithrombotic therapy in durable LVAD recipients, focusing on those with centrifugal-flow devices. |
4,732 | A Nonlocal Poisson Denoising Algorithm Based on Stochastic Distances | In this letter, a new version of the Nonlocal-Means (NLM) algorithm based on stochastic distances is proposed for Poisson denoising. NLM estimates a noise-free pixel as a weighted average of image pixels, where each pixel is weighted according to the similarity between image patches. In this work, stochastic distances are used as a new similarity measure. We explored the use of four stochastic distances for which closed-form solutions were found for the Poisson distribution. This approach was demonstrated to be competitive with related state-of-the-art methods. |
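The abstract above combines NLM patch averaging with a closed-form stochastic distance between Poisson-distributed patches. A minimal 1-D sketch of that idea follows; this is not the authors' implementation, and the symmetrised Poisson Kullback-Leibler divergence used here is only one example of the closed-form distances such a scheme could employ (function names and parameters are illustrative):

```python
import numpy as np

def poisson_kl_sym(a, b, eps=1e-8):
    """Symmetrised KL divergence between Poisson(a) and Poisson(b),
    computed elementwise; one closed-form stochastic distance."""
    a = a + eps
    b = b + eps
    return 0.5 * ((b - a + a * np.log(a / b)) + (a - b + b * np.log(b / a)))

def nlm_poisson_1d(signal, patch=1, search=5, h=1.0):
    """NLM estimate of a 1-D Poisson-noisy signal: each pixel is a
    weighted average over a search window, weighted by a stochastic
    distance between the surrounding patches."""
    n = len(signal)
    padded = np.pad(signal.astype(float), patch, mode='reflect')
    out = np.empty(n)
    for i in range(n):
        p_i = padded[i:i + 2 * patch + 1]          # patch around pixel i
        lo, hi = max(0, i - search), min(n, i + search + 1)
        weights, values = [], []
        for j in range(lo, hi):
            p_j = padded[j:j + 2 * patch + 1]      # candidate patch
            d = poisson_kl_sym(p_i, p_j).mean()    # patch dissimilarity
            weights.append(np.exp(-d / h))         # similarity kernel
            values.append(signal[j])
        weights = np.asarray(weights)
        out[i] = np.dot(weights, values) / weights.sum()
    return out
```

On a constant signal every patch distance is zero, so the estimate reproduces the input exactly; on noisy data the exponential weighting downweights dissimilar patches.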
4,733 | A 65-nm CMOS Lossless Bio-Signal Compression Circuit With 250 FemtoJoule Performance Per Bit | A 65nm CMOS integrated circuit implementation of a bio-physiological signal compression device is presented, reporting exceptionally low power and extremely low silicon area cost relative to state-of-the-art. A novel 'xor-log2-sub-band' data compression scheme is evaluated, achieving modest compression but with very low resource cost. With the intent to design the 'simplest useful compression algorithm', the outcome is demonstrated to be very favourable where power must be saved by trading off compression effort against data storage capacity or data transmission power, even where more complex algorithms can deliver higher compression ratios. A VLSI design and fabricated integrated circuit implementation are presented, and estimated performance gains and efficiency measures for various bio-medical use-cases are given. Power costs as low as 1.2 pJ per sample-bit are suggested for a 10 kSa/s data-rate whilst utilizing a power-gating scenario, dropping to 250 fJ/bit at continuous conversion data-rates of 5 MSa/s. This is achieved with a diminutive circuit area of 155 um(2). Both power and area appear to be state-of-the-art in terms of compression versus resource cost, and this yields benefit for system optimization. |
4,734 | Reinforcement learning combined with a fuzzy adaptive learning control network (FALCON-R) for pattern classification | Reinforcement learning has been widely used for applications in planning, control, and decision making. Rather than using instructive feedback as in supervised learning, reinforcement learning makes use of evaluative feedback to guide the learning process. In this paper, we formulate a pattern classification problem as a reinforcement learning problem. The problem is realized with a temporal difference method in a FALCON-R network. FALCON-R is constructed by integrating two basic FALCON-ART networks as function approximators, where one acts as a critic network (fuzzy predictor) and the other as an action network (fuzzy controller). This paper serves as a guideline in formulating a classification problem as a reinforcement learning problem using FALCON-R. The strengths of applying the reinforcement learning method to the pattern classification application are demonstrated. We show that such a system can converge faster, is able to escape from local minima, and has excellent disturbance rejection capability. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved. |
4,735 | Finding Faster Configurations Using FLASH | Finding good configurations of a software system is often challenging since the number of configuration options can be large. Software engineers often make poor choices about configuration or, even worse, they usually use a sub-optimal configuration in production, which leads to inadequate performance. To assist engineers in finding the better configuration, this article introduces Flash, a sequential model-based method that sequentially explores the configuration space by reflecting on the configurations evaluated so far to determine the next best configuration to explore. Flash scales up to software systems that defeat the prior state-of-the-art model-based methods in this area. Flash runs much faster than existing methods and can solve both single-objective and multi-objective optimization problems. The central insight of this article is to use the prior knowledge of the configuration space (gained from prior runs) to choose the next promising configuration. This strategy reduces the effort (i.e., number of measurements) required to find the better configuration. We evaluate Flash using 30 scenarios based on 7 software systems to demonstrate that Flash saves effort in 100 and 80 percent of cases in single-objective and multi-objective problems respectively by up to several orders of magnitude compared to state-of-the-art techniques. |
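The Flash abstract above describes a sequential model-based loop: reflect on the configurations measured so far, let a cheap surrogate rank the rest, then measure only the most promising one. The sketch below illustrates that loop under stated assumptions: it is not the paper's implementation (Flash uses a CART surrogate; a 1-nearest-neighbour predictor is substituted here to stay dependency-free), and all names and parameters are illustrative:

```python
import random

def flash_like_search(configs, measure, budget=10, seed=0):
    """Sequential model-based search in the spirit of Flash (sketch).
    `configs` is a list of numeric tuples; `measure` is the expensive
    evaluation to minimise; `budget` is the total number of measurements."""
    rng = random.Random(seed)
    pool = list(configs)
    rng.shuffle(pool)
    evaluated = {c: measure(c) for c in pool[:3]}   # small random warm-up
    pool = pool[3:]

    def predict(c):
        # Cheap surrogate: value of the nearest already-measured config.
        nearest = min(evaluated,
                      key=lambda e: sum((a - b) ** 2 for a, b in zip(c, e)))
        return evaluated[nearest]

    for _ in range(budget - 3):
        if not pool:
            break
        best_guess = min(pool, key=predict)   # surrogate ranks the pool
        evaluated[best_guess] = measure(best_guess)  # one real measurement
        pool.remove(best_guess)
    return min(evaluated, key=evaluated.get), evaluated

# Hypothetical usage: minimise a toy cost over a 5x5 configuration grid.
grid = [(x, y) for x in range(5) for y in range(5)]
cost = lambda c: c[0] ** 2 + c[1] ** 2
best, evaluated = flash_like_search(grid, cost, budget=10)
```

The effort saving the paper reports comes from exactly this pattern: only `budget` real measurements are spent, while the surrogate does the rest of the ranking for free.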
4,736 | Estimating A Reference Standard Segmentation With Spatially Varying Performance Parameters: Local MAP STAPLE | We present a new algorithm, called local MAP STAPLE, to estimate from a set of multi-label segmentations both a reference standard segmentation and spatially varying performance parameters. It is based on a sliding window technique to estimate the segmentation and the segmentation performance parameters for each input segmentation. In order to allow for optimal fusion from the small amount of data in each local region, and to account for the possibility of labels not being observed in a local region of some (or all) input segmentations, we introduce prior probabilities for the local performance parameters through a new maximum a posteriori formulation of STAPLE. Further, we propose an expression to compute confidence intervals in the estimated local performance parameters. We carried out several experiments with local MAP STAPLE to characterize its performance and value for local segmentation evaluation. First, with simulated segmentations with known reference standard segmentation and spatially varying performance, we show that local MAP STAPLE performs better than both STAPLE and majority voting. Then we present evaluations with data sets from clinical applications. These experiments demonstrate that spatial adaptivity in segmentation performance is an important property to capture. We compared the local MAP STAPLE segmentations to STAPLE, and to previously published fusion techniques and demonstrate the superiority of local MAP STAPLE over other state-of-the-art algorithms. |
4,737 | SLAP: Simpler, Improved Private Stream Aggregation from Ring Learning with Errors | Private Stream Aggregation (PSA) protocols perform secure aggregation of time-series data without leaking information about users' inputs to the aggregator. Previous work in post-quantum PSA used the Ring Learning with Errors (RLWE) problem indirectly via homomorphic encryption (HE), leading to a needlessly complex and intensive construction. In this work, we present SLAP, the first PSA protocol that is directly constructed from the RLWE problem to gain post-quantum security. By nature of our white-box approach, SLAP is simpler and more efficient than previous PSA that uses RLWE indirectly through the black box of HE. We also show how to apply state-of-the-art optimizations for lattice-based cryptography to greatly improve the practical performance of SLAP. The communication overhead of SLAP is much less than in previous work, with decreases of up to 99.96% in ciphertext sizes as compared to previous work in RLWE-based PSA. We demonstrate a speedup of 20.76x over the previous state-of-the-art RLWE-based PSA work's aggregation and show that SLAP achieves a throughput of 390,691 aggregations per second for 1000 users. We also compare SLAP to other state-of-the-art post-quantum PSA and show that SLAP is comparable in latency and shows improvement in throughput when compared to these works, and we compare the qualitative features of these schemes with regard to practical usability. |
4,738 | Modeling of a Novel Atmospheric SOFC/GT Hybrid Process and Comparison with State-of-the-Art SOFC System Concepts | A novel concept for an atmospheric solid oxide fuel cell (SOFC) hybrid system applying a sub-atmospheric gas turbine is proposed. Based on a developed process model suitable operating conditions are studied. The resulting performance is compared to different state-of-the-art SOFC system concepts. The comparison shows that pressurized SOFC/GT systems offer the highest electric efficiency. The novel concept as well as a two-stage SOFC system offer lower efficiencies, but still well above 60%. However, avoiding the complex system requirements of pressurized SOFC/GT systems, these concepts are promising for highly efficient electricity generation. |
4,739 | Deep Volumetric Descriptor Learning for Dense Correspondence of Cone-Beam Computed Tomography via Spectral Maps | The deep neural network has achieved great success in 3D volumetric correspondence. These methods infer the dense displacement or velocity fields directly from the extracted volumetric features without addressing the intrinsic structure correspondence, being prone to shape and pose variations. On the other hand, the spectral maps address the intrinsic structure matching in the low dimensional embedding space, remain less involved in volumetric image correspondence. This paper presents an unsupervised deep volumetric descriptor learning neural network via the low dimensional spectral maps to address the dense volumetric correspondence. The neural network is optimized by a novel criterion on descriptor alignments in the spectral domain regarding the supervoxel graph. Aside from the deep convolved multi-scale features, we explicitly address the supervoxel-wise spatial and cross-channel dependencies to enrich deep descriptors. The dense volumetric correspondence is formulated as the low-dimensional spectral mapping. The proposed approach has been applied to both synthetic and clinically obtained cone-beam computed tomography images to establish dense supervoxel-wise and up-scaled voxel-wise correspondences. Extensive series of experimental results demonstrate the contribution of the proposed approach in volumetric descriptor extraction and consistent correspondence, facilitating attribute transfer for segmentation and landmark location. The proposed approach performs favorably against the state-of-the-art volumetric descriptors and the deep registration models, being resilient to pose or shape variations and independent of the prior transformations. |
4,740 | Differential expression and roles of Huntingtin and Huntingtin-associated protein 1 in the mouse and primate brains | Huntingtin-associated protein 1 (HAP1) is the first identified protein whose function is affected by its abnormal interaction with mutant huntingtin (mHTT), which causes Huntington disease. However, the expression patterns of Hap1 and Htt in the rodent brain are not correlated. Here we found that the primate HAP1, unlike the rodent Hap1, is correlatively expressed with HTT in the primate brains. CRISPR/Cas9 targeting revealed that HAP1 deficiency in the developing human neurons did not affect neuronal differentiation and gene expression as seen in the mouse neurons. However, deletion of HAP1 exacerbated neurotoxicity of mutant HTT in the organotypic brain slices of adult monkeys. These findings demonstrate differential HAP1 expression and function in the mouse and primate brains, and suggest that interaction of HAP1 with mutant HTT may be involved in mutant HTT-mediated neurotoxicity in adult primate neurons. |
4,741 | An effective 3-D Fast Fourier Transform framework for multi-GPU accelerated distributed-memory systems | This paper introduces an efficient and flexible 3D FFT framework for state-of-the-art multi-GPU distributed-memory systems. In contrast to the traditional pure MPI implementation, the multi-GPU distributed-memory systems can be exploited by employing a hybrid multi-GPU programming model that combines MPI with OpenMP to achieve effective communication. An asynchronous strategy that creates multiple streams and threads to reduce blocking time is adopted to accelerate intra-node communication. Furthermore, we combine our scheme with the GPU-Aware MPI implementation to perform GPU-GPU data transfers without CPU involvement. We also optimize the local FFT and transpose by creating fast parallel kernels to accelerate the total transform. Results show that our framework outperforms the state-of-the-art distributed 3D FFT library, achieving up to 2x speedup on a single node and 1.65x using two nodes. |
4,742 | Pigeon pea penta- and hexapeptides with antioxidant properties also inhibit renin and angiotensin-I-converting enzyme activities | Pigeon pea protein was sequentially digested with pepsin followed by pancreatin and the hydrolysate separated into 18 fractions using reversed-phase high-performance liquid chromatography. Fractions were analyzed for in vitro antioxidant properties (radical scavenging, metal chelation, and ferric iron reducing ability) in addition to inhibition of renin and angiotensin-converting enzyme (ACE). The most active fractions were analyzed by mass spectrometry, followed by identification of 10 peptide sequences (7 pentapeptides and 3 hexapeptides). All the peptides showed a wide range of multifunctional activity by scavenging hydroxyl (31.9-66.8%) and superoxide (25.6-100.0%) radicals in addition to ACE inhibition (7.4-100%), with significant (p < .05) differences between the peptides. AGVTVS, TKDIG, TSRLG, GRIST, and SGEKI were the most active; however, AGVTVS had the highest hydrophobic residue content and exhibited the strongest activity against ACE and renin, as well as superoxide and hydroxyl radicals. PRACTICAL APPLICATIONS: Food peptides, especially those from legume proteins, are increasingly attracting researchers' attention. Enzymatic digestion as well as high-performance liquid chromatography (HPLC) purification have become important processes used to separate peptides with significant biological activities and health-promoting effects. Useful information exists regarding the bioactive and functional (in vitro antioxidant, antidiabetic, in vitro/in vivo antihypertensive) properties of hydrolyzed and ultra-filtered pigeon pea fractions, but scant research output still exists for purified pigeon pea peptides establishing their therapeutic potential. The present study aimed to separate peptide fractions from pigeon pea hydrolysate and identify available amino acid sequences from the parent protein. Therefore, peptide sequences generated from the most bioactive fractions showed prospects for the expanded industrial utilization of pigeon pea, further promoting its application as a functional ingredient or additive for alleviating angiotensin-converting enzyme-related diseases. |
4,743 | Adobe Boxes: Locating Object Proposals Using Object Adobes | Despite previous efforts on object proposals, the detection rates of the existing approaches are still not satisfactory. To address this, we propose Adobe Boxes to efficiently locate potential objects with fewer proposals, in terms of searching for the object adobes that are the salient object parts easy to perceive. Because of the visual difference between the object and its surroundings, an object adobe obtained from the local region has a high probability of being part of an object, which is capable of depicting the locative information of the proto-object. Our approach comprises three main procedures. First, the coarse object proposals are acquired by employing randomly sampled windows. Then, based on local-contrast analysis, the object adobes are identified within the enlarged bounding boxes that correspond to the coarse proposals. The final object proposals are obtained by converging the bounding boxes to tightly surround the object adobes. Meanwhile, our object adobes can also refine the detection rate of most state-of-the-art methods as a refinement approach. The extensive experiments on four challenging datasets (PASCAL VOC2007, VOC2010, VOC2012, and ILSVRC2014) demonstrate that the detection rate of our approach generally outperforms the state-of-the-art methods, especially with a relatively small number of proposals. The average time consumed on one image is about 48 ms, which nearly meets the real-time requirement. |
4,744 | Stacked Sequential Scale-Space Taylor Context | We analyze sequential image labeling methods that sample the posterior label field in order to gather contextual information. We propose an effective method that extracts local Taylor coefficients from the posterior at different scales. Results show that our proposal outperforms state-of-the-art methods on MSRC-21, CAMVID, eTRIMS8 and KAIST2 data sets. |
4,745 | Exploiting Negative Evidence for Deep Latent Structured Models | The abundance of image-level labels and the lack of large scale detailed annotations (e.g. bounding boxes, segmentation masks) promotes the development of weakly supervised learning (WSL) models. In this work, we propose a novel framework for WSL of deep convolutional neural networks dedicated to learn localized features from global image-level annotations. The core of the approach is a new latent structured output model equipped with a pooling function which explicitly models negative evidence, e.g. a cow detector should strongly penalize the prediction of the bedroom class. We show that our model can be trained end-to-end for different visual recognition tasks: multi-class and multi-label classification, and also structured average precision (AP) ranking. Extensive experiments highlight the relevance of the proposed method: our model outperforms state-of-the-art results on six datasets. We also show that our framework can be used to improve the performance of state-of-the-art deep models for large scale image classification on ImageNet. Finally, we evaluate our model for weakly supervised tasks: in particular, a direct adaptation for weakly supervised segmentation provides a very competitive model. |
4,746 | Restricted random testing: Adaptive random testing by exclusion | Restricted Random Testing (RRT) is a new method of testing software that improves upon traditional Random Testing (RT) techniques. Research has indicated that failure patterns (portions of an input domain which, when executed, cause the program to fail or reveal an error) can influence the effectiveness of testing strategies. For certain types of failure patterns, it has been found that a widespread and even distribution of test cases in the input domain can be significantly more effective at detecting failure compared with ordinary RT. Testing methods based on RT, but which aim to achieve even and widespread distributions, have been called Adaptive Random Testing (ART) strategies. One implementation of ART is RRT. RRT uses exclusion zones around executed, but non-failure-causing, test cases to restrict the regions of the input domain from which subsequent test cases may be drawn. In this paper, we introduce the motivation behind RRT, explain the algorithm and detail some empirical analyses carried out to examine the effectiveness of the method. Two versions of RRT are presented: Ordinary RRT (ORRT) and Normalized RRT (NRRT). The two versions share the same fundamental algorithm, but differ in their treatment of non-homogeneous input domains. Investigations into the use of alternative exclusion shapes are outlined, and a simple technique for reducing the computational overheads of RRT, prompted by the alternative exclusion shape investigations, is also explained. The performance of RRT is compared with RT and another ART method based on maximized minimum test case separation (DART), showing excellent improvement over RT and a very favorable comparison with DART. |
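The RRT abstract above is concrete enough to sketch: draw random candidates, but reject any that fall inside an exclusion zone around a previously executed, non-failure-causing test case. The following is a minimal 1-D sketch under stated assumptions (a single interval domain, equal-sized exclusion zones whose total size tracks a target ratio); function names and parameters are illustrative, not the paper's:

```python
import random

def rrt_next_test_case(executed, domain=(0.0, 1.0), target_ratio=1.0,
                       max_tries=10000):
    """Generate the next RRT test case in a 1-D input domain.

    Each previously executed (non-failure-causing) test case gets an
    exclusion interval; the total excluded length is target_ratio times
    the domain length, split evenly among the executed cases.  Candidates
    are drawn uniformly and rejected until one lies outside every zone.
    """
    lo, hi = domain
    n = len(executed)
    radius = 0.0 if n == 0 else target_ratio * (hi - lo) / (2 * n)
    for _ in range(max_tries):
        candidate = random.uniform(lo, hi)
        if all(abs(candidate - t) > radius for t in executed):
            return candidate
    return random.uniform(lo, hi)  # give up: fall back to plain RT

# Build a small suite: each new case avoids the zones of earlier ones.
random.seed(1)
executed = []
for _ in range(5):
    executed.append(rrt_next_test_case(executed))
```

Because the zones shrink as the suite grows (radius is proportional to 1/n), the total excluded area stays roughly constant, which is what spreads the test cases widely across the domain.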
4,747 | Low-recovery, -energy-consumption, -emission hybrid systems of seawater desalination: Energy optimization and cost analysis | Ultraconcentrated seawater disposal is detrimental to marine biota, and carbon emissions from desalination processes are detrimental to the atmosphere. These detrimental effects are expected to increase, given the continuously growing global water demand and the associated water stress problems caused by water scarcity and population and economic growth. Along with political inclination to impose strict environmental regulations and a carbon tax on the price of freshwater, developing low-energy-consumption, low-carbon-emission desalination systems operating at low recovery is the future of SW desalination. Such desalination systems can be achieved by integrating electrodialysis, which does not have driving force limitations, with nanofiltration and brackish water reverse osmosis (RO), which provide low-energy-consumption desalination regions, to decrease energy consumption below that of current state-of-the-art RO systems. In this study, iterative optimization algorithms were developed for hybrid desalination systems. As a result, energy consumptions as low as 1.3 kWh/m(3) were achieved at recoveries < 30%. Despite the higher cost of freshwater production compared with that of state-of-the-art RO systems, owing to utilization of larger membrane areas, the hybrid systems reduced carbon dioxide emissions and brine concentrations from 63 to 26 and 41 to 34%, respectively. |
4,748 | A Probabilistic Model for Automatic Segmentation of the Esophagus in 3-D CT Scans | Being able to segment the esophagus without user interaction from 3-D CT data is of high value to radiologists during oncological examinations of the mediastinum. The segmentation can serve as a guideline and prevent confusion with pathological tissue. However, limited contrast to surrounding structures and versatile shape and appearance make segmentation a challenging problem. This paper presents a multistep method. First, a detector that is trained to learn a discriminative model of the appearance is combined with an explicit model of the distribution of respiratory and esophageal air. In the next step, prior shape knowledge is incorporated using a Markov chain model. We follow a "detect and connect" approach to obtain the maximum a posteriori estimate of the approximate esophagus shape from hypotheses about the esophagus contour in axial image slices. Finally, the surface of this approximation is nonrigidly deformed to better fit the boundary of the organ. The method is compared to an alternative approach that uses a particle filter instead of a Markov chain to infer the approximate esophagus shape, to the performance of a human observer, and also to state-of-the-art methods, which are all semiautomatic. Cross-validation on 144 CT scans showed that the Markov chain based approach clearly outperforms the particle filter. It segments the esophagus with a mean error of 1.80 mm in less than 16 s on a standard PC. This is only 1 mm above the interobserver variability and can compete with the results of previously published semiautomatic methods.
4,749 | Endoplasmic reticulum stress in diabetic mouse or glycated LDL-treated endothelial cells: protective effect of Saskatoon berry powder and cyanidin glycans | Endoplasmic reticulum (ER) stress is associated with insulin resistance and diabetic cardiovascular complications, and the mechanisms of and remedies for ER stress remain to be determined. The results of the present study demonstrated that the levels of ER stress or unfolded protein response (UPR) markers, the intensity of thioflavin T (ThT) fluorescence and the abundances of GRP78/94, XBP-1 and CHOP proteins, were elevated in cardiovascular tissue of diabetic leptin receptor-deficient (db/db) mice. Cyanidin-3-glucoside (C3G) and cyanidin-3-galactoside (C3Ga) are major anthocyanins in Saskatoon berry (SB) powder. The administration of 5% SB powder for 4 weeks attenuated ThT fluorescence and the UPR markers in hearts and aortae of wild-type and db/db mice. Treatment with glycated low-density lipoprotein (gLDL) increased ThT intensity in human umbilical vein endothelial cells (ECs). Elevated UPR markers were detected in gLDL-treated ECs compared to control cultures. The involvement of ER stress in gLDL-treated ECs was supported by the finding that the addition of 4-phenyl butyric acid (a known ER stress antagonist) inhibited gLDL-induced increases in ER stress or UPR markers. C3G at 30 μM or C3Ga at 100 μM reached maximal inhibition of gLDL-induced increases in ThT, GRP78/94, XBP-1 and CHOP in ECs. The results demonstrated that ER stress was enhanced in cardiovascular tissue of db/db mice and in gLDL-treated ECs. SB powder or cyanidin glycans prevented the abnormal increases in ER stress and UPR markers in cardiovascular tissue of diabetic db/db mice or gLDL-treated ECs.
4,750 | Providers' reflections on infrastructure and improvements to promote access to care for Veterans experiencing housing instability in rural areas of the United States: A qualitative study | Veterans in rural areas of the United States face barriers to accessing healthcare and other services, which are intensified for those experiencing housing instability. Recent legislative acts have the potential to address obstacles faced by rural patients in the U.S. This study explores how infrastructure-including features related to the physical and digital environment-impacts the ability of rural Veterans experiencing housing instability to access healthcare and related services from the perspective of homeless service providers within the Veterans Health Administration (VHA). We conducted semi-structured telephone interviews (n = 22) with providers in high/low performing and/or resourced communities across the U.S. in May and June 2021 and analysed transcripts using template analysis. Themes described by providers highlight how infrastructure limitations in rural areas can exacerbate health disparities for Veterans experiencing housing instability, the impact of COVID-19 on service access, and recommendations to enhance service delivery. Providers suggested that VHA reconfigure where and how staff work, identify additional resources for transportation and/or alternative transportation models, and increase Veterans' access to technology and broadband Internet. Federal infrastructure investments should address challenges faced by Veterans experiencing housing instability in rural areas and the concerns of providers connecting them with care. |
4,751 | Ovarian vein thrombosis: A rare cause of abdominal pain as a complication of an elective abortion | Ovarian vein thrombosis (OVT) is a rare diagnosis. Patients can appear to be very uncomfortable on presentation with a physical examination that can mimic an acute abdomen. OVT is most often diagnosed during the postpartum period [Jenayah et al., 2015] and not typically seen during pregnancy or after procedures such as dilation and curettage (D&C). The complications from an OVT are significant and include sepsis, thrombophlebitis and pulmonary embolism [Harris et al., 2012]. Here we describe a case of OVT with an atypical presentation, diagnosed twenty-four hours after an elective D&C for a second trimester abortion. |
4,752 | Texture-Independent Long-Term Tracking Using Virtual Corners | Long-term tracking of an object, given only a single instance in an initial frame, remains an open problem. We propose a visual tracking algorithm, robust to many of the difficulties that often occur in real-world scenes. Correspondences of edge-based features are used to overcome the reliance on the texture of the tracked object and improve invariance to lighting. Furthermore, we address long-term stability, enabling the tracker to recover from drift and to provide redetection following object disappearance or occlusion. The two-module principle is similar to the successful state-of-the-art long-term TLD tracker; however, our approach offers better performance in benchmarks and extends to cases of low-textured objects. This becomes obvious in cases of plain objects with no texture at all, where the edge-based approach proves the most beneficial. We perform several different experiments to validate the proposed method. First, results on short-term sequences show the performance of tracking challenging (low-textured and/or transparent) objects that represent failure cases for competing state-of-the-art approaches. Second, long sequences are tracked, including one of almost 30 000 frames, which, to the best of our knowledge, is the longest tracking sequence reported to date. This tests the redetection and drift resistance properties of the tracker. Finally, we report the results of the proposed tracker on the VOT Challenge 2013 and 2014 data sets as well as on the VTB1.0 benchmark, and we show relative performance of the tracker compared with its competitors. All the results are comparable with the state of the art on sequences with textured objects and superior on non-textured objects. The new annotated sequences are made publicly available.
4,753 | High efficiency DCM buck converter with dynamic logic based adaptive switch-time control | This paper presents a dynamic logic based adaptive switch-time control circuit (DASTC) for fast control to minimize body diode conduction losses and dead-time of a buck regulator operating in discontinuous conduction mode. Dead-time is an important metric for improving the efficiency of low voltage converters. DASTC provides instant sensing and feedback based on the load to minimize dead-time significantly compared to prior art, and scales the low side power switch pulse width adaptively across load currents, just enough to discharge the stored inductor energy fully. Furthermore, the converter has an inherent pulse skipping mode to lower the switching frequency at light loads to enhance light load efficiency. The proposed 1.5 V buck regulator was fabricated in a 0.35 μm CMOS process with an input voltage range of 1.8-3 V and a load current range of 1-200 mA. Measurement results demonstrate a peak power efficiency of 94.6% and a high overall power efficiency (>~89%) for load currents greater than 5 mA for V_IN = 1.8 V. The proposed scheme achieves 4x (~5 ns) better dead-time and 4-5.5% better power efficiency compared to prior art over the load current range.
4,754 | iDNA-ABF: multi-scale deep biological language learning model for the interpretable prediction of DNA methylations | In this study, we propose iDNA-ABF, a multi-scale deep biological language learning model that enables the interpretable prediction of DNA methylations based on genomic sequences only. Benchmarking comparisons show that our iDNA-ABF outperforms state-of-the-art methods for different methylation predictions. Importantly, we show the power of deep language learning in capturing both sequential and functional semantics information from background genomes. Moreover, by integrating the interpretable analysis mechanism, we well explain what the model learns, helping us build the mapping from the discovery of important sequential determinants to the in-depth analysis of their biological functions. |
4,755 | Adversarial scratches: Deployable attacks to CNN classifiers | A growing body of work has shown that deep neural networks are susceptible to adversarial examples. These take the form of small perturbations applied to the model's input which lead to incorrect predictions. Unfortunately, most literature focuses on visually imperceivable perturbations to be applied to digital images that often are, by design, impossible to be deployed to physical targets. We present Adversarial Scratches: a novel L-0 black-box attack, which takes the form of scratches in images, and which possesses much greater deployability than other state-of-the-art attacks. Adversarial Scratches leverage Bezier Curves to reduce the dimension of the search space and possibly constrain the attack to a specific location. We test Adversarial Scratches in several scenarios, including a publicly available API and images of traffic signs. Results show that our attack achieves higher fooling rate than other deployable state-of-the-art methods, while requiring significantly fewer queries and modifying very few pixels. (C) 2022 Elsevier Ltd. All rights reserved. |
4,756 | Three-stage RGBD architecture for vehicle and pedestrian detection using convolutional neural networks and stereo vision | With the growth of autonomous vehicles and collision-avoidance systems, several approaches using deep learning and convolutional neural networks (CNNs) continually address accuracy improvement in obstacle detection. The authors introduce a three-stage architecture that adds side channels as low-level features to serve as input to existing CNNs. In a case study, the architecture is used to extract depth from stereo cameras, and then compose RGBD inputs to state-of-the-art CNNs to improve their vehicle and pedestrian detection accuracy. This can be achieved by simple modifications on the first layers of any existing CNN with RGB inputs. To validate the architecture, the state-of-the-art matching cost CNN and cascade residual learning, both specialist algorithms to extract depth information, were combined with the state-of-the-art Faster region-based CNN, MS-CNN, and Subcategory-aware Convolutional Neural Network (SubCNN) to yield the models to be tested using the KITTI dataset benchmark. In many cases, the accuracy (in terms of average precision) using their proposal outperforms the original scores in various scenarios of detection difficulty, reaching improvements up to +3.96% in the training and +1.50% in the testing KITTI datasets. This proposal also introduces efficient methods to initialise the weights of the depth convolutional filters during transfer learning using net surgery.
4,757 | A new ART-counterpropagation neural network for solving a forecasting problem | This study presents a novel adaptive resonance theory-counterpropagation neural network (ART-CPN) for solving forecasting problems. The network is based on the ART concept and the CPN learning algorithm for constructing the neural network. The vigilance parameter is used to automatically generate the nodes of the cluster layer for the CPN learning process. This process improves the initial weight problem and the adaptive nodes of the cluster layer (Kohonen layer). ART-CPN involves real-time learning and is capable of developing a more stable and plastic prediction model of input patterns by self-organization. The advantages of ART-CPN include the ability to cluster, learn and construct the network model for forecasting problems. The network was applied to solve real forecasting problems. The learning algorithm revealed better learning efficiency and good prediction performance. (C) 2004 Elsevier Ltd. All rights reserved.
4,758 | Effects of predator modulation and vector preference on pathogen transmission in plant populations | Biological control programs frequently rely on predators to control vector-borne pathogens through consumptive effects on vector abundance in agroecosystems. Meanwhile, the spread of vectored disease depends on the vector preference for host status (healthy or infected hosts). Yet, it is unclear how vector preferences alter the effectiveness of predators in controlling pathogen transmission. Therefore, we developed plant-vector-pathogen models to assess how pathogen transmission in plants is affected by variable predation rates and vector preferences for host status. Specifically, we examined the effects of predators on vector abundance and pathogen transmission under both a non-spatial model and a spatially structured metapopulation model. We showed that predators can decrease vector abundance and inhibit pathogen prevalence, whereas vector preference contributes profoundly to how effectively predators control the spread of vector-borne pathogens. Moreover, predation can increase the oscillation amplitude of pathogen prevalence in both plant and vector, suggesting that the inclusion of predators can amplify the effects of environmental stochasticity on pathogen dynamics. In conclusion, our results support the prediction of theoretical disease models that predators can be natural enemies for pathogen control, and further show that predatory interactions combined with vector preferences jointly shape the spread of vector-borne pathogens.
4,759 | Markedly lowering the viscosity of aqueous solutions of DNA by additives | Aqueous solutions of DNAs, while relevant in drug delivery and as a target of therapies, are often very viscous making them difficult to use. Since less viscous solutions could enable targeted drug delivery and/or therapies, the purpose of the present work was to explore compounds capable of "thinning" such DNA solutions under pharmaceutically relevant conditions. To this end, viscosities of aqueous solutions of DNAs and model polyanions were examined at 25 °C in the absence and presence of a number of bulky organic salts (and related compounds) previously found to substantially lower the viscosities of concentrated protein solutions. Out of two dozen compounds tested, only three were found to be effective; the FDA-approved local anesthetics lidocaine, mepivacaine, and prilocaine at near-isotonic concentrations and pH 6.4 lowered solution viscosity of three different DNAs up to about 20 fold. The observed multi-fold viscosity reductions appear to be due to these bulky organic salts' structure-specific non-covalent binding to nucleotide bases resulting in denaturation (unwinding) to, and stabilization of, single-stranded DNA. |
4,760 | A review of occupational health and safety risk assessment approaches based on multi-criteria decision-making methods and their fuzzy versions | Occupational health and safety (OHS) is a multidisciplinary activity working under the tasks of protection of workers and worksites. Risk assessment, as a compulsory process in implementation of OHS, stands out as evaluating the risks arising from the hazards, taking into account the required control measures, and deciding whether or not the risks can be reduced to an acceptable level. The diversity in risk assessment approaches is such that there are many methods for any industry. Multicriteria decision-making (MCDM)-based approaches contribute to risk assessment knowledge with their ability on solving real-world problems with multiple, conflicting, and incommensurate criteria. This article conducts a critical state-of-the-art review of OHS risk assessment studies using MCDM-based approaches. Additionally, it includes fuzzy versions of MCDM approaches applied to OHS risk assessment. A total of 80 papers are classified in eight different application areas. The papers are reviewed by the points of publication trend, published journal, risk parameters/factors, and tools used. This critical review provides an insight for researchers and practitioners on MCDM-based OHS risk assessment approaches in terms of showing current state and potential areas for attempts to be focused in the future. |
4,761 | Multilingual Audio-Visual Smartphone Dataset and Evaluation | Smartphones have been employed with biometric-based verification systems to provide security in highly sensitive applications. Audio-visual biometrics are gaining popularity due to their usability, and are also challenging to spoof because of their multimodal nature. In this work, we present an audio-visual smartphone dataset captured on five different recent smartphones. This new dataset contains 103 subjects captured in three different sessions considering different real-world scenarios. Three different languages are acquired in this dataset to include the problem of language dependency of speaker recognition systems. These unique characteristics of this dataset will pave the way to implementing novel state-of-the-art unimodal or audio-visual speaker recognition systems. We also report the performance of the benchmarked biometric verification systems on our dataset. The robustness of biometric algorithms is evaluated against multiple dependencies such as signal noise, device, language and presentation attacks like replay and synthesized signals, with extensive experiments. The obtained results raise many concerns about the generalization properties of state-of-the-art biometric methods in smartphones.
4,762 | Warping of Radar Data Into Camera Image for Cross-Modal Supervision in Automotive Applications | We present an approach to automatically generate semantic labels for real recordings of automotive range-Doppler (RD) radar spectra. Such labels are required when training a neural network for object recognition from radar data. The automatic labeling approach rests on the simultaneous recording of camera and lidar data in addition to the radar spectrum. By warping radar spectra into the camera image, state-of-the-art object recognition algorithms can be applied to label relevant objects, such as cars, in the camera image. The warping operation is designed to be fully differentiable, which allows backpropagating the gradient computed on the camera image through the warping operation to the neural network operating on the radar data. As the warping operation relies on accurate scene flow estimation, we further propose a novel scene flow estimation algorithm which exploits information from camera, lidar and radar sensors. The proposed scene flow estimation approach is compared against a state-of-the-art scene flow algorithm, and it outperforms it by approximately 30% w.r.t. mean average error. The feasibility of the overall framework for automatic label generation for RD spectra is verified by evaluating the performance of neural networks trained with the proposed framework for Direction-of-Arrival estimation. |
4,763 | Mitigation of radiation exposure during surgical hepatectomy after yttrium-90 radioembolization | Yttrium-90 (Y-90) radioembolization for the treatment of hepatocellular carcinoma can present safety challenges when transplanting recently treated Y-90 patients. To reduce surgeons' contact with radioactive tissue and remain within occupational dose limits, current guidelines recommend delaying transplants at least 14 days, if possible. We wanted to determine the level of radiation exposure to the transplant surgeon when explanting an irradiated liver before the recommended decay period. An ex-vivo radiation exposure analysis was conducted on the explanted liver of a patient who received Y-90 therapy 46 h prior to orthotopic liver transplant. To estimate exposure to the surgeon's hands, radiation dosimeter rings were placed inside three different surgical glove configurations and exposed to the explanted liver. Estimated radiation doses corrected for Y-90 decay were calculated. Radiation safety gloves performed best, with an average radiation exposure rate of 5.36 mSv h(-1) in the static hand position, an 83% reduction in exposure over controls with no glove (31.31 mSv h(-1)). Interestingly, non-radiation safety gloves also demonstrated reduced exposure rates, well below occupational regulation limits. Handling of Y-90 radiated organs within the immediate post-treatment period can be done safely and does not exceed federal occupational dose limits if appropriate gloves and necessary precautions are exercised.
4,764 | Cross-Bar Design of Nano-Vacuum Triode for High-Frequency Applications | In this letter, a new nano-vacuum triode based on carbon nanotubes (CNTs) has been designed. The use of CNTs as emitters with their extremely high aspect ratio and their characteristics to be patterned in specific emitting areas allowed the realization of a cross-bar geometry for which the transconductance is maximized and the grid-cathode capacitance is reduced. This allowed us to achieve a device cutoff frequency of 156 GHz, which is well beyond the state of the art. |
4,765 | Multi-Scale Pathological Fluid Segmentation in OCT With a Novel Curvature Loss in Convolutional Neural Network | The segmentation of pathological fluid lesions in optical coherence tomography (OCT), including intraretinal fluid, subretinal fluid, and pigment epithelial detachment, is of great importance for the diagnosis and treatment of various eye diseases such as neovascular age-related macular degeneration and diabetic macular edema. Although significant progress has been achieved with the rapid development of fully convolutional neural networks (FCN) in recent years, some important issues remain unsolved. First, pathological fluid lesions in OCT show large variations in location, size, and shape, imposing challenges on the design of FCN architecture. Second, fluid lesions should be continuous regions without holes inside. But the current architectures lack the capability to preserve the shape prior information. In this study, we introduce an FCN architecture for the simultaneous segmentation of three types of pathological fluid lesions in OCT. First, attention gate and spatial pyramid pooling modules are employed to improve the ability of the network to extract multi-scale objects. Then, we introduce a novel curvature regularization term in the loss function to incorporate shape prior information. The proposed method was extensively evaluated on public and clinical datasets with significantly improved performance compared with the state-of-the-art methods. |
4,766 | Unconventional localization of electrons inside of a nematic electronic phase | The magnetotransport behavior inside the nematic phase of bulk FeSe reveals unusual multiband effects that cannot be reconciled with a simple two-band approximation proposed by surface-sensitive spectroscopic probes. In order to understand the role played by the multiband electronic structure and the degree of two-dimensionality, we have investigated the electronic properties of exfoliated flakes of FeSe by reducing their thickness. Based on magnetotransport and Hall resistivity measurements, we assess the mobility spectrum that suggests an unusual asymmetry between the mobilities of the electrons and holes, with the electron carriers becoming localized inside the nematic phase. Quantum oscillations in magnetic fields up to 38 T indicate the presence of a hole-like quasiparticle with a lighter effective mass and a quantum scattering time three times shorter, as compared with bulk FeSe. The observed localization of negative charge carriers by reducing dimensionality can be driven by orbitally dependent correlation effects, enhanced interband spin fluctuations, or a Lifshitz-like transition, which affect mainly the electron bands. The electronic localization leads to a fragile two-dimensional superconductivity in thin flakes of FeSe, in contrast to the two-dimensional high-Tc superconductivity induced with electron doping via dosing or using a suitable interface.
4,767 | The Artistic Planning Method of Urban Garden Plant Landscape under the Concept of Ecological Sustainable Development | In landscape planning, designers are easily influenced by undesirable cultural trends, tend to overlook the cultural progress of the times, and lose their sense of innovation. At the same time, the pursuit of quick results has made practices such as "transplanting big trees", the "lawn craze", and "urban beautification" campaigns prevalent, leading to problems in the regional ecological environment. Therefore, this paper proposes an artistic planning method for urban garden plant landscapes under the concept of ecological sustainable development, based on improving the plant growth environment, protecting and transmitting the ecological environment, and public participation in the design. The design follows the ecological design principles of locality, protection of diversity, and natural, social, and sustainable plant landscapes. Finally, the artistry of "beauty, greening, and nature" is incorporated into the design to complete the plan under the concept of sustainable development, reflecting the "coordinated" and "comprehensive" beauty of garden plant landscape art.
4,768 | Stacked Sparse Autoencoder (SSAE) for Nuclei Detection on Breast Cancer Histopathology Images | Automated nuclear detection is a critical step for a number of computer-assisted pathology-related image analysis algorithms, such as for automated grading of breast cancer tissue specimens. The Nottingham Histologic Score system is highly correlated with the shape and appearance of breast cancer nuclei in histopathological images. However, automated nucleus detection is complicated by 1) the large number of nuclei and the size of high-resolution digitized pathology images, and 2) the variability in size, shape, appearance, and texture of the individual nuclei. Recently there has been interest in the application of "Deep Learning" strategies for classification and analysis of big image data. Histopathology, given its size and complexity, represents an excellent use case for application of deep learning strategies. In this paper, a Stacked Sparse Autoencoder (SSAE), an instance of a deep learning strategy, is presented for efficient nuclei detection on high-resolution histopathological images of breast cancer. The SSAE learns high-level features from just pixel intensities alone in order to identify distinguishing features of nuclei. A sliding window operation is applied to each image in order to represent image patches via high-level features obtained via the autoencoder, which are then subsequently fed to a classifier which categorizes each image patch as nuclear or non-nuclear. Across a cohort of 500 histopathological images (2200 × 2200) and approximately 3500 manually segmented individual nuclei serving as the ground truth, SSAE was shown to have an improved F-measure of 84.49% and an average area under the Precision-Recall curve (AveP) of 78.83%. The SSAE approach also outperformed nine other state-of-the-art nuclear detection strategies.
4,769 | HybridGBN-SR: A Deep 3D/2D Genome Graph-Based Network for Hyperspectral Image Classification | The successful application of deep learning approaches in remote sensing image classification requires large hyperspectral image (HSI) datasets to learn discriminative spectral-spatial features simultaneously. To date, the HSI datasets available for image classification are relatively small to train deep learning methods. This study proposes a deep 3D/2D genome graph-based network (abbreviated as HybridGBN-SR) that is computationally efficient and not prone to overfitting even with extremely few training sample data. At the feature extraction level, the HybridGBN-SR utilizes the three-dimensional (3D) and two-dimensional (2D) Genoblocks trained using very few samples while improving HSI classification accuracy. The design of a Genoblock is based on a biological genome graph. From the experimental results, the study shows that our model achieves better classification accuracy than the compared state-of-the-art methods over the three publicly available HSI benchmarking datasets such as the Indian Pines (IP), the University of Pavia (UP), and the Salinas Scene (SA). For instance, using only 5% labeled data for training in IP, and 1% in UP and SA, the overall classification accuracy of the proposed HybridGBN-SR is 97.42%, 97.85%, and 99.34%, respectively, which is better than the compared state-of-the-art methods. |
4,770 | First Report of Anthracnose on Chili Pepper Caused by Colletotrichum sojae in Hebei Province, China | China is the largest chili pepper-producing country, and Hebei Province ranks fourth, with a planting area of about 1500 km2. Pepper (Capsicum annuum L.) is susceptible to Colletotrichum spp. infection during its growth, which seriously affects production yield and quality. In September 2020, widespread anthracnose was observed on pepper in Hebei (38.77° N, 115.48° E), China. Necrotic lesions on pepper fruits were suborbicular, sunken, with acervuli arranged in the middle of the lesion (e-Xtra 1A). To perform fungal isolation, a small tissue piece (about 0.3 cm2) from the symptomatic tissue margin was surface disinfested with 75% ethanol for 10 s, and 0.1% HgCl2 for 40 s, then washed three times with sterile ddH2O. Fragments were placed on potato dextrose agar (PDA) amended with 100 mg·L-1 chloramphenicol and incubated at 28 °C under darkness for 4 days. One of the strains of Colletotrichum spp., named HQY157, was purified by single-spore isolation, then used for morphological characterization, phylogenetic analysis, and pathogenicity tests. Colonies presented light grey aerial mycelium, occasionally mixed with gray-black strips, and the reverse was similar to the surface on PDA (e-Xtra 1B). Conidia were smooth-walled, aseptate, straight with obtuse to slightly rounded ends, 17.3-28.5 × 3.1-7.4 μm (n=50) (e-Xtra 1C). For molecular identification, the internal transcribed spacer (ITS) region, partial sequences of actin (ACT), β-tubulin (TUB), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), and chitin synthase (CHS) were sequenced using the specific primers (Weir et al. 2012). Sequences were deposited in GenBank with the following accession numbers OM317600-OM317604. 
A maximum-likelihood phylogenetic tree was constructed, based on the concatenated sequences (ACT, CHS, GAPDH, TUB, and ITS) of HQY157 and other closely matching Colletotrichum species obtained from GenBank, using MEGA-X. It showed that HQY157 was grouped with C. sojae with a bootstrap value of 100% (e-Xtra 2). To confirm pathogenicity, surface-sterilized healthy pepper fruits and healthy fruits with wounds (wounded with a sterile toothpick after surface sterilization) were then inoculated with 2 μL of conidial suspension (10^6 conidia/mL). The fruits inoculated with 2 μL of sterile distilled water were taken as negative controls. After inoculation, the fruits were kept in a plastic box with sterilized filter paper moistened with sterilized water, and maintained at 25°C in the dark. The experiment was repeated three times. Anthracnose symptoms were observed 7 days after inoculation on the wounded pepper fruits, whereas the unwounded and negative control fruits remained symptomless (e-Xtra 1D). Colletotrichum sojae was re-isolated from the infected pepper fruits and identified by morphological and molecular analysis, fulfilling Koch's postulates. Colletotrichum sojae occurs mainly on Fabaceae plants such as Glycine max, Medicago sativa, Phaseolus vulgaris, and Vigna unguiculata (Damm et al. 2019; Talhinhas and Baroncelli 2021), and on Panax quinquefolium (Guan et al. 2021). To our knowledge, this is the first report of C. sojae causing anthracnose on pepper in China. This study provided crucial information for epidemiologic studies and appropriate control strategies for this chili pepper disease.
4,771 | Autonomy, procedural and substantive: a discussion of the ethics of cognitive enhancement | As cognitive enhancement research advances, important ethical questions regarding individual autonomy and freedom are raised. Advocates of cognitive enhancement frequently adopt a procedural approach to autonomy, arguing that enhancers improve an individual's reasoning capabilities, which are quintessential to being an autonomous agent. On the other hand, critics adopt a more nuanced approach by considering matters of authenticity and self-identity, which go beyond the mere assessment of one's reasoning capacities. Both positions, nevertheless, require further philosophical scrutiny. In this paper, we investigate the ethics of cognitive enhancement through the lenses of political and philosophical arguments about autonomy and freedom. In so doing, we contend that a substantive, relational account of individual autonomy offers a more holistic understanding of the ethical concerns of cognitive enhancement. |
4,772 | Comparison of BinaxNOW and SARS-CoV-2 qRT-PCR Detection of the Omicron Variant from Matched Anterior Nares Swabs | The COVID-19 pandemic has increased use of rapid diagnostic tests (RDTs). In winter 2021 to 2022, the Omicron variant surge made it apparent that although RDTs are less sensitive than quantitative reverse transcription-PCR (qRT-PCR), their accessibility, ease of use, and rapid readouts made them a sought-after and often sold-out item at local suppliers. Here, we sought to qualify the Abbott BinaxNOW RDT for use in our university testing program as a method to quickly rule in positive or rule out negative individuals at our priority qRT-PCR testing site. To perform this qualification study, we collected additional swabs from individuals attending this site. All swabs were tested using BinaxNOW. Initially, as part of a feasibility study, test period 1 (n = 110) samples were stored cold before testing. In test period 2 (n = 209), samples were tested immediately. Combined, 102/319 samples tested severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) positive via qRT-PCR. All sequenced samples were Omicron (n = 92). We calculated 53.9% sensitivity, 100% specificity, a 100% positive predictive value, and an 82.2% negative predictive value for BinaxNOW (n = 319). Sensitivity would be improved (75.3%) by changing the qRT-PCR positivity threshold from a threshold cycle (CT) value of 40 to a CT value of 30. The receiver operating characteristic (ROC) curve shows that for qRT-PCR-positive CT values between 24 and 40, the BinaxNOW test is of limited value diagnostically. Results suggest BinaxNOW could be used in our setting to confirm SARS-CoV-2 infection in individuals with substantial viral load, but a significant fraction of infected individuals would be missed if we used RDTs exclusively to rule out infection. IMPORTANCE Our results suggest BinaxNOW can rule in SARS-CoV-2 infection but would miss infections if RDTs were exclusively used. |
4,773 | [Retrospective Analysis of Common Faults and Maintenance Strategies of Medical Electronic Endoscope] | The medical electronic endoscope is one of the indispensable tools in medical diagnosis and treatment. With the development of science and technology, the electronic endoscope offers higher safety and accuracy than the traditional optical endoscope. Due to its sophisticated construction and high price, hospitals spend a lot of money on maintenance every year. In order to prolong the working life of the electronic endoscope, reduce the incidence of operator-induced faults, and save hospital costs, this study made a retrospective analysis of the common faults of the electronic endoscope and summarized the maintenance strategies for reference. |
4,774 | 2D and 3D crystallization of the wild-type IIC domain of the glucose PTS transporter from Escherichia coli | The bacterial phosphoenolpyruvate: sugar phosphotransferase system serves the combined uptake and phosphorylation of carbohydrates. This structurally and functionally complex system is composed of several conserved functional units that, through a cascade of phosphorylated intermediates, catalyze the transfer of the phosphate moiety from phosphoenolpyruvate to the substrate, which is bound to the integral membrane domain IIC. The wild-type glucose-specific IIC domain (wt-IIC(glc)) of Escherichia coli was cloned, overexpressed and purified for biochemical and functional characterization. Size-exclusion chromatography and scintillation-proximity binding assays showed that purified wt-IIC(glc) was homogeneous and able to bind glucose. Crystallization was pursued following two different approaches: (i) reconstitution of wt-IIC(glc) into a lipid bilayer by detergent removal through dialysis, which yielded tubular 2D crystals, and (ii) vapor-diffusion crystallization of detergent-solubilized wt-IIC(glc), which yielded rhombohedral 3D crystals. Analysis of the 2D crystals by cryo-electron microscopy and the 3D crystals by X-ray diffraction indicated resolutions of better than 6 Å and 4 Å, respectively. Furthermore, a complete X-ray diffraction data set could be collected and processed to 3.93 Å resolution. These 2D and 3D crystals of wt-IIC(glc) lay the foundation for the determination of the first structure of a bacterial glucose-specific IIC domain. |
4,775 | The effect of population model choice and network topologies on extinction time patterns in dendritic ecological networks | Models of populations in habitat networks are vital for understanding and linking processes and patterns across individuals, environments, ecological interactions, and population structures. River ecosystem models combine the physical structure of the networks with the biological processes of the organisms using structural and functional models, respectively. Previous studies on dendritic river networks have employed different functional (population) models and either directly claimed or implied that the results illustrate general properties of actual river systems. However, these studies have used different approaches and assumptions when modeling population characteristics and behavior, and it is possible that inferences regarding a system may vary based on the combination of functional model and the spatial structure of a network. This study aims to understand if different functional models in river systems produce substantially different model results and, therefore, whether conclusions are model-dependent. We compare variation in extinction time and occupancy proportion of river networks with linear, trellis, dendritic and ring-lattice topologies, using three population models (uniform, age-class and individual based) and one metapopulation-based (patch-occupancy) model. Dendritic, linear, and trellis structures did not show notable differences among extinction times for any of the four models. The difference between topologies was higher for the patch-occupancy model compared to the three population models. There were significant differences in the variations of patch-occupancy between the metapopulation and the population models, but the three population models of differing complexity produced broadly similar results. 
Therefore, if the occupancy data are obtained from local subpopulations, spatial arrangement and connectivity do not appear to be the sole predictors of single-species metapopulation responses. We conclude that the outputs from functional models are robust to assumptions and varying levels of detail as long as they contain at least some detail at the level of individuals within habitat nodes. Also, when modeling network-scale populations, models that include at least some detailed information on individuals are a far better choice than considering populations implicitly. |
4,776 | Fast Online Set Intersection for Network Processing on FPGA | Online set intersection operations have been widely used in network processing tasks, such as Quality of Service differentiation, firewall processing, and packet/traffic classification. The major challenge for online set intersection is to sustain line-rate processing speed; accelerating set intersection using state-of-the-art hardware devices is of great interest to the research community. In this paper, we present a novel high-performance set intersection approach on FPGA. In our approach, each element in any set is represented by a combination of Group ID (GID) and Bit Stride (BS); all the sets are intersected using linear merge techniques and bitwise AND operations. We map our online set intersection algorithm onto hardware; this is done by constructing modular Processing Elements (PEs) and concatenating multiple PEs into a tree-based parallel architecture. In order to improve the throughput on a state-of-the-art FPGA, we feed all the inputs to the FPGA in a streaming fashion with the help of synchronization GIDs. Post place-and-route results show that, for a typical set intersection problem in network processing, our design can intersect eight sets, each of up to 32 K elements, at a throughput of 47.4 Thousand Intersections Per Second (KIPS) and a latency of 94.8 μs per batch of inputs. Compared to the classic linear merge or bitwise AND techniques on state-of-the-art multi-core processors, our design on FPGA achieves up to 66× throughput improvement and 80× latency reduction. |
4,777 | The state of the art for numerical simulations of the effect of the microstructure and its evolution in the metal-cutting processes | Microstructure features directly reflect mechanical responses during material deformation and can influence the surface integrity of machined components, which is of great interest to both academic communities and industries. This paper summarizes the state of the art in modeling approaches for the effect of the microstructures and its evolution during metal-cutting processes. The aim of this paper is to analyze the advantages and drawbacks of current methods to improve modeling work and direct future orientations. First, dominant numerical modeling approaches for metal-cutting processes are reviewed. The finite element method (FEM) and mesh-free methods widely applied in professional research are discussed. Following this, approaches to modeling the effect of microstructures and their evolution are reviewed for two major categories-homogeneous field distribution and heterogeneous characteristics. Both the advantages and disadvantages of these two categories are analyzed and discussed in detail. Experimental techniques based on advanced characterization approaches to the validation of pertinent models are also discussed. Finally, a brief summary is presented and an outlook for future work is delineated. |
4,778 | Lithium-Salt-Containing High-Molecular-Weight Polystyrene-block-Polyethylene Oxide Block Copolymer Films | Ionic conductivity in relation to the morphology of lithium-doped high-molecular-weight polystyrene-block-polyethylene oxide (PS-b-PEO) diblock copolymer films was investigated as solid-state membranes for lithium-ion batteries. The tendency of the polyethylene oxide (PEO) block to crystallize was highly suppressed by increasing both the salt-doping level and the temperature. The PEO crystallites completely vanished at a salt-doping ratio of Li/EO>0.08, at which the PEO segments were hindered from entering the crystalline unit of the PEO chain. A kinetically trapped lamella morphology of PS-b-PEO was observed, due to PEO crystallization. The increase in the lamella spacing with increasing salt concentration was attributed to the conformation of the PEO chain rather than the volume contribution of the salt or the previously reported increase in the effective interaction parameter. Upon loading the salt, the PEO chains changed from a compact/highly folded conformation to an amorphous/expanded-like conformation. The ionic conductivity was enhanced by amorphization of PEO, whereby the mobility of the PEO blocks increased upon increasing the salt-doping level. |
4,779 | Role of Post-translational Modification of Silent Mating Type Information Regulator 2 Homolog 1 in Cancer and Other Disorders | Silent mating type information regulator 2 homolog 1 (SIRT1), an NAD+-dependent histone/protein deacetylase, has multifarious physiological roles in development, metabolic regulation, and stress response. Thus, its abnormal expression or malfunction is implicated in the pathogenesis of various diseases. SIRT1 undergoes post-translational modifications, including phosphorylation, oxidation/reduction, carbonylation, nitrosylation, glycosylation, ubiquitination/deubiquitination, and SUMOylation, which can modulate its catalytic activity, stability, subcellular localization, and binding affinity for substrate proteins. This short review highlights the regulation of SIRT1 post-translational modifications and their pathophysiologic implications. |
4,780 | The role of metal transporters in phytoremediation: A closer look at Arabidopsis | Pollution of the environment by heavy metals (HMs) has recently become a global issue, affecting the health of all living organisms. Continuous human activities (industrialization and urbanization) are the major causes of HM release into the environment. Over the years, two methods (physical and chemical) have been widely used to reduce HMs in polluted environments. However, these two methods are inefficient and very expensive for reducing the HMs released into the atmosphere. Alternatively, researchers are trying to remove the HMs by employing hyper-accumulator plants. This method, referred to as phytoremediation, is highly efficient, cost-effective, and eco-friendly. Phytoremediation can be divided into five types: phytostabilization, phytodegradation, rhizofiltration, phytoextraction, and phytovolatilization, all of which contribute to HM removal from the polluted environment. Brassicaceae family members (particularly Arabidopsis thaliana) can accumulate more HMs from the contaminated environment than other plants. This comprehensive review focuses on how HMs pollute the environment and discusses the phytoremediation measures required to reduce the impact of HMs on the environment. We discuss the role of metal transporters in phytoremediation with a focus on Arabidopsis. We then draw insights into the role of genome editing tools in enhancing phytoremediation efficiency. This review is expected to initiate further research to improve phytoremediation through biotechnological approaches to conserve the environment from pollution. |
4,781 | Tritium discharges in the environment and consequences: Recent advances and perspectives | Several publications have raised questions in France about the behavior of tritium in the environment and its impact on human health. In 2008, ASN asked two groups of experts to produce a state of the art on this subject. An action plan based on the recommendations of these groups of experts was presented in the Tritium White Paper published in 2010. Since then, the follow-up committee has periodically addressed the identified topics. Metrological progress and research work on the transfer and levels of tritium activity in the environment were studied. The understanding of its toxicity has progressed. The operators of nuclear facilities have characterized the physicochemical forms of tritiated effluents existing in the discharges from their installations. Each year, ASN updates the inventory of tritium releases from nuclear facilities and the associated dosimetric impacts on the White Paper website. As the committee's actions concerning research subjects still in progress are now limited, ASN proposed to close the committee's work in its current form and to address research work not yet completed during a dedicated seminar organized by IRSN. |
4,782 | SiameseGAN: A Generative Model for Denoising of Spectral Domain Optical Coherence Tomography Images | Optical coherence tomography (OCT) is a standard diagnostic imaging method for assessment of ophthalmic diseases. The speckle noise present in high-speed OCT images hampers its clinical utility, especially in Spectral-Domain Optical Coherence Tomography (SDOCT). In this work, a new deep generative model, called SiameseGAN, for denoising low-signal-to-noise-ratio (LSNR) B-scans of SDOCT has been developed. SiameseGAN is a Generative Adversarial Network (GAN) equipped with a siamese twin network. The siamese network module of the proposed SiameseGAN model helps the generator to generate denoised images that are closer to ground-truth images in the feature space, while the discriminator helps in making sure they are realistic images. This approach, unlike the baseline dictionary learning technique (MSBTD), does not require an a priori high-quality image from the target imaging subject for denoising, and takes less time for denoising. Moreover, various deep learning models that have been shown to be effective in performing the denoising task in SDOCT imaging were also deployed in this work. A qualitative and quantitative comparison of the performance of the proposed method with these state-of-the-art denoising algorithms has been performed. The experimental results show that the speckle noise can be effectively mitigated using the proposed SiameseGAN, along with faster denoising unlike existing approaches. |
4,783 | A 26.5 pA(rms) Neurotransmitter Front-End With Class-AB Background Subtraction | This paper presents an analog front-end (AFE) for fast-scan cyclic voltammetry (FSCV) with analog background subtraction using a pseudo-differential sensing scheme to cancel the large non-faradaic current before seeing the front-end. As a result, the AFE can be compact and low-power compared to conventional FSCV AFEs with dedicated digital back-ends to digitize and subtract the background from subsequent recordings. The reported AFE, fabricated in a 0.18-μm CMOS process, consists of a class-AB common-mode rejection circuit, a low-input-impedance current conveyor, and a 1st-order current-mode delta-sigma (ΔΣ) modulator with an infinite impulse response quantizer. This AFE achieves an effective dynamic range of 83 dB with a state-of-the-art 39.2 pA(rms) input-referred noise when loaded with a 1 nF input capacitance (26.5 pA(rms) open-circuit) across a 5 kHz bandwidth while consuming an average power of 3.7 μW. This design was tested with carbon-fiber microelectrodes scanned at 300 V/s using flow-injection of dopamine, a key neurotransmitter. |
4,784 | The Relationship between Sustainable Built Environment, Art Therapy and Therapeutic Design in Promoting Health and Well-Being | At present, a smart city from the perspective of the United Nations Sustainable Development Goals (SDGs) emphasizes the importance of providing citizens with promising health and well-being. However, with the continuous impact of coronavirus disease 2019 (COVID-19) and the increase in city population, the health of citizens is facing new challenges. Therefore, this paper aims to assess the relationship between building, environment, landscape design, art therapy (AT), and therapeutic design (TD) in promoting health within the context of sustainable development. It also summarizes the existing applied research areas and potential value of TD to inform future research. This paper adopts the macro-quantitative and micro-qualitative research methods of bibliometric analysis. The results show that: the built environment and AT are related to sustainable development, and closely associated with health and well-being; the application of TD in the environment, architecture, space, and landscape fields promotes the realization of SDGs and lays the foundation for integrating digital technologies such as Building Information Modeling (BIM) into the design process to potentially solve the challenges of TD; and the principles of TD can incorporate design elements and characteristics based on people's health needs to better promote human health and well-being. |
4,785 | Triplet Cross-Fusion Learning for Unpaired Image Denoising in Optical Coherence Tomography | Optical coherence tomography (OCT) is a widely-used modality in clinical imaging, which inevitably suffers from speckle noise. Deep learning has proven its superior capability in OCT image denoising, while the difficulty of acquiring a large number of well-registered OCT image pairs limits the development of paired learning methods. To solve this problem, some unpaired learning methods have been proposed, where the denoising networks can be trained with unpaired OCT data. However, the majority of them are modified from the cycleGAN framework. These cycleGAN-based methods train at least two generators and two discriminators, while only one generator is needed for inference. The dual-generator and dual-discriminator structures of cycleGAN-based methods demand a large amount of computing resources, which may be redundant for OCT denoising tasks. In this work, we propose a novel triplet cross-fusion learning (TCFL) strategy for unpaired OCT image denoising. The model complexity of our strategy is much lower than those of the cycleGAN-based methods. During training, the clean components and the noise components from the triplet of three unpaired images are cross-fused, helping the network extract more speckle noise information to improve the denoising accuracy. Furthermore, the TCFL-based network which is trained with triplets can deal with limited training data scenarios. The results demonstrate that the TCFL strategy outperforms state-of-the-art unpaired methods both qualitatively and quantitatively, and even achieves denoising performance comparable with paired methods. Code is available at: https://github.com/gengmufeng/TCFL-OCT. |
4,786 | In silico analysis of bacterial metabolism of glutamate and GABA in the gut in a rat model of obesity and type 2 diabetes | Dysbiosis of gut microbiota has adverse effects on host health. This study aimed to determine the effects of changes of faecal microbiota in obese and diabetic rats on the imputed production of enzymes involved in the metabolism of glutamate, gamma-aminobutyric acid (GABA), and succinate. The levels of glutamate decarboxylase, GABA transaminase, succinate-semialdehyde dehydrogenase, and methylisocitrate lyase were reduced or absent in diabetic rats compared with controls and obese rats. Glutamate decarboxylase (GAD) was significantly reduced in obese rats compared with control rats, while the other enzymes were unaltered; different bacterial taxa are suggested to be involved. Levels of bacterial enzymes were inversely correlated with the blood glucose level. These findings suggest that the absence of GABA and reduced succinate metabolism from gut microbiota contribute to the diabetic state in rats. |
4,787 | Toward Specular Removal from Natural Images Based on Statistical Reflection Models | Removing specular reflections from images is critical for improving the performance of computer vision algorithms. Recently, state-of-the-art methods have demonstrated remarkably good performance at removing specular reflections from chromatic images. These methods are typically based on the chromatic pixels assumption; therefore, they are prone to failure in the achromatic regions. This paper presents a novel method that is applicable to natural images, because it is effective for both chromatic and achromatic regions. The proposed method is based on modeling the general properties of diffuse and specular reflections in a solid convex optimization framework. Considering the physical constraints, we determine the global optimal solution using the split Bregman method. Experimental results demonstrate the effectiveness of the proposed method, particularly for the achromatic regions, and its competence as a state-of-the-art method for removing specular reflections from the chromatic regions. |
4,788 | Transcriptional and post-transcriptional mechanisms that regulate the genetic program in Zika virus-infected macrophages | Beyond its effects on neural progenitor cells, the pathogenesis of Zika virus (ZIKV), an RNA virus, also involves antigen-presenting cells, including macrophages. However, the molecular mechanisms that control gene activation and repression associated with the macrophage response to acute ZIKV infection are not fully understood. We approached the issue with RNA-seq and miRNA-seq datasets to understand the genetic program of ZIKV-infected macrophages. Results indicate that macrophages activate a regulatory program involving 1067 differentially expressed genes. These genetic programs induced an inflammatory response mediated by chemokines as well as an interferon-independent anti-viral response, presumably activated by IL-27. Additionally, the pathogenetic process involves changes in other signaling pathways such as cellular stress, cell signaling, metabolism, and cell differentiation. Furthermore, transcriptional control analysis revealed regulatory functions of key transcription factors, principally NFκB and STAT1, as well as HIF1A, ETV7, and PRMD1, which are associated with metabolic reprogramming during viral infection. We also noted six long-noncoding RNAs (lncRNAs) that may act in the regulation of gene expression, including MROCKI and ZC2HC1A-2, which are involved in the inflammatory response and the expression of cytokines, respectively. On the other hand, post-transcriptional control by miRNAs, including miR-155-5p and miR-146a-5p, is associated with modulation of genes related to inflammatory and antiviral responses. Relevant to the post-transcriptional control, our data unveiled the role of RNA binding proteins that have diverse functions, such as ribonucleases (PNPT1, ZC3H12A, and ZC3HAV1), splicing factors (SSB, RBM11, and RAVER2), and RNA modifiers (PARP10 and PARP14). 
Overall, the results establish an unbiased approach to discerning the wiring of a regulatory mechanism controlling the genetic program in ZIKV-infected macrophages. |
4,789 | Respiratory neuromodulation in patients with neurological pathologies: for whom and how? | Implanted phrenic nerve stimulation is a technique restoring spontaneous breathing in patients with respiratory control failure, who are consequently dependent on mechanical ventilation. This is the case for quadriplegic patients with a high spinal cord injury level and for patients with congenital central hypoventilation syndrome. Electrophysiological diaphragm explorations permit better patient selection, confirming on the one hand a definite issue with central respiratory command and on the other hand the integrity of the diaphragmatic phrenic nerves. Today there are two different phrenic stimulation techniques: quadripolar intrathoracic stimulation and bipolar intradiaphragmatic stimulation. Both techniques allow patients to be weaned off their mechanical ventilator, dramatically improving their quality of life. In fact, one of the systems (phrenic intradiaphragmatic stimulation) was granted social security reimbursement in 2009, and now both are reimbursed. In the future, phrenic intradiaphragmatic stimulation may find its place in the intensive care unit for patients needing it temporarily, for example, after certain surgeries with respiratory complications, as well as for diaphragmatic atrophies induced by prolonged mechanical ventilation. |
4,790 | Multi-Band Brain Network Analysis for Functional Neuroimaging Biomarker Identification | The functional connectomic profile is one of the non-invasive imaging biomarkers in the computer-assisted diagnostic system for many neuro-diseases. However, the diagnostic power of functional connectivity is challenged by mixed frequency-specific neuronal oscillations in the brain, which often leave a single Functional Connectivity Network (FCN) underpowered to capture the disease-related functional patterns. To address this challenge, we propose a novel functional connectivity analysis framework to conduct joint feature learning and personalized disease diagnosis, in a semi-supervised manner, aiming at focusing on putative multi-band functional connectivity biomarkers from functional neuroimaging data. Specifically, we first decompose the Blood Oxygenation Level Dependent (BOLD) signals into multiple frequency bands by the discrete wavelet transform, and then cast the alignment of all fully-connected FCNs derived from multiple frequency bands into a parameter-free multi-band fusion model. The proposed fusion model fuses all fully-connected FCNs to obtain a sparsely-connected FCN (sparse FCN for short) for each individual subject, while keeping each sparse FCN close to its neighboring sparse FCNs and far away from its furthest sparse FCNs. Furthermore, we employ the l(1)-SVM to conduct joint brain region selection and disease diagnosis. Finally, we evaluate the effectiveness of our proposed framework on various neuro-diseases, i.e., Fronto-Temporal Dementia (FTD), Obsessive-Compulsive Disorder (OCD), and Alzheimer's Disease (AD), and the experimental results demonstrate that our framework shows more reasonable results, compared to state-of-the-art methods, in terms of classification performance and the selected brain regions. |
4,791 | Scene-Graph Augmented Data-Driven Risk Assessment of Autonomous Vehicle Decisions | There is considerable evidence that evaluating the subjective risk level of driving decisions can improve the safety of Autonomous Driving Systems (ADS) in both typical and complex driving scenarios. In this paper, we propose a novel data-driven approach that uses scene-graphs as intermediate representations for modeling the subjective risk of driving maneuvers. Our approach includes a Multi-Relation Graph Convolution Network, a Long-Short Term Memory Network, and attention layers. To train our model, we formulate subjective risk assessment as a supervised scene classification problem. We evaluate our model on both synthetic lane-changing datasets and real-driving datasets with various driving maneuvers. We show that our approach achieves a higher classification accuracy than the state-of-the-art approach on both large (96.4% vs. 91.2%) and small (91.8% vs. 71.2%) lane-changing synthesized datasets, illustrating that our approach can learn effectively even from small datasets. We also show that our model trained on a lane-changing synthesized dataset achieves an average accuracy of 87.8% when tested on a real-driving lane-changing dataset. In comparison, the state-of-the-art model trained on the same synthesized dataset only achieved 70.3% accuracy when tested on the real-driving dataset, showing that our approach can transfer knowledge more effectively. Moreover, we demonstrate that the addition of spatial and temporal attention layers improves our model's performance and explainability. Finally, our results illustrate that our model can assess the risk of various driving maneuvers more accurately than the state-of-the-art model (86.5% vs. 58.4%, respectively). |
4,792 | Cross-domain object detection using unsupervised image translation | Unsupervised domain adaptation for object detection addresses the adaptation of detectors trained in a source domain to work accurately in an unseen target domain. Recently, methods that align intermediate features have proven promising, achieving state-of-the-art results. However, these methods are laborious to implement and hard to interpret. Although promising, there is still room for improvement to close the performance gap toward the upper bound (when training with the target data). In this work, we propose a method to generate an artificial dataset in the target domain to train an object detector. We employed two unsupervised image translators (CycleGAN and an AdaIN-based model) using only annotated data from the source domain and non-annotated data from the target domain. Our key contributions are the proposal of a less complex yet more effective method that also has improved interpretability. Results on real-world scenarios for autonomous driving show significant improvements, outperforming state-of-the-art methods in most cases, further closing the gap toward the upper bound. |
4,793 | DeepDemosaicking: Adaptive Image Demosaicking via Multiple Deep Fully Convolutional Networks | Convolutional neural networks are currently the state-of-the-art solution for a wide range of image processing tasks. Their deep architecture extracts low- and high-level features from images, thus improving the model's performance. In this paper, we propose a method for image demosaicking based on deep convolutional neural networks. Demosaicking is the task of reproducing full-color images from incomplete images formed from overlaid color filter arrays on image sensors found in digital cameras. Instead of producing the output image directly, the proposed method divides the demosaicking task into an initial demosaicking step and a refinement step. The initial step produces a rough demosaicked image containing unwanted color artifacts. The refinement step then reduces these color artifacts using deep residual estimation and multi-model fusion, producing a higher quality image. Experimental results show that the proposed method outperforms several existing and state-of-the-art methods in terms of both subjective and objective evaluations. |
4,794 | An Improved Flicker Noise Model for Circuit Simulations | Compact flicker noise models used in SPICE circuit simulators are derived from the seminal BSIM unified noise model. In this paper, we show that use of this model can give anomalous bias dependence of input-referred noise. In addition, we find that state-of-the-art flicker noise models are not adequate to capture the drain-bias dependence of flicker noise in short-channel devices. We address both issues with a new compact model. |
4,795 | Combining Facial Dynamics With Appearance for Age Estimation | Estimating the age of a human from captured images of his/her face is a challenging problem. In general, the existing approaches to this problem use appearance features only. In this paper, we show that in addition to appearance information, facial dynamics can be leveraged in age estimation. We propose a method to extract and use dynamic features for age estimation, using a person's smile. Our approach is tested on a large, gender-balanced database with 400 subjects, with an age range between 8 and 76. In addition, we introduce a new database on posed disgust expressions with 324 subjects in the same age range, and evaluate the reliability of the proposed approach when used with another expression. State-of-the-art appearance-based age estimation methods from the literature are implemented as baselines. We demonstrate that for each of these methods, the addition of the proposed dynamic features results in statistically significant improvement. We further propose a novel hierarchical age estimation architecture based on adaptive age grouping. We test our approach extensively, including an exploration of spontaneous versus posed smile dynamics, and gender-specific age estimation. We show that using spontaneity information reduces the mean absolute error by up to 21%, advancing the state of the art for facial age estimation. |
4,796 | The Significance of Hypothalamic Inflammation and Gliosis for the Pathogenesis of Obesity in Humans | Accumulated preclinical literature demonstrates that hypothalamic inflammation and gliosis are underlying causal components of diet-induced obesity in rodent models. This review summarizes and synthesizes available translational data to better understand the applicability of preclinical findings to human obesity and its comorbidities. The published literature in humans includes histopathologic analyses performed postmortem and in vivo neuroimaging studies measuring indirect markers of hypothalamic tissue microstructure. Both support the presence of hypothalamic inflammation and gliosis in children and adults with obesity. Findings predominantly point to tissue changes in the region of the arcuate nucleus of the hypothalamus, although findings of altered tissue characteristics in whole hypothalamus or other hypothalamic regions also emerged. Moreover, the severity of hypothalamic inflammation and gliosis has been related to comorbid conditions, including glucose intolerance, insulin resistance, type 2 diabetes, and low testosterone levels in men, independent of elevated body adiposity. Cross-sectional findings are augmented by a small number of prospective studies suggesting that a greater degree of hypothalamic inflammation and gliosis may predict adiposity gain and worsening insulin sensitivity in susceptible individuals. In conclusion, existing human studies corroborate a large preclinical literature demonstrating that hypothalamic neuroinflammatory responses play a role in obesity pathogenesis. Extensive or permanent hypothalamic tissue remodeling may negatively affect the function of neuroendocrine regulatory circuits and promote the development and maintenance of elevated body weight in obesity and/or comorbid endocrine disorders. |
4,797 | MSB-FCN: Multi-Scale Bidirectional FCN for Object Skeleton Extraction | The performance of state-of-the-art object skeleton detection (OSD) methods has been greatly boosted by Convolutional Neural Networks (CNNs). However, most existing CNN-based OSD methods rely on a 'skip-layer' structure where low-level and high-level features are combined to gather multi-level contextual information. Unfortunately, as shallow features tend to be noisy and lack semantic knowledge, they introduce errors and inaccuracy. Therefore, to improve the accuracy of object skeleton detection, we propose a novel network architecture, the Multi-Scale Bidirectional Fully Convolutional Network (MSB-FCN), to better gather and enhance multi-scale high-level contextual information. The advantage is that only deep features are used to construct multi-scale feature representations, along with a bidirectional structure for better capturing contextual knowledge. This enables the proposed MSB-FCN to learn semantic-level information from different sub-regions. Moreover, we introduce dense connections into the bidirectional structure to ensure that the learning process at each scale can directly encode information from all other scales. An attention pyramid is also integrated into our MSB-FCN to dynamically control information propagation and reduce unreliable features. Extensive experiments on various benchmarks demonstrate that the proposed MSB-FCN achieves significant improvements over state-of-the-art algorithms. |
4,798 | A Smartphone Step Counter Using IMU and Magnetometer for Navigation and Health Monitoring Applications | The growing market of smart devices makes them appealing for various applications. Motion tracking can be achieved using such devices and is important for applications such as navigation, search and rescue, health monitoring, and lifestyle-quality assessment. Step detection is a crucial task that affects the accuracy and quality of such applications. In this paper, a new step detection technique is proposed, which can be used for step counting and activity monitoring in health applications as well as part of a Pedestrian Dead Reckoning (PDR) system. Inertial and magnetic sensor measurements are analyzed and fused to detect steps under varying step modes and device-pose combinations using a free-moving handheld device (smartphone). Unlike most state-of-the-art research in the field, the proposed technique does not require a classifier and adaptively tunes the filters and thresholds used, without the need for presets, while operating in real time. Testing shows that the proposed technique successfully detects steps under varying motion speeds and device use cases with an average performance of 99.6%, and outperforms some state-of-the-art techniques that rely on classifiers as well as commercial wristband products. |
4,799 | Solving 3-D PDEs by Tensor B-Spline Methodology: A High Performance Approach Applied to Optical Diffusion Tomography | Solutions of 3-D elliptic PDEs form the basis of many mathematical models in medicine and engineering. Solving elliptic PDEs numerically in 3-D with fine discretization and high precision is challenging for several reasons, including the cost of 3-D meshing, the massive increase in operation count and memory consumption when a high-order basis is used, and the need to overcome the "curse of dimensionality." This paper describes how these challenges can be either overcome or relaxed by a Tensor B-spline methodology with the following key properties: 1) the tensor structure of the variational formulation leads to regularity, separability, and sparsity, 2) a method for integration over the complex domain boundaries eliminates meshing, and 3) the formulation induces high-performance and memory-efficient computational algorithms. The methodology was evaluated by application to the forward problem of Optical Diffusion Tomography (ODT), comparing it with the solver from a state-of-the-art Finite-Element Method (FEM)-based ODT reconstruction framework. We found that the Tensor B-spline methodology allows one to solve 3-D elliptic PDEs accurately and efficiently. It does not require 3-D meshing even on complex and non-convex boundary geometries. The Tensor B-spline approach outperforms the FEM and is more accurate when the order of the basis function is > 1, requiring fewer operations and lower memory consumption. Thus, the Tensor B-spline methodology is feasible and attractive for solving large elliptic 3-D PDEs encountered in real-world problems. |