journal-title | pmid | pmc | doi | article-title | abstract | related-work | references | reference_info |
---|---|---|---|---|---|---|---|---|
Frontiers in Artificial Intelligence | null | PMC8849203 | 10.3389/frai.2021.734521 | Evaluation of Goal Recognition Systems on Unreliable Data and Uninspectable Agents | Goal or intent recognition, where one agent recognizes the goals or intentions of another, can be a powerful tool for effective teamwork and improving interaction between agents. Such reasoning can be challenging to perform, however, because observations of an agent can be unreliable and, often, an agent does not have access to the reasoning processes and mental models of the other agent. Despite this difficulty, recent work has made great strides in addressing these challenges. In particular, two Artificial Intelligence (AI)-based approaches to goal recognition have recently been shown to perform well: goal recognition as planning, which reduces a goal recognition problem to the problem of plan generation; and Combinatory Categorial Grammars (CCGs), which treat goal recognition as a parsing problem. Additionally, new advances in cognitive science with respect to Theory of Mind reasoning have yielded an approach to goal recognition that leverages analogy in its decision making. However, there is still much unknown about the potential and limitations of these approaches, especially with respect to one another. Here, we present an extension of the analogical approach to a novel algorithm, Refinement via Analogy for Goal Reasoning (RAGeR). We compare RAGeR to two state-of-the-art approaches that use planning and CCGs for goal recognition, respectively, along two different axes: reliability of observations and inspectability of the other agent's mental model. Overall, we show that no approach dominates across all cases and discuss the relative strengths and weaknesses of these approaches. Scientists interested in goal recognition problems can use this knowledge as a guide to select the correct starting point for their specific domains and tasks. | 2. Related Work
Goal Recognition is the process of inferring the top-level goal of a partial plan executed by an agent (Mirsky et al., 2021) and is of interest to a variety of AI-related research communities and topics, including cognitive science (Rabkina et al., 2017), gaming (Gold, 2010), human-robot teaming (Hiatt et al., 2017), and others. Related work falls along two axes: techniques for goal recognition, and assumptions placed on the information available to goal recognition. We will return to these axes in section 4.1 to describe how our work helps to better characterize the relative strengths and weaknesses of the approaches in different types of situations.
2.1. Goal Recognition Techniques
While there are many types of approaches that can be used for goal recognition, we focus on four conceptually different approaches here: theory of mind-based approaches, plan-based goal recognition, goal recognition as planning, and learned goal recognition.
2.1.1. Theory of Mind-Based Approaches
Work in theory of mind, which can include inferring another agent's intentions (i.e., goals), has yielded rich computational models that can model human judgments (Baker et al., 2011; Hiatt et al., 2011; Rabkina et al., 2017). Görür et al. (2017) describe one theory of mind-based intent recognition approach that incorporates a human's emotional states into its recognition, focusing on determining when a human may or may not want a robot's assistance with their task. Our work, in contrast, focuses on improving the accuracy of the recognition step itself.
2.1.2. Case-Based Reasoning
Goal recognition can also be done via case-based reasoning (CBR), as demonstrated by Cox and Kerkez (2006) or Fagan and Cunningham (2003). Such approaches use case libraries that store sets of actions or observations of an agent along with the goal that the agent was accomplishing while performing those actions. The case libraries can be learned incrementally over time (Kerkez and Cox, 2003), and so do not always explicitly model an agent's behavior. When trying to recognize a goal, these approaches retrieve a case from their library that best matches the current situation, and use the goal of that case as the recognized goal. This is similar in spirit to one of the approaches we discuss here, RAGeR; however, RAGeR is unique in that its retrieval mechanism is based on cognitive analogy.
2.1.3. Plan-Based Goal Recognition
Plan-based goal recognition approaches generally utilize a library of the expected behaviors of observed agents that is based on a model of their behavior. These libraries have been represented in a variety of ways, including context-free grammars (CFGs) (Vilain, 1990), probabilistic CFGs (Pynadath and Wellman, 2000), partially-ordered multiset CFGs (Geib et al., 2008; Geib and Goldman, 2009; Mirsky and Gal, 2016), directed acyclic graphs (Avrahami-Zilberbrand and Kaminka, 2005), plan graphs (Kautz, 1991), hierarchical task networks (HTNs) (Höller et al., 2018), and combinatory categorial grammars (CCGs) (Geib, 2009). The last two are among the most popular, which is why PANDA-Rec and Elexir-MCTS, two of the approaches we explicitly analyze in this paper, are based on them.
2.1.4. Goal Recognition as Planning
Approaches that treat goal recognition as planning (e.g., Hong, 2001; Ramírez and Geffner, 2009; Ramírez and Geffner, 2010; E-Martin et al., 2015; Sohrabi et al., 2016; Vered and Kaminka, 2017; Pereira et al., 2020; Shvo and McIlraith, 2020; Shvo et al., 2020) do not use explicit plan libraries. These approaches use off-the-shelf classical planners to solve the goal recognition problem. Generally, when recognizing goals, these approaches generate plans for different possible goals and see which best match the observed behavior. The main advantage is that they then require only a model of the domain's actions instead of one that explicitly contains the expected behavior of observed agents. However, they are not always robust to differences between the generated plan and the executed plan.
2.1.5. Learned Goal Recognition
Gold (2010) uses an Input-Output Hidden Markov Model (Bengio and Frasconi, 1994) to recognize player goals from low-level actions in a top-down action adventure game. Ha et al. (2011) use a Markov Logic Network (Richardson and Domingos, 2006) to recognize goals in the educational game Crystal Island. Min et al. (2014) and Min et al. (2016) use deep learning techniques (i.e., stacked denoising autoencoders, Vincent et al., 2010; and Long Short-Term Memory, Hochreiter and Schmidhuber, 1997) to recognize goals in Crystal Island as well. Pereira et al. (2019) combine deep learning with planning techniques to recognize goals with continuous action spaces. Amado et al. (2018) also use deep learning in an unsupervised fashion to lessen the need for domain expertise in goal recognition approaches; Polyvyanyy et al. (2020) take a similar approach, but using process mining techniques. Learning these models requires existing data of agents' behaviors. In our approach, in contrast, we use domain knowledge to construct a model and so do not require this learning.
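As a purely illustrative reading of the learned formulation above, the sketch below frames goal recognition as sequence classification over observed action indices with a small LSTM, in the spirit of the cited deep learning recognizers; the architecture, dimensions, and names are our own assumptions, not any of the cited systems.

```python
import torch
import torch.nn as nn

class GoalRecognizerLSTM(nn.Module):
    """Illustrative LSTM that maps a sequence of observed actions to a candidate goal."""
    def __init__(self, num_actions, num_goals, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_actions, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_goals)

    def forward(self, action_ids):               # action_ids: (batch, seq_len) integer action indices
        x = self.embed(action_ids)
        _, (h_n, _) = self.lstm(x)               # h_n: (num_layers, batch, hidden_dim)
        return self.head(h_n[-1])                # unnormalized scores over candidate goals

# Toy usage: 10 possible actions, 3 candidate goals, one observed prefix of 4 actions.
model = GoalRecognizerLSTM(num_actions=10, num_goals=3)
observed_prefix = torch.tensor([[2, 5, 5, 1]])
goal_scores = torch.softmax(model(observed_prefix), dim=-1)  # useful only after training on traces
```

The contrast with the other families of approaches is visible here: such a model needs labeled behavior traces, whereas the model-based approaches rely on hand-built domain knowledge.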
2.2. Characteristics of Goal Recognition Data
We consider here work related to what data is available for goal recognition. Specifically, we consider levels of inspectability of the other agent's mental model in the data and levels of reliability of the observations that comprise the data.
With respect to inspectability, most approaches evaluate on data that has a constant level of agent inspectability. Generally speaking, that is at the level of knowing the actions that an agent takes (vs. the full plan, or vs. only observing their behavior). We therefore focus this discussion on work related to the reliability of data.
As with inspectability, most prior work uses a single dataset with a particular set of characteristics (whether reliable or not) to evaluate competing goal recognition approaches. Sohrabi et al. (2016) provide one exception to this, and consider unreliable observations that can be missing or noisy (i.e., incorrect). They show that noisy observations can, for some approaches that perform goal recognition as planning, prove more challenging than missing observations; this can be mitigated, however, by adding penalties for missing or noisy observations into the “costs” that rank candidate plans. Borrajo and Veloso (2020) handle such noise by using plan-based distance measures between observed execution traces and candidate plan traces. We also consider these two types of reliability in our experiments.
Ramírez and Geffner (2011) look at how a partially-observable Markov decision process (POMDP) performing goal recognition can handle missing or noisy observations, in part because of its probabilistic representation of agent behavior. POMDPs can be computationally expensive to solve, however, precluding our use of them here.
Another prior study that looked at inspectability compared a goal recognition via analogy approach, Analogical Theory of Mind (AToM), with an HTN-based goal recognition approach (Rabkina et al., 2020). It showed that while the HTN-based approach performed better under high inspectability, it degraded quickly as inspectability lessened, whereas AToM maintained fairly high accuracy throughout. We include the same HTN-based approach, PANDA-Rec, in this paper, as well as RAGeR, a goal recognition approach that is an extension of AToM.
A long line of work focuses on learning action models from partial or noisy traces. Wang (1995) created a system to learn the preconditions and effects of STRIPS planning operators from expert traces and demonstrated that having the system refine the learned knowledge yields results as good as expertly crafted operators. Amir and Chang (2008) develop a method for online, incremental learning of action models for partially observable deterministic domains. They demonstrate that the approach can learn exact action models from a partially visible subset of the traces from benchmark PDDL problems from the 1998 and 2002 International Planning Competitions. Mourao et al. (2012), in turn, are able to learn STRIPS planning operators from noisy and incomplete observations by using classifiers, which are robust to noise and partial observability, as an intermediate step in the translation. Pasula et al. (2007), in contrast, look at learning symbolic models of domains with noisy, non-deterministic action effects.
Their plan rules are both relational and probabilistic, and are learned by selecting the model that maximizes the likelihood of the input action effects. Zhuo and Kambhampati (2013) consider how to learn action models where actions are not always correctly specified (i.e., “pickup” instead of “putdown”). Gregory and Lindsay (2016) developed an approach for the automated acquisition of models for numeric domains (such as tracking resource usage). Related approaches can also operate when their underlying model may not be correct, and take steps to update it iteratively during execution (Chakraborti et al., 2019). While we assume that the models used by the three approaches we consider are pre-existing and correct, this prior work could be incorporated into the approaches discussed here to initially learn the domain models, or to improve their models and goal recognition over time. | [
"26215079",
"27322750",
"28481471",
"9377276",
"26215079"
] | [
{
"pmid": "26215079",
"title": "The Synthesis, Characterization, Crystal Structure and Photophysical Properties of a New Meso-BODIPY Substituted Phthalonitrile.",
"abstract": "A new highly fluorescent difluoroboradipyrromethene (BODIPY) dye (4) bearing an phthalonitrile group at meso-position of the chromophoric core has been synthesized starting from 4-(4-meso-dipyrromethene-phenoxy)phthalonitrile (3) which was prepared by the oxidation of 4-(2-meso-dipyrromethane-phenoxy)phthalonitrile (2). The structural, electronic and photophysical properties of the prepared dye molecule were investigated. The final product exhibit noticeable spectroscopic properties which were examined by its absorption and fluorescence spectra. The original compounds prepared in the reaction pathway were characterized by the combination of FT-IR, (1)H and (13)C NMR, UV-vis and MS spectral data. It has been calculated; molecular structure, vibrational frequencies, (1)H and (13)C NMR chemical shifts and HOMO and LUMO energies of the title compound by using B3LYP method with 6-311++G(dp) basis set, as well. The final product (4) was obtained as single crystal which crystallized in the triclinic space group P-1 with a = 9.0490 (8) Å, b = 10.5555 (9) Å, c = 11.7650 (9) Å, α = 77.024 (6)°, β = 74.437 (6)°, γ = 65.211 (6)° and Z = 2. The crystal structure has intermolecular C-H···F weak hydrogen bonds. The singlet oxygen generation ability of the dye (4) was also investigated in different solvents to determine of using in photodynamic therapy (PDT)."
},
{
"pmid": "27322750",
"title": "Extending SME to Handle Large-Scale Cognitive Modeling.",
"abstract": "Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into the SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n2 log(n)); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before."
},
{
"pmid": "28481471",
"title": "Analogy Lays the Foundation for Two Crucial Aspects of Symbolic Development: Intention and Correspondence.",
"abstract": "We argue that analogical reasoning, particularly Gentner's (1983, 2010) structure-mapping theory, provides an integrative theoretical framework through which we can better understand the development of symbol use. Analogical reasoning can contribute both to the understanding of others' intentions and the establishment of correspondences between symbols and their referents, two crucial components of symbolic understanding. We review relevant research on the development of symbolic representations, intentionality, comparison, and similarity, and demonstrate how structure-mapping theory can shed light on several ostensibly disparate findings in the literature. Focusing on visual symbols (e.g., scale models, photographs, and maps), we argue that analogy underlies and supports the understanding of both intention and correspondence, which may enter into a reciprocal bootstrapping process that leads children to gain the prodigious human capacity of symbol use."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "26215079",
"title": "The Synthesis, Characterization, Crystal Structure and Photophysical Properties of a New Meso-BODIPY Substituted Phthalonitrile.",
"abstract": "A new highly fluorescent difluoroboradipyrromethene (BODIPY) dye (4) bearing an phthalonitrile group at meso-position of the chromophoric core has been synthesized starting from 4-(4-meso-dipyrromethene-phenoxy)phthalonitrile (3) which was prepared by the oxidation of 4-(2-meso-dipyrromethane-phenoxy)phthalonitrile (2). The structural, electronic and photophysical properties of the prepared dye molecule were investigated. The final product exhibit noticeable spectroscopic properties which were examined by its absorption and fluorescence spectra. The original compounds prepared in the reaction pathway were characterized by the combination of FT-IR, (1)H and (13)C NMR, UV-vis and MS spectral data. It has been calculated; molecular structure, vibrational frequencies, (1)H and (13)C NMR chemical shifts and HOMO and LUMO energies of the title compound by using B3LYP method with 6-311++G(dp) basis set, as well. The final product (4) was obtained as single crystal which crystallized in the triclinic space group P-1 with a = 9.0490 (8) Å, b = 10.5555 (9) Å, c = 11.7650 (9) Å, α = 77.024 (6)°, β = 74.437 (6)°, γ = 65.211 (6)° and Z = 2. The crystal structure has intermolecular C-H···F weak hydrogen bonds. The singlet oxygen generation ability of the dye (4) was also investigated in different solvents to determine of using in photodynamic therapy (PDT)."
}
] |
Frontiers in Plant Science | null | PMC8850787 | 10.3389/fpls.2022.731816 | Background-Aware Domain Adaptation for Plant Counting | Deep learning-based object counting models have recently been considered preferable choices for plant counting. However, the performance of these data-driven methods would probably deteriorate when a discrepancy exists between the training and testing data. Such a discrepancy is also known as the domain gap. One way to mitigate the performance drop is to use unlabeled data sampled from the testing environment to correct the model behavior. This problem setting is also called unsupervised domain adaptation (UDA). Although UDA has been a long-standing topic in the machine learning community, UDA methods are less studied for plant counting. In this paper, we first evaluate some frequently-used UDA methods on the plant counting task, including feature-level and image-level methods. By analyzing the failure patterns of these methods, we propose a novel background-aware domain adaptation (BADA) module to address the drawbacks. We show that BADA can easily fit into object counting models to improve the cross-domain plant counting performance, especially on background areas. Benefiting from learning where to count, background counting errors are reduced. We also show that BADA can work with adversarial training strategies to further enhance the robustness of counting models against the domain gap. We evaluated our method on 7 different domain adaptation settings, including different camera views, cultivars, locations, and image acquisition devices. Results demonstrate that our method achieved the lowest Mean Absolute Error on 6 out of the 7 settings. The usefulness of BADA is also supported by controlled ablation studies and visualizations. | 2. Related work
In this section, we briefly review the applications of machine learning in plant science. Then we focus on the object counting methods and the unsupervised domain adaptation (UDA) methods in the open literature.
Machine Learning. Machine learning is a useful tool for plant science: it can model the relationships and patterns between targets and factors given a set of data. It is widely used in many non-destructive phenotyping tasks, e.g., field estimation (Yoosefzadeh-Najafabadi et al., 2021) and plant identification (Tsaftaris et al., 2016). A dominating trend in machine learning is deep learning, as deep learning models can learn to extract robust features and complete the tasks in an end-to-end manner. Deep learning-based methods have shown great advantages in different tasks of plant phenomics, e.g., plant counting (Lu et al., 2017b), detection (Bargoti and Underwood, 2017; Madec et al., 2019), segmentation (Tsaftaris et al., 2016), and classification (Lu et al., 2017a). For in-field plant counting tasks (from RGB images), deep learning-based methods show great robustness against different illuminations, scales, and complex backgrounds (Lu et al., 2017b). The release of datasets (David et al., 2020; Lu et al., 2021) also accelerates the development of deep learning-based plant counting methods. Therefore, deep learning has become the default choice for in-field plant counting.
Object counting. Plant counting is a subset of object counting. Object counting aims to infer the number of target objects in the input images. Current cutting-edge object counting methods (Lempitsky and Zisserman, 2010; Zhang et al., 2015; Arteta et al., 2016; Onoro-Rubio and López-Sastre, 2016; Li et al., 2018; Ma et al., 2019; Xiong et al., 2019b; Wang et al., 2020) utilize the power of deep learning and formulate the object counting problem as a regression task. A fully-convolutional neural network is trained to predict density maps (Lempitsky and Zisserman, 2010) for target objects, where the value of each pixel denotes the local counting value. The integral of the density map is equal to the total number of objects.
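The density-map formulation just described reduces counting to summing a predicted map. The sketch below illustrates only that bookkeeping step with a hand-made toy map; the values and the MAE helper are our own illustrations, not taken from any cited model.

```python
import numpy as np

# Toy predicted density map for a 4x4 patch: each pixel holds a local count value.
# A real counting model (a fully-convolutional network) would predict this from an RGB image.
density_map = np.array([
    [0.00, 0.10, 0.20, 0.00],
    [0.05, 0.40, 0.30, 0.00],
    [0.00, 0.25, 0.45, 0.05],
    [0.00, 0.00, 0.15, 0.05],
])

predicted_count = density_map.sum()   # the integral of the density map approximates the object count

def mean_absolute_error(pred, true):
    """Counting error metric commonly reported in the plant counting literature."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(true))))

print(predicted_count)                              # 2.0
print(mean_absolute_error([predicted_count], [2]))  # 0.0
```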
Inspired by the success of these methods in crowd counting, a constellation of methods (Lu et al., 2017b; Xiong et al., 2019a; Liu et al., 2020) and datasets (David et al., 2020; Lu et al., 2021) have been proposed for plant counting. However, existing plant counting methods neglect the influence of the domain gap, which is common in real applications.
Unsupervised domain adaptation. The harm of domain gaps is common for data-driven methods (Ganin and Lempitsky, 2015; Vu et al., 2019). Therefore, UDA has been a long-standing topic in the deep learning community, where unlabeled data collected in the target domain are utilized to improve the model performance on that domain. Ben-David et al. (2010) theoretically prove that domain adaptation can be achieved by narrowing the domain gap. One can achieve this at the feature level or, more directly, at the image level. The feature-level methods (Ganin and Lempitsky, 2015; Tzeng et al., 2017) align features to be domain-invariant. The image-level methods (Zhu et al., 2017; Wang et al., 2019; Yang and Soatto, 2020; Yang et al., 2020) manipulate the styles of images, e.g., hues, illuminations, and textures, to bring the images of the two domains closer. Some UDA methods have been proposed to address the domain gap for plant counting (Giuffrida et al., 2019; Ayalew et al., 2020). However, existing UDA methods for plant counting directly adopt the generic feature-level UDA methods. This motivates us to test different UDA methods in the context of plant counting. | [
"33313551",
"33259318",
"33313541",
"29118821",
"27295638",
"27810146",
"31857821"
] | [
{
"pmid": "33313551",
"title": "Global Wheat Head Detection (GWHD) Dataset: A Large and Diverse Dataset of High-Resolution RGB-Labelled Images to Develop and Benchmark Wheat Head Detection Methods.",
"abstract": "The detection of wheat heads in plant images is an important task for estimating pertinent wheat traits including head population density and head characteristics such as health, size, maturity stage, and the presence of awns. Several studies have developed methods for wheat head detection from high-resolution RGB imagery based on machine learning algorithms. However, these methods have generally been calibrated and validated on limited datasets. High variability in observational conditions, genotypic differences, development stages, and head orientation makes wheat head detection a challenge for computer vision. Further, possible blurring due to motion or wind and overlap between heads for dense populations make this task even more complex. Through a joint international collaborative effort, we have built a large, diverse, and well-labelled dataset of wheat images, called the Global Wheat Head Detection (GWHD) dataset. It contains 4700 high-resolution RGB images and 190000 labelled wheat heads collected from several countries around the world at different growth stages with a wide range of genotypes. Guidelines for image acquisition, associating minimum metadata to respect FAIR principles, and consistent head labelling methods are proposed when developing new head detection datasets. The GWHD dataset is publicly available at http://www.global-wheat.com/and aimed at developing and benchmarking methods for wheat head detection."
},
{
"pmid": "33259318",
"title": "Feature-Aware Adaptation and Density Alignment for Crowd Counting in Video Surveillance.",
"abstract": "With the development of deep neural networks, the performance of crowd counting and pixel-wise density estimation is continually being refreshed. Despite this, there are still two challenging problems in this field: 1) current supervised learning needs a large amount of training data, but collecting and annotating them is difficult and 2) existing methods cannot generalize well to the unseen domain. A recently released synthetic crowd dataset alleviates these two problems. However, the domain gap between the real-world data and synthetic images decreases the models' performance. To reduce the gap, in this article, we propose a domain-adaptation-style crowd counting method, which can effectively adapt the model from synthetic data to the specific real-world scenes. It consists of multilevel feature-aware adaptation (MFA) and structured density map alignment (SDA). To be specific, MFA boosts the model to extract domain-invariant features from multiple layers. SDA guarantees the network outputs fine density maps with a reasonable distribution on the real domain. Finally, we evaluate the proposed method on four mainstream surveillance crowd datasets, Shanghai Tech Part B, WorldExpo'10, Mall, and UCSD. Extensive experiments are evidence that our approach outperforms the state-of-the-art methods for the same cross-domain counting problem."
},
{
"pmid": "33313541",
"title": "High-Throughput Rice Density Estimation from Transplantation to Tillering Stages Using Deep Networks.",
"abstract": "Rice density is closely related to yield estimation, growth diagnosis, cultivated area statistics, and management and damage evaluation. Currently, rice density estimation heavily relies on manual sampling and counting, which is inefficient and inaccurate. With the prevalence of digital imagery, computer vision (CV) technology emerges as a promising alternative to automate this task. However, challenges of an in-field environment, such as illumination, scale, and appearance variations, render gaps for deploying CV methods. To fill these gaps towards accurate rice density estimation, we propose a deep learning-based approach called the Scale-Fusion Counting Classification Network (SFC2Net) that integrates several state-of-the-art computer vision ideas. In particular, SFC2Net addresses appearance and illumination changes by employing a multicolumn pretrained network and multilayer feature fusion to enhance feature representation. To ameliorate sample imbalance engendered by scale, SFC2Net follows a recent blockwise classification idea. We validate SFC2Net on a new rice plant counting (RPC) dataset collected from two field sites in China from 2010 to 2013. Experimental results show that SFC2Net achieves highly accurate counting performance on the RPC dataset with a mean absolute error (MAE) of 25.51, a root mean square error (MSE) of 38.06, a relative MAE of 3.82%, and a R 2 of 0.98, which exhibits a relative improvement of 48.2% w.r.t. MAE over the conventional counting approach CSRNet. Further, SFC2Net provides high-throughput processing capability, with 16.7 frames per second on 1024 × 1024 images. Our results suggest that manual rice counting can be safely replaced by SFC2Net at early growth stages. Code and models are available online at https://git.io/sfc2net."
},
{
"pmid": "29118821",
"title": "TasselNet: counting maize tassels in the wild via local counts regression network.",
"abstract": "BACKGROUND\nAccurately counting maize tassels is important for monitoring the growth status of maize plants. This tedious task, however, is still mainly done by manual efforts. In the context of modern plant phenotyping, automating this task is required to meet the need of large-scale analysis of genotype and phenotype. In recent years, computer vision technologies have experienced a significant breakthrough due to the emergence of large-scale datasets and increased computational resources. Naturally image-based approaches have also received much attention in plant-related studies. Yet a fact is that most image-based systems for plant phenotyping are deployed under controlled laboratory environment. When transferring the application scenario to unconstrained in-field conditions, intrinsic and extrinsic variations in the wild pose great challenges for accurate counting of maize tassels, which goes beyond the ability of conventional image processing techniques. This calls for further robust computer vision approaches to address in-field variations.\n\n\nRESULTS\nThis paper studies the in-field counting problem of maize tassels. To our knowledge, this is the first time that a plant-related counting problem is considered using computer vision technologies under unconstrained field-based environment. With 361 field images collected in four experimental fields across China between 2010 and 2015 and corresponding manually-labelled dotted annotations, a novel Maize Tassels Counting (MTC) dataset is created and will be released with this paper. To alleviate the in-field challenges, a deep convolutional neural network-based approach termed TasselNet is proposed. TasselNet can achieve good adaptability to in-field variations via modelling the local visual characteristics of field images and regressing the local counts of maize tassels. Extensive results on the MTC dataset demonstrate that TasselNet outperforms other state-of-the-art approaches by large margins and achieves the overall best counting performance, with a mean absolute error of 6.6 and a mean squared error of 9.6 averaged over 8 test sequences.\n\n\nCONCLUSIONS\nTasselNet can achieve robust in-field counting of maize tassels with a relatively high degree of accuracy. Our experimental evaluations also suggest several good practices for practitioners working on maize-tassel-like counting problems. It is worth noting that, though the counting errors have been greatly reduced by TasselNet, in-field counting of maize tassels remains an open and unsolved problem."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "31857821",
"title": "TasselNetv2: in-field counting of wheat spikes with context-augmented local regression networks.",
"abstract": "BACKGROUND\nGrain yield of wheat is greatly associated with the population of wheat spikes, i.e., . To obtain this index in a reliable and efficient way, it is necessary to count wheat spikes accurately and automatically. Currently computer vision technologies have shown great potential to automate this task effectively in a low-end manner. In particular, counting wheat spikes is a typical visual counting problem, which is substantially studied under the name of object counting in Computer Vision. TasselNet, which represents one of the state-of-the-art counting approaches, is a convolutional neural network-based local regression model, and currently benchmarks the best record on counting maize tassels. However, when applying TasselNet to wheat spikes, it cannot predict accurate counts when spikes partially present.\n\n\nRESULTS\nIn this paper, we make an important observation that the counting performance of local regression networks can be significantly improved via adding visual context to the local patches. Meanwhile, such context can be treated as part of the receptive field without increasing the model capacity. We thus propose a simple yet effective contextual extension of TasselNet-TasselNetv2. If implementing TasselNetv2 in a fully convolutional form, both training and inference can be greatly sped up by reducing redundant computations. In particular, we collected and labeled a large-scale wheat spikes counting (WSC) dataset, with 1764 high-resolution images and 675,322 manually-annotated instances. Extensive experiments show that, TasselNetv2 not only achieves state-of-the-art performance on the WSC dataset ( counting accuracy) but also is more than an order of magnitude faster than TasselNet (13.82 fps on images). The generality of TasselNetv2 is further demonstrated by advancing the state of the art on both the Maize Tassels Counting and ShanghaiTech Crowd Counting datasets.\n\n\nCONCLUSIONS\nThis paper describes TasselNetv2 for counting wheat spikes, which simultaneously addresses two important use cases in plant counting: improving the counting accuracy without increasing model capacity, and improving efficiency without sacrificing accuracy. It is promising to be deployed in a real-time system with high-throughput demand. In particular, TasselNetv2 can achieve sufficiently accurate results when training from scratch with small networks, and adopting larger pre-trained networks can further boost accuracy. In practice, one can trade off the performance and efficiency according to certain application scenarios. Code and models are made available at: https://tinyurl.com/TasselNetv2."
}
] |
Frontiers in Artificial Intelligence | null | PMC8851243 | 10.3389/frai.2021.723936 | Models of Intervention: Helping Agents and Human Users Avoid Undesirable Outcomes | When working in an unfamiliar online environment, it can be helpful to have an observer that can intervene and guide a user toward a desirable outcome while avoiding undesirable outcomes or frustration. The Intervention Problem is deciding when to intervene in order to help a user. The Intervention Problem is similar to, but distinct from, Plan Recognition because the observer must not only recognize the intended goals of a user but also when to intervene to help the user when necessary. We formalize a family of Intervention Problems and show how these problems can be solved using a combination of Plan Recognition methods and classification algorithms to decide whether to intervene. For our benchmarks, the classification algorithms dominate three recent Plan Recognition approaches. We then generalize these results to Human-Aware Intervention, where the observer must decide in real time whether to intervene as human users solve a cognitively engaging puzzle. Using a revised feature set more appropriate to human behavior, we produce a learned model to recognize when a human user is about to trigger an undesirable outcome. We perform a human-subject study to evaluate the Human-Aware Intervention. We find that the revised model also dominates existing Plan Recognition algorithms in predicting Human-Aware Intervention. | 8. Related Work
We discuss prior research related to the Intervention Problem, beginning with plan/goal recognition. This is because in order to intervene, the observer must first recognize what the actor is trying to accomplish in the domain. In the two intervention models we have proposed, Unsafe Suffix Analysis Intervention and Human-aware Intervention, the observer has limited interaction with the actor. For example, upon sensing an action and determining that it requires intervention, the observer simply executes the accept-observation action to admit the observed action into H. In real-life situations, helpful intervention requires more observer engagement, i.e., an active observer to help the actor recover from intervention. Therefore, we discuss existing work on designing active observers having both recognition and interaction capabilities. Next, we discuss related work on dealing with misconceptions held by the actor about the planning domain and AI safety in general. Because we specifically focus on developing intervention models for human users, we discuss related work on using machine learning algorithms to classify human user behavior and on how intervention is leveraged to provide intelligent help to human users.
8.1. Plan Recognition
The Plan Recognition Problem is to “take as input a sequence of actions performed by an actor and to infer the goal pursued by the actor and also to organize the action sequence in terms of a plan structure” (Schmidt et al., 1978). Early solutions to the Plan Recognition problem require that the part of a plan given as input to the recognizer be matched to a plan library. Generalized Plan Recognition (Kautz and Allen, 1986) identifies a minimal set of top-level actions sufficient to explain the set of observed actions. The plans are modeled in a graph, where the nodes are actions and there are two types of edges: action specialization and decomposition. The Plan Recognition task then becomes the minimum vertex cover problem of the graph.
Geib and Goldman (2009) represent the plan library for a cyber security domain as partially ordered AND/OR trees. In order to recognize the actor's plan, the recognizer needs to derive a probability distribution over a set of likely explanation plans π given observations O, P(π|O). To extract the likely explanation plans, the authors define a grammar to parse the AND/OR trees and generatively build the explanations by starting off with a “guess” and refining it as more observations arrive. The actor is hostile to the recognition task and will hide some actions, causing the recognizer to deal with partial observability when computing P(π|O).
One concern with using plan libraries for recognition is the noise in the input observations. The case-based Plan Recognition approach (Vattam and Aha, 2015) relaxes the error-free requirement of the observations and introduces a recognition algorithm that can handle missing and misclassified observations. The solution assumes that there exists a plan library consisting of a set of cases stored using a labeled, directed graph called an action sequence graph, which encodes an action-state sequence that preserves the order of the actions and states.
Ramírez and Geffner (2010) proposed a recognition solution that does not rely on defining a plan library. By compiling the observations away into a planning language called PDDL, their approach exploits automated planners to find plans that are compatible with the observations. Ramírez and Geffner's solution recognizes the actor's goals (and plans) by accounting for the cost differences of two types of plans for each candidate goal: (1) plans that reach the goal while going through the observations and (2) plans that reach the goal without going through the observations. Ramírez and Geffner characterize the likelihood P(O|g) as a Boltzmann distribution, P(O|g) = e^{-βΔ_g} / (1 + e^{-βΔ_g}), where Δ_g is the cost difference for goal g and β is a positive constant. Sohrabi et al. (2016a) propose two extensions to Ramírez and Geffner's Plan Recognition work. First, the recognition system can now handle noisy or missing observations. Their new approach to “Compiling Observations Away” modifies the planning domain to include action costs, specifically penalties for noisy/missing observations. Second, recognition is defined for observations over state variables.
In plan recognition, the recognizer reasons about the likely goals from a set of goal hypotheses given the actor's behavior. The recognizer's task may fail if it cannot accurately disambiguate between possible goal hypotheses. Mirsky et al. (2018) propose sequential plan recognition, where the user is sequentially queried in real-time to verify whether the observed partial plan is correct. The actor's answers are used to prune the possible hypotheses, while accounting for the incomplete plans that could match the observations after several other observations happen in the future. In order to optimize the querying process, the recognizer considers only the queries that maximize the information gain and the likelihood of the resulting hypotheses. This solution assumes that a plan library is available. Their implementation of the plan library uses trees to represent the possible plans for goal hypotheses. Sequential plan recognition is not really suited for the intervention scenarios we discuss in this work, because we assume that the undesirable state that must be avoided is hidden from the user.
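To make the cost-difference formulation of Ramírez and Geffner described above more tangible, the sketch below converts per-goal plan costs (assumed to come from some off-the-shelf planner) into a goal posterior via the Boltzmann likelihood; the uniform goal prior and all numbers are illustrative assumptions, not the published implementation.

```python
import math

def goal_posterior(cost_with_obs, cost_without_obs, beta=1.0):
    """Rank candidate goals from plan costs, in the spirit of goal recognition as planning.

    cost_with_obs[g]    -- cost of an optimal plan for g that goes through the observations
    cost_without_obs[g] -- cost of an optimal plan for g that ignores the observations
    """
    likelihood = {}
    for g in cost_with_obs:
        delta = cost_with_obs[g] - cost_without_obs[g]                           # Δ_g for goal g
        likelihood[g] = math.exp(-beta * delta) / (1 + math.exp(-beta * delta))  # P(O|g)
    z = sum(likelihood.values())                                                 # uniform prior assumed
    return {g: p / z for g, p in likelihood.items()}                             # P(g|O)

# Toy numbers: the observations fit goal "A" well (Δ = 0) and goal "B" poorly (Δ = 6).
print(goal_posterior({"A": 10, "B": 18}, {"A": 10, "B": 12}))
```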
Online goal recognition (Vered and Kaminka, 2017) extends the recognition problem to continuous domains, where the recognition problem must be solved for every new observation as it is revealed. Formally, we seek to determine the probability of a goal g given observations O, P(g|O), for each goal g∈G. The recognized goal is the one that has the highest posterior probability. Instead of taking the cost difference (Ramírez and Geffner's approach), they define a ratio score(g) = cost(i_g) / cost(m_g), where i_g is the optimal plan to achieve g and m_g is the optimal plan that achieves g and includes all the observations. When the optimal plan that includes all the observations has the same cost as the optimal plan i_g, the score approaches 1. Then P(g|O) = η · score(g), where η is a normalizing constant. The optimal plan i_g can be computed using a planner. To compute m_g, they exploit the fact that each observation is a trajectory or point in the continuous space and each likely plan is also a trajectory in the same space. Therefore, m_g = prefix + suffix, where the prefix is built by concatenating all observations in O into a single trajectory, and the suffix is generated by calling a planner from the last observed point to goal g. Follow-up work further reduces the computational cost of online goal recognition by introducing landmarks to prune the likely goals (Vered et al., 2018). Landmarks are facts that must be true at some point in all valid plans that achieve a goal from an initial state (Hoffmann et al., 2004). Goal recognition is performed by using landmarks to compute the completion ratio of the likely goals as a proxy for estimating P(g|O).
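A rough sketch of the landmark completion-ratio idea just mentioned: each candidate goal is scored by the fraction of its landmarks already achieved by the observed prefix, and the scores are normalized into a posterior-like ranking. The landmark sets below are toy placeholders, not the output of an actual landmark extractor.

```python
def landmark_completion_ranking(goal_landmarks, achieved_facts):
    """Rank candidate goals by the fraction of their landmarks achieved so far."""
    ratios = {
        g: (len(lms & achieved_facts) / len(lms)) if lms else 0.0
        for g, lms in goal_landmarks.items()
    }
    total = sum(ratios.values()) or 1.0
    return {g: r / total for g, r in ratios.items()}   # proxy for P(g|O)

# Toy logistics-style domain with hand-made landmark sets for two candidate goals.
goal_landmarks = {
    "deliver_package": {"holding_package", "truck_at_depot", "truck_at_office"},
    "refuel_truck": {"truck_at_station", "has_fuel"},
}
achieved = {"holding_package", "truck_at_depot"}
print(landmark_completion_ranking(goal_landmarks, achieved))   # favors "deliver_package"
```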
8.2. Goal/Plan Recognition With an Active Observer
While the goal/plan recognition works discussed in the previous section assume a passive observer, a growing body of work has also looked into recognition problems with active observers. Recognizing when intervention is needed (as a passive observer) solves only part of the problem. In cases where intervention is used for an artificial agent, active observers can force the agent to alter its current plan. When intervention happens during a cognitively engaging task, as in the Rush Hour puzzle, a human user would naturally like to know what to do next. An active observer who can take action or give instructions to the human user will not only be able to assist the user in completing the task safely but will also improve the human user's interaction with the AI system.
Bisson et al. (2011) propose a plan-library-based plan recognition technique to provoke the observed agent so that it becomes easier to disambiguate between pending goal hypotheses. The observer modifies the fluents associated with a provokable action, which forces the observed agent to react to the modification. The provokable event is selected heuristically such that it reduces the uncertainty among the observed agent's likely goals. In another approach that aims to expedite goal recognition, Shvo and McIlraith (2020) use landmarks to eliminate hypothesized goals. They define the Active Goal Recognition problem for an observer agent who can execute sensing and world-altering actions. The observer executes a contingent plan containing the sensing and world-altering actions to confirm/refute the landmarks of the planning problems for each goal hypothesis. Goal hypotheses whose landmarks (for the corresponding planning problem) are refuted by the execution of the contingent plan are removed from the set of likely goals. Although the initial problem definition assumes that the observer's contingent plan is non-intervening and is primarily used to reduce the goal hypotheses, Shvo and McIlraith (2020) also propose an extension where the observer can actively impede or aid the actor. For example, the authors suggest adopting the Counter-planning Algorithm proposed by Pozanco et al. (2018) to generate a plan for the observer to impede the actor, after the actor's goals are identified through Active Goal Recognition. Pozanco's Counter-planning Algorithm is designed for a domain where two adversarial agents (seeking and preventing) pursue different goals. In the context of the Active Goal Recognition problem, the seeking agent is the actor while the preventing agent is the observer. Counter-planning requires that the observer quickly identify the seeking agent's goal. They use Ramírez and Geffner's probabilistic goal recognition algorithm to perform goal recognition. Then the preventing agent actively intervenes by identifying the earliest landmark for the seeking agent's planning problem (for the recognized goal) that needs to be blocked (i.e., the counter-planning landmark). The recognizer uses automated planning to generate a plan to achieve the counter-planning landmark (e.g., negating the landmark), thus blocking the seeking agent's goal achievement. The aforementioned works in Active Goal Recognition assume full observability over the actor. Amato and Baisero (2019) relax this constraint, propose Active Goal Recognition with partial observability over the actor, and model the planning problem as a partially observable Markov decision process (POMDP) (Kaelbling et al., 1998). Similar to the previously discussed Active Goal Recognition problems, the observer agent is trying to reach its own goal as well as correctly predict the chosen goal of the actor. Therefore, they define the Active Goal Recognition problem for the observer by augmenting the observer's action space with the actor's actions, the observer's own actions, and decision actions on the actor's goals. The state space is defined as the Cartesian product of the observer's states, the actor's states, and the actor's goals. The goals for the recognition problem are augmented with the observer's own goals and the prediction of the actor's goals. A solution to this planning problem starts at the initial states of the observer and the actor and chooses actions to reach the augmented goal while minimizing the cost (or maximizing a reward). A POMDP is defined to solve the augmented planning problem (i.e., the Active Goal Recognition problem).
The goal recognition algorithms discussed above mainly focus on pruning the pending goal hypotheses to allow the observer to quickly disambiguate between goals. To accomplish this objective, Shvo et al. use sensing and world-altering actions to confirm/refute the landmarks. Counter-planning also uses landmarks. Bisson et al. use heuristics. Other solutions for goal recognition take a decision-theoretic approach where the observer attempts to find plans to achieve its own goals while predicting the user's goal, optimizing over some reward function. Our intervention models differ from these solutions in the intervention recognition task because we do not prune the goal hypotheses.
Instead, we emphasize accurately recognizing whether an actor's revealed plan is unhelpful (and must be interrupted) when the plans leading to the goal hypotheses share common prefixes, making disambiguation difficult. We use machine learning to learn the differences between helpful and unhelpful plan suffixes and use that information to decide when to intervene. We rely on the same plan properties as existing recognition algorithms to learn the differences between plan suffixes: plan cost and landmarks. In addition, we have shown that plan distance metrics can also be used to differentiate between helpful and unhelpful plan suffixes.
The next step in our work is to extend the Human-aware Intervention model so that the observer can actively help the human user modify his plan following the recognition phase. The works we discussed in this section have already addressed this requirement in agent environments, where the observer also executes actions to support the goal recognition process. Pozanco et al. take a step further and show that, following recognition, the observer can impede the actor using planning. Freedman and Zilberstein (2017) discuss a method that allows the observer to interact with the actor while the actor's plan is in progress, with fewer observations available. Our experiments validate their argument that plan/goal recognition by itself is more useful as a post-processing step when the final actions are observed, which will be too late for the Intervention problems we discuss in this work. Our solution addresses this limitation, allowing the observer to recognize “before it's too late” that the undesirable state is developing. We use machine learning to perform the recognition task. In contrast, Freedman and Zilberstein (2017) propose a domain modification technique (similar to Ramírez and Geffner's) to formulate a planning problem that determines a relevant interactive response from the current state. Plans that agree (and do not agree) with the observations can now be found using an off-the-shelf planner on the modified domain. The actor's goal is recognized by comparing the costs of these plan sets. Following the recognition phase, they also define assistive and adversarial responsive actions the observer can execute during the interaction phase. Assistive responsive action generation is more related to our Intervention problem because our observer's goal is to help the actor avoid the undesirable state. The authors define an assistive interaction planning problem to generate a plan from the current state for the observer. This assistive plan uses the combined fluents of the actor and the observer, the Cartesian product of the actor's and the observer's actions (including no-op actions), and a modified goal condition for the observer. The assistive action generation through planning proposed by Freedman and Zilberstein (2017) is a complementary approach for the interactive Human-aware Intervention model we hope to implement in the next phase of this work. However, we will specifically focus on using automated planning to inform the decision-making process of the human actors following intervention. In addition, our work in Unsafe Suffix Recognition can be further extended by relaxing the assumptions we have made in the current implementation about the agents and the environment, specifically deterministic actions and full observability for the observer. This may require adopting planning techniques like the one proposed by Amato et al., but with different reward functions. For example, for intervention problems the reward function may take into account the freedom of the actor to reach his goal while ensuring safety.
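Before moving on, the sketch below makes the suffix-classification idea from this section more concrete: features of a candidate plan suffix (plan cost, landmark completion, and a plan distance measure, the three properties named above) are fed to an off-the-shelf classifier that decides whether to intervene. The specific features, numbers, and model choice are illustrative, not the exact pipeline evaluated in the paper.

```python
from sklearn.ensemble import RandomForestClassifier

def suffix_features(suffix_cost, optimal_cost, landmarks_hit, landmarks_total, dist_to_safe_plan):
    """Illustrative feature vector for one candidate plan suffix."""
    return [
        suffix_cost / max(optimal_cost, 1e-6),     # cost ratio relative to an optimal plan
        landmarks_hit / max(landmarks_total, 1),   # landmark completion toward the desirable goal
        dist_to_safe_plan,                         # e.g., action-set distance to a known safe plan
    ]

# Toy training data: rows are suffixes; labels are 1 = intervene (unsafe), 0 = do not intervene.
X = [
    suffix_features(12, 10, 1, 4, 0.7),
    suffix_features(10, 10, 4, 4, 0.1),
    suffix_features(15, 10, 0, 4, 0.9),
    suffix_features(11, 10, 3, 4, 0.2),
]
y = [1, 0, 1, 0]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([suffix_features(14, 10, 1, 4, 0.8)]))   # likely [1]: intervene
```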
8.3. Dealing With Misconceptions Held by the Actor
In our intervention model, the undesirable state is hidden from the user. This is similar to the user having a misconception or a false belief about the domain, as the user “believes” the undesirable state is actually safe. Although for our Intervention problem we assume that the user's belief model is explicitly available to the observer, in other situations this assumption may not hold (e.g., the observer may have limited sensing capabilities). In this case, another agent in the environment (like the observer in our Intervention problem) needs to be able to acquire the beliefs the user has. Talamadupula et al. (2014) discuss a belief acquisition process for a search and rescue domain. The belief acquirer maps the beliefs into a planning problem, allowing it to predict the plan of the agent who is missing the beliefs. The predicted plan and the belief acquirer's own plans are then used to achieve coordination among human-robot teams.
Shvo et al. (2020), in Epistemic Multi-agent Planning, use a multi-agent modal logic to model an observer (and other actors) having different beliefs about the world and other actors. This is in contrast to Talamadupula et al. (2014), who use first-order logic. Given an Epistemic Plan Recognition problem (for an Epistemic Planning observer and an actor), the authors define an ill-formed plan with respect to some goal if and only if the plan achieves the goal with respect to the actor but does not achieve the goal from the observer's perspective. The authors highlight a limitation of Epistemic Plan Recognition (which also applies to normal Plan Recognition): the observer's recognition efficacy depends on the completeness and the veracity of the observer's beliefs about the environment and the actor. In addition, it is limited by how distinguishable the goals and the plans that need to be recognized are. Our intervention solution attempts to address the problem of improving recognition accuracy when plans are indistinguishable. Shvo et al. (2020) introduce adequacy for the recognition process when the actor's actual beliefs are different from the observer's beliefs about the actor. If the observer's beliefs about the actor's beliefs are adequate, then the observer can generate precisely all the plans that the actor can also generate for some goal that also satisfies the observations.
8.4. AI Safety
Using our Intervention models, an observer can recognize, with few false alarms/misses, that an undesirable state is developing. The recognition enables the observer to take some action to help the user avoid the undesirable state and complete the task safely. Therefore, our work is also a precursor to incorporating safety into AI systems.
Zhang et al. (2018) use a factored Markov Decision Process (MDP) to model a domain where an agent, while executing plans to achieve the goals that are desirable to a human user, also wants to avoid the negative side effects that the human user would find undesirable/surprising. The agent has complete knowledge about the MDP, but does not know about the domain features that the user has given permission to change.
In order to find safe-optimal policies, the agent partitions the domain features into free, locked, and unknown (treated as locked). The MDP is then solved using linear programming with constraints that prevent the policy from visiting states in which the values of locked or unknown features have changed (a toy sketch illustrating this locked-feature idea is given at the end of this subsection). The feature partitioning is similar to our analysis of safe and unsafe plan suffixes using features of plans, where we explore the plan space to recognize which plans enable/satisfy the undesirable state and which do not. In contrast to their model, we model the agents' environment as a deterministic domain using STRIPS. The policy generation process of Zhang et al. (2018) also interacts with the user (through querying) to find the safe-optimal policies that the user really cares about. Saisubramanian et al. (2020) propose a multi-objective approach to mitigating negative side effects. Given a task modeled as an MDP, the agent must optimize the reward for the assigned task (akin to the desirable goal in our Intervention problem) while minimizing the negative side effects (the undesirable state) within a maximum expected loss of the assigned task's reward (the slack). Being able to handle negative side effects caused by imperfect information in the environment is also pertinent to the Human-aware Intervention we propose. Although in this work we focus more on intervention recognition than on intervention response, it is also important to consider how the user's feedback and preferences can be factored into intervention recovery for more robust human-agent interaction.
Hadfield-Menell et al. (2016) introduce cooperative inverse reinforcement learning (CIRL) to ensure that an autonomous system poses no risks to the human user and aligns its values with those of the human in the environment. The key idea is that the observer (a robot) interactively attempts to maximize the human's reward while observing the actions executed by the human. The cooperative game environment is modeled as a Partially Observable Markov Decision Process (POMDP), and the reward function incentivizes the human to teach and the robot to learn, leading to cooperative learning behavior; the optimal policy pair for the robot and the human is found by solving this POMDP. Intervention is a continuous process in which the user and the agent interact repeatedly until the task is complete. Especially in helpful intervention (like the idea we propose in this work), repeated interaction allows the human user and the agent to learn more about the task and, hopefully, complete it safe-optimally. CIRL formalizes a solution to address this problem.
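As a concrete illustration of the locked-feature idea from Zhang et al. (2018) discussed above, the following toy Python sketch runs value iteration on a small grid MDP and simply prunes any transition that would change a locked feature (here, breaking a vase). This is our own deliberately simplified illustration of avoiding negative side effects, not the authors' linear-programming formulation; the grid layout, the feature, and the reward are hypothetical.

```python
# Toy illustration (our simplification, not the LP formulation of Zhang et al. 2018):
# value iteration on a small grid MDP in which any transition that changes a
# "locked" feature (breaking a vase) is treated as forbidden, so the resulting
# policy avoids the side effect while still reaching the goal.

GRID_W, GRID_H = 4, 3
GOAL = (3, 0)
VASE_CELL = (1, 0)          # stepping here breaks the vase (a locked feature)
ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
GAMMA = 0.95

def step(state, action):
    """Deterministic transition; state = ((x, y), vase_intact)."""
    (x, y), vase_intact = state
    dx, dy = ACTIONS[action]
    nx = min(max(x + dx, 0), GRID_W - 1)
    ny = min(max(y + dy, 0), GRID_H - 1)
    if (nx, ny) == VASE_CELL:
        vase_intact = False   # side effect on the locked feature
    return ((nx, ny), vase_intact)

def reward(state):
    return 1.0 if state[0] == GOAL else 0.0

def value_iteration(respect_locked, sweeps=100):
    states = [((x, y), v) for x in range(GRID_W) for y in range(GRID_H)
              for v in (True, False)]
    V = {s: 0.0 for s in states}
    for _ in range(sweeps):
        for s in states:
            candidates = []
            for a in ACTIONS:
                s2 = step(s, a)
                if respect_locked and s[1] and not s2[1]:
                    continue            # prune transitions that break the vase
                candidates.append(reward(s2) + GAMMA * V[s2])
            if candidates:
                V[s] = max(candidates)
    return V

start = ((0, 0), True)
print("value, side effects allowed :", round(value_iteration(False)[start], 3))
print("value, locked feature kept  :", round(value_iteration(True)[start], 3))
```

With the locked feature respected, the agent is forced onto the longer detour around the vase cell, so the value at the start state is slightly lower: a small loss of task reward is traded for side-effect safety, which is the trade-off the slack of Saisubramanian et al. (2020) makes explicit.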
8.5. User Behavior Classification
The design of observers for human users requires that the recognizer be able to identify human behavior and how well that behavior aligns with the goals of the system the users interact with. Human users are not always rational and may have hidden goals, so modeling humans as rational agents may be unrealistic in real-life scenarios. Behavior classification aims to give the passive observer some insight into the actor. The work of Borrajo and Veloso (2020) discusses the design of an observer that tries to learn the characteristics of other agents (humans and others) by observing their behavior as they take actions in a given environment.
Using the financial transactions domain as a case study, Borrajo and Veloso (2020) model two agents: the actor (e.g., a bank customer) and an observer (e.g., the banking institution). Only the actor can execute plans in the environment. The observer does not know the actor's goal and has partial observability of the actor's behavior (the actions the actor executes). The observer's task is then to classify the observed behavior into different known behavior classes. In order for the application to be domain-independent, the authors use plan distance measures (e.g., Jaccard similarity) between observed actions, and distances between observed states, as features to train the classifier. We use similar features to recognize the actor's plan prefixes that lead to undesirable states.
8.6. Providing Intelligent Help to Human Users Through Intervention
Virvou and Kabassi (2002a,b) discuss the design of a system that provides intelligent help for novice human users of a file manipulation software application. The Intelligent File Manipulator (IFM) is an online help system that automatically recognizes that an action may not have the effect the user desires and offers help by generating alternative actions that would achieve the user's goals. IFM uses a user modeling component to reason over the observed actions. The user modeling component combines a limited goal recognition mechanism and a simulator of users' reasoning based on Human Plausible Reasoning theory to generate hypotheses about possible errors the user might make.
There are some similarities between IFM and our proposed intervention framework. Both models use observations of actions as input for deciding intervention, and both assume that the user's goals are known. We now discuss some differences between the IFM intervention model and our proposed model, Intervention by Suffix Analysis. The IFM domain is modeled as a task hierarchy, while our domain models (benchmark and Rush Hour) are sequential. To map the user's observed sequence of actions to the plans leading to the desirable and undesirable states, our intervention model uses automated planning to explore the plan space; we then analyze the remaining plan suffixes using machine learning to decide intervention. IFM does not use automated planning. Instead, it uses a limited goal recognition mechanism called "instability" to identify when users need help. The authors identify a set of file-system states as undesirable, such as empty directories or multiple copies of the same file. If the file-system state contains any of these preset undesirable conditions, then the system contains instabilities. Each user action either adds an instability or removes an existing one from the system's state, and the system tracks the progress of the user's plan(s) by monitoring how instabilities are added and removed. IFM categorizes the user's observed actions into four categories, "expected," "neutral," "suspect," and "erroneous," depending on how compatible the observed actions are with the user's hypothesized intentions (a sketch of this instability-tracking idea is given below). Intervention in IFM takes place when the user executes "suspect" or "erroneous" actions, because they signal that there are still unfinished plans.
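The following is a hypothetical Python sketch of such instability tracking, reconstructed from the description above rather than from IFM itself; the instability checks, the categorisation rules, and the example file-system states are our own illustrative assumptions.

```python
# Hypothetical sketch of IFM-style "instability" tracking, reconstructed from
# the description above (not the original system): the instability checks,
# categorisation rules, and example file-system states are assumptions.

def detect_instabilities(fs_state):
    """fs_state maps directory name -> list of file names; return active instabilities."""
    instabilities = set()
    for directory, files in fs_state.items():
        if not files:
            instabilities.add(("empty_dir", directory))
        for f in files:
            if files.count(f) > 1:
                instabilities.add(("duplicate_file", directory, f))
    return instabilities

def categorise(before, after, intended=frozenset()):
    """Label an observed action by how it changes the instability set."""
    added, removed = after - before, before - after
    if removed and not added:
        return "expected"      # the action resolves an instability
    if not added and not removed:
        return "neutral"       # no effect on instabilities
    if added <= intended:
        return "suspect"       # new instability, but plausibly part of an unfinished intended plan
    return "erroneous"         # new instability incompatible with the hypothesised intention

fs_before = {"docs": ["a.txt"], "tmp": []}
fs_after = {"docs": ["a.txt", "a.txt"], "tmp": []}   # the user copied a.txt into docs
label = categorise(detect_instabilities(fs_before), detect_instabilities(fs_after))
print(label)  # -> "erroneous": the duplicate file is not part of any hypothesised intention
```

In this toy run, copying a file creates a duplicate-file instability that does not belong to the hypothesised intention, so the action is labelled "erroneous" and would trigger an intervention.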
To help the user recover from intervention, IFM flags "suspect" or "erroneous" actions and suggests alternative actions that are compatible with the user's intentions. Alternative actions similar to those the user has already executed are found using the user models derived from Human Plausible Reasoning theory. We hope to address the issue of intervention recovery for our proposed Human-aware Intervention Problem in future developments of our application.
Yadav et al. (2016) present HEALER, a software agent that sequentially selects persons for intervention camps from a dynamic, uncertain network of participants such that the spread of HIV/AIDS awareness is maximized. Real-life information about the nodes of the network (human users) is captured and modeled as a POMDP. The Intervention problem discussed in their work is slightly different from our model: solving the POMDP determines how to select the most influential individuals from the network to maximize awareness among the population, whereas our intervention model is defined for a discrete and sequential environment. A similarity between the models is that both codify properties of actual human users into the model so that it can be adopted in real-life applications. Our Human-aware Intervention model is likewise designed from actual human user data.
A body of literature on managing task interruption focuses on using cognitive modeling to predict human behavior, which can be used to identify intervention points. Hiatt et al. (2011) apply theory of mind to accommodate variability in human behavior during task completion. They show that a theory of mind approach can help explain possible reasons behind a human's unexpected action, which then allows the robot to respond appropriately. Ratwani et al. (2008) demonstrate that a cognitive model can accurately predict situations where a human missed a step in a sequence of tasks. More recently, Altmann and Trafton (2020) show how to extend a cognitive model to provide a cognitively plausible mechanism for tracking multiple, interacting goals.
8.7. Intervention in Cyber-Security for Home Users
The cyber-security domain offers considerable promise for studying behavior, both of normal users and of adversaries, with automated planning. The Behavioral Adversary Modeling System (BAMS) (Boddy et al., 2005) uses automated planning to help computer network administrators analyze vulnerabilities in their systems against various kinds of attacks. BAMS takes into account the properties of an adversary and produces plans that lead to system exploits and that also coincide with the adversary model. While this work does not directly address plan recognition at its core, it illustrates a use case where classical planning can be used to design assistive systems targeted toward human end users.
In this work, we take a step toward designing assistive systems to help human end users who are non-experts (e.g., home users). Home users are especially vulnerable to undesirable consequences because they lack the know-how to recognize risky situations in advance. A previous study (Byrne et al., 2016) showed that home users pay more attention to the benefits of their activities than to the risks; they have goals that they want or need to achieve and are willing to take risks to achieve them. Many triggering actions may be normal activities (e.g., reading email, clicking on links), with the user more focused on the goal than on the risk.
Thus, the undesirable consequence recognition problem needs to take into account the user's intention as well as the undesirable consequence.
Howe et al. (2012) observed that most studies of users' computer security practices that rely on self-reported surveys suffer from issues such as respondent bias, socially desirable responding, and peer perception. The authors posited that simulation-based experiments, which place the participant in the actual situation being monitored, can help reduce such issues and can also be leveraged to assess users' emotional reactions to interventions and warnings.
The Intervention Problem can be directly applied in the cyber-security domain. An attacker attempting to trick the user into compromising their security or privacy during day-to-day computing tasks (e.g., reading email, installing software) fits the intervention model with the user, competitor, and observer we discussed in this work. Given a cyber-security planning domain model with sufficient complexity (e.g., the BAMS domain model), where the undesirable state (i.e., a security breach) may develop over time, the Unsafe Suffix Analysis Intervention model can be applied to recognize the threat in advance. A key requirement in helping users in the cyber-security domain is to minimize false positives and false negatives during intervention recognition. As the experimental results on the benchmark domains and the Rush Hour domain confirm, our proposed learning-based algorithm addresses this requirement well. While our approach uses Automated Planning, a complementary approach proposed by Roschke et al. (2011) uses Attack Graphs to model vulnerabilities in an intrusion detection system and detect attack scenarios while decreasing false positives. However, intervention recognition must also be paired with intervention recovery in cyber-security domains to ensure the safety of the agent or the human user, particularly when the user has partial visibility or limited capability for understanding the severity of threats. Intervention recovery is also important in helping the agent or the human user safely complete the task. | [] | []
International Journal of Social Robotics | 35194482 | PMC8853423 | 10.1007/s12369-021-00843-0 | Integrating Social Assistive Robots, IoT, Virtual Communities and Smart Objects to Assist at-Home Independently Living Elders: the MoveCare Project | The integration of Ambient Assisted Living (AAL) frameworks with Socially Assistive Robots (SARs) has proven useful for monitoring and assisting older adults in their own home. However, the difficulties associated with long-term deployments in real-world complex environments are still highly under-explored. In this work, we first present the MoveCare system, an unobtrusive platform that, through the integration of a SAR into an AAL framework, aimed to monitor, assist and provide social, cognitive, and physical stimulation in the own houses of elders living alone and at risk of falling into frailty. We then focus on the evaluation and analysis of a long-term pilot campaign of more than 300 weeks of usage. We evaluated the system's acceptability and feasibility through various questionnaires and empirically assessed the impact of the presence of an assistive robot by deploying the system with and without it. Our results provide strong empirical evidence that Socially Assistive Robots integrated with monitoring and stimulation platforms can be successfully used for long-term support to older adults. We describe how the robot's presence significantly incentivised the use of the system, but slightly lowered the system's overall acceptability. Finally, we emphasise that real-world long-term deployment of SARs introduces a significant technical, organisational, and logistical overhead that should not be neglected nor underestimated in the pursuit of long-term robust systems. We hope that the findings and lessons learned from our work can bring value towards future long-term real-world and widespread use of SARs. | Related Work
When surveying the contributions among SARs, the authors of [1] distinguished between service robots, aimed at helping users in daily activities, and companion robots, such as [63], targeting the psychological well-being of their owners. Our work focuses on the first category, service robots, which currently present two significant drawbacks: (1) they often offer simple functionalities, mainly associated with monitoring, and the gap between these functionalities and the ones required by end-users is still considerable; (2) deployment of such robots in real-world working conditions is still in its early stages, and there is little evidence obtained from actual long-term deployment in uncontrolled environments [48]. In this section, we review various pieces of work that have addressed either of these two drawbacks and analyse how these studies differ from the approach we take in this work.
To identify which services older adults expect from service robots, previous works used structured interviews, performed by asking a set of questions to older adults, often after a controlled demonstration to show examples of the robot's capabilities [23, 53]. Other pieces of work conducted interviews in focus groups by directly asking the potential users to describe the functionalities they would expect from such platforms [12, 51, 65]. This second approach exploited demonstrations too, typically with limited autonomy, such as teleoperation or a semi-autonomous Wizard-of-Oz (WOZ) design [18, 31].
Only a limited set of such identified functionalities have been implemented in SARs, such as fall detection [3], meal assistance [29], or information and stimulation through messages [17].
Integration of SARs with AAL environments has been proposed to improve the robots' capabilities and allow for more general service robots [7, 54]. For instance, the GiraffPlus project [11] deployed a teleoperated mobile robot in the elder's home, together with a network of sensors, to monitor daily activities. In this project, however, the robot is semi-autonomous, meaning that an external user controls it to navigate the elder's house when needed, and the system eases navigation. Integration of autonomous robots with AAL platforms is studied in [4, 18, 24] with robots whose primary goal is to identify possible falls. Their integration with a broader AAL architecture offers additional services such as reminders, pick-and-place of objects, and suggestions to perform entertainment activities.
One recent example is the Robot-Era project [8, 15], which investigated the technical feasibility, acceptance, and satisfaction of older adults when using several functionalities provided by three different robots dedicated, respectively, to domestic, condominium, and outdoor environments. Elders were allowed to test the functionalities of autonomous SARs by performing with them a set of scenarios selected from those offered by the robot. Interestingly, older adults were involved during the entire development phase of the robots and the scenarios in a continuous integration framework. The evaluation was performed in a challenging but controlled environment, a sensorized living-lab apartment [8], and for a limited amount of time. This project focused on showing the feasibility of the integration of AAL with SARs and its potential applications (e.g., with live controlled demonstrations), while leaving open the challenge of studying a use case for actual real-world implementation. However, the evaluation of robotic systems in real settings is fundamental to discovering the challenges posed by such environments [27, 33] and to filling the gap towards widespread adoption. The urgency of tackling this task is widely recognised and is critical to enabling effective long-term deployments [48].
Nonetheless, only a few studies have deployed SARs for real-world evaluation in conditions similar to widespread deployment, as we do in this work. A relevant example is Strands [30], where an autonomous social robot was deployed for several weeks in the common areas of assisted living facilities. Unlike the work presented in this paper, the robot from Strands was deployed in large-scale environments to assist multiple users simultaneously (e.g., by giving directions). Such a context poses different challenges than those assessed in our work, where the goal is to provide fine-grained assistance and monitoring of a single user in their own house.
A SAR similar to ours can be found in the projects CompanionAble and SERROGA [26, 28], which presented performance results of long-term tests in private apartments, similar to those obtained in our pilot study. More recently, the project SYMPARTNER [27] showed the results obtained in a 20-week field study with 20 elders (1 week for each participant, during which the system was available to the user for 4 days).
Compared to this series of studies, our work investigated a much longer interaction between elders and robots, where each robot is, moreover, deployed within a broader integrated framework of several components offering multiple functionalities to elders.
Fig. 1 High-level overview of the MoveCare system. The system is composed of three components installed in the user's home (the smart objects, the environmental sensors, and the service robot) and two components deployed in the cloud (the Virtual Caregiver and the Community-Based Activity Centre (CBAC)). Most of the communication between the components is performed through an MQTT Gateway. In addition, some communication between components in the cloud is performed through RESTful APIs. The users and human caregivers interact with the system through various interfaces.
Fig. 2 The functionalities implemented in the MoveCare platform with the components involved in their realisation.
Similar to our work, the EnrichMe project [59] assessed the feasibility of long-term deployments inside the houses of 10 elders for 10 weeks. The main objective of this project was to provide everyday-use tools and applications to assist the elderly user at home. These tools focused on health monitoring (body temperature, heart rate, and breathing), complementary care (diet and medicine reminders, physical and cognitive exercises), and everyday support (phone calls, object search, weather and news providers). While both the EnrichMe and the MoveCare projects present similarities in their platform and deployment, they differ in their focus and, therefore, in the type of scenarios they support. The EnrichMe project focused on assisting the elder in their everyday tasks, while our work focused on monitoring early mild cognitive impairment and stimulating the users physically, cognitively, and socially through dedicated applications. Therefore, we included social activities and smart objects to detect frailty indicators in addition to environmental sensors. We also included the elder's human caregiver in the loop.
To the best of our knowledge, the integration of assistive robots with monitoring frameworks to provide effective long-term interventions in the physical, cognitive, and, most importantly, social domains, while also investigating the possibility of early detection of signs of frailty in the long term, has not been explored so far. Previous work considered the use of smart-home monitoring [13, 56] and the detection of signs of frailty [60], but without integration with a robotic platform or the personalised interventions that our work considers. | [
"30249588",
"31389341",
"26292348",
"1202204",
"11253156",
"29128788",
"27037685",
"32955442",
"18204438",
"31906184",
"26257646",
"7431205",
"30678280",
"24652924"
] | [
{
"pmid": "30249588",
"title": "Robotic Services Acceptance in Smart Environments With Older Adults: User Satisfaction and Acceptability Study.",
"abstract": "BACKGROUND\nIn Europe, the population of older people is increasing rapidly. Many older people prefer to remain in their homes but living alone could be a risk for their safety. In this context, robotics and other emerging technologies are increasingly proposed as potential solutions to this societal concern. However, one-third of all assistive technologies are abandoned within one year of use because the end users do not accept them.\n\n\nOBJECTIVE\nThe aim of this study is to investigate the acceptance of the Robot-Era system, which provides robotic services to permit older people to remain in their homes.\n\n\nMETHODS\nSix robotic services were tested by 35 older users. The experiments were conducted in three different environments: private home, condominium, and outdoor sites. The appearance questionnaire was developed to collect the users' first impressions about the Robot-Era system, whereas the acceptance was evaluated through a questionnaire developed ad hoc for Robot-Era.\n\n\nRESULTS\nA total of 45 older users were recruited. The people were grouped in two samples of 35 participants, according to their availability. Participants had a positive impression of Robot-Era robots, as reflected by the mean score of 73.04 (SD 11.80) for DORO's (domestic robot) appearance, 76.85 (SD 12.01) for CORO (condominium robot), and 75.93 (SD 11.67) for ORO (outdoor robot). Men gave ORO's appearance an overall score higher than women (P=.02). Moreover, participants younger than 75 years understood more readily the functionalities of Robot-Era robots compared to older people (P=.007 for DORO, P=.001 for CORO, and P=.046 for ORO). For the ad hoc questionnaire, the mean overall score was higher than 80 out of 100 points for all Robot-Era services. Older persons with a high educational level gave Robot-Era services a higher score than those with a low level of education (shopping: P=.04; garbage: P=.047; reminding: P=.04; indoor walking support: P=.006; outdoor walking support: P=.03). A higher score was given by male older adults for shopping (P=.02), indoor walking support (P=.02), and outdoor walking support (P=.03).\n\n\nCONCLUSIONS\nBased on the feedback given by the end users, the Robot-Era system has the potential to be developed as a socially acceptable and believable provider of robotic services to facilitate older people to live independently in their homes."
},
{
"pmid": "31389341",
"title": "How Prefrail Older People Living Alone Perceive Information and Communications Technology and What They Would Ask a Robot for: Qualitative Study.",
"abstract": "BACKGROUND\nIn the last decade, the family system has changed significantly. Although in the past, older people used to live with their children, nowadays, they cannot always depend on assistance of their relatives. Many older people wish to remain as independent as possible while remaining in their homes, even when living alone. To do so, there are many tasks that they must perform to maintain their independence in everyday life, and above all, their well-being. Information and communications technology (ICT), particularly robotics and domotics, could play a pivotal role in aging, especially in contemporary society, where relatives are not always able to accurately and constantly assist the older person.\n\n\nOBJECTIVE\nThe aim of this study was to understand the needs, preferences, and views on ICT of some prefrail older people who live alone. In particular, we wanted to explore their attitude toward a hypothetical caregiver robot and the functions they would ask for.\n\n\nMETHODS\nWe designed a qualitative study based on an interpretative phenomenological approach. A total of 50 potential participants were purposively recruited in a big town in Northern Italy and were administered the Fried scale (to assess the participants' frailty) and the Mini-Mental State Examination (to evaluate the older person's capacity to comprehend the interview questions). In total, 25 prefrail older people who lived alone participated in an individual semistructured interview, lasting approximately 45 min each. Overall, 3 researchers independently analyzed the interviews transcripts, identifying meaning units, which were later grouped in clustering of themes, and finally in emergent themes. Constant triangulation among researchers and their reflective attitude assured trustiness.\n\n\nRESULTS\nFrom this study, it emerged that a number of interviewees who were currently using ICT (ie, smartphones) did not own a computer in the past, or did not receive higher education, or were not all young older people (aged 65-74 years). Furthermore, we found that among the older people who described their relationship with ICT as negative, many used it in everyday life. Referring to robotics, the interviewees appeared quite open-minded. In particular, robots were considered suitable for housekeeping, for monitoring older people's health and accidental falls, and for entertainment.\n\n\nCONCLUSIONS\nOlder people's use and attitudes toward ICT does not always seem to be related to previous experiences with technological devices, higher education, or lower age. Furthermore, many participants in this study were able to use ICT, even if they did not always acknowledge it. Moreover, many interviewees appeared to be open-minded toward technological devices, even toward robots. Therefore, proposing new advanced technology to a group of prefrail people, who are self-sufficient and can live alone at home, seems to be feasible."
},
{
"pmid": "26292348",
"title": "Automated Cognitive Health Assessment From Smart Home-Based Behavior Data.",
"abstract": "Smart home technologies offer potential benefits for assisting clinicians by automating health monitoring and well-being assessment. In this paper, we examine the actual benefits of smart home-based analysis by monitoring daily behavior in the home and predicting clinical scores of the residents. To accomplish this goal, we propose a clinical assessment using activity behavior (CAAB) approach to model a smart home resident's daily behavior and predict the corresponding clinical scores. CAAB uses statistical features that describe characteristics of a resident's daily activity performance to train machine learning algorithms that predict the clinical scores. We evaluate the performance of CAAB utilizing smart home sensor data collected from 18 smart homes over two years. We obtain a statistically significant correlation ( r=0.72) between CAAB-predicted and clinician-provided cognitive scores and a statistically significant correlation ( r=0.45) between CAAB-predicted and clinician-provided mobility scores. These prediction results suggest that it is feasible to predict clinical scores using smart home sensor data and learning-based data analysis."
},
{
"pmid": "11253156",
"title": "Frailty in older adults: evidence for a phenotype.",
"abstract": "BACKGROUND\nFrailty is considered highly prevalent in old age and to confer high risk for falls, disability, hospitalization, and mortality. Frailty has been considered synonymous with disability, comorbidity, and other characteristics, but it is recognized that it may have a biologic basis and be a distinct clinical syndrome. A standardized definition has not yet been established.\n\n\nMETHODS\nTo develop and operationalize a phenotype of frailty in older adults and assess concurrent and predictive validity, the study used data from the Cardiovascular Health Study. Participants were 5,317 men and women 65 years and older (4,735 from an original cohort recruited in 1989-90 and 582 from an African American cohort recruited in 1992-93). Both cohorts received almost identical baseline evaluations and 7 and 4 years of follow-up, respectively, with annual examinations and surveillance for outcomes including incident disease, hospitalization, falls, disability, and mortality.\n\n\nRESULTS\nFrailty was defined as a clinical syndrome in which three or more of the following criteria were present: unintentional weight loss (10 lbs in past year), self-reported exhaustion, weakness (grip strength), slow walking speed, and low physical activity. The overall prevalence of frailty in this community-dwelling population was 6.9%; it increased with age and was greater in women than men. Four-year incidence was 7.2%. Frailty was associated with being African American, having lower education and income, poorer health, and having higher rates of comorbid chronic diseases and disability. There was overlap, but not concordance, in the cooccurrence of frailty, comorbidity, and disability. This frailty phenotype was independently predictive (over 3 years) of incident falls, worsening mobility or ADL disability, hospitalization, and death, with hazard ratios ranging from 1.82 to 4.46, unadjusted, and 1.29-2.24, adjusted for a number of health, disease, and social characteristics predictive of 5-year mortality. Intermediate frailty status, as indicated by the presence of one or two criteria, showed intermediate risk of these outcomes as well as increased risk of becoming frail over 3-4 years of follow-up (odds ratios for incident frailty = 4.51 unadjusted and 2.63 adjusted for covariates, compared to those with no frailty criteria at baseline).\n\n\nCONCLUSIONS\nThis study provides a potential standardized definition for frailty in community-dwelling older adults and offers concurrent and predictive validity for the definition. It also finds that there is an intermediate stage identifying those at high risk of frailty. Finally, it provides evidence that frailty is not synonymous with either comorbidity or disability, but comorbidity is an etiologic risk factor for, and disability is an outcome of, frailty. This provides a potential basis for clinical assessment for those who are frail or at risk, and for future research to develop interventions for frailty based on a standardized ascertainment of frailty."
},
{
"pmid": "29128788",
"title": "Inclusion of service robots in the daily lives of frail older users: A step-by-step definition procedure on users' requirements.",
"abstract": "The implications for the inclusion of robots in the daily lives of frail older adults, especially in relation to these population needs, have not been extensively studied. The \"Multi-Role Shadow Robotic System for Independent Living\" (SRS) project has developed a remotely-controlled, semi-autonomous robotic system to be used in domestic environments. The objective of this paper is to document the iterative procedure used to identify, select and prioritize user requirements. Seventy-four requirements were identified by means of focus groups, individual interviews and scenario-based interviews. The list of user requirements, ordered according to impact, number and transnational criteria, revealed a high number of requirements related to basic and instrumental activities of daily living, cognitive and social support and monitorization, and also involving privacy, safety and adaptation issues. Analysing and understanding older users' perceptions and needs when interacting with technological devices adds value to assistive technology and ensures that the systems address currently unmet needs."
},
{
"pmid": "27037685",
"title": "Evaluation of an Assistive Telepresence Robot for Elderly Healthcare.",
"abstract": "In this paper we described the telepresence robot system designed to improve the well-being of elderly by supporting them to do daily activities independently, to facilitate social interaction in order to overcome a sense of social isolation and loneliness as well as to support the professional caregivers in everyday care. In order to investigate the acceptance of the developed robot system, evaluation study involved elderly people and professional caregivers, as two potential user groups was conducted. The results of this study are also presented and discussed."
},
{
"pmid": "32955442",
"title": "Supervised Digital Neuropsychological Tests for Cognitive Decline in Older Adults: Usability and Clinical Validity Study.",
"abstract": "BACKGROUND\nDementia is a major and growing health problem, and early diagnosis is key to its management.\n\n\nOBJECTIVE\nWith the ultimate goal of providing a monitoring tool that could be used to support the screening for cognitive decline, this study aims to develop a supervised, digitized version of 2 neuropsychological tests: Trail Making Test and Bells Test. The system consists of a web app that implements a tablet-based version of the tests and consists of an innovative vocal assistant that acts as the virtual supervisor for the execution of the test. A replay functionality is added to allow inspection of the user's performance after test completion.\n\n\nMETHODS\nTo deploy the system in a nonsupervised environment, extensive functional testing of the platform was conducted, together with a validation of the tablet-based tests. Such validation had the two-fold aim of evaluating system usability and acceptance and investigating the concurrent validity of computerized assessment compared with the corresponding paper-and-pencil counterparts.\n\n\nRESULTS\nThe results obtained from 83 older adults showed high system acceptance, despite the patients' low familiarity with technology. The system software was successfully validated. A concurrent validation of the system reported good ability of the digitized tests to retain the same predictive power of the corresponding paper-based tests.\n\n\nCONCLUSIONS\nAltogether, the positive results pave the way for the deployment of the system to a nonsupervised environment, thus representing a potential efficacious and ecological solution to support clinicians in the identification of early signs of cognitive decline."
},
{
"pmid": "18204438",
"title": "The coming acceleration of global population ageing.",
"abstract": "The future paths of population ageing result from specific combinations of declining fertility and increasing life expectancies in different parts of the world. Here we measure the speed of population ageing by using conventional measures and new ones that take changes in longevity into account for the world as a whole and for 13 major regions. We report on future levels of indicators of ageing and the speed at which they change. We show how these depend on whether changes in life expectancy are taken into account. We also show that the speed of ageing is likely to increase over the coming decades and to decelerate in most regions by mid-century. All our measures indicate a continuous ageing of the world's population throughout the century. The median age of the world's population increases from 26.6 years in 2000 to 37.3 years in 2050 and then to 45.6 years in 2100, when it is not adjusted for longevity increase. When increases in life expectancy are taken into account, the adjusted median age rises from 26.6 in 2000 to 31.1 in 2050 and only to 32.9 in 2100, slightly less than what it was in the China region in 2005. There are large differences in the regional patterns of ageing. In North America, the median age adjusted for life expectancy change falls throughout almost the entire century, whereas the conventional median age increases significantly. Our assessment of trends in ageing is based on new probabilistic population forecasts. The probability that growth in the world's population will end during this century is 88%, somewhat higher than previously assessed. After mid-century, lower rates of population growth are likely to coincide with slower rates of ageing."
},
{
"pmid": "31906184",
"title": "Automatic Waypoint Generation to Improve Robot Navigation Through Narrow Spaces.",
"abstract": "In domestic robotics, passing through narrow areas becomes critical for safe and effective robot navigation. Due to factors like sensor noise or miscalibration, even if the free space is sufficient for the robot to pass through, it may not see enough clearance to navigate, hence limiting its operational space. An approach to facing this is to insert waypoints strategically placed within the problematic areas in the map, which are considered by the robot planner when generating a trajectory and help to successfully traverse them. This is typically carried out by a human operator either by relying on their experience or by trial-and-error. In this paper, we present an automatic procedure to perform this task that: (i) detects problematic areas in the map and (ii) generates a set of auxiliary navigation waypoints from which more suitable trajectories can be generated by the robot planner. Our proposal, fully compatible with the robotic operating system (ROS), has been successfully applied to robots deployed in different houses within the H2020 MoveCare project. Moreover, we have performed extensive simulations with four state-of-the-art robots operating within real maps. The results reveal significant improvements in the number of successful navigations for the evaluated scenarios, demonstrating its efficacy in realistic situations."
},
{
"pmid": "26257646",
"title": "\"Are we ready for robots that care for us?\" Attitudes and opinions of older adults toward socially assistive robots.",
"abstract": "Socially Assistive Robots (SAR) may help improve care delivery at home for older adults with cognitive impairment and reduce the burden of informal caregivers. Examining the views of these stakeholders on SAR is fundamental in order to conceive acceptable and useful SAR for dementia care. This study investigated SAR acceptance among three groups of older adults living in the community: persons with Mild Cognitive Impairment, informal caregivers of persons with dementia, and healthy older adults. Different technology acceptance questions related to the robot and user characteristics, potential applications, feelings about technology, ethical issues, and barriers and facilitators for SAR adoption, were addressed in a mixed-method study. Participants (n = 25) completed a survey and took part in a focus group (n = 7). A functional robot prototype, a multimedia presentation, and some use-case scenarios provided a base for the discussion. Content analysis was carried out based on recorded material from focus groups. Results indicated that an accurate insight of influential factors for SAR acceptance could be gained by combining quantitative and qualitative methods. Participants acknowledged the potential benefits of SAR for supporting care at home for individuals with cognitive impairment. In all the three groups, intention to use SAR was found to be lower for the present time than that anticipated for the future. However, caregivers and persons with MCI had a higher perceived usefulness and intention to use SAR, at the present time, than healthy older adults, confirming that current needs are strongly related to technology acceptance and should influence SAR design. A key theme that emerged in this study was the importance of customizing SAR appearance, services, and social capabilities. Mismatch between needs and solutions offered by the robot, usability factors, and lack of experience with technology, were seen as the most important barriers for SAR adoption."
},
{
"pmid": "7431205",
"title": "The revised UCLA Loneliness Scale: concurrent and discriminant validity evidence.",
"abstract": "The development of an adequate assessment instrument is a necessary prerequisite for social psychological research on loneliness. Two studies provide methodological refinement in the measurement of loneliness. Study 1 presents a revised version of the self-report UCLA (University of California, Los Angeles) Loneliness Scale, designed to counter the possible effects of response bias in the original scale, and reports concurrent validity evidence for the revised measure. Study 2 demonstrates that although loneliness is correlated with measures of negative affect, social risk taking, and affiliative tendencies, it is nonetheless a distinct psychological experience."
},
{
"pmid": "30678280",
"title": "A Low-Cost Indoor Activity Monitoring System for Detecting Frailty in Older Adults.",
"abstract": "Indoor localization systems have already wide applications mainly for providing localized information and directions. The majority of them focus on commercial applications providing information such us advertisements, guidance and asset tracking. Medical oriented localization systems are uncommon. Given the fact that an individual's indoor movements can be indicative of his/her clinical status, in this paper we present a low-cost indoor localization system with room-level accuracy used to assess the frailty of older people. We focused on designing a system with easy installation and low cost to be used by non technical staff. The system was installed in older people houses in order to collect data about their indoor localization habits. The collected data were examined in combination with their frailty status, showing a correlation between them. The indoor localization system is based on the processing of Received Signal Strength Indicator (RSSI) measurements by a tracking device, from Bluetooth Beacons, using a fingerprint-based procedure. The system has been tested in realistic settings achieving accuracy above 93% in room estimation. The proposed system was used in 271 houses collecting data for 1⁻7-day sessions. The evaluation of the collected data using ten-fold cross-validation showed an accuracy of 83% in the classification of a monitored person regarding his/her frailty status (Frail, Pre-frail, Non-frail)."
},
{
"pmid": "24652924",
"title": "The Attitudes and Perceptions of Older Adults With Mild Cognitive Impairment Toward an Assistive Robot.",
"abstract": "The purpose of this study was to explore perceived difficulties and needs of older adults with mild cognitive impairment (MCI) and their attitudes toward an assistive robot to develop appropriate robot functionalities. Twenty subjects were recruited to participate in either a focus group or an interview. Findings revealed that although participants reported difficulties in managing some of their daily activities, they did not see themselves as needing assistance. Indeed, they considered that they were capable of coping with difficulties with some compensatory strategies. They therefore declared that they did not need or want a robot for the moment but that they considered it potentially useful either for themselves in the future or for other older adults suffering from frailty, loneliness, and disability. Factors underlying unwillingness to adopt an assistive robot were discussed. These issues should be carefully addressed in the design and diffusion processes of an assistive robot."
}
] |
Scientific Reports | null | PMC8854424 | 10.1038/s41598-022-06559-z | Distance-based clustering using QUBO formulations | In computer science, clustering is a technique for grouping data. Ising machines can solve distance-based clustering problems described by quadratic unconstrained binary optimization (QUBO) formulations. A typical simple method using an Ising machine makes each cluster size equal and is not suitable for clustering unevenly distributed data. We propose a new clustering method that provides better performance than the simple method, especially for unevenly distributed data. The proposed method is a hybrid algorithm including an iterative process that comprises solving a discrete optimization problem with an Ising machine and calculating parameters with a general-purpose computer. To minimize the communication overhead between the Ising machine and the general-purpose computer, we employed a low-latency Ising machine implementing the simulated bifurcation algorithm with a field-programmable gate array attached to a local server. The proposed method results in clustering 200 unevenly distributed data points with a clustering score 18% higher than that of the simple method. The discrete optimization with 2000 variables is performed 100 times per iteration, and the overhead time is reduced to approximately 20% of the total execution time. These results suggest that hybrid algorithms using Ising machines can efficiently solve practical optimization problems. | Related workWe briefly review some recent works on the clustering method using Ising machines and quantum computers. Simple distance-based clustering is a typical example using an Ising machine or a quantum computer. Simple clustering methods based on the distance between data points34,35,37 and graph partitioning33 using a quantum annealer still attract some interest. Several methods inspired by classical clustering have been proposed and developed recently. For example, quantum-assisted clustering based on similarities28, K-means29–31 and K-Medoids32 clustering on a quantum annealer, as well as K-means clustering on a gate-model quantum computer38, have been proposed. K-means-like clustering on a digital Ising machine has also been investigated36. | [
"21562559",
"31499928",
"31016238",
"33536219",
"27811271",
"27811274",
"31139743",
"31283311",
"33406162",
"33875711",
"33976283"
] | [
{
"pmid": "21562559",
"title": "Quantum annealing with manufactured spins.",
"abstract": "Many interesting but practically intractable problems can be reduced to that of finding the ground state of a system of interacting spins; however, finding such a ground state remains computationally difficult. It is believed that the ground state of some naturally occurring spin systems can be effectively attained through a process called quantum annealing. If it could be harnessed, quantum annealing might improve on known methods for solving certain types of problem. However, physical investigation of quantum annealing has been largely confined to microscopic spins in condensed-matter systems. Here we use quantum annealing to find the ground state of an artificial Ising spin system comprising an array of eight superconducting flux quantum bits with programmable spin-spin couplings. We observe a clear signature of quantum annealing, distinguishable from classical thermal annealing through the temperature dependence of the time at which the system dynamics freezes. Our implementation can be configured in situ to realize a wide variety of different spin networks, each of which can be monitored as it moves towards a low-energy configuration. This programmable artificial spin network bridges the gap between the theoretical study of ideal isolated spin networks and the experimental investigation of bulk magnetic samples. Moreover, with an increased number of spins, such a system may provide a practical physical means to implement a quantum algorithm, possibly allowing more-effective approaches to solving certain classes of hard combinatorial optimization problems."
},
{
"pmid": "31499928",
"title": "Binary optimization by momentum annealing.",
"abstract": "One of the vital roles of computing is to solve large-scale combinatorial optimization problems in a short time. In recent years, methods have been proposed that map optimization problems to ones of searching for the ground state of an Ising model by using a stochastic process. Simulated annealing (SA) is a representative algorithm. However, it is inherently difficult to perform a parallel search. Here we propose an algorithm called momentum annealing (MA), which, unlike SA, updates all spins of fully connected Ising models simultaneously and can be implemented on GPUs that are widely used for scientific computing. MA running in parallel on GPUs is 250 times faster than SA running on a modern CPU at solving problems involving 100 000 spin Ising models."
},
{
"pmid": "31016238",
"title": "Combinatorial optimization by simulating adiabatic bifurcations in nonlinear Hamiltonian systems.",
"abstract": "Combinatorial optimization problems are ubiquitous but difficult to solve. Hardware devices for these problems have recently been developed by various approaches, including quantum computers. Inspired by recently proposed quantum adiabatic optimization using a nonlinear oscillator network, we propose a new optimization algorithm simulating adiabatic evolutions of classical nonlinear Hamiltonian systems exhibiting bifurcation phenomena, which we call simulated bifurcation (SB). SB is based on adiabatic and chaotic (ergodic) evolutions of nonlinear Hamiltonian systems. SB is also suitable for parallel computing because of its simultaneous updating. Implementing SB with a field-programmable gate array, we demonstrate that the SB machine can obtain good approximate solutions of an all-to-all connected 2000-node MAX-CUT problem in 0.5 ms, which is about 10 times faster than a state-of-the-art laser-based machine called a coherent Ising machine. SB will accelerate large-scale combinatorial optimization harnessing digital computer technologies and also offer a new application of computational and mathematical physics."
},
{
"pmid": "33536219",
"title": "High-performance combinatorial optimization based on classical mechanics.",
"abstract": "Quickly obtaining optimal solutions of combinatorial optimization problems has tremendous value but is extremely difficult. Thus, various kinds of machines specially designed for combinatorial optimization have recently been proposed and developed. Toward the realization of higher-performance machines, here, we propose an algorithm based on classical mechanics, which is obtained by modifying a previously proposed algorithm called simulated bifurcation. Our proposed algorithm allows us to achieve not only high speed by parallel computing but also high solution accuracy for problems with up to one million binary variables. Benchmarking shows that our machine based on the algorithm achieves high performance compared to recently developed machines, including a quantum annealer using a superconducting circuit, a coherent Ising machine using a laser, and digital processors based on various algorithms. Thus, high-performance combinatorial optimization is realized by massively parallel implementations of the proposed algorithm based on classical mechanics."
},
{
"pmid": "27811271",
"title": "A coherent Ising machine for 2000-node optimization problems.",
"abstract": "The analysis and optimization of complex systems can be reduced to mathematical problems collectively known as combinatorial optimization. Many such problems can be mapped onto ground-state search problems of the Ising model, and various artificial spin systems are now emerging as promising approaches. However, physical Ising machines have suffered from limited numbers of spin-spin couplings because of implementations based on localized spins, resulting in severe scalability problems. We report a 2000-spin network with all-to-all spin-spin couplings. Using a measurement and feedback scheme, we coupled time-multiplexed degenerate optical parametric oscillators to implement maximum cut problems on arbitrary graph topologies with up to 2000 nodes. Our coherent Ising machine outperformed simulated annealing in terms of accuracy and computation time for a 2000-node complete graph."
},
{
"pmid": "27811274",
"title": "A fully programmable 100-spin coherent Ising machine with all-to-all connections.",
"abstract": "Unconventional, special-purpose machines may aid in accelerating the solution of some of the hardest problems in computing, such as large-scale combinatorial optimizations, by exploiting different operating mechanisms than those of standard digital computers. We present a scalable optical processor with electronic feedback that can be realized at large scale with room-temperature technology. Our prototype machine is able to find exact solutions of, or sample good approximate solutions to, a variety of hard instances of Ising problems with up to 100 spins and 10,000 spin-spin connections."
},
{
"pmid": "31139743",
"title": "Experimental investigation of performance differences between coherent Ising machines and a quantum annealer.",
"abstract": "Physical annealing systems provide heuristic approaches to solving combinatorial optimization problems. Here, we benchmark two types of annealing machines-a quantum annealer built by D-Wave Systems and measurement-feedback coherent Ising machines (CIMs) based on optical parametric oscillators-on two problem classes, the Sherrington-Kirkpatrick (SK) model and MAX-CUT. The D-Wave quantum annealer outperforms the CIMs on MAX-CUT on cubic graphs. On denser problems, however, we observe an exponential penalty for the quantum annealer [exp(-αDW N 2)] relative to CIMs [exp(-αCIM N)] for fixed anneal times, both on the SK model and on 50% edge density MAX-CUT. This leads to a several orders of magnitude time-to-solution difference for instances with over 50 vertices. An optimal-annealing time analysis is also consistent with a substantial projected performance difference. The difference in performance between the sparsely connected D-Wave machine and the fully-connected CIMs provides strong experimental support for efforts to increase the connectivity of quantum annealers."
},
{
"pmid": "31283311",
"title": "Large-Scale Photonic Ising Machine by Spatial Light Modulation.",
"abstract": "Quantum and classical physics can be used for mathematical computations that are hard to tackle by conventional electronics. Very recently, optical Ising machines have been demonstrated for computing the minima of spin Hamiltonians, paving the way to new ultrafast hardware for machine learning. However, the proposed systems are either tricky to scale or involve a limited number of spins. We design and experimentally demonstrate a large-scale optical Ising machine based on a simple setup with a spatial light modulator. By encoding the spin variables in a binary phase modulation of the field, we show that light propagation can be tailored to minimize an Ising Hamiltonian with spin couplings set by input amplitude modulation and a feedback scheme. We realize configurations with thousands of spins that settle in the ground state in a low-temperature ferromagneticlike phase with all-to-all and tunable pairwise interactions. Our results open the route to classical and quantum photonic Ising machines that exploit light spatial degrees of freedom for parallel processing of a vast number of spins with programmable couplings."
},
{
"pmid": "33406162",
"title": "Reverse annealing for nonnegative/binary matrix factorization.",
"abstract": "It was recently shown that quantum annealing can be used as an effective, fast subroutine in certain types of matrix factorization algorithms. The quantum annealing algorithm performed best for quick, approximate answers, but performance rapidly plateaued. In this paper, we utilize reverse annealing instead of forward annealing in the quantum annealing subroutine for nonnegative/binary matrix factorization problems. After an initial global search with forward annealing, reverse annealing performs a series of local searches that refine existing solutions. The combination of forward and reverse annealing significantly improves performance compared to forward annealing alone for all but the shortest run times."
},
{
"pmid": "33875711",
"title": "Hybrid quantum annealing via molecular dynamics.",
"abstract": "A novel quantum-classical hybrid scheme is proposed to efficiently solve large-scale combinatorial optimization problems. The key concept is to introduce a Hamiltonian dynamics of the classical flux variables associated with the quantum spins of the transverse-field Ising model. Molecular dynamics of the classical fluxes can be used as a powerful preconditioner to sort out the frozen and ambivalent spins for quantum annealers. The performance and accuracy of our smooth hybridization in comparison to the standard classical algorithms (the tabu search and the simulated annealing) are demonstrated by employing the MAX-CUT and Ising spin-glass problems."
},
{
"pmid": "33976283",
"title": "QUBO formulations for training machine learning models.",
"abstract": "Training machine learning models on classical computers is usually a time and compute intensive process. With Moore's law nearing its inevitable end and an ever-increasing demand for large-scale data analysis using machine learning, we must leverage non-conventional computing paradigms like quantum computing to train machine learning models efficiently. Adiabatic quantum computers can approximately solve NP-hard problems, such as the quadratic unconstrained binary optimization (QUBO), faster than classical computers. Since many machine learning problems are also NP-hard, we believe adiabatic quantum computers might be instrumental in training machine learning models efficiently in the post Moore's law era. In order to solve problems on adiabatic quantum computers, they must be formulated as QUBO problems, which is very challenging. In this paper, we formulate the training problems of three machine learning models-linear regression, support vector machine (SVM) and balanced k-means clustering-as QUBO problems, making them conducive to be trained on adiabatic quantum computers. We also analyze the computational complexities of our formulations and compare them to corresponding state-of-the-art classical approaches. We show that the time and space complexities of our formulations are better (in case of SVM and balanced k-means clustering) or equivalent (in case of linear regression) to their classical counterparts."
}
] |
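To make the QUBO training idea in the last abstract above more concrete, the following is a minimal Python/NumPy sketch of casting least-squares linear regression as a QUBO. The fixed binary expansion of the weights, the precision vector, and the brute-force solver are illustrative assumptions, not the cited paper's exact construction; on real hardware the enumeration loop would be replaced by a quantum or classical annealer.

```python
# Illustrative sketch (not the cited paper's exact formulation): casting least-squares
# linear regression as a QUBO.  Each real weight w_j is approximated by a fixed binary
# expansion w_j = sum_k c_k * b_{jk}, so minimising ||Xw - y||^2 over the binary
# variables b becomes minimising b^T Q b + const.
import numpy as np

def linear_regression_qubo(X, y, precision=(2.0, 1.0, 0.5, 0.25)):
    n, d = X.shape
    P = np.array(precision)                        # assumed binary-expansion coefficients
    # Effective design matrix over binary variables: column (j, k) is c_k * X[:, j].
    A = np.kron(X, P[None, :])                     # shape (n, d * len(P))
    Q = A.T @ A                                    # quadratic couplings
    Q[np.diag_indices_from(Q)] -= 2.0 * (A.T @ y)  # linear terms folded onto the diagonal (b_i^2 = b_i)
    return Q                                       # minimise b^T Q b over b in {0, 1}^(d * len(P))

# Brute-force check on a tiny problem (an annealer would replace this loop).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = X @ np.array([1.5, 0.25]) + 0.01 * rng.normal(size=20)
Q = linear_regression_qubo(X, y)
m = Q.shape[0]
best = min(range(2 ** m), key=lambda z: (b := np.array([(z >> i) & 1 for i in range(m)])) @ Q @ b)
b = np.array([(best >> i) & 1 for i in range(m)])
print("recovered weights:", b.reshape(2, -1) @ np.array([2.0, 1.0, 0.5, 0.25]))
```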
Scientific Reports | null | PMC8854707 | 10.1038/s41598-022-06767-7 | Machine learning approach for study on subway passenger flow | We investigate regional features nearby the subway station using the clustering method called the funFEM and propose a two-step procedure to predict a subway passenger transport flow by incorporating the geographical information from the cluster analysis to functional time series prediction. A massive smart card transaction dataset is used to analyze the daily number of passengers for each station in Seoul Metro. First, we cluster the stations into six categories with respect to their patterns of passenger transport. Then, we forecast the daily number of passengers with respect to each cluster. By comparing our predicted results with the actual number of passengers, we demonstrate the predicted number of passengers based on the clustering results is more accurate in contrast to the result without considering the regional properties. The result from our data-driven approach can be applied to improve the subway service plan and relieve infectious diseases as we can reduce the congestion by controlling train intervals based on the passenger flow. Furthermore, the prediction result can be utilized to plan a ‘smart city’ which seeks shorter commuting time, comfortable ridership, and environmental sustainability. | Related workA variety of studies have been conducted to analyze and predict the number of subway passengers. Tang et al. and Wang et al.1,2 introduced a semantic method to identify spatio-temporal latent functions of subway stations in Shanghai, China, based on the mobility patterns. They utilize the smart card transaction data, network data, and point of interest information of each station. They especially cluster the stations in ten functional clusters and present latent functions of them. Ling et al.3 compared the performance of the historical average model, neural network model using multilayer perceptron, support vector regression model, and gradient boosted regression trees model to predict the dynamical passenger flow. Kim et al.4 analyzed the space-time variability of subway passengers data in Seoul using cyclostationary empirical orthogonal function. Similar to this, Yu et al.5 utilized smart card data of Nanjing Metro, China, to find a commuting characteristic of residents. Shin6 applied a fluid dynamic model to Seoul metropolitan’s big data to extract commuting patterns. An additional method was presented to predict subway passenger flow using the particle swarm optimization algorithm7, which incorporates backpropagation in a neural network with empirical mode decomposition. Cluster analysis of the New York city dataset was conducted by8. Based on the server, Alan and Birant9 proposed a personalized fare calculation system to provide benefits to passengers who do not have a specific smart card for regional transportation. For a more comprehensive review of analyzing subway datasets, see10.Furthermore, there has been some research about urban transportation planning. For example, Lim11 presented a comparative analysis between the urban design policy in Seoul and the transit-oriented development policy of Singapore and Tokyo to support the idea of transforming Seoul into a transit-friendly city. Oh et al.12 introduced the model to analyze important latent factors in the supply and demand of the number of subway passengers. 
Because regional attributes can provide useful information for predicting passenger transport, their relationship with ridership has also been studied. For example, Sohn and Kim13 investigated the relationship between the number of passengers and the regional characteristics of urban transit stations in the Seoul metropolitan area. Similarly, Lee et al.14 categorized the regional properties of areas near subway stations and performed a correlation analysis between regional characteristics and passenger flow patterns. Kim et al.15 further performed a cluster analysis using information on the ratio of land usage and the distance between subway stations. Sung and Kim16 applied a multidimensional scaling method to factor analysis to investigate regional characteristics and passenger flow. Choi et al.17 and Lee et al.18 focused on the relationship between station characteristics and passenger patterns. | [
"30148888",
"9950739",
"23379722"
] | [
{
"pmid": "30148888",
"title": "Predicting subway passenger flows under different traffic conditions.",
"abstract": "Passenger flow prediction is important for the operation, management, efficiency, and reliability of urban rail transit (subway) system. Here, we employ the large-scale subway smartcard data of Shenzhen, a major city of China, to predict dynamical passenger flows in the subway network. Four classical predictive models: historical average model, multilayer perceptron neural network model, support vector regression model, and gradient boosted regression trees model, were analyzed. Ordinary and anomalous traffic conditions were identified for each subway station by using the density-based spatial clustering of applications with noise (DBSCAN) algorithm. The prediction accuracy of each predictive model was analyzed under ordinary and anomalous traffic conditions to explore the high-performance condition (ordinary traffic condition or anomalous traffic condition) of different predictive models. In addition, we studied how long in advance that passenger flows can be accurately predicted by each predictive model. Our finding highlights the importance of selecting proper models to improve the accuracy of passenger flow prediction, and that inherent patterns of passenger flows are more prominently influencing the accuracy of prediction."
},
{
"pmid": "9950739",
"title": "Mixtures of probabilistic principal component analyzers.",
"abstract": "Principal component analysis (PCA) is one of the most popular techniques for processing, compressing, and visualizing data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Therefore, previous attempts to formulate mixture models for PCA have been ad hoc to some extent. In this article, PCA is formulated within a maximum likelihood framework, based on a specific form of gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analyzers, whose parameters can be determined using an expectation-maximization algorithm. We discuss the advantages of this model in the context of clustering, density modeling, and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition."
},
{
"pmid": "23379722",
"title": "Wavelet-based clustering for mixed-effects functional models in high dimension.",
"abstract": "We propose a method for high-dimensional curve clustering in the presence of interindividual variability. Curve clustering has longly been studied especially using splines to account for functional random effects. However, splines are not appropriate when dealing with high-dimensional data and can not be used to model irregular curves such as peak-like data. Our method is based on a wavelet decomposition of the signal for both fixed and random effects. We propose an efficient dimension reduction step based on wavelet thresholding adapted to multiple curves and using an appropriate structure for the random effect variance, we ensure that both fixed and random effects lie in the same functional space even when dealing with irregular functions that belong to Besov spaces. In the wavelet domain our model resumes to a linear mixed-effects model that can be used for a model-based clustering algorithm and for which we develop an EM-algorithm for maximum likelihood estimation. The properties of the overall procedure are validated by an extensive simulation study. Then, we illustrate our method on mass spectrometry data and we propose an original application of functional data analysis on microarray comparative genomic hybridization (CGH) data. Our procedure is available through the R package curvclust which is the first publicly available package that performs curve clustering with random effects in the high dimensional framework (available on the CRAN)."
}
] |
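The two-step procedure described in the subway record above (cluster stations by their daily ridership curves, then forecast each cluster separately) can be sketched as below. This is only an illustrative stand-in: KMeans and a same-weekday-average forecast replace the paper's funFEM clustering and functional time-series model, and all station counts are synthetic.

```python
# Minimal cluster-then-forecast sketch.  KMeans and a seasonal-naive forecast stand in
# for the paper's funFEM clustering and functional time-series prediction; the data are
# hypothetical daily passenger counts.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_stations, n_days = 60, 120
base = rng.choice([20_000, 35_000, 50_000], size=n_stations)        # hypothetical daily levels
weekly = 1.0 + 0.3 * np.sin(2 * np.pi * np.arange(n_days) / 7)      # weekly ridership rhythm
counts = base[:, None] * weekly[None, :] * rng.normal(1.0, 0.05, (n_stations, n_days))

# Step 1: cluster stations on their normalised daily passenger profiles.
profiles = counts / counts.mean(axis=1, keepdims=True)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(profiles)

# Step 2: forecast each cluster separately (here: mean of the same weekday in the history).
horizon = 7
for c in range(6):
    cluster_total = counts[labels == c].sum(axis=0)
    history = cluster_total[:-horizon]
    forecast = [history[d % 7::7].mean() for d in range(n_days, n_days + horizon)]
    print(f"cluster {c}: next-week forecast = {np.round(forecast, 0)}")
```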
Scientific Reports | 35177713 | PMC8854737 | 10.1038/s41598-022-06609-6 | MPPT mechanism based on novel hybrid particle swarm optimization and salp swarm optimization algorithm for battery charging through simulink | In this paper, a battery charging model is developed for solar PV system applications. As a means of photovoltaic power controlling system, buck-boost converter with a Maximum Power Point Tracking (MPPT) mechanism is developed in this paper for maximum efficiency. This paper proposed a novel combined technique of hybrid Particle Swarm Optimisation (PSO) and Salp Swarm Optimization (SSO) models to perform Maximum Power Point Tracking mechanisms and obtain a higher efficiency for battery charging. In order to retrieve the maximum power from the PV array, the Maximum Power Point Tracking mechanism is observed which reaches the maximum efficiency and the maximum power is fed through the buck-boost converter into the load. The buck-boost converter steps up the voltage to essential magnitude. The energy drawn from the PV array is used for the battery charging by means of an isolated buck converter since the buck-boost converter is not directly connected to the battery. The Fractional Order Proportional Integral Derivative (FOPID) controller handles the isolated buck converter and battery to enhance the efficiency obtained through the Maximum Power Point Tracking mechanism. The simulation results show higher steady efficiency by using the hybrid PSOSSO algorithm in all stages. The battery is charged without losing the efficiency obtained from the hybrid PSOSSO algorithm-based Maximum Power Point Tracking mechanism. The higher efficiency was obtained as 99.99% at Standard Test Conditions (STC) and 99.52% at PV partial shading conditions (PSCs) by using the new hybrid algorithm. | Related worksThe following section describes the review of literature related to the buck-boost converter, optimization algorithm, MPPT mechanism, isolated buck converter, and FOPID controller. For maximum power searching, the Perturb and observe (PO) MPPT process has been imposed for the battery charging. The system with MPPT shows greater output power compared with the system without MPPT. The efficiency obtained was 90.56 percentage and maximum power has been transferred to the battery10. Furthermore, the MPPT technique used to extract maximum power in a photovoltaic system is also illustrated in this11 study. For the photovoltaic system incorporated through a Z-source inverter, the improvised Particle Swarm Optimization (PSO) depends on the MPPT technique that has been involved in this paper. When the maximum power point has been placed, the steady-state oscillation has diminished. This proposed system has tracked the MPP efficiently even though extreme environmental conditions occur like large fluctuations and partial shading conditions related to the abrupt change in the temperature and the irradiance. Correspondingly, for the unshaded PV systems, the maximum power is extracted and exhibited only one power peak in the P–V curve. Various peaks are generated from the partial shading conditions (PSCs) like multiple local maximum power points (LMPP) and global maximum power points (GMPP). In the searching process, experimental methods like PSO or the Gray Wolf Optimization (GWO) identified the GMPP. The GMPP changes its position and value if the PSC value is changed. However, the GWO or PSO has not caught this GMPP value during the previous GMPP search area and only the searching agents are searched. 
The searching agents should therefore be re-initialized whenever the PSC changes. Accordingly, the study in12 adopted an integrated GWO-fuzzy logic controller technique that shows better performance in tracking the dynamic GMPP; moreover, the output power oscillations have been minimized by this integration13. The study in14 focused on photovoltaic systems in which a three-port switching boost converter, considered a non-isolated converter, is applied and combined with a switching boost network. By controlling three degrees of freedom, these ports exhibit the characteristics of boost, buck, and buck-boost converters. The three-port converter was developed with a simple structure, low cost, and small size, and the system power flow was verified in various working modes15. The feasibility of the photovoltaic system was shown after evaluation. Additionally, a sliding mode MPPT controller was designed in16 for PV systems under rapidly changing atmospheric conditions. The gains of the optimal sliding mode controller (SMC) were identified by the PSO algorithm17. In online operation, the optimal SMC gains are used to drive the MPPT step, whereas in offline operation, testing various sets of optimal SMC gains yields the optimum values. The maximum power point was effectively tracked with better tracking speed, low overshoot, low ripple, and low oscillation in both fast- and slow-changing atmospheric conditions. Furthermore, a regulated DC–DC buck-boost converter was developed for PV systems in18. For easy implementation, incremental conductance (INC)-based MPPT was selected, and the buck-boost converter provides a suitable output voltage that ensures better energy transfer. A PSO-based MPPT algorithm was used to overcome these limitations and to handle irradiation imperfections in19, where an adaptive PID is used to control the buck converter input voltage.

From the design point of view, model reference adaptive control (MRAC) is similar to the traditional PID structure. According to that study, PSO with high tracking speed and high tracking accuracy is helpful for reaching the maximum power of the PV array. DC–DC converters play a major role in renewable energy applications, particularly for solar PV systems. To supply a suitable DC voltage to particular loads, various topologies of DC–DC converters incorporated with solar PV systems have been used. Drawbacks such as low, unstable, and unregulated output voltage have been addressed using these types of DC–DC converters (buck, boost, and buck-boost). For maximum output power, the MPPT algorithm is used as discussed above. That study suggests that a DC–DC buck-boost converter incorporated with a solar PV system shows higher efficiency compared with other converters, and that such topologies support the growth of solar PV power generation at limited cost with enhanced efficiency20. Higher power conversion efficiency has been achieved at various DC voltage gains and different converter power levels. Asymmetrical pulse width modulation and phase-shift modulation have been combined, and both control methods are assessed through a resonant network in the case of discontinuous current.
The operating principle of the quasi-Z-source series resonant converter in buck mode has been explained in that study through steady-state analysis. Under different operating conditions, the buck converter input voltage supplied from the PV system spans a wide range, and a new isolated buck-boost converter is used for load regulation over an ultra-wide input voltage range without any decrease in efficiency21. Load frequency control in power system operation is considered a major issue. Compared with the PID controller, the FOPID controller offers more flexibility; that study utilized the merits of the FOPID controller and the gases Brownian motion optimization (GBMO) technique to solve the load frequency control issue22. | [
"27172840"
] | [
{
"pmid": "27172840",
"title": "Design of a fractional order PID controller using GBMO algorithm for load-frequency control with governor saturation consideration.",
"abstract": "Load-frequency control is one of the most important issues in power system operation. In this paper, a Fractional Order PID (FOPID) controller based on Gases Brownian Motion Optimization (GBMO) is used in order to mitigate frequency and exchanged power deviation in two-area power system with considering governor saturation limit. In a FOPID controller derivative and integrator parts have non-integer orders which should be determined by designer. FOPID controller has more flexibility than PID controller. The GBMO algorithm is a recently introduced search method that has suitable accuracy and convergence rate. Thus, this paper uses the advantages of FOPID controller as well as GBMO algorithm to solve load-frequency control. However, computational load will higher than conventional controllers due to more complexity of design procedure. Also, a GBMO based fuzzy controller is designed and analyzed in detail. The performance of the proposed controller in time domain and its robustness are verified according to comparison with other controllers like GBMO based fuzzy controller and PI controller that used for load-frequency control system in confronting with model parameters variations."
}
] |
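For the MPPT record above, a bare-bones particle swarm search over the converter duty cycle illustrates the general idea behind PSO-based MPPT. The toy P-V curve, the swarm hyperparameters, and the assumed peak location are placeholders; the paper's hybrid PSO-SSO update rules, Simulink converter stages, and FOPID battery controller are not reproduced here.

```python
# Illustrative sketch only: plain PSO searching the duty cycle that maximises PV power.
import numpy as np

def pv_power(duty):
    """Toy unimodal P-V characteristic with its peak near duty = 0.62 (assumed)."""
    return 100.0 * np.exp(-((duty - 0.62) / 0.18) ** 2)

rng = np.random.default_rng(2)
n_particles, w, c1, c2 = 8, 0.6, 1.5, 1.5
pos = rng.uniform(0.1, 0.9, n_particles)          # candidate duty cycles
vel = np.zeros(n_particles)
pbest, pbest_val = pos.copy(), pv_power(pos)
gbest = pbest[np.argmax(pbest_val)]

for _ in range(30):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.05, 0.95)
    val = pv_power(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]

print(f"estimated MPP duty cycle: {gbest:.3f}, power: {pv_power(gbest):.1f} W")
```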
Complex & Intelligent Systems | 35194546 | PMC8855031 | 10.1007/s40747-022-00672-2 | Combating the infodemic: COVID-19 induced fake news recognition in social media networks | COVID-19 has caused havoc globally due to its transmission pace among the inhabitants and prolific rise in the number of people contracting the disease worldwide. As a result, the number of people seeking information about the epidemic via Internet media has increased. The impact of the hysteria that has prevailed makes people believe and share everything related to illness without questioning its truthfulness. As a result, it has amplified the misinformation spread on social media networks about the disease. Today, there is an immediate need to restrict disseminating false news, even more than ever before. This paper presents an early fusion-based method for combining key features extracted from context-based embeddings such as BERT, XLNet, and ELMo to enhance context and semantic information collection from social media posts and achieve higher accuracy for false news identification. From the observation, we found that the proposed early fusion-based method outperforms models that work on single embeddings. We also conducted detailed studies using several machine learning and deep learning models to classify misinformation on social media platforms relevant to COVID-19. To facilitate our work, we have utilized the dataset of “CONSTRAINT shared task 2021”. Our research has shown that language and ensemble models are well adapted to this role, with a 97% accuracy. | Related worksThe ever-increasing attraction and beauty of using social networks directly or indirectly influence our everyday lives. Therefore, it is not shocking that social networking has been a platform to exploit emotions through disseminating misinformation according to patterns. The adverse use of these channels was primarily used to spread inaccurate or unclear, communal, financial, and political information3. For example, false news research on Twitter in a Boston attack reveals that scare peddlers successfully manipulate social media to cause mass hysteria and panic [11]. Furthermore, distributing disinformation may have a detrimental effect on individuals and the communities they work in; it can instill fear and affect their emotional reactions to elections and natural disasters [3, 18]. The spread of misleading vaccine information has been one such example, indicating that many parents have spread fake news concerning the vaccine’s safety. Consequently, some young children’s parents have criticized the pediatric advice and declined to vaccinate or immunize their children [20]. As a result, there has been an alarming increase in sickness that could have been avoided.In recent years, the research community has been exploring the driving force behind the spread of false information on incidents such as the pandemic COVID-19 or policy scenarios [26, 36] has uncovered early social media campaigns on political, religious, and economic propaganda. Results revealed that hacked identities (using someone’s personal information and pictures to create fake profiles and use them to spread fake information [30, 31]) are used to promote misinformation and can also be used to propagate propaganda [7] attempted to track the dissemination of scientific misinformation, which concluded that most people found details reliable, because their friends also tweeted. 
Another potential cause is that people want to share new information [38].

Fake content detection in various scenarios
Avoiding the spread of fake news has been a serious concern for the scientific community. Recently, a significant amount of research has been conducted to identify fake information on social media. Fake news identification methods are typically classified into four classes: (1) news content-based; (2) social background-based [34, 43]; (3) propagation-based; and (4) information-based.

Social background-based methods incorporate elements from social media user accounts and message content. [32] attempted to construct interaction networks that represent the interconnections between various entities, such as publishers (the persons who post the news), news articles, and users, to perform fake news detection. [6] proved that attributes gathered from user posting and re-posting behaviors can help in determining the reliability of the information. [35] tried to combine post content with social context information, such as relationships among publishers and users.

Propagation-based models study the pattern in which fake news spreads in social media networks. [19] extracted information from social networks (structural properties of fake news spreading) to identify fake news; they built a friendship network for this purpose. [28] built an RNN-based framework consisting of three modules: the first module captures the temporal patterns of user activity on a given post, the second module learns source characteristics from user behavior, and the third module integrates the two. [33] developed hierarchical dissemination networks for false news and real news and then conducted a comparative review of structural, temporal, and linguistic network features from the perspective of recognizing fake news. The majority of these models can be grouped as position- or propagation-based. However, these approaches struggle with the early identification and prevention of the dissemination of fake news.

Some experiments have been conducted to establish the basis for false news studies with knowledge bases and knowledge graphs, such as DBpedia and Knowledge Vault [46]. They depend on an established knowledge base that includes "common knowledge" against which the truthfulness of news articles is checked. However, newly emerging news often contains new and unforeseen information that is not included in current knowledge bases or knowledge graphs. Therefore, such approaches are often incapable of dealing with such news stories [46]. One more alternative is to use a credibility-based approach; these methods require external content, such as source information and news comments, to determine false news, which may not always be available, especially if the item is spread via social media [46].

Content-based approaches extract various features from news content and are better adapted for early fake news detection. [21] used a content-based machine learning approach employing two types of models: the first is a bag-of-words model, made more robust by stacking two layers of ensemble learning models; the second is a set of neural networks that use pre-trained GloVe word embeddings, including (a) a one-dimensional convolutional neural network (CNN) and (b) a bi-directional long short-term memory network (BiLSTM). [27] tried to detect fake news by constructing a deep semi-supervised two-way CNN. Machine-generated fake news detection was explored by [45].
More recently, [40, 44] have tried to combine user comment emotion with post content to improve the accuracy of prediction. Finally, [17] proposed a multi-layer dense neural network for identifying fake news in Urdu news articles.

Fake content detection on COVID-19 data
In previous research, the primary subject was political and communal fake news spread through social media. However, very few studies have concentrated on the spread of disinformation linked specifically to COVID-19. [10] tried to combine Latent Dirichlet Allocation (LDA) with contextualized representations from XLNet for the task of fake news identification related to COVID-19. [8] achieved 95% accuracy by applying standard machine learning algorithms to many linguistic features, such as n-grams, readability, emotional tone, and punctuation, on a COVID data set. [24] presented CTF, a large-scale COVID-19 Twitter dataset with tagged real and fake tweets, and also proposed Cross-SEAN, a semi-supervised end-to-end neural attention model based on cross-stitching, obtaining an F1 score of 0.95 on the CTF data set. The usefulness and effectiveness of pre-trained Transformer-based language models in retrieving and classifying fake news in the specialized domain of COVID-19 were demonstrated by Vijjali et al. [37]; they also concluded that the suggested two-stage model outperforms other baseline NLP techniques. Gupta et al. [12] developed a model to detect fake news about COVID-19 in English tweets; they attained a 0.93 F1 score on the English fake news detection challenge by combining entity information taken from tweets with textual representations learned through word embeddings. Detecting fake news related to COVID-19 is thus a "need of the hour" during this pandemic, and much research still needs to be conducted on this topic. | [
"24135961",
"27252566",
"32783794",
"26173286",
"32603243",
"29590045"
] | [
{
"pmid": "24135961",
"title": "The anatomy of a scientific rumor.",
"abstract": "The announcement of the discovery of a Higgs boson-like particle at CERN will be remembered as one of the milestones of the scientific endeavor of the 21(st) century. In this paper we present a study of information spreading processes on Twitter before, during and after the announcement of the discovery of a new particle with the features of the elusive Higgs boson on 4(th) July 2012. We report evidence for non-trivial spatio-temporal patterns in user activities at individual and global level, such as tweeting, re-tweeting and replying to existing tweets. We provide a possible explanation for the observed time-varying dynamics of user activities during the spreading of this scientific \"rumor\". We model the information spreading in the corresponding network of individuals who posted a tweet related to the Higgs boson discovery. Finally, we show that we are able to reproduce the global behavior of about 500,000 individuals with remarkable accuracy."
},
{
"pmid": "27252566",
"title": "Social Media's Initial Reaction to Information and Misinformation on Ebola, August 2014: Facts and Rumors.",
"abstract": "OBJECTIVE\nWe analyzed misinformation about Ebola circulating on Twitter and Sina Weibo, the leading Chinese microblog platform, at the outset of the global response to the 2014-2015 Ebola epidemic to help public health agencies develop their social media communication strategies.\n\n\nMETHODS\nWe retrieved Twitter and Sina Weibo data created within 24 hours of the World Health Organization announcement of a Public Health Emergency of International Concern (Batch 1 from August 8, 2014, 06:50:00 Greenwich Mean Time [GMT] to August 9, 2014, 06:49:59 GMT) and seven days later (Batch 2 from August 15, 2014, 06:50:00 GMT to August 16, 2014, 06:49:59 GMT). We obtained and analyzed a 1% random sample of tweets containing the keyword Ebola. We retrieved all Sina Weibo posts with Chinese keywords for Ebola for analysis. We analyzed changes in frequencies of keywords, hashtags, and Web links using relative risk (RR) and c(2) feature selection algorithm. We identified misinformation by manual coding and categorizing randomly selected sub-datasets.\n\n\nRESULTS\nWe identified two speculative treatments (i.e., bathing in or drinking saltwater and ingestion of Nano Silver, an experimental drug) in our analysis of changes in frequencies of keywords and hashtags. Saltwater was speculated to be protective against Ebola in Batch 1 tweets but their mentions decreased in Batch 2 (RR=0.11 for \"salt\" and RR=0.14 for \"water\"). Nano Silver mentions were higher in Batch 2 than in Batch 1 (RR=10.5). In our manually coded samples, Ebola-related misinformation constituted about 2% of Twitter and Sina Weibo content. A range of 36%-58% of the posts were news about the Ebola outbreak and 19%-24% of the posts were health information and responses to misinformation in both batches. In Batch 2, 43% of Chinese microblogs focused on the Chinese government sending medical assistance to Guinea.\n\n\nCONCLUSION\nMisinformation about Ebola was circulated at a very low level globally in social media in either batch. Qualitative and quantitative analyses of social media posts can provide relevant information to public health agencies during emergency responses."
},
{
"pmid": "32783794",
"title": "COVID-19-Related Infodemic and Its Impact on Public Health: A Global Social Media Analysis.",
"abstract": "Infodemics, often including rumors, stigma, and conspiracy theories, have been common during the COVID-19 pandemic. Monitoring social media data has been identified as the best method for tracking rumors in real time and as a possible way to dispel misinformation and reduce stigma. However, the detection, assessment, and response to rumors, stigma, and conspiracy theories in real time are a challenge. Therefore, we followed and examined COVID-19-related rumors, stigma, and conspiracy theories circulating on online platforms, including fact-checking agency websites, Facebook, Twitter, and online newspapers, and their impacts on public health. Information was extracted between December 31, 2019 and April 5, 2020, and descriptively analyzed. We performed a content analysis of the news articles to compare and contrast data collected from other sources. We identified 2,311 reports of rumors, stigma, and conspiracy theories in 25 languages from 87 countries. Claims were related to illness, transmission and mortality (24%), control measures (21%), treatment and cure (19%), cause of disease including the origin (15%), violence (1%), and miscellaneous (20%). Of the 2,276 reports for which text ratings were available, 1,856 claims were false (82%). Misinformation fueled by rumors, stigma, and conspiracy theories can have potentially serious implications on the individual and community if prioritized over evidence-based guidelines. Health agencies must track misinformation associated with the COVID-19 in real time, and engage local communities and government stakeholders to debunk misinformation."
},
{
"pmid": "26173286",
"title": "Misinformation and Its Correction: Continued Influence and Successful Debiasing.",
"abstract": "The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation. We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread. We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people's memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing. We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners-including journalists, health professionals, educators, and science communicators-design effective misinformation retractions, educational tools, and public-information campaigns."
},
{
"pmid": "32603243",
"title": "Fighting COVID-19 Misinformation on Social Media: Experimental Evidence for a Scalable Accuracy-Nudge Intervention.",
"abstract": "Across two studies with more than 1,700 U.S. adults recruited online, we present evidence that people share false claims about COVID-19 partly because they simply fail to think sufficiently about whether or not the content is accurate when deciding what to share. In Study 1, participants were far worse at discerning between true and false content when deciding what they would share on social media relative to when they were asked directly about accuracy. Furthermore, greater cognitive reflection and science knowledge were associated with stronger discernment. In Study 2, we found that a simple accuracy reminder at the beginning of the study (i.e., judging the accuracy of a non-COVID-19-related headline) nearly tripled the level of truth discernment in participants' subsequent sharing intentions. Our results, which mirror those found previously for political fake news, suggest that nudging people to think about accuracy is a simple way to improve choices about what to share on social media."
},
{
"pmid": "29590045",
"title": "The spread of true and false news online.",
"abstract": "We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it."
}
] |
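The early-fusion scheme described in the record above (concatenating contextual representations from several encoders before training a single classifier) can be sketched as follows. The embed_* helpers are hypothetical stand-ins (hashed-token random projections) for real BERT, XLNet, and ELMo encoders, and the two-sentence toy corpus exists only so the pipeline runs end to end without model downloads.

```python
# Sketch of early fusion: per-post feature vectors from several encoders are concatenated
# before one classifier is trained.  The encoders below are placeholders, not real models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def fake_encoder(dim, seed):
    proj = np.random.default_rng(seed).normal(size=(1024, dim))
    def embed(texts):
        feats = np.zeros((len(texts), 1024))
        for i, t in enumerate(texts):
            for tok in t.lower().split():
                feats[i, hash(tok) % 1024] += 1.0   # crude hashed bag-of-words
        return feats @ proj                          # random projection to `dim`
    return embed

embed_bert, embed_xlnet, embed_elmo = fake_encoder(768, 0), fake_encoder(768, 1), fake_encoder(256, 2)

texts = ["drinking hot water cures covid", "who reports new vaccination guidance"] * 50
labels = np.array([0, 1] * 50)                       # 0 = fake, 1 = real (toy labels)

X = np.hstack([embed_bert(texts), embed_xlnet(texts), embed_elmo(texts)])   # early fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```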
Frontiers in Computational Neuroscience | null | PMC8855153 | 10.3389/fncom.2021.784592 | Reinforcement Learning Model With Dynamic State Space Tested on Target Search Tasks for Monkeys: Self-Determination of Previous States Based on Experience Saturation and Decision Uniqueness | The real world is essentially an indefinite environment in which the probability space, i. e., what can happen, cannot be specified in advance. Conventional reinforcement learning models that learn under uncertain conditions are given the state space as prior knowledge. Here, we developed a reinforcement learning model with a dynamic state space and tested it on a two-target search task previously used for monkeys. In the task, two out of four neighboring spots were alternately correct, and the valid pair was switched after consecutive correct trials in the exploitation phase. The agent was required to find a new pair during the exploration phase, but it could not obtain the maximum reward by referring only to the single previous one trial; it needed to select an action based on the two previous trials. To adapt to this task structure without prior knowledge, the model expanded its state space so that it referred to more than one trial as the previous state, based on two explicit criteria for appropriateness of state expansion: experience saturation and decision uniqueness of action selection. The model not only performed comparably to the ideal model given prior knowledge of the task structure, but also performed well on a task that was not envisioned when the models were developed. Moreover, it learned how to search rationally without falling into the exploration–exploitation trade-off. For constructing a learning model that can adapt to an indefinite environment, the method of expanding the state space based on experience saturation and decision uniqueness of action selection used by our model is promising. | Relationship of the Proposed Model to Related WorksThe hierarchical Dirichlet model, which is compared with the proposed model in Figures 12H–K, is useful for language recognition problems, such as word estimation in sentences and word segmentation in Chinese and Japanese (Mochihashi and Sumita, 2007; Mochihashi et al., 2009). This model shows unstable performance in the two-target search task compared with the proposed model, although it often exhibits good performance. However, the rapid expansion of the state in the hierarchical Dirichlet model seems to be useful in problems such as language recognition, where the number of samples must be small, unlike the two-target search task where tens of thousands of trials can be sampled. The two criteria for the appropriateness of state expansion used in the proposed model are somewhat strict; if similar but more relaxed criteria are incorporated into the iHMM for language recognition processing, the model performance may improve.As a learning architecture using KLDs, the free-energy principle has recently attracted considerable attention (Friston, 2009, 2010; Friston et al., 2009). This principle infers hidden variables in the environment such that free energy is minimized; specifically, predictions are maximized while allowing learners to actively work on the environment. KLD is used to maximize predictions; therefore, the computation aims to make no better predictions. This corresponds to the calculation of experience saturation in our model. 
It also may include active perceptual behavior (e.g., moving the eyes) to maximize prediction, which is consistent with our own behavior. However, this method is similar to the POMDP method in that it includes estimation of uncertain states, and the possible states are provided as prior knowledge. Thus, we cannot conclude that this principle is inherently equipped with the ability to adapt to indefinite environments.Our proposed model attempted to extract complex temporal structures in the environment by using dynamic state space, similar to the reconstruction of dynamical systems in the field of non-linear dynamics. In particular, embedding is regarded as a method for identifying the underlying dynamics from time series data (e.g., Takens, 1981; Sauer et al., 1991; Ikeguchi and Aihara, 1995). For example, a chaotic dynamical system requires at least three dimensions. To reconstruct the trajectory of the chaotic system from the time series, two time intercepts (two-dimensional reconstruction map) are insufficient; three time intercepts (three-dimensional reconstruction map) are necessary. By applying the proposed model, we may be able to build a model that can learn to automatically reconstruct the non-linear dynamical system behind the time series data, just as our model could learn the task structure behind the three-target search task. | [
"27295638",
"26353250",
"19559644",
"20068583",
"19641614",
"15436888",
"25411455",
"20143140",
"11240119",
"21869067",
"5342881",
"31923449",
"25027732",
"18252744",
"31719167",
"16611182",
"17183266",
"9812901"
] | [
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "26353250",
"title": "Bayesian Nonparametric Methods for Partially-Observable Reinforcement Learning.",
"abstract": "Making intelligent decisions from incomplete information is critical in many applications: for example, robots must choose actions based on imperfect sensors, and speech-based interfaces must infer a user's needs from noisy microphone inputs. What makes these tasks hard is that often we do not have a natural representation with which to model the domain and use for choosing actions; we must learn about the domain's properties while simultaneously performing the task. Learning a representation also involves trade-offs between modeling the data that we have seen previously and being able to make predictions about new data. This article explores learning representations of stochastic systems using Bayesian nonparametric statistics. Bayesian nonparametric methods allow the sophistication of a representation to scale gracefully with the complexity in the data. Our main contribution is a careful empirical evaluation of how representations learned using Bayesian nonparametric methods compare to other standard learning approaches, especially in support of planning and control. We show that the Bayesian aspects of the methods result in achieving state-of-the-art performance in decision making with relatively few samples, while the nonparametric aspects often result in fewer computations. These results hold across a variety of different techniques for choosing actions given a representation."
},
{
"pmid": "19559644",
"title": "The free-energy principle: a rough guide to the brain?",
"abstract": "This article reviews a free-energy formulation that advances Helmholtz's agenda to find principles of brain function based on conservation laws and neuronal energy. It rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function. We could have just scratched the surface of what this formulation offers; for example, it is becoming clear that the Bayesian brain is just one facet of the free-energy principle and that perception is an inevitable consequence of active exchange with the environment. Furthermore, one can see easily how constructs like memory, attention, value, reinforcement and salience might disclose their simple relationships within this framework."
},
{
"pmid": "20068583",
"title": "The free-energy principle: a unified brain theory?",
"abstract": "A free-energy principle has been proposed recently that accounts for action, perception and learning. This Review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories - optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework."
},
{
"pmid": "19641614",
"title": "Reinforcement learning or active inference?",
"abstract": "This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming; namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain."
},
{
"pmid": "25411455",
"title": "Surprise signals in the supplementary eye field: rectified prediction errors drive exploration-exploitation transitions.",
"abstract": "Visual search is coordinated adaptively by monitoring and predicting the environment. The supplementary eye field (SEF) plays a role in oculomotor control and outcome evaluation. However, it is not clear whether the SEF is involved in adjusting behavioral modes based on preceding feedback. We hypothesized that the SEF drives exploration-exploitation transitions by generating \"surprise signals\" or rectified prediction errors, which reflect differences between predicted and actual outcomes. To test this hypothesis, we introduced an oculomotor two-target search task in which monkeys were required to find two valid targets among four identical stimuli. After they detected the valid targets, they exploited their knowledge of target locations to obtain a reward by choosing the two valid targets alternately. Behavioral analysis revealed two distinct types of oculomotor search patterns: exploration and exploitation. We found that two types of SEF neurons represented the surprise signals. The error-surprise neurons showed enhanced activity when the monkey received the first error feedback after the target pair change, and this activity was followed by an exploratory oculomotor search pattern. The correct-surprise neurons showed enhanced activity when the monkey received the first correct feedback after an error trial, and this increased activity was followed by an exploitative, fixed-type search pattern. Our findings suggest that error-surprise neurons are involved in the transition from exploitation to exploration and that correct-surprise neurons are involved in the transition from exploration to exploitation."
},
{
"pmid": "20143140",
"title": "Attributions of blame and responsibility in sexual harassment: reexamining a psychological model.",
"abstract": "Kelley's (Nebr Symp Motiv 15:192-238, 1967) attribution theory can inform sexual harassment research by identifying how observers use consensus, consistency, and distinctiveness information in determining whether a target or perpetrator is responsible for a sexual harassment situation. In this study, Kelley's theory is applied to a scenario in which a male perpetrator sexually harasses a female target in a university setting. Results from 314 predominantly female college students indicate that consistency and consensus information significantly affect participants' judgments of blame and responsibility for the situation. The authors discuss the importance of the reference groups used to derive consensus and distinctiveness information, and reintroduce Kelley's attribution theory as a means of understanding observers' perceptions of sexual harassment."
},
{
"pmid": "11240119",
"title": "Visually based path-planning by Japanese monkeys.",
"abstract": "To construct an animal model of strategy formation, we designed a maze path-finding task. First, we asked monkeys to capture a goal in the maze by moving a cursor on the screen. Cursor movement was linked to movements of each wrist. When the animals learned the association between cursor movement and wrist movement, we established a start and a goal in the maze, and asked them to find a path between them. We found that the animals took the shortest pathway, rather than approaching the goal randomly. We further found that the animals adopted a strategy of selecting a fixed intermediate point in the visually presented maze to select one of the shortest pathways, suggesting a visually based path planning. To examine their capacity to use that strategy flexibly, we transformed the task by blocking pathways in the maze, providing a problem to solve. The animals then developed a strategy of solving the problem by planning a novel shortest path from the start to the goal and rerouting the path to bypass the obstacle."
},
{
"pmid": "21869067",
"title": "Automata in random environments with application to machine intelligence.",
"abstract": "Computers and brains are modeled by finite and probabilistic automata, respectively. Probabilistic automata are known to be strictly more powerful than finite automata. The observation that the environment affects behavior of both computer and brain is made. Automata are then modeled in an environment. Theorem 1 shows that useful environmental models are those which are infinite sets. A probabilistic structure is placed on the environment set. Theorem 2 compares the behavior of finite (deterministic) and probabilistic automata in random environments. Several interpretations of Theorem 2 are discussed which offer some insight into some mathematical limits of machine intelligence."
},
{
"pmid": "31923449",
"title": "Differences in task-phase-dependent time-frequency patterns of local field potentials in the dorsal and ventral regions of the monkey lateral prefrontal cortex.",
"abstract": "Although the ventral and dorsal regions of the lateral prefrontal cortex (lPFC) are anatomically distinct, their functional differentiation is still controversial. Local field potentials (LFPs) reflect synaptic input and are widely modulated by information from both the external world and the internal state of the brain. However, functional mapping using LFPs has not been fully tested and is expected to provide new insights into their differences. Thus, the present study analyzed the task-phase-dependent modulations of LFPs recorded from the lPFC of monkeys as they performed a shape manipulation task. Hierarchical cluster analyses of the LFP time-frequency spectra revealed characteristic patterns, especially in the theta and low gamma ranges. In particular, the theta range distinguished the ventral and dorsal parts of the lPFC well. The ventral part exhibited a block of similar LFP patterns whereas the dorsal part showed scattered patterns of small or single sites with different LFP patterns. These results suggest that functional segregation within the lPFC, especially between the ventral and dorsal regions, can be evaluated using task-phase-dependent time-frequency modulations in LFPs."
},
{
"pmid": "25027732",
"title": "Spatiotemporal patterns of current source density in the prefrontal cortex of a behaving monkey.",
"abstract": "One of the fundamental missions of neuroscience is to explore the input and output properties of neuronal networks to reveal their functional significance. However, it is technically difficult to examine synaptic inputs into neuronal circuits in behaving animals. Here, we conducted current source density (CSD) analysis on local field potentials (LFPs) recorded simultaneously using a multi-contact electrode in the prefrontal cortex (PFC) of a behaving monkey. We observed current sink task-dependent spatiotemporal patterns considered to reflect the synaptic input to neurons adjacent to the recording site. Specifically, the inferior convex current sink in the PFC was dominant during the delay period, whereas the current sink was prominent in the principal sulcus during the sample cue and test cue periods. Surprisingly, sulcus current sink patterns were spatially periodic, which corresponds to the columnar structure suggested by previous anatomical studies. The approaches used in the current study will help to elucidate how the PFC network performs executive functions according to its synaptic input."
},
{
"pmid": "18252744",
"title": "Discharge synchrony during the transition of behavioral goal representations encoded by discharge rates of prefrontal neurons.",
"abstract": "To investigate the temporal relationship between synchrony in the discharge of neuron pairs and modulation of the discharge rate, we recorded the neuronal activity of the lateral prefrontal cortex of monkeys performing a behavioral task that required them to plan an immediate goal of action to attain a final goal. Information about the final goal was retrieved via visual instruction signals, whereas information about the immediate goal was generated internally. The synchrony of neuron pair discharges was analyzed separately from changes in the firing rate of individual neurons during a preparatory period. We focused on neuron pairs that exhibited a representation of the final goal followed by a representation of the immediate goal at a later stage. We found that changes in synchrony and discharge rates appeared to be complementary at different phases of the behavioral task. Synchrony was maximized during a specific phase in the preparatory period corresponding to a transitional stage when the neuronal activity representing the final goal was replaced with that representing the immediate goal. We hypothesize that the transient increase in discharge synchrony is an indication of a process that facilitates dynamic changes in the prefrontal neural circuits in order to undergo profound state changes."
},
{
"pmid": "31719167",
"title": "Dynamic Axis-Tuned Cells in the Monkey Lateral Prefrontal Cortex during a Path-Planning Task.",
"abstract": "The lateral prefrontal cortex (lPFC) plays a crucial role in the cognitive processes known as executive functions, which are necessary for the planning of goal-directed behavior in complex and constantly changing environments. To adapt to such environments, the lPFC must use its neuronal resources in a flexible manner. To investigate the mechanism by which lPFC neurons code directional information flexibly, the present study explored the tuning properties and time development of lPFC neurons in male Japanese monkeys during a path-planning task, which required them to move a cursor to a final goal in a stepwise manner within a checkerboard-like maze. We identified \"axis-tuned\" cells that preferred two opposing directions of immediate goals (i.e., vertical and horizontal directions). Among them, a considerable number of these axis-tuned cells dynamically transformed from vector tuning to a single final-goal direction. We also found that the activities of axis-tuned cells, especially pyramidal neurons, were also modulated by the abstract sequence patterns that the animals were to execute. These findings suggest that the axis-tuned cells change what they code (the type of behavioral goal) as well as how they code (their tuning shapes) so that the lPFC can represent a large number of possible actions or sequences with limited neuronal resources. The dynamic axis-tuned cells must reflect the flexible coding of behaviorally relevant information at the single neuron level by the lPFC to adapt to uncertain environments.SIGNIFICANCE STATEMENT The lateral PFC (lPFC) plays a crucial role in the planning of goal-directed behavior in uncertain environments. To adapt to such situations, the lPFC must flexibly encode behaviorally relevant information. Here, we investigated the goal-tuning properties of neuronal firing in the monkey lPFC during a path-planning task. We identified axis-tuned cells that preferred \"up-down\" or \"left-right\" immediate goals, and found that many were dynamically transformed from vector tuning to a final-goal direction. The activities of neurons, especially pyramidal neurons, were also modulated by the abstract sequence patterns. Our findings suggest that PFC neurons can alter not only what they code (behavioral goal) but also how they code (tuning shape) when coping with unpredictable environments with limited neuronal resources."
},
{
"pmid": "16611182",
"title": "God does not play dice: causal determinism and preschoolers' causal inferences.",
"abstract": "Three studies investigated children's belief in causal determinism. If children are determinists, they should infer unobserved causes whenever observed causes appear to act stochastically. In Experiment 1, 4-year-olds saw a stochastic generative cause and inferred the existence of an unobserved inhibitory cause. Children traded off inferences about the presence of unobserved inhibitory causes and the absence of unobserved generative causes. In Experiment 2, 4-year-olds used the pattern of indeterminacy to decide whether unobserved variables were generative or inhibitory. Experiment 3 suggested that children (4 years old) resist believing that direct causes can act stochastically, although they accept that events can be stochastically associated. Children's deterministic assumptions seem to support inferences not obtainable from other cues."
},
{
"pmid": "17183266",
"title": "Categorization of behavioural sequences in the prefrontal cortex.",
"abstract": "Although it has long been thought that the prefrontal cortex of primates is involved in the integrative regulation of behaviours, the neural architecture underlying specific aspects of cognitive behavioural planning has yet to be clarified. If subjects are required to remember a large number of complex motor sequences and plan to execute each of them individually, categorization of the sequences according to the specific temporal structure inherent in each subset of sequences serves to facilitate higher-order planning based on memory. Here we show, using these requirements, that cells in the lateral prefrontal cortex selectively exhibit activity for a specific category of behavioural sequences, and that categories of behaviours, embodied by different types of movement sequences, are represented in prefrontal cells during the process of planning. This cellular activity implies the generation of neural representations capable of storing structured event complexes at an abstract level, exemplifying the development of macro-structured action knowledge in the lateral prefrontal cortex."
},
{
"pmid": "9812901",
"title": "Role for cingulate motor area cells in voluntary movement selection based on reward.",
"abstract": "Most natural actions are chosen voluntarily from many possible choices. An action is often chosen based on the reward that it is expected to produce. What kind of cellular activity in which area of the cerebral cortex is involved in selecting an action according to the expected reward value? Results of an analysis in monkeys of cellular activity during the performance of reward-based motor selection and the effects of chemical inactivation are presented. We suggest that cells in the rostral cingulate motor area, one of the higher order motor areas in the cortex, play a part in processing the reward information for motor selection."
}
] |
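The reference entries above follow a recurring structure: an array of objects, each carrying "pmid", "title", and "abstract" fields. As a minimal sketch of how such an entry array could be loaded for downstream use, the Python snippet below parses one cell of this kind into plain records. The function name parse_reference_entries and the example value are illustrative choices, and the assumption that the cell text is valid JSON (rather than a truncated or pipe-delimited fragment of the dump) is ours.

```python
import json
from typing import Dict, List


def parse_reference_entries(cell_text: str) -> List[Dict[str, str]]:
    """Parse an array of reference objects (each with 'pmid', 'title',
    and 'abstract' keys, as in the entries above) into plain records.

    Assumes cell_text is valid JSON; a truncated or pipe-delimited
    fragment from the dump would need cleanup before parsing.
    """
    entries = json.loads(cell_text)
    return [
        {
            "pmid": entry.get("pmid", ""),
            "title": entry.get("title", ""),
            "abstract": entry.get("abstract", ""),
        }
        for entry in entries
    ]


# Tiny example shaped like the entries above (values abridged).
example = '[{"pmid": "9812901", "title": "Role for cingulate motor area cells in voluntary movement selection based on reward.", "abstract": "..."}]'
for record in parse_reference_entries(example):
    print(record["pmid"], record["title"])
```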
Journal of Big Data | null | PMC8857750 | 10.1186/s40537-022-00572-9 | Detection of fickle trolls in large-scale online social networks | Online social networks have attracted billions of active users over the past decade. These systems play an integral role in the everyday life of many people around the world. As such, these platforms are also attractive for misinformation, hoaxes, and fake news campaigns which usually utilize social trolls and/or social bots for propagation. Detection of so-called social trolls in these platforms is challenging due to their large scale and dynamic nature where users’ data are generated and collected at the scale of multi-billion records per hour. In this paper, we focus on fickle trolls, i.e., a special type of trolling activity in which the trolls change their identity frequently to maximize their social relations. This kind of trolling activity may become irritating for the users and also may pose a serious threat to their privacy. To the best of our knowledge, this is the first work that introduces mechanisms to detect these trolls. In particular, we discuss and analyze troll detection mechanisms on different scales. We prove that the order of the centralized single-machine detection algorithm is $O(n^3)$, which is slow and impractical for early troll detection in large-scale social platforms comprising billions of users. We also prove that the streaming approach where data is gradually fed to the system is not practical in many real-world scenarios. In light of such shortcomings, we then propose a massively parallel detection approach. Rigorous evaluations confirm that our proposed method is at least six times faster compared to conventional parallel approaches. | Background and related works
Many research studies have focused their attention on various aspects and challenges that online social networks face. The focus of these studies ranges from community detection [11–15], social recommender systems [16, 17], social media analysis [18–21] to misbehaviour and disruptive activities [22–25]. In particular, the topic of troll detection in online social networks has attracted many research studies in the course of the past few years [26–28]. Various studies have focused on troll detection approaches. Table 1 lists and compares recent approaches. Tomaiuolo et al. [29] surveyed troll detection and prevention approaches comprehensively. Tsantarliotis et al. [4] presented a framework to define and predict trolling activity in social networks. Fornacciari et al. [5] focus on introducing a holistic approach for troll detection on Twitter. Alsmadi [30] discussed features related to trolling activity using Twitter’s Russian Troll Tweets dataset. Other studies also focused their attention on various aspects of that dataset [31–33], some of them using Botometer, which is a machine learning approach. However, Rauchfleisch et al. [34] noted that these approaches suffer from relatively high false negatives and also false positives, especially for languages other than English. Tsantarliotis et al. [35] proposed a graph-theoretic model for troll detection in online social networks. They introduced a metric called TVRank to measure the severity of the troll activity with respect to a post.
Table 1 Comparison of various troll and bot detection approaches discussed in the literature
Ref. | Dataset | Technique | Limitation(s)
[4] | Reddit | Troll vulnerability metrics to predict whether a post is likely to become the victim of a troll attack | Focuses on the contents of posts and the activity history of users; does not consider trolling behaviour directly
[5] | Twitter | Takes a holistic approach, i.e., it considers various features such as sentiment analysis, time and frequency of actions, etc. | The approach is slow since it considers a multitude of features; it also suffers from false positive detection
[30] | Twitter | Multi-feature analysis, i.e., it considers the timing of tweets and the contents | It only focuses on the dataset, e.g., the usage of formal tone in trolls instead of slang and slurs
[31] | Twitter | Classification based on multiple behavioural and content-based features such as wording and hashtags or mentions | It suffers from high false positives and only concentrates on the behaviours extracted from one specific dataset
[32] | Twitter | Classification based on bot detection using Botometer and geolocation data | Inaccuracy of Botometer and the ability of trolls and bots to mask their real location
Other research efforts have been devoted to analyzing the behaviors and socio-cultural features of trolling activity and reactions of the target society. Mkono [36] studied the trolling activity on Tripadvisor, which is a social platform specialized in travel and tourism. Hodge et al.
[37] examined the geographical distribution of trolling on social media platforms. Sun et al. [26] studied the reaction of YouTube users to the trolls. They showed that well-connected users situated in densely connected communities with a prior pattern of engaging trolls are more likely to respond to trolls, especially when the trolling messages convey negative sentiments. Basak et al. [38] focused their attention on a specific type of trolling activity, i.e., public shaming. March [39] analyzed the psychological and behavioral background of trolling activities. More recently, a few research studies have focused on trolling activities, their detection, and prevention during the COVID-19 pandemic [40]. Jachim et al. [41] introduced a machine learning-based linguistic analysis approach to detect the so-called “COVID-19 infodemic” trolls. Thomas et al. [42] discussed a trolling activity during the recent pandemic. Sharma et al. [43] analyzed disinformation campaigns on major social media platforms conducted by social trolls during the COVID-19 pandemic. In spite of the above efforts, detection methods for fickle trolls have not been fully investigated in the literature. Specifically, the existing methods cannot be altered to detect such activity since the main aim of the fickle trolls is to maximize their followers and thus, they may not exhibit behaviors that can be detected by typical methods. This paper aims to provide novel approaches to detect fickle trolls at different scales. We also hope that this paper provides a better understanding of this malicious behavior and serves as a basis for future investigations and research studies on this topic. (A minimal illustrative sketch of the identity-change idea appears below, after this row's reference entries.)
Fig. 1 Changes in the number of followers per week for the case study | [
"34429255"
] | [
{
"pmid": "34429255",
"title": "How social media shapes polarization.",
"abstract": "This article reviews the empirical evidence on the relationship between social media and political polarization. We argue that social media shapes polarization through the following social, cognitive, and technological processes: partisan selection, message content, and platform design and algorithms."
}
] |
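The row above frames fickle-troll detection around accounts that change identity frequently, and contrasts a centralized $O(n^3)$ detector with streaming and massively parallel alternatives. As a purely illustrative sketch of the underlying idea, not the paper's algorithm, the Python snippet below flags accounts whose recorded display-name changes exceed a threshold inside a sliding time window; the event format, the window length, the threshold, and the name flag_fickle_accounts are assumptions made for the example.

```python
from collections import defaultdict
from datetime import datetime, timedelta
from typing import Iterable, Set, Tuple

# Each event is (account_id, display_name, timestamp); this format is an
# assumption for the sketch, not the schema used in the paper.
Event = Tuple[str, str, datetime]


def flag_fickle_accounts(events: Iterable[Event],
                         window: timedelta = timedelta(days=7),
                         max_changes: int = 3) -> Set[str]:
    """Return account ids whose display name changed more than
    `max_changes` times inside any sliding `window` of time."""
    history = defaultdict(list)  # account_id -> [(timestamp, name), ...]
    for account_id, name, ts in sorted(events, key=lambda e: e[2]):
        history[account_id].append((ts, name))

    flagged = set()
    for account_id, records in history.items():
        # Timestamps at which the display name actually changed.
        changes = [ts for (ts, name), (_, prev) in zip(records[1:], records)
                   if name != prev]
        # Count changes inside a sliding window over the sorted timestamps.
        left = 0
        for right, ts in enumerate(changes):
            while ts - changes[left] > window:
                left += 1
            if right - left + 1 > max_changes:
                flagged.add(account_id)
                break
    return flagged


if __name__ == "__main__":
    base = datetime(2022, 1, 1)
    # An account renamed five times within two days is flagged.
    events = [("acct_42", f"name_{i}", base + timedelta(hours=6 * i))
              for i in range(6)]
    print(flag_fickle_accounts(events))  # {'acct_42'}
```

The sliding window here only loosely mirrors the row's point that detection must cope with continuously arriving data; a realistic detector would combine such a signal with the relational and content features discussed in the related-work text.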
Frontiers in Big Data | null | PMC8859470 | 10.3389/fdata.2022.796897 | Network Models and Simulation Analytics for Multi-scale Dynamics of Biological Invasions | Globalization and climate change facilitate the spread and establishment of invasive species throughout the world via multiple pathways. These spread mechanisms can be effectively represented as diffusion processes on multi-scale, spatial networks. Such network-based modeling and simulation approaches are being increasingly applied in this domain. However, these works tend to be largely domain-specific, lacking any graph theoretic formalisms, and do not take advantage of more recent developments in network science. This work is aimed toward filling some of these gaps. We develop a generic multi-scale spatial network framework that is applicable to a wide range of models developed in the literature on biological invasions. A key question we address is the following: how do individual pathways and their combinations influence the rate and pattern of spread? The analytical complexity arises more from the multi-scale nature and complex functional components of the networks rather than from the sizes of the networks. We present theoretical bounds on the spectral radius and the diameter of multi-scale networks. These two structural graph parameters have established connections to diffusion processes. Specifically, we study how network properties, such as spectral radius and diameter, are influenced by model parameters. Further, we analyze a multi-pathway diffusion model from the literature by conducting simulations on synthetic and real-world networks and then use regression tree analysis to identify the important network and diffusion model parameters that influence the dynamics. | 2. Related work
2.1. Networked Representations of Invasive Species Spread
The current state-of-the-art for modeling invasive species involves developing risk maps using ecological niche models (Venette et al., 2010). Such models account for climate and biology of the invasive species and its hosts to map the long-term establishment potential. They do not provide a causal explanation of the extent and dispersion of spread or explicitly account for human-mediated pathways. However, in recent years, network diffusion models are being increasingly applied to model the spread dynamics of invasive species in order to account for their long distance spread. Hernandez Nopsa et al. (2015) studied the structure of rail networks for grain transport in the United States and Eastern Australia to identify the shortest paths for the anthropogenic dispersal of pests and mycotoxins, as well as the major sources, sinks, and bridges for movement. Sutrave et al. (2012) used an SI model (Easley and Kleinberg, 2010) on a county-to-county network to model wind speed and direction, and host density to identify locations to monitor soybean rust, a pathogen. Koch et al. (2014) assess the risk of forest pests due to camping activities. Venkatramanan et al. (2020) used a Bayesian inference method to identify the most likely spread pattern of an invasive pest by modeling the spread as a diffusion process on a time-varying network.
2.2. Complex Multi-Pathway Dispersal Models
Carrasco et al. (2010) considered both local and long distance spread in a process-based spatially explicit simulation model to study a pest of the maize crop.
They use phenology models to estimate pest population size, a negative power law kernel for self-mediated spread, and a gravity model representation of long distance edges. Our work is based on a similar approach by McNitt et al. (2019) who model the multi-pathway spread by accounting for self-mediated spread and spread within and between areas of high human activity (e.g., urban areas). Both works account for suitability of establishment and the distribution of host crop production. Similar modeling approaches have been applied to study infectious diseases in humans and livestock (Ajelli et al., 2010; Kim et al., 2018; Venkatramanan et al., 2021).
2.3. Spectral Characterization of Network Dynamics
Several structural properties of networks have been used to understand the progression of a diffusion process. These include basic properties such as degree distribution or clustering coefficient to other properties such as graph spectrum, diameter, and degeneracy, to name a few. The spectrum of a graph is the set of eigenvalues of its adjacency matrix. There are several works that relate spectrum, particularly the first eigenvalue or spectral radius λ1(G) of the adjacency matrix of a graph G, to disease spread in SEIR-like epidemic models (Ganesh et al., 2005; Prakash et al., 2012). A well known result that highlights the impact of the network structure on the dynamics is the following: an epidemic dies out “quickly” if λ1(G) ≤ T, where T is a threshold that depends on the disease model. This relationship has motivated a number of works on epidemic control where the objective is to find an optimal set of nodes (or edges) to remove from the network that leads to the maximum reduction in its spectral radius (Van Mieghem et al., 2011; Saha et al., 2015; Zhang et al., 2016; Chen et al., 2021).
2.4. Diameter and Network Dynamics
The diameter of real-world networks is an important structural parameter used to characterize epidemics on real-world graphs (Holme, 2013; Pastor-Satorras et al., 2015). In the literature, average path lengths between pairs of nodes and diameter of the network have been observed to have an effect on the rate of diffusion. In particular, the lower the diameter, the higher is the diffusion rate (Banos et al., 2015; Taghvaei et al., 2020; Kamra et al., 2021). In spatial networks, the diameter tends to be large when compared to social networks that exhibit the small-world effect (Watts and Strogatz, 1998), and long distance edges can be responsible for bringing the diameter down (Barthélemy, 2011).
2.5. Machine Learning and Simulation Systems
From the works described above, it is clear that even simple SIR-like processes (Easley and Kleinberg, 2010) on arbitrary static networks are difficult to characterize. In recent years, there have been several studies that use a combination of extensive simulations and machine learning algorithms to understand the phase space of complex models. Fox et al. (2019) explored multiple ways in which such a nexus of the two modeling approaches can be used to understand complex systems. Lamperti et al. (2018) used extreme gradient boosted trees for the purpose of phase space exploration of a complex model. Our approach to apply classification and regression trees (CART) and random forests (Breiman, 2001, 2017) is motivated by this work. | [
"20044936",
"20018697",
"20393559",
"25843374",
"33790468",
"22611848",
"26955074",
"24386377",
"34151134",
"29311619",
"25007186",
"34679079",
"19792216",
"31615355",
"19688925",
"21296708",
"23056174",
"19357301",
"22701580",
"31913322",
"21867251",
"34375835",
"33397941",
"32742052",
"23642247",
"9623998",
"26045330",
"27295638"
] | [
{
"pmid": "20044936",
"title": "Bacteremia causes hippocampal apoptosis in experimental pneumococcal meningitis.",
"abstract": "BACKGROUND\nBacteremia and systemic complications both play important roles in brain pathophysiological alterations and the outcome of pneumococcal meningitis. Their individual contributions to the development of brain damage, however, still remain to be defined.\n\n\nMETHODS\nUsing an adult rat pneumococcal meningitis model, the impact of bacteremia accompanying meningitis on the development of hippocampal injury was studied. The study comprised of the three groups: I. Meningitis (n = 11), II. meningitis with attenuated bacteremia resulting from iv injection of serotype-specific pneumococcal antibodies (n = 14), and III. uninfected controls (n = 6).\n\n\nRESULTS\nPneumococcal meningitis resulted in a significantly higher apoptosis score 0.22 (0.18-0.35) compared to uninfected controls (0.02 (0.00-0.02), Mann Whitney test, P = 0.0003). Also, meningitis with an attenuation of bacteremia by antibody treatment resulted in significantly reduced apoptosis (0.08 (0.02-0.20), P = 0.01) as compared to meningitis.\n\n\nCONCLUSIONS\nOur results demonstrate that bacteremia accompanying meningitis plays an important role in the development of hippocampal injury in pneumococcal meningitis."
},
{
"pmid": "20018697",
"title": "Multiscale mobility networks and the spatial spreading of infectious diseases.",
"abstract": "Among the realistic ingredients to be considered in the computational modeling of infectious diseases, human mobility represents a crucial challenge both on the theoretical side and in view of the limited availability of empirical data. To study the interplay between short-scale commuting flows and long-range airline traffic in shaping the spatiotemporal pattern of a global epidemic we (i) analyze mobility data from 29 countries around the world and find a gravity model able to provide a global description of commuting patterns up to 300 kms and (ii) integrate in a worldwide-structured metapopulation epidemic model a timescale-separation technique for evaluating the force of infection due to multiscale mobility processes in the disease dynamics. Commuting flows are found, on average, to be one order of magnitude larger than airline flows. However, their introduction into the worldwide model shows that the large-scale pattern of the simulated epidemic exhibits only small variations with respect to the baseline case where only airline traffic is considered. The presence of short-range mobility increases, however, the synchronization of subpopulations in close proximity and affects the epidemic behavior at the periphery of the airline transportation infrastructure. The present approach outlines the possibility for the definition of layered computational approaches where different modeling assumptions and granularities can be used consistently in a unifying multiscale framework."
},
{
"pmid": "20393559",
"title": "Catastrophic cascade of failures in interdependent networks.",
"abstract": "Complex networks have been studied intensively for a decade, but research still focuses on the limited case of a single, non-interacting network. Modern systems are coupled together and therefore should be modelled as interdependent networks. A fundamental property of interdependent networks is that failure of nodes in one network may lead to failure of dependent nodes in other networks. This may happen recursively and can lead to a cascade of failures. In fact, a failure of a very small fraction of nodes in one network may lead to the complete fragmentation of a system of several interdependent networks. A dramatic real-world example of a cascade of failures ('concurrent malfunction') is the electrical blackout that affected much of Italy on 28 September 2003: the shutdown of power stations directly led to the failure of nodes in the Internet communication network, which in turn caused further breakdown of power stations. Here we develop a framework for understanding the robustness of interacting networks subject to such cascading failures. We present exact analytical solutions for the critical fraction of nodes that, on removal, will lead to a failure cascade and to a complete fragmentation of two interdependent networks. Surprisingly, a broader degree distribution increases the vulnerability of interdependent networks to random failure, which is opposite to how a single network behaves. Our findings highlight the need to consider interdependent network properties in designing robust networks."
},
{
"pmid": "25843374",
"title": "Thirteen challenges in modelling plant diseases.",
"abstract": "The underlying structure of epidemiological models, and the questions that models can be used to address, do not necessarily depend on the host organism in question. This means that certain preoccupations of plant disease modellers are similar to those of modellers of diseases in human, livestock and wild animal populations. However, a number of aspects of plant epidemiology are very distinctive, and this leads to specific challenges in modelling plant diseases, which in turn sets a certain agenda for modellers. Here we outline a selection of 13 challenges, specific to plant disease epidemiology, that we feel are important targets for future work."
},
{
"pmid": "33790468",
"title": "High and rising economic costs of biological invasions worldwide.",
"abstract": "Biological invasions are responsible for substantial biodiversity declines as well as high economic losses to society and monetary expenditures associated with the management of these invasions1,2. The InvaCost database has enabled the generation of a reliable, comprehensive, standardized and easily updatable synthesis of the monetary costs of biological invasions worldwide3. Here we found that the total reported costs of invasions reached a minimum of US$1.288 trillion (2017 US dollars) over the past few decades (1970-2017), with an annual mean cost of US$26.8 billion. Moreover, we estimate that the annual mean cost could reach US$162.7 billion in 2017. These costs remain strongly underestimated and do not show any sign of slowing down, exhibiting a consistent threefold increase per decade. We show that the documented costs are widely distributed and have strong gaps at regional and taxonomic scales, with damage costs being an order of magnitude higher than management expenditures. Research approaches that document the costs of biological invasions need to be further improved. Nonetheless, our findings call for the implementation of consistent management actions and international policy agreements that aim to reduce the burden of invasive alien species."
},
{
"pmid": "22611848",
"title": "Modeling range dynamics in heterogeneous landscapes: invasion of the hemlock woolly adelgid in eastern North America.",
"abstract": "Range expansion by native and exotic species will continue to be a major component of global change. Anticipating the potential effects of changes in species distributions requires models capable of forecasting population spread across realistic, heterogeneous landscapes and subject to spatiotemporal variability in habitat suitability. Several decades of theory and model development, as well as increased computing power and availability of fine-resolution GIS data, now make such models possible. Still unanswered, however, is the question of how well this new generation of dynamic models will anticipate range expansion. Here we develop a spatially explicit stochastic model that combines dynamic dispersal and population processes with fine-resolution maps characterizing spatiotemporal heterogeneity in climate and habitat to model range expansion of the hemlock woolly adelgid (HWA; Adelges tsugae). We parameterize this model using multiyear data sets describing population and dispersal dynamics of HWA and apply it to eastern North America over a 57-year period (1951-2008). To evaluate the model, the observed pattern of spread of HWA during this same period was compared to model predictions. Our model predicts considerable heterogeneity in the risk of HWA invasion across space and through time, and it suggests that spatiotemporal variation in winter temperature, rather than hemlock abundance, exerts a primary control on the spread of HWA. Although the simulations generally matched the observed current extent of the invasion of HWA and patterns of anisotropic spread, it did not correctly predict when HWA was observed to arrive in different geographic regions. We attribute differences between the modeled and observed dynamics to an inability to capture the timing and direction of long-distance dispersal events that substantially affected the ensuing pattern of spread."
},
{
"pmid": "26955074",
"title": "Ecological Networks in Stored Grain: Key Postharvest Nodes for Emerging Pests, Pathogens, and Mycotoxins.",
"abstract": "Wheat is at peak quality soon after harvest. Subsequently, diverse biota use wheat as a resource in storage, including insects and mycotoxin-producing fungi. Transportation networks for stored grain are crucial to food security and provide a model system for an analysis of the population structure, evolution, and dispersal of biota in networks. We evaluated the structure of rail networks for grain transport in the United States and Eastern Australia to identify the shortest paths for the anthropogenic dispersal of pests and mycotoxins, as well as the major sources, sinks, and bridges for movement. We found important differences in the risk profile in these two countries and identified priority control points for sampling, detection, and management. An understanding of these key locations and roles within the network is a new type of basic research result in postharvest science and will provide insights for the integrated pest management of high-risk subpopulations, such as pesticide-resistant insect pests."
},
{
"pmid": "24386377",
"title": "Extinction times of epidemic outbreaks in networks.",
"abstract": "In the Susceptible-Infectious-Recovered (SIR) model of disease spreading, the time to extinction of the epidemics happens at an intermediate value of the per-contact transmission probability. Too contagious infections burn out fast in the population. Infections that are not contagious enough die out before they spread to a large fraction of people. We characterize how the maximal extinction time in SIR simulations on networks depend on the network structure. For example we find that the average distances in isolated components, weighted by the component size, is a good predictor of the maximal time to extinction. Furthermore, the transmission probability giving the longest outbreaks is larger than, but otherwise seemingly independent of, the epidemic threshold."
},
{
"pmid": "34151134",
"title": "PolSIRD: Modeling Epidemic Spread Under Intervention Policies: Analyzing the First Wave of COVID-19 in the USA.",
"abstract": "Epidemic spread in a population is traditionally modeled via compartmentalized models which represent the free evolution of disease in the absence of any intervention policies. In addition, these models assume full observability of disease cases and do not account for under-reporting. We present a mathematical model, namely PolSIRD, which accounts for the under-reporting by introducing an observation mechanism. It also captures the effects of intervention policies on the disease spread parameters by leveraging intervention policy data along with the reported disease cases. Furthermore, we allow our recurrent model to learn the initial hidden state of all compartments end-to-end along with other parameters via gradient-based training. We apply our model to the spread of the recent global outbreak of COVID-19 in the USA, where our model outperforms the methods employed by the CDC in predicting the spread. We also provide counterfactual simulations from our model to analyze the effect of lifting the intervention policies prematurely and our model correctly predicts the second wave of the epidemic."
},
{
"pmid": "29311619",
"title": "In situ immune response and mechanisms of cell damage in central nervous system of fatal cases microcephaly by Zika virus.",
"abstract": "Zika virus (ZIKV) has recently caused a pandemic disease, and many cases of ZIKV infection in pregnant women resulted in abortion, stillbirth, deaths and congenital defects including microcephaly, which now has been proposed as ZIKV congenital syndrome. This study aimed to investigate the in situ immune response profile and mechanisms of neuronal cell damage in fatal Zika microcephaly cases. Brain tissue samples were collected from 15 cases, including 10 microcephalic ZIKV-positive neonates with fatal outcome and five neonatal control flavivirus-negative neonates that died due to other causes, but with preserved central nervous system (CNS) architecture. In microcephaly cases, the histopathological features of the tissue samples were characterized in three CNS areas (meninges, perivascular space, and parenchyma). The changes found were mainly calcification, necrosis, neuronophagy, gliosis, microglial nodules, and inflammatory infiltration of mononuclear cells. The in situ immune response against ZIKV in the CNS of newborns is complex. Despite the predominant expression of Th2 cytokines, other cytokines such as Th1, Th17, Treg, Th9, and Th22 are involved to a lesser extent, but are still likely to participate in the immunopathogenic mechanisms of neural disease in fatal cases of microcephaly caused by ZIKV."
},
{
"pmid": "25007186",
"title": "Using a network model to assess risk of forest pest spread via recreational travel.",
"abstract": "Long-distance dispersal pathways, which frequently relate to human activities, facilitate the spread of alien species. One pathway of concern in North America is the possible spread of forest pests in firewood carried by visitors to campgrounds or recreational facilities. We present a network model depicting the movement of campers and, by extension, potentially infested firewood. We constructed the model from US National Recreation Reservation Service data documenting more than seven million visitor reservations (including visitors from Canada) at campgrounds nationwide. This bi-directional model can be used to identify likely origin and destination locations for a camper-transported pest. To support broad-scale decision making, we used the model to generate summary maps for 48 US states and seven Canadian provinces that depict the most likely origins of campers traveling from outside the target state or province. The maps generally showed one of two basic spatial patterns of out-of-state (or out-of-province) origin risk. In the eastern United States, the riskiest out-of-state origin locations were usually found in a localized region restricted to portions of adjacent states. In the western United States, the riskiest out-of-state origin locations were typically associated with major urban areas located far from the state of interest. A few states and the Canadian provinces showed characteristics of both patterns. These model outputs can guide deployment of resources for surveillance, firewood inspections, or other activities. Significantly, the contrasting map patterns indicate that no single response strategy is appropriate for all states and provinces. If most out-of-state campers are traveling from distant areas, it may be effective to deploy resources at key points along major roads (e.g., interstate highways), since these locations could effectively represent bottlenecks of camper movement. If most campers are from nearby areas, they may have many feasible travel routes, so a more widely distributed deployment may be necessary."
},
{
"pmid": "34679079",
"title": "PaIRKAT: A pathway integrated regression-based kernel association test with applications to metabolomics and COPD phenotypes.",
"abstract": "High-throughput data such as metabolomics, genomics, transcriptomics, and proteomics have become familiar data types within the \"-omics\" family. For this work, we focus on subsets that interact with one another and represent these \"pathways\" as graphs. Observed pathways often have disjoint components, i.e., nodes or sets of nodes (metabolites, etc.) not connected to any other within the pathway, which notably lessens testing power. In this paper we propose the Pathway Integrated Regression-based Kernel Association Test (PaIRKAT), a new kernel machine regression method for incorporating known pathway information into the semi-parametric kernel regression framework. This work extends previous kernel machine approaches. This paper also contributes an application of a graph kernel regularization method for overcoming disconnected pathways. By incorporating a regularized or \"smoothed\" graph into a score test, PaIRKAT can provide more powerful tests for associations between biological pathways and phenotypes of interest and will be helpful in identifying novel pathways for targeted clinical research. We evaluate this method through several simulation studies and an application to real metabolomics data from the COPDGene study. Our simulation studies illustrate the robustness of this method to incorrect and incomplete pathway knowledge, and the real data analysis shows meaningful improvements of testing power in pathways. PaIRKAT was developed for application to metabolomic pathway data, but the techniques are easily generalizable to other data sources with a graph-like structure."
},
{
"pmid": "19792216",
"title": "Spectral and dynamical properties in classes of sparse networks with mesoscopic inhomogeneities.",
"abstract": "We study structure, eigenvalue spectra, and random-walk dynamics in a wide class of networks with subgraphs (modules) at mesoscopic scale. The networks are grown within the model with three parameters controlling the number of modules, their internal structure as scale-free and correlated subgraphs, and the topology of connecting network. Within the exhaustive spectral analysis for both the adjacency matrix and the normalized Laplacian matrix we identify the spectral properties, which characterize the mesoscopic structure of sparse cyclic graphs and trees. The minimally connected nodes, the clustering, and the average connectivity affect the central part of the spectrum. The number of distinct modules leads to an extra peak at the lower part of the Laplacian spectrum in cyclic graphs. Such a peak does not occur in the case of topologically distinct tree subgraphs connected on a tree whereas the associated eigenvectors remain localized on the subgraphs both in trees and cyclic graphs. We also find a characteristic pattern of periodic localization along the chains on the tree for the eigenvector components associated with the largest eigenvalue lambda(L)=2 of the Laplacian. Further differences between the cyclic modular graphs and trees are found by the statistics of random walks return times and hitting patterns at nodes on these graphs. The distribution of first-return times averaged over all nodes exhibits a stretched exponential tail with the exponent sigma approximately 1/3 for trees and sigma approximately 2/3 for cyclic graphs, which is independent of their mesoscopic and global structure."
},
{
"pmid": "31615355",
"title": "Assessing the multi-pathway threat from an invasive agricultural pest: Tuta absoluta in Asia.",
"abstract": "Modern food systems facilitate rapid dispersal of pests and pathogens through multiple pathways. The complexity of spread dynamics and data inadequacy make it challenging to model the phenomenon and also to prepare for emerging invasions. We present a generic framework to study the spatio-temporal spread of invasive species as a multi-scale propagation process over a time-varying network accounting for climate, biology, seasonal production, trade and demographic information. Machine learning techniques are used in a novel manner to capture model variability and analyse parameter sensitivity. We applied the framework to understand the spread of a devastating pest of tomato, Tuta absoluta, in South and Southeast Asia, a region at the frontier of its current range. Analysis with respect to historical invasion records suggests that even with modest self-mediated spread capabilities, the pest can quickly expand its range through domestic city-to-city vegetable trade. Our models forecast that within 5-7 years, Tuta absoluta will invade all major vegetable growing areas of mainland Southeast Asia assuming unmitigated spread. Monitoring high-consumption areas can help in early detection, and targeted interventions at major production areas can effectively reduce the rate of spread."
},
{
"pmid": "19688925",
"title": "Predicting Argentine ant spread over the heterogeneous landscape using a spatially explicit stochastic model.",
"abstract": "The characteristics of spread for an invasive species should influence how environmental authorities or government agencies respond to an initial incursion. High-resolution predictions of how, where, and the speed at which a newly established invasive population will spread across the surrounding heterogeneous landscape can greatly assist appropriate and timely risk assessments and control decisions. The Argentine ant (Linepithema humile) is a worldwide invasive species that was inadvertently introduced to New Zealand in 1990. In this study, a spatially explicit stochastic simulation model of species dispersal, integrated with a geographic information system, was used to recreate the historical spread of L. humile in New Zealand. High-resolution probabilistic maps simulating local and human-assisted spread across large geographic regions were used to predict dispersal rates and pinpoint at-risk areas. The spatially explicit simulation model was compared with a uniform radial spread model with respect to predicting the observed spread of the Argentine ant. The uniform spread model was more effective predicting the observed populations early in the invasion process, but the simulation model was more successful later in the simulation. Comparison between the models highlighted that different search strategies may be needed at different stages in an invasion to optimize detection and indicates the influence that landscape suitability can have on the long-term spread of an invasive species. The modeling and predictive mapping methodology used can improve efforts to predict and evaluate species spread, not only in invasion biology, but also in conservation biology, diversity studies, and climate change studies."
},
{
"pmid": "21296708",
"title": "Video time encoding machines.",
"abstract": "We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value."
},
{
"pmid": "23056174",
"title": "A suite of models to support the quantitative assessment of spread in pest risk analysis.",
"abstract": "Pest Risk Analyses (PRAs) are conducted worldwide to decide whether and how exotic plant pests should be regulated to prevent invasion. There is an increasing demand for science-based risk mapping in PRA. Spread plays a key role in determining the potential distribution of pests, but there is no suitable spread modelling tool available for pest risk analysts. Existing models are species specific, biologically and technically complex, and data hungry. Here we present a set of four simple and generic spread models that can be parameterised with limited data. Simulations with these models generate maps of the potential expansion of an invasive species at continental scale. The models have one to three biological parameters. They differ in whether they treat spatial processes implicitly or explicitly, and in whether they consider pest density or pest presence/absence only. The four models represent four complementary perspectives on the process of invasion and, because they have different initial conditions, they can be considered as alternative scenarios. All models take into account habitat distribution and climate. We present an application of each of the four models to the western corn rootworm, Diabrotica virgifera virgifera, using historic data on its spread in Europe. Further tests as proof of concept were conducted with a broad range of taxa (insects, nematodes, plants, and plant pathogens). Pest risk analysts, the intended model users, found the model outputs to be generally credible and useful. The estimation of parameters from data requires insights into population dynamics theory, and this requires guidance. If used appropriately, these generic spread models provide a transparent and objective tool for evaluating the potential spread of pests in PRAs. Further work is needed to validate models, build familiarity in the user community and create a database of species parameters to help realize their potential in PRA practice."
},
{
"pmid": "19357301",
"title": "Extracting the multiscale backbone of complex weighted networks.",
"abstract": "A large number of complex systems find a natural abstraction in the form of weighted networks whose nodes represent the elements of the system and the weighted edges identify the presence of an interaction and its relative strength. In recent years, the study of an increasing number of large-scale networks has highlighted the statistical heterogeneity of their interaction pattern, with degree and weight distributions that vary over many orders of magnitude. These features, along with the large number of elements and links, make the extraction of the truly relevant connections forming the network's backbone a very challenging problem. More specifically, coarse-graining approaches and filtering techniques come into conflict with the multiscale nature of large-scale systems. Here, we define a filtering method that offers a practical procedure to extract the relevant connection backbone in complex multiscale networks, preserving the edges that represent statistically significant deviations with respect to a null model for the local assignment of weights to edges. An important aspect of the method is that it does not belittle small-scale interactions and operates at all scales defined by the weight distribution. We apply our method to real-world network instances and compare the obtained results with alternative backbone extraction techniques."
},
{
"pmid": "22701580",
"title": "Identifying highly connected counties compensates for resource limitations when evaluating national spread of an invasive pathogen.",
"abstract": "Surveying invasive species can be highly resource intensive, yet near-real-time evaluations of invasion progress are important resources for management planning. In the case of the soybean rust invasion of the United States, a linked monitoring, prediction, and communication network saved U.S. soybean growers approximately $200 M/yr. Modeling of future movement of the pathogen (Phakopsora pachyrhizi) was based on data about current disease locations from an extensive network of sentinel plots. We developed a dynamic network model for U.S. soybean rust epidemics, with counties as nodes and link weights a function of host hectarage and wind speed and direction. We used the network model to compare four strategies for selecting an optimal subset of sentinel plots, listed here in order of increasing performance: random selection, zonal selection (based on more heavily weighting regions nearer the south, where the pathogen overwinters), frequency-based selection (based on how frequently the county had been infected in the past), and frequency-based selection weighted by the node strength of the sentinel plot in the network model. When dynamic network properties such as node strength are characterized for invasive species, this information can be used to reduce the resources necessary to survey and predict invasion progress."
},
{
"pmid": "31913322",
"title": "Re-epithelialization and immune cell behaviour in an ex vivo human skin model.",
"abstract": "A large body of literature is available on wound healing in humans. Nonetheless, a standardized ex vivo wound model without disruption of the dermal compartment has not been put forward with compelling justification. Here, we present a novel wound model based on application of negative pressure and its effects for epidermal regeneration and immune cell behaviour. Importantly, the basement membrane remained intact after blister roof removal and keratinocytes were absent in the wounded area. Upon six days of culture, the wound was covered with one to three-cell thick K14+Ki67+ keratinocyte layers, indicating that proliferation and migration were involved in wound closure. After eight to twelve days, a multi-layered epidermis was formed expressing epidermal differentiation markers (K10, filaggrin, DSG-1, CDSN). Investigations about immune cell-specific manners revealed more T cells in the blister roof epidermis compared to normal epidermis. We identified several cell populations in blister roof epidermis and suction blister fluid that are absent in normal epidermis which correlated with their decrease in the dermis, indicating a dermal efflux upon negative pressure. Together, our model recapitulates the main features of epithelial wound regeneration, and can be applied for testing wound healing therapies and investigating underlying mechanisms."
},
{
"pmid": "21867251",
"title": "Decreasing the spectral radius of a graph by link removals.",
"abstract": "The decrease of the spectral radius, an important characterizer of network dynamics, by removing links is investigated. The minimization of the spectral radius by removing m links is shown to be an NP-complete problem, which suggests considering heuristic strategies. Several greedy strategies are compared, and several bounds on the decrease of the spectral radius are derived. The strategy that removes that link l=i~j with largest product (x(1))(i)(x(1))(j) of the components of the eigenvector x(1) belonging to the largest adjacency eigenvalue is shown to be superior to other strategies in most cases. Furthermore, a scaling law where the decrease in spectral radius is inversely proportional to the number of nodes N in the graph is deduced. Another sublinear scaling law of the decrease in spectral radius versus the number m of removed links is conjectured."
},
{
"pmid": "34375835",
"title": "The effect of UV exposure on conventional and degradable microplastics adsorption for Pb (II) in sediment.",
"abstract": "Plastic discharged into the environment would break down into microplastics (MPs). However, the possible impact of MPs on heavy metals in the aquatic sediment remains unknown. In order to evaluate the potential role of MPs as carriers of coexisting pollutants, the adsorption capacity of lead ions from sediment onto aged degradable and conventional MPs were investigated as a function of lead ions concentration, contact time, temperature, MPs dosage, aging time, and fulvic acid concentration. MPs were exposed to UV to obtain aged polyethylene (A-PE) and aged polylactic acid (A-PLA). The aging treatment increased the oxygen content, specific surface area and hydrophilicity of MPs. The adsorption capacity of A-PE for Pb(II) in sediment increased from 10.1525 to 10.4642 mg g-1 with the increasing aging time. However, the adsorption capacity of A-PLA for Pb(II) in sediment decreased from 9.3199 to 8.7231 mg g-1 with the increasing aging time. The adsorption capacity of MPs in sediment for Pb(II) was in the following order: A-PE > PLA > PE > A-PLA. Fulvic acid could promote the adsorption of Pb(II) by MPs in sediment. These results indicated that the aging process of the plastics in the environment would affect its role as a carrier of coexisting pollutants."
},
{
"pmid": "33397941",
"title": "Anomalous collapses of Nares Strait ice arches leads to enhanced export of Arctic sea ice.",
"abstract": "The ice arches that usually develop at the northern and southern ends of Nares Strait play an important role in modulating the export of Arctic Ocean multi-year sea ice. The Arctic Ocean is evolving towards an ice pack that is younger, thinner, and more mobile and the fate of its multi-year ice is becoming of increasing interest. Here, we use sea ice motion retrievals from Sentinel-1 imagery to report on the recent behavior of these ice arches and the associated ice fluxes. We show that the duration of arch formation has decreased over the past 20 years, while the ice area and volume fluxes along Nares Strait have both increased. These results suggest that a transition is underway towards a state where the formation of these arches will become atypical with a concomitant increase in the export of multi-year ice accelerating the transition towards a younger and thinner Arctic ice pack."
},
{
"pmid": "32742052",
"title": "Modeling Commodity Flow in the Context of Invasive Species Spread: Study of Tuta absoluta in Nepal.",
"abstract": "Trade and transport of goods is widely accepted as a primary pathway for the introduction and dispersal of invasive species. However, understanding commodity flows remains a challenge owing to its complex nature, unavailability of quality data, and lack of systematic modeling methods. A robust network-based approach is proposed to model seasonal flow of agricultural produce and examine its role in pest spread. It is applied to study the spread of Tuta absoluta, a devastating pest of tomato in Nepal. Further, the long-term establishment potential of the pest and its economic impact on the country are assessed. Our analysis indicates that regional trade plays an important role in the spread of T. absoluta. The economic impact of this invasion could range from USD 17-25 million. The proposed approach is generic and particularly suited for data-poor scenarios."
},
{
"pmid": "23642247",
"title": "Multiscale computational models of complex biological systems.",
"abstract": "Integration of data across spatial, temporal, and functional scales is a primary focus of biomedical engineering efforts. The advent of powerful computing platforms, coupled with quantitative data from high-throughput experimental methodologies, has allowed multiscale modeling to expand as a means to more comprehensively investigate biological phenomena in experimentally relevant ways. This review aims to highlight recently published multiscale models of biological systems, using their successes to propose the best practices for future model development. We demonstrate that coupling continuous and discrete systems best captures biological information across spatial scales by selecting modeling techniques that are suited to the task. Further, we suggest how to leverage these multiscale models to gain insight into biological systems using quantitative biomedical engineering methods to analyze data in nonintuitive ways. These topics are discussed with a focus on the future of the field, current challenges encountered, and opportunities yet to be realized."
},
{
"pmid": "9623998",
"title": "Collective dynamics of 'small-world' networks.",
"abstract": "Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks 'rewired' to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them 'small-world' networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation. The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices."
},
{
"pmid": "26045330",
"title": "Modeling seasonal migration of fall armyworm moths.",
"abstract": "Fall armyworm, Spodoptera frugiperda (J.E. Smith), is a highly mobile insect pest of a wide range of host crops. However, this pest of tropical origin cannot survive extended periods of freezing temperature but must migrate northward each spring if it is to re-infest cropping areas in temperate regions. The northward limit of the winter-breeding region for North America extends to southern regions of Texas and Florida, but infestations are regularly reported as far north as Québec and Ontario provinces in Canada by the end of summer. Recent genetic analyses have characterized migratory pathways from these winter-breeding regions, but knowledge is lacking on the atmosphere's role in influencing the timing, distance, and direction of migratory flights. The Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model was used to simulate migratory flight of fall armyworm moths from distinct winter-breeding source areas. Model simulations identified regions of dominant immigration from the Florida and Texas source areas and overlapping immigrant populations in the Alabama-Georgia and Pennsylvania-Mid-Atlantic regions. This simulated migratory pattern corroborates a previous migratory map based on the distribution of fall armyworm haplotype profiles. We found a significant regression between the simulated first week of moth immigration and first week of moth capture (for locations which captured ≥ 10 moths), which on average indicated that the model simulated first immigration 2 weeks before first captures in pheromone traps. The results contribute to knowledge of fall armyworm population ecology on a continental scale and will aid in the prediction and interpretation of inter-annual variability of insect migration patterns including those in response to climatic change and adoption rates of transgenic cultivars."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
}
] |
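The Watts–Strogatz abstract above describes tuning a regular ring lattice toward a random graph by rewiring a fraction of its edges. As a purely illustrative aside (not part of the cited paper), the sketch below reproduces the basic comparison with networkx; the graph size, neighbourhood size, and rewiring probability are arbitrary example values.

```python
# Illustrative only: build a ring lattice and a slightly rewired ("small-world")
# graph, then compare clustering and characteristic path length (assumes networkx).
import networkx as nx

n, k = 1000, 10                     # nodes and nearest neighbours (example values)
lattice = nx.connected_watts_strogatz_graph(n, k, p=0.0, seed=1)      # regular ring lattice
small_world = nx.connected_watts_strogatz_graph(n, k, p=0.1, seed=1)  # ~10% of edges rewired

for name, g in [("lattice (p=0.0)", lattice), ("small-world (p=0.1)", small_world)]:
    print(name,
          "| clustering C =", round(nx.average_clustering(g), 3),
          "| path length L =", round(nx.average_shortest_path_length(g), 2))
```

Even a small rewiring probability should leave clustering close to the lattice value while sharply reducing the characteristic path length, which is the effect the abstract describes.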
Frontiers in Psychology | null | PMC8860084 | 10.3389/fpsyg.2022.838723 | Validation of the Double Mediation Model of Workplace Well-Being on the Subjective Well-Being of Technological Employees | In recent years, workplace well-being has been a popular research topic, because it is helpful to promote employees’ welfare, thereby bringing valuable personal and organizational outcomes. With the development of technology, the technology industry plays an important role in Taiwan. Although the salary and benefits provided by the technology industry are better than other industries, the work often requires a lot of time and effort. It is worth paying attention to whether a happy workplace will bring subjective well-being for the technology industry in Taiwan. This study explored the influence of workplace well-being, job involvement, and flow on the subjective well-being. The research was conducted by a questionnaire survey. A total of 256 employees in the technology industry in the Nanzi Processing Zone in Kaohsiung City, Taiwan were surveyed. Collected data were analyzed by statistical methods, such as multivariate and structural equation models. The study results indicated that workplace well-being, flow, and job involvement have a positive and significant impact on the subjective well-being. In addition to having a direct impact on subjective well-being, flow is also a significant variable to mediate the impact of workplace well-being to subjective well-being. In addition, job involvement also affects subjective well-being through flow, which means that the state of selflessness at work is the most important factor affecting subjective well-being. Finally, based on the research findings, the researcher provided practical suggestions to the government and the technology industry. | Related Works of Workplace Well-Being and FlowThere is a correlation between workplace well-being and mind flow. Wok and Hashim (2015) suggested that the focus of workplace well-being is on the individual’s experience of the work environment. The concept is therefore similar to flow, as the components of flow include clear goals, clear feedback, a sense of control, etc. All of these are consistent with the environment and conditions offered by workplace well-being. That is when an organization can provide enough conditions to make people feel well-being, then more employees have no worries in this environment. The more employees can show their best working ability, the more they can naturally experience joyful emotions at work, concentrate on work, have fun at work and forget about the passage of time.Scholars pointed out that a happy and healthy workplace allows employees to develop themselves, interact interpersonally, and use skills. A good workplace should have reasonable goals, a balance of personal safety, supportive supervision, adequate rewards for work, and meaningful work development. When employees work in such a workplace, they should be able to devote themselves. Therefore, if employees in the technology industry could experience workplace well-being, they would spend most of their time at work. This study proposes the hypothesis H5 as:H5: Workplace well-being has a positive and significant impact on flow. | [
"19271845",
"25602273",
"31984042",
"32605260",
"20157642",
"26551214",
"20179778"
] | [
{
"pmid": "19271845",
"title": "Reporting practices in confirmatory factor analysis: an overview and some recommendations.",
"abstract": "Reporting practices in 194 confirmatory factor analysis studies (1,409 factor models) published in American Psychological Association journals from 1998 to 2006 were reviewed and compared with established reporting guidelines. Three research questions were addressed: (a) how do actual reporting practices compare with published guidelines? (b) how do researchers report model fit in light of divergent perspectives on the use of ancillary fit indices (e.g., L.-T. Hu & P. M. Bentler, 1999; H. W. Marsh, K.-T., Hau, & Z. Wen, 2004)? and (c) are fit measures that support hypothesized models reported more often than fit measures that are less favorable? Results indicate some positive findings with respect to reporting practices including proposing multiple models a priori and near universal reporting of the chi-square significance test. However, many deficiencies were found such as lack of information regarding missing data and assessment of normality. Additionally, the authors found increases in reported values of some incremental fit statistics and no statistically significant evidence that researchers selectively report measures of fit that support their preferred model. Recommendations for reporting are summarized and a checklist is provided to help editors, reviewers, and authors improve reporting practices."
},
{
"pmid": "25602273",
"title": "Leisure engagement and subjective well-being: A meta-analysis.",
"abstract": "Numerous studies show a link between leisure engagement and subjective well-being (SWB). Drawing on common experiential features of leisure, psychological need theories, and bottom-up models of SWB, we suggest that leisure engagement influences SWB via leisure satisfaction. We examine the proposed cross-sectional relations and mediation model using random-effects meta-analyses that include all available populations. To provide a stronger test of causal influence, we also examine longitudinal relations between leisure satisfaction and SWB and effects of experimental leisure interventions on SWB using random effects meta-analyses of all available populations. Findings based on 37 effect sizes and 11,834 individuals reveal that leisure engagement and SWB are moderately associated (inverse-variance weighted r = .26) and mediated by leisure satisfaction. Cross-lagged regression analyses of longitudinal studies, controlling for prior SWB, reveal bottom-up effects of leisure satisfaction on SWB (β = .15) and top-down effects of SWB on leisure satisfaction (β = .16). Experimental studies reveal that leisure interventions enhance SWB (d = 1.02). Compared with working samples, retired samples exhibit a stronger relation between leisure engagement and SWB, and between leisure satisfaction and SWB. Measures of the frequency and diversity of leisure engagement are more strongly associated with SWB than measures of time spent in leisure. Overall, although not minimizing top-down influences, results are consistent with bottom-up models of SWB and suggest that the leisure domain is a potentially important target for enhancing SWB."
},
{
"pmid": "31984042",
"title": "A Happy Life: Exploring How Job Stress, Job Involvement, and Job Satisfaction Are Related to the Life Satisfaction of Chinese Prison Staff.",
"abstract": "Working in prisons is a demanding career. While a growing number of studies have explored the predictors of job stress, job involvement, and job satisfaction, very few studies have examined how job stress, job involvement, and job satisfaction effect prison staff life satisfaction. Moreover, past studies on prison staff life satisfaction have all been conducted among those working in the United States. The current study examined how job stress, job involvement and job satisfaction were associated with satisfaction with life among surveyed staff at two Chinese prisons. Job involvement and job satisfaction had positive effects on life satisfaction, while job stress had a negative effect."
},
{
"pmid": "32605260",
"title": "How Can Work Addiction Buffer the Influence of Work Intensification on Workplace Well-Being? The Mediating Role of Job Crafting.",
"abstract": "Despite growing attention to the phenomenon of intensified job demand in the workplace, empirical research investigating the underlying behavioral mechanisms that link work intensification to workplace well-being is limited. In particular, a study on whether these behavioral mechanisms are dependent on certain type of individual difference is absent. Using data collected from 356 Chinese health care professionals, this study utilized a dual-path moderated mediation model to investigate the mediating role of job crafting behavior between work intensification and workplace well-being, and the moderating role of work addiction on this indirect path. The results demonstrated that although work intensification was negatively associated with workplace well-being, this effect was more likely to take place for non-workaholics. Specifically, compared with non-workaholics, workaholics were more prone to engage in job crafting behavior in terms of seeking resources and crafting towards strengths, and therefore less likely to have reduced well-being experience. Results are discussed in terms of their implications for research and practice."
},
{
"pmid": "20157642",
"title": "Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods.",
"abstract": "The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal distribution. This article uses a simulation study to demonstrate that confidence limits are imbalanced because the distribution of the indirect effect is normal only in special cases. Two alternatives for improving the performance of confidence limits for the indirect effect are evaluated: (a) a method based on the distribution of the product of two normal random variables, and (b) resampling methods. In Study 1, confidence limits based on the distribution of the product are more accurate than methods based on an assumed normal distribution but confidence limits are still imbalanced. Study 2 demonstrates that more accurate confidence limits are obtained using resampling methods, with the bias-corrected bootstrap the best method overall."
},
{
"pmid": "26551214",
"title": "The Impact of Job Involvement on Emotional Labor to Customer-Oriented Behavior: An Empirical Study of Hospital Nurses.",
"abstract": "BACKGROUND\nHealthcare is a profession that requires a high level of emotional labor (EL). Nurses provide frontline services in hospitals and thus typically experience high levels of EL. The quality of services that nurses provide impacts on how patients evaluate the service quality of hospitals.\n\n\nPURPOSE\nThe aim of this study is to explore the relationships among EL, job involvement (JI), and customer-oriented behavior (COB) in the context of the nursing profession.\n\n\nMETHODS\nThe participants in this study were nurses at eight hospitals, all located in Taiwan. This study used a self-reporting questionnaire. Research data were gathered at two discrete periods (A and B). Questionnaire A collected data on EL and JI, and Questionnaire B collected data on COB. Five hundred questionnaires were sent out to qualified participants, and 472 valid questionnaires were returned. Hierarchical regression analysis was used to test the hypotheses.\n\n\nRESULTS\nThe expression of positive emotion (EPE) and the suppression of negative emotion (SNE) were found to positively affect the patient-oriented COB. Furthermore, the EPE was found to positively affect the task-oriented COB. In terms of the moderating effect of JI, JI was found to relate positively to the EPE, patient-oriented COB, and task-oriented COB. In addition, higher values of JI were found to weaken the relationship between the SNE and the task-oriented COB.\n\n\nCONCLUSIONS/IMPLICATIONS FOR PRACTICE\nIt has become an increasingly popular practice for hospital organizations to work to promote the COB of their nursing staffs. The results of this study prove empirically that a relationship exists among EL, COB, and JI in nurses. This study contributes to the related literature, enhances the knowledge of hospital and nursing administrators with regard to EL and COB, and offers a reference for hospital managers who are responsible for designing and executing multidisciplinary programs and for managing hospital-based human resources."
},
{
"pmid": "20179778",
"title": "Resampling and Distribution of the Product Methods for Testing Indirect Effects in Complex Models.",
"abstract": "Recent advances in testing mediation have found that certain resampling methods and tests based on the mathematical distribution of 2 normal random variables substantially outperform the traditional z test. However, these studies have primarily focused only on models with a single mediator and 2 component paths. To address this limitation, a simulation was conducted to evaluate these alternative methods in a more complex path model with multiple mediators and indirect paths with 2 and 3 paths. Methods for testing contrasts of 2 effects were evaluated also. The simulation included 1 exogenous independent variable, 3 mediators and 2 outcomes and varied sample size, number of paths in the mediated effects, test used to evaluate effects, effect sizes for each path, and the value of the contrast. Confidence intervals were used to evaluate the power and Type I error rate of each method, and were examined for coverage and bias. The bias-corrected bootstrap had the least biased confidence intervals, greatest power to detect nonzero effects and contrasts, and the most accurate overall Type I error. All tests had less power to detect 3-path effects and more inaccurate Type I error compared to 2-path effects. Confidence intervals were biased for mediated effects, as found in previous studies. Results for contrasts did not vary greatly by test, although resampling approaches had somewhat greater power and might be preferable because of ease of use and flexibility."
}
] |
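The related-work section and the resampling references above concern testing indirect (mediated) effects such as workplace well-being → flow → subjective well-being. The sketch below is a minimal percentile-bootstrap version of that test in Python on synthetic data; the variable names, sample size, and effect sizes are illustrative assumptions, and the cited papers discuss refinements such as the bias-corrected bootstrap.

```python
# Minimal percentile-bootstrap CI for an indirect effect a*b in a simple
# mediation model X -> M -> Y. All data and names here are synthetic examples.
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(X, M, Y):
    # a: OLS slope of M on X (with intercept)
    a = np.linalg.lstsq(np.column_stack([np.ones_like(X), X]), M, rcond=None)[0][1]
    # b: effect of M on Y, controlling for X
    b = np.linalg.lstsq(np.column_stack([np.ones_like(X), X, M]), Y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(X, M, Y, n_boot=2000, alpha=0.05):
    n = len(X)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        stats[i] = indirect_effect(X[idx], M[idx], Y[idx])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

n = 256                                      # e.g. number of survey respondents
X = rng.normal(size=n)                       # workplace well-being (standardized)
M = 0.5 * X + rng.normal(size=n)             # flow
Y = 0.4 * M + 0.2 * X + rng.normal(size=n)   # subjective well-being
print("a*b =", round(indirect_effect(X, M, Y), 3),
      " 95% CI:", np.round(bootstrap_ci(X, M, Y), 3))
```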
Frontiers in Big Data | null | PMC8860100 | 10.3389/fdata.2021.756041 |
HPTMT Parallel Operators for High Performance Data Science and Data Engineering | Data-intensive applications are becoming commonplace in all science disciplines. They are comprised of a rich set of sub-domains such as data engineering, deep learning, and machine learning. These applications are built around efficient data abstractions and operators that suit the applications of different domains. Often lack of a clear definition of data structures and operators in the field has led to other implementations that do not work well together. The HPTMT architecture that we proposed recently, identifies a set of data structures, operators, and an execution model for creating rich data applications that links all aspects of data engineering and data science together efficiently. This paper elaborates and illustrates this architecture using an end-to-end application with deep learning and data engineering parts working together. Our analysis show that the proposed system architecture is better suited for high performance computing environments compared to the current big data processing systems. Furthermore our proposed system emphasizes the importance of efficient compact data structures such as Apache Arrow tabular data representation defined for high performance. Thus the system integration we proposed scales a sequential computation to a distributed computation retaining optimum performance along with highly usable application programming interface. | 6 Related WorkThere are many efforts to build efficient distributed operators for data science and data engineering. Frameworks like Apache Spark (Zaharia et al., 2016), Apache Flink (Carbone et al., 2015) and Map Reduce (Dean and Ghemawat, 2008) are legacy systems created for data engineering. And many programming models have been developed on top of these big data systems to facilitate data analysis (Belcastro et al., 2019). Later on, these systems adopted the data analytics domain under their umbrella of big data problems. But with the emerging requirement for high-performance computing for data science and data engineering, the existing parallel operators in these frameworks don’t provide adequate performance or flexibility (Elshawi et al., 2018). Frameworks like Pandas McKinney (2011) gained more popularity in the data science community because of their usability. Pandas only provide serial execution, and Dask (Rocklin, 2015) uses it internally (parallel Pandas) to provide parallel operators. Also, it was re-engineered as Modin (Petersohn et al., 2020) to run the dataframe operators in parallel. But these efforts are mainly focused on a driver-based asynchronous execution model, a well-known bottleneck for distributed applications.The majority of the data analytics workloads tend to use data-parallel execution or bulk synchronous parallel (loosely synchronous) mode. This idea originated in 1987 from Fox, G.C. in the article “What Have We Learnt from Using Real Parallel Machines to Solve Real Problems” Fox (1989). Later, a similar idea was published in an article by Valiant, L Valiant (1990) in 1990 which introduced the term “Bulk Synchronous Parallel”. Frameworks like PyTorch (Paszke et al., 2019) adopted this HPC philosophy, and distributed runtimes like Horovod (Sergeev and Del Balso, 2018) generalized this practice for most of the existing deep learning frameworks. 
Around the same time that this philosophy was being adopted, HPC-driven big data systems such as Twister2 (Fox, 2017; Abeykoon et al., 2019; Wickramasinghe et al., 2019) were created to bridge the gap between data engineering and deep learning. However, because of the language boundaries of Java (Ekanayake et al., 2016) and for reasons of usability, native C++-based Python implementations were favoured over JVM-based systems. PyCylon (Abeykoon et al., 2020) dataframes were designed for distributed CPU computation and Cudf (Hernández et al., 2020) dataframes for distributed GPU computation. The seamless integration of data engineering and deep learning became possible with such frameworks, which are nowadays widely used in the data science and data engineering sphere for rapid prototyping and for designing production-friendly applications. | [
"16990858"
] | [
{
"pmid": "16990858",
"title": "The NCI60 human tumour cell line anticancer drug screen.",
"abstract": "The US National Cancer Institute (NCI) 60 human tumour cell line anticancer drug screen (NCI60) was developed in the late 1980s as an in vitro drug-discovery tool intended to supplant the use of transplantable animal tumours in anticancer drug screening. This screening model was rapidly recognized as a rich source of information about the mechanisms of growth inhibition and tumour-cell kill. Recently, its role has changed to that of a service screen supporting the cancer research community. Here I review the development, use and productivity of the screen, highlighting several outcomes that have contributed to advances in cancer chemotherapy."
}
] |
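The related work above contrasts driver-based execution with the bulk synchronous parallel (loosely synchronous) style used by HPC-driven data systems. The following is a minimal illustration of that execution model using mpi4py and pandas, not the PyCylon API; the data, key range, and partition sizes are made up.

```python
# Illustrative BSP-style distributed aggregation (not the PyCylon API):
# every rank works on its own table partition, then one collective step
# combines the partial results. Run with e.g. `mpirun -n 4 python agg.py`.
from mpi4py import MPI
import pandas as pd
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank holds a partition of the table (synthetic data here).
rng = np.random.default_rng(rank)
local = pd.DataFrame({
    "key": rng.integers(0, 10, 100_000),
    "value": rng.random(100_000),
})

# Superstep 1: local computation on the partition.
partial = local.groupby("key")["value"].sum()

# Superstep 2: collective communication, then combine the partials.
partials = comm.allgather(partial)
total = pd.concat(partials).groupby(level=0).sum()

if rank == 0:
    print(total.head())
```

Each process performs a local superstep on its partition and a single collective call merges the partial results, which is the loosely synchronous pattern the section attributes to HPC-style runtimes.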
Frontiers in Robotics and AI | null | PMC8860235 | 10.3389/frobt.2022.813843 | HR1 Robot: An Assistant for Healthcare Applications | According to the World Health Organization1,2
the percentage of healthcare dependent population, such as elderly and people with disabilities, among others, will increase over the next years. This trend will put a strain on the health and social systems of most countries. The adoption of robots could assist these health systems in responding to this increased demand, particularly in high intensity and repetitive tasks. In a previous work, we compared a Socially Assistive Robot (SAR) with a Virtual Agent (VA) during the execution of a rehabilitation task. The SAR consisted of a humanoid R1 robot, while the Virtual Agent represented its simulated counter-part. In both cases, the agents evaluated the participants’ motions and provided verbal feedback. Participants reported higher levels of engagement when training with the SAR. Given that the architecture has been proven to be successful for a rehabilitation task, other sets of repetitive tasks could also take advantage of the platform, such as clinical tests. A commonly performed clinical trial is the Timed Up and Go (TUG), where the patient has to stand up, walk 3 m to a goal line and back, and sit down. To handle this test, we extended the architecture to evaluate lower limbs’ motions, follow the participants while continuously interacting with them, and verify that the test is completed successfully. We implemented the scenario in Gazebo, by simulating both participants and the interaction with the robot
3. A full interactive report is created when the test is over, providing the extracted information to the specialist. We validate the architecture in three different experiments, each with 1,000 trials, using the Gazebo simulation. These experiments evaluate the ability of this architecture to analyse the patient, verify if they are able to complete the TUG test, and the accuracy of the measurements obtained during the test. This work provides the foundations towards more thorough clinical experiments with a large number of participants with a physical platform in the future. The software is publicly available in the assistive-rehab repository4
and fully documented. | 2 Related WorkThe field of robotics in healthcare has been rapidly evolving in the last years, leading to a substantial paradigm shift. An example of this is the Da Vinci system, a surgical robot tele-operated remotely, acting as guidance tool to simultaneously provide information and keep the surgeon on the target (Moran, 2006). This type of robots has been installed and used worldwide. The use of robots for surgery has given rise to a large number of applications for use in the medical domain.In the particular sub-field of Assistive Robotics, robots are endowed with the human capabilities to aid patients and caregivers. The robot RIBA (Robot for Interactive Body Assistance) is designed with the appearance of a giant teddy bear to lift and transfer patients from a bed to a wheelchair and back (Mukai et al., 2011). The caregiver can instruct the robot through vocal commands and tactile guidance: the desired motion is set by directly touching the robot on the part related to the motion. A similar system, RoNA (Robotic Nursing Assistant), executes the same task, but is instead controlled by the operator through an external GUI (Ding et al., 2014). Chaudhary et al. (2021) plan to design a new medical robot, called BAYMAX, which will serve also as personal companion for general healthcare. The robot will be equipped with a head, comprising a camera, microphone and speakers. It will also contain a series of sensors to detect the temperature, the heartbeat and the oxygen level, and will be capable of performing regular basic check-ups, such as temperature, oxygen level check, mask verification, external injuries etc.Assistive robots can also aid patients through social interaction, rather than offering physical support (Feil-Seifer and Mataric, 2005): these are known as Socially Assistive Robots (SAR). The robot’s embodiment positively affects the users’ motivation and performance, through non-contact feedback, encouragement and constant monitoring (Brooks et al., 2012; Li, 2015; Vasco et al., 2019). The Kaspar robot is a child-sized humanoid designed to assist autistic children in learning new social communication skills, while improving their engagement abilities and attention (Wood et al., 2021). The Bandit robot consists of a humanoid torso, developed by BlueSky Robotics, mounted on a Pioneer 2DX mobile base. It has been used for engaging elderly patients in physical exercises (Fasola and Matarić, 2013) and providing therapies to stroke patients (Wade et al., 2011). Szücs et al. (2019) propose a framework which allows the therapist to define a personalized training program for each patient, choosing from a set of pre-defined movements. The software has been developed on the humanoid NAO robot controlled through vocal commands via Android smartphone interface. The same robot has been used in combination with a virtual environment for a rehabilitation task: the patient replicates the movements shown by the robot while visualizing himself inside a gamified virtual world (Ibarra Zannatha et al., 2013). The robot coaches the rehabilitation exercising, while encouraging or correcting the patient verbally. Lunardini et al. (2019), as part of the MoveCare European project, also propose to combine a robot and virtual games, with the aid of smart devices (e.g. smart ball, balance board, insoles) to monitor and assist elderly people at home. A similar combination can be seen in the work of Pham et al. 
(2020) within the CARESSES project, where the authors integrate a Pepper robot with a smart home environment in order to support elderly people.Another example of coaching is the work by Céspedes Gómez et al. (2021) and Irfan et al. (2020), where a study lasting 2 years and 6 months using a NAO robot to coach patients in cardiac rehabilitation proved that patients were more engaged, and generally finished the program earlier, than the ones not followed by a robot. In this work the robot was the means of interaction, while data was collected through multiple sensors, both wearable and external. In particular, in the case study presented in Irfan et al. (2020), the system was instrumental in detecting a critical situation where a patient was not feeling well, alerting the therapists and leading to medical intervention.Previous research has focused on automating the Timed Up and Go using sensors of various modalities or motion tracking systems. Three-dimensional motion capture systems have been used to measure the walking parameters with high reliability (Beerse et al., 2019). They currently represent the gold standard (Kleiner et al., 2018), but because of their cost, scale and lack of convenience, it is difficult to install these devices in community health centers. Wearable sensors based on Inertial Measurement Units (IMUs) have been extensively used in instrumenting the TUG test for their low cost and fast assessment. They have proved to be reliable and accurate in measuring the completion times (Kleiner et al., 2018). However, they require time-consuming wearing and calibration procedures that cannot usually be performed by the patients themselves, especially by those with motor limitations. Moreover, the possibility to accurately evaluate the movement kinematics, in terms of articular joints angles, through the data extracted from IMU is still under debate (Poitras et al., 2019). Ambient sensors, including temperature, infrared motion, light, door, object, and pressure sensors, have also shown promise as they remove the need to instrument the patient. Frenken et al. (2011) equipped a chair with several force sensors to monitor weight distribution and a laser range scanner to estimate the distance the subject covers. However, this system is relatively expensive, requires specialized installation and has limited range of use. Video data have also been heavily explored, as they are minimally invasive, require little setup and no direct contact with the patient. Several works have adopted Kinect sensors and their skeleton tracking modes (Lohmann et al., 2012; Kitsunezaki et al., 2013) and webcams (Berrada et al., 2007). A more thorough analysis of the application of these technologies to the TUG test can be seen on the review by Sprint et al. (2015). | [
"30594868",
"25559550",
"23827333",
"1991946",
"25594979"
] | [
{
"pmid": "30594868",
"title": "Biomechanical analysis of the timed up-and-go (TUG) test in children with and without Down syndrome.",
"abstract": "BACKGROUND\nThe timed up-and-go (TUG) test consists of multiple functional activities of daily living performed in a sequence, with the goal to complete the test as quickly as possible. Considering children with Down syndrome (DS) have been shown to take longer to complete the TUG test, it is imperative to identify which tasks are problematic for this population in order to individualize physical interventions.\n\n\nRESEARCH QUESTION\nIs the biomechanical pattern of each functional task during the TUG test different between children with DS and typically developing (TD) children?\n\n\nMETHODS\nThirteen children with DS and thirteen TD children aged 5-11 years old completed the TUG test. Kinematic data was captured using a Vicon motion capture system. We visually coded the TUG test into five phases: sit-to-stand, walk-out, turn-around, walk-in, and stand-to-sit. We focused on the center-of-mass (COM) movement in the sit-to-stand phase, spatiotemporal parameters in the walk-out phase, and intersegmental coordination in the turn-around phase.\n\n\nRESULTS AND SIGNIFICANCE\nChildren with DS took longer to complete the entire test, as well as each of the five phases. During the sit-to-stand phase, children with DS produced smaller peak vertical COM velocity, medial-lateral COM excursion, and peak knee and hip extension velocity compared to TD peers. Children with DS walked at a slower velocity during the walk-out phase. Both groups demonstrated a similar intersegmental coordination pattern between the head, thorax, and pelvis during the turn-around phase although children with DS had slower average and peak angular velocity at the head, thorax, and pelvis. Our results suggest that children with DS were less able to anticipate transitioning between motor tasks and took longer to initiate motor tasks. Our TUG analysis provides the detailed insights to help evaluate individual motor tasks as well as the transition from one task to another for clinical populations."
},
{
"pmid": "25559550",
"title": "Symptom burden in community-dwelling older people with multimorbidity: a cross-sectional study.",
"abstract": "BACKGROUND\nGlobally, the population is ageing and lives with several chronic diseases for decades. A high symptom burden is associated with a high use of healthcare, admissions to nursing homes, and reduced quality of life. The aims of this study were to describe the multidimensional symptom profile and symptom burden in community-dwelling older people with multimorbidity, and to describe factors related to symptom burden.\n\n\nMETHODS\nA cross-sectional study including 378 community-dwelling people ≥ 75 years, who had been hospitalized ≥ 3 times during the previous year, had ≥ 3 diagnoses in their medical records. The Memorial Symptom Assessment Scale was used to assess the prevalence, frequency, severity, distress and symptom burden of 31 symptoms. A multiple linear regression was performed to identify factors related to total symptom burden.\n\n\nRESULTS\nThe mean number of symptoms per participant was 8.5 (4.6), and the mean total symptom burden score was 0.62 (0.41). Pain was the symptom with the highest prevalence, frequency, severity and distress. Half of the study group reported the prevalence of lack of energy and a dry mouth. Poor vision, likelihood of depression, and diagnoses of the digestive system were independently related to the total symptom burden score.\n\n\nCONCLUSION\nThe older community-dwelling people with multimorbidity in this study suffered from a high symptom burden with a high prevalence of pain. Persons with poor vision, likelihood of depression, and diseases of the digestive system are at risk of a higher total symptom burden and might need age-specific standardized guidelines for appropriate management."
},
{
"pmid": "23827333",
"title": "Development of a system based on 3D vision, interactive virtual environments, ergonometric signals and a humanoid for stroke rehabilitation.",
"abstract": "This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran's Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task, while his avatar copies his gestures that are captured by the Kinect 3D vision system. The information of the patient movements, together with the signals obtained from the ergonometric measurement devices, is used also to supervise and to evaluate the rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment, that uses the same base elements, four game routines--Touch the balls 1 and 2, Simon says, and Follow the point--are used for rehabilitation. These environments are designed to create a positive influence in the rehabilitation process, reduce costs, and engage the patient."
},
{
"pmid": "1991946",
"title": "The timed \"Up & Go\": a test of basic functional mobility for frail elderly persons.",
"abstract": "This study evaluated a modified, timed version of the \"Get-Up and Go\" Test (Mathias et al, 1986) in 60 patients referred to a Geriatric Day Hospital (mean age 79.5 years). The patient is observed and timed while he rises from an arm chair, walks 3 meters, turns, walks back, and sits down again. The results indicate that the time score is (1) reliable (inter-rater and intra-rater); (2) correlates well with log-transformed scores on the Berg Balance Scale (r = -0.81), gait speed (r = -0.61) and Barthel Index of ADL (r = -0.78); and (3) appears to predict the patient's ability to go outside alone safely. These data suggest that the timed \"Up & Go\" test is a reliable and valid test for quantifying functional mobility that may also be useful in following clinical change over time. The test is quick, requires no special equipment or training, and is easily included as part of the routine medical examination."
},
{
"pmid": "25594979",
"title": "Toward Automating Clinical Assessments: A Survey of the Timed Up and Go.",
"abstract": "Older adults often suffer from functional impairments that affect their ability to perform everyday tasks. To detect the onset and changes in abilities, healthcare professionals administer standardized assessments. Recently, technology has been utilized to complement these clinical assessments to gain a more objective and detailed view of functionality. In the clinic and at home, technology is able to provide more information about patient performance and reduce subjectivity in outcome measures. The timed up and go (TUG) test is one such assessment recently instrumented with technology in several studies, yielding promising results toward the future of automating clinical assessments. Potential benefits of technological TUG implementations include additional performance parameters, generated reports, and the ability to be self-administered in the home. In this paper, we provide an overview of the TUG test and technologies utilized for TUG instrumentation. We then critically review the technological advancements and follow up with an evaluation of the benefits and limitations of each approach. Finally, we analyze the gaps in the implementations and discuss challenges for future research toward automated self-administered assessment in the home."
}
] |
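Several of the instrumented Timed Up and Go systems surveyed above derive completion time and phase durations from motion data. The sketch below segments a synthetic distance-from-chair signal into walk-out and return phases; the sampling rate, thresholds, and the signal itself are illustrative assumptions rather than the HR1 pipeline.

```python
# Illustrative segmentation of a Timed Up and Go trial from a sampled
# distance-from-chair signal (metres). All values here are example data.
import numpy as np

fs = 30.0                                  # sampling rate in Hz (assumed)
t = np.arange(0.0, 20.0, 1.0 / fs)
# Synthetic trial: leave the chair, walk out to the 3 m line, walk back, sit.
dist = np.clip(3.0 - 0.6 * np.abs(t - 10.0), 0.0, 3.0)

moving = dist > 0.05                       # distance threshold: off the chair
start = int(np.argmax(moving))             # first sample away from the chair
end = len(moving) - 1 - int(np.argmax(moving[::-1]))  # last sample before sitting
turn = int(np.argmax(dist))                # sample closest to the 3 m line

print("total TUG time : %.2f s" % ((end - start) / fs))
print("walk-out phase : %.2f s" % ((turn - start) / fs))
print("return phase   : %.2f s" % ((end - turn) / fs))
```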
Scientific Data | 35190569 | PMC8861064 | 10.1038/s41597-022-01143-6 | A large-scale study on research code quality and execution | This article presents a study on the quality and execution of research code from publicly-available replication datasets at the Harvard Dataverse repository. Research code is typically created by a group of scientists and published together with academic papers to facilitate research transparency and reproducibility. For this study, we define ten questions to address aspects impacting research reproducibility and reuse. First, we retrieve and analyze more than 2000 replication datasets with over 9000 unique R files published from 2010 to 2020. Second, we execute the code in a clean runtime environment to assess its ease of reuse. Common coding errors were identified, and some of them were solved with automatic code cleaning to aid code execution. We find that 74% of R files failed to complete without error in the initial execution, while 56% failed when code cleaning was applied, showing that many errors can be prevented with good coding practices. We also analyze the replication datasets from journals’ collections and discuss the impact of the journal policy strictness on the code re-execution rate. Finally, based on our results, we propose a set of recommendations for code dissemination aimed at researchers, journals, and repositories. | Related WorkClaims about a reproducibility crisis attracted attention even from the popular media, and many studies on the quality and robustness of research results have been performed in the last decade2,3. Most reproducibility studies were done manually, where researchers tried to reproduce previous work by following its documentation and occasionally contacting original authors. Given that most of the datasets in our study belong to the social sciences, we reference a few reproducibility studies in this domain that emphasize its computational component (i.e., use the same data and code). Chang and Li attempt to reproduce results from 67 papers published in 13 well-regarded economic journals using the deposited supplementary material26. They successfully reproduced 33% of the results without contacting the authors and 43% with the authors’ assistance. Some of the reasons for the reduced reproducibility rate are proprietary software and missing (or sensitive) data. Stodden and collaborators conduct a study reporting on both reproducibility rate and journal policy effectiveness25. They look into 204 scientific papers published in the journal Science, which previously implemented a data sharing policy. The authors report being able to obtain resources from 44% of the papers and reproduce 26% of the findings. They conclude that while a policy represents an improvement, it does not suffice for reproducibility. These studies give strength to our analysis as the success rates are comparable. Furthermore, by examining multiple journals with various data policy strictness, we corroborate the finding that open data policy is an improvement but less effective than code review or verification in enabling code re-execution and reproducibility.Studies that focus primarily on the R programming language have been reported. Konkol and collaborators conducted an online survey among geoscientists to learn of their experience in reproducing prior work47. In addition, they conducted a reproducibility study by collecting papers that included R code and attempting to execute it. 
Among the 146 survey participants, 7% tried to reproduce previous results, and about a quarter of those have done that successfully. For the reproducibility part of the study, Konkol and collaborators use RStudio and a Docker image tailored to the geoscience domain. They report that two studies ran without any issues, 33 had resolvable issues, and two had issues that could not be resolved. For the 15 studies, they contacted the corresponding authors. In total, they encountered 173 issues in 39 papers. While we cannot directly compare the success rate due to the different approaches, we note that much of the reported issues overlap. In particular, issues like a wrong directory, deprecated function, missing library, missing data, and faulty calls that they report are also frequently seen in our study.Large-scale studies have the strength to process hundreds of datasets in the same manner and examine common themes. Our study is loosely inspired by the effort undertaken by Chen48 in his undergraduate coursework, though our implementation, code-cleaning and analysis goals differ. Pimentel and collaborators retrieved over 860,000 Jupyter notebooks from the Github code repository and analyzed their quality and reproducibility49. The study first attempted to prepare the notebooks’ Python environment, which was successful for about 788,813 notebooks. Out of those, 9,982 notebooks exceeded a time limit, while 570,476 failed due to an error. A total of 208,323 of the notebooks finished their execution successfully (24.11%). About 4% re-executed with the same result, which was inferred by comparing it with the existing outputs in the notebook. This result is comparable to the re-execution rate of 27% in our previous analysis of Python code from Harvard Dataverse repository6. We also note that Pimentel and collaborators performed the study on diverse Jupyter notebooks, which often include prototype development and educational coding. Our study is solely based on research code in its final (published) version. The studies are not directly comparable due to the use of different programming languages. However, we achieve a comparable result of 25% when re-executing code without code cleaning. Also, the fact that the most frequent errors relate to the libraries in both studies signals that both programming languages face similar problems in software sustainability and dependency capture. | [
"30571677",
"33608018"
] | [] |
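The row above evaluates whether published R files complete without error in a clean environment. A minimal way to batch such re-executions is sketched below in Python; the directory layout, the one-hour timeout, and the presence of Rscript on the PATH are assumptions, and the original study's harness may differ.

```python
# Illustrative batch re-execution of R files: run each script with Rscript,
# capture stderr and classify the outcome. Paths and timeout are example values.
import subprocess
from pathlib import Path

results = {}
for script in sorted(Path("replication_datasets").rglob("*.R")):  # assumed layout
    try:
        proc = subprocess.run(
            ["Rscript", str(script)],
            capture_output=True, text=True, timeout=3600,  # 1 h limit (assumed)
            cwd=script.parent,                             # run next to its data files
        )
        status = "success" if proc.returncode == 0 else "error"
        last_err = proc.stderr.splitlines()[-1:] if proc.stderr else []
        results[str(script)] = (status, last_err)
    except subprocess.TimeoutExpired:
        results[str(script)] = ("timeout", [])

n = len(results) or 1
print("re-executed without error: %.0f%%" %
      (100 * sum(s == "success" for s, _ in results.values()) / n))
```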
Scientific Reports | null | PMC8861090 | 10.1038/s41598-022-06975-1 | Combination predicting model of traffic congestion index in weekdays based on LightGBM-GRU | Tree-based and deep learning methods can automatically generate useful features. Not only can it enhance the original feature representation, but it can also learn to generate new features. This paper develops a strategy based on Light Gradient Boosting Machine (LightGBM or LGB) and Gated Recurrent Unit (GRU) to generate features to improve the expression ability of limited features. Moreover, a SARIMA-GRU prediction model considering the weekly periodicity is introduced. First, LightGBM is used to learn features and enhance the original features representation; secondly, GRU neural network is used to generate features; finally, the result ensemble is used as the input for prediction. Moreover, the SARIMA-GRU model is constructed for predicting. The GRU prediction consequences are revised by the SARIMA model that a better prediction can be obtained. The experiment was carried out with the data collected by Ride-hailing in Chengdu, and four predicted indicators and two performance indexes are utilized to evaluate the model. The results validate that the model proposed has significant improvements in the accuracy and performance of each component. | Related worksFeature generationFeature generation is the process of generating new features based on existing features that enhance the representation of the original feature7. At present, the method of feature generation is mainly based on automatic learning. There are a host of paper illustrate the effect of generating features on prediction works. Steffen Rendle8 used the decomposition parameter to model all interactions between variables and utilized the decomposition parameters between feature variables to extract feature combinations. Hongtao Shi et al.9 discovered a new feature optimization approach based on deep learning and Feature Selection (FS) techniques. Jiang et al.10 first applied the GBDT strategy to automatically extract effective features and feature interactions. This essay presented a new approach for generating synthetic features to impose a prior knowledge about data representation. Moreover, there is a large amount of bibliographic11–13 that demonstrated the contribution of tree-based algorithm and deep learning to feature generation, which is enough to support the idea of this manuscript.There are many different feature generation methods. FM is suitable for highly sparse data scenarios. In practical applications, it is limited by the computational complexity, and generally only second-order crossover features can be taken into account. The FS method only focuses on the improvement of classification performance and ignores the stability of selected feature subsets on traffic data changes. The method of deep learning has many parameters and complex structure. GBDT has few parameters and a fast training process, which can combat overfitting. But there is a dependency between weak learners, which makes it difficult to train data in parallel. Combining deep learning and tree-based approaches can combine the advantages of both approaches for feature generation.PredictionAs large dataset become more accessible, research on congestion prediction also tends to deep learning. Previous studies 14–17 proposed a combinatorial model for prediction, including GRU, LSTM, CNN, etc. The results manifest that the deep learning greatly improves the accuracy of traffic prediction. 
Compared to LSTM, GRU has a less complex structure and can be trained faster18. Therefore, in order to reduce the complexity of the model, GRU is used instead of LSTM. Meanwhile, studies that combine tree-based methods and deep learning for prediction19,20 provide a useful reference. The traffic congestion index has complex stochastic and nonlinear characteristics, and it shows similar seasonality and weekly trends. Quite a few studies21–23 consider the cyclical factor and convert the volatile series into a stationary series before prediction.

Predictive models can be summarized as statistical models and deep learning methods. Statistical models (ARIMA, time series, etc.) are simple and effective in short-term prediction, but become more complex as more parameters have to be estimated; deep learning methods (LSTM, CNN, etc.) have advantages in accuracy, but cannot completely solve the problems of gradient explosion and computational complexity. Combining the advantages of different algorithmic models can therefore yield better predictions.

This study combines LightGBM and GRU to construct a feature generation model to predict the congestion index. Tree-based theory and a deep learning algorithm are used to enhance the expression of limited features. Furthermore, the seasonal GRU model aims to reduce complexity while ensuring accuracy in prediction. | [
"34903780",
"30932853"
] | [
{
"pmid": "34903780",
"title": "Development and evaluation of bidirectional LSTM freeway traffic forecasting models using simulation data.",
"abstract": "Long short-term memory (LSTM) models provide high predictive performance through their ability to recognize longer sequences of time series data. More recently, bidirectional deep learning models (BiLSTM) have extended the LSTM capabilities by training the input data twice in forward and backward directions. In this paper, BiLSTM short term traffic forecasting models have been developed and evaluated using data from a calibrated micro-simulation model for a congested freeway in Melbourne, Australia. The simulation model was extensively calibrated and validated to a high degree of accuracy using field data collected from 55 detectors on the freeway. The base year simulation model was then used to generate loop detector data including speed, flow and occupancy which were used to develop and compare a number of LSTM models for short-term traffic prediction up to 60 min into the future. The modelling results showed that BiLSTM outperformed other predictive models for multiple prediction horizons for base year conditions. The simulation model was then adapted for future year scenarios where the traffic demand was increased by 25-100 percent to reflect potential future increases in traffic demands. The results showed superior performance of BiLSTM for multiple prediction horizons for all traffic variables."
},
{
"pmid": "30932853",
"title": "Deep Decision Tree Transfer Boosting.",
"abstract": "Instance transfer approaches consider source and target data together during the training process, and borrow examples from the source domain to augment the training data, when there is limited or no label in the target domain. Among them, boosting-based transfer learning methods (e.g., TrAdaBoost) are most widely used. When dealing with more complex data, we may consider the more complex hypotheses (e.g., a decision tree with deeper layers). However, with the fixed and high complexity of the hypotheses, TrAdaBoost and its variants may face the overfitting problems. Even worse, in the transfer learning scenario, a decision tree with deep layers may overfit different distribution data in the source domain. In this paper, we propose a new instance transfer learning method, i.e., Deep Decision Tree Transfer Boosting (DTrBoost), whose weights are learned and assigned to base learners by minimizing the data-dependent learning bounds across both source and target domains in terms of the Rademacher complexities. This guarantees that we can learn decision trees with deep layers without overfitting. The theorem proof and experimental results indicate the effectiveness of our proposed method."
}
] |
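The related work above describes using a tree ensemble (LightGBM) to generate features that are then combined with the original inputs for a downstream predictor such as a GRU. The sketch below shows the common leaf-index variant of that idea; the synthetic data and hyperparameters are illustrative, and the paper's exact feature-generation pipeline may differ.

```python
# Minimal sketch of tree-based feature generation: train LightGBM, read out
# the leaf index each sample falls into per tree, one-hot encode the indices
# and append them to the original features for a downstream model.
import numpy as np
import lightgbm as lgb
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                      # synthetic "traffic" features
y = 0.7 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=2000)

booster = lgb.train(
    {"objective": "regression", "num_leaves": 15, "verbose": -1},
    lgb.Dataset(X, label=y), num_boost_round=50,
)

leaves = booster.predict(X, pred_leaf=True)         # shape: (n_samples, n_trees)
leaf_onehot = OneHotEncoder(sparse_output=False).fit_transform(leaves)  # scikit-learn >= 1.2

X_augmented = np.hstack([X, leaf_onehot])           # input for e.g. a GRU or regressor
print(X.shape, "->", X_augmented.shape)
```

The augmented matrix could then be windowed into sequences and fed to a GRU, with a seasonal correction such as SARIMA applied to its outputs in the spirit of the combined model described above.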
Behavior Research Methods | 34258709 | PMC8863757 | 10.3758/s13428-021-01652-z | An algorithmic approach to determine expertise development using object-related gaze pattern sequences | Eye tracking (ET) technology is increasingly utilized to quantify visual behavior in the study of the development of domain-specific expertise. However, the identification and measurement of distinct gaze patterns using traditional ET metrics has been challenging, and the insights gained shown to be inconclusive about the nature of expert gaze behavior. In this article, we introduce an algorithmic approach for the extraction of object-related gaze sequences and determine task-related expertise by investigating the development of gaze sequence patterns during a multi-trial study of a simplified airplane assembly task. We demonstrate the algorithm in a study where novice (n = 28) and expert (n = 2) eye movements were recorded in successive trials (n = 8), allowing us to verify whether similar patterns develop with increasing expertise. In the proposed approach, AOI sequences were transformed to string representation and processed using the k-mer method, a well-known method from the field of computational biology. Our results for expertise development suggest that basic tendencies are visible in traditional ET metrics, such as the fixation duration, but are much more evident for k-mers of k > 2. With increased on-task experience, the appearance of expert k-mer patterns in novice gaze sequences was shown to increase significantly (p < 0.001). The results illustrate that the multi-trial k-mer approach is suitable for revealing specific cognitive processes and can quantify learning progress using gaze patterns that include both spatial and temporal information, which could provide a valuable tool for novice training and expert assessment. | Related worksThis section aims to highlight the previously conducted research in the field of visual expertise and sequence comparison approaches, and provides an overview of the k-mer analysis approach.Eye movements in the study of visual expertiseEye tracking (ET) has established itself as a popular research tool for the study of behavioral patterns (Duchowski, 2017; Land & Hayhoe, 2001) and, due to easier accessibility of the technology, has been increasingly applied to investigate visual expertise and expertise development (Brunyé et al., 2014; Crowe et al., 2018; Kelly et al., 2016). Particularly in the field of medicine, it has been of increasing interest what constitutes expertise, to increase the effectiveness and efficiency of novice training and diagnostic accuracy (van der Gijp et al., 2017). Using ET summary statistics, Wood et al. (2013) found that during the interpretation of skeletal radiographs, experts when compared to novices exhibit shorter fixation durations and are faster to fixate on the site of the fracture. In a study on laparoscopic skill acquisition, Wilson et al. (2011) discovered that experts show more fixations on task-relevant areas, while Gegenfurtner et al. (2011) showed that experts have longer saccades and, again, shorter time to fixate on task-relevant information. Zimmermann et al. (2020) measured fewer AOI transitions between task-critical objects during expert trials compared to novices during a cardiovascular intervention. Conversely, other studies have reported that novices, not experts, focused more of their attention on the surgical task (Zheng et al., 2011) and that experts visited fewer task-relevant areas (Jaarsma et al., 2015). 
In their review of ET study results on visual diagnostic performance in radiology, van der Gijp et al. (2017) found conflicting results regarding the relationship between the level of expertise and ET summary statistics. While in all studies the number of fixations seems to decrease at high levels of expertise, no generalization could be made about AOI fixation durations, the number of fixations on AOIs, dwell time ratios, saccade lengths, or image coverage.Van der Gijp's results are consistent with the understanding that expertise is highly domain-specific (Beck et al., 2013; Chi, 2006; Sheridan & Reingold, 2017) and that results based on traditional ET summary statistics cannot and should not be generalized (Fox & Faulkne, 2017; Jarodzka & Boshuizen, 2017). Hence, in order to reveal more in-depth insights into the nature of perceptual expertise development, we are faced with the challenge of finding eye movement-based metrics that can help uncover task-specific, behavior-based development of expertise, while being generally applicable to a wide range of domains.String-edit approachesFirst introduced by Noton and Stark (1971), the scanpath theory postulates that fixed viewing sequences are generated top-down as a result of a subject's specific internal model. Using a string-editing approach, Privitera and Stark (2000) were the first to achieve a scanpath comparison that took both the temporal and the spatial information of fixations into account. In string-edit approaches, gaze sequences are converted into strings of letters, where each fixation on a different AOI is assigned a specific alphabetical character (Anderson, Anderson, Kingstone, & Bischof, 2015a). Furthermore, additional information about the length of a fixation can be included by repeating a letter according to the fixation duration (Cristino et al., 2010). Finally, by counting the number of edit operations needed to convert one sequence into the other, using for example the Levenshtein distance (Levenshtein, 1966), a score is calculated to assess the similarity between eye movements in the context of a task (Foulsham et al., 2012).One algorithm that was successfully adapted from computational biology to eye movement analysis is the Needleman-Wunsch algorithm (Kübler et al., 2017). Compared to the traditional string-edit approach, this algorithm allows local alignments between matching AOI patterns of two scanpath sequences.Over the past two decades, various algorithms have been proposed to further improve gaze sequence comparisons, such as MultiMatch (Dewhurst et al., 2012), SubsMatch 2.0 (Kübler et al., 2017), EyeMSA (Burch et al., 2018), or ScanMatch (Cristino et al., 2010). MultiMatch compares the similarity of scanpaths as geometric vectors, including measures of saccade length and direction, without needing to couple ET data to predefined AOIs (Dewhurst et al., 2012; Jarodzka et al., 2010). SubsMatch 2.0 classifies eye movements between groups based on k-mer subsequences, while EyeMSA allows pairwise and multiple sequence alignments. For an in-depth description, see the review of scanpath comparison methods by Anderson et al. (2015a, b).In the context of expertise development, the majority of ET studies have applied gaze sequence similarity in the following two ways: the evaluation of experience-related eye movement similarities and the classification of the expertise level based on the gaze sequence. 
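As an illustration of the string-edit idea outlined above, the following sketch encodes AOI-labeled fixations as a letter string, optionally repeating letters to reflect fixation duration, and scores two gaze strings with the Levenshtein distance. It is a simplified, hypothetical example rather than the implementation used by ScanMatch, MultiMatch, or the other cited tools; the helper names encode and levenshtein and the 100 ms duration bin are arbitrary choices made for the example.

    # Sketch of the string-edit approach: encode AOI fixations as letters and compare
    # two encodings with the Levenshtein (edit) distance.
    def encode(fixations, bin_ms=100):
        # Repeat each AOI letter once per duration bin so that longer fixations weigh more.
        return "".join(aoi * max(1, round(dur / bin_ms)) for aoi, dur in fixations)

    def levenshtein(s, t):
        # Classic dynamic-programming edit distance (insertions, deletions, substitutions).
        prev = list(range(len(t) + 1))
        for i, a in enumerate(s, 1):
            curr = [i]
            for j, b in enumerate(t, 1):
                curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (a != b)))
            prev = curr
        return prev[-1]

    novice = encode([("A", 250), ("B", 120), ("A", 300)])
    expert = encode([("A", 200), ("C", 180), ("B", 150)])
    print(novice, expert, levenshtein(novice, expert))

The resulting distance is the single cumulative similarity score whose limitations are discussed in the next paragraph, which is the gap the k-mer approach is intended to address.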
McIntyre and Foulsham (2018b) have shown that gaze sequences are more similar between subjects within the same level of expertise than between subjects of different expertise groups. Castner et al. (2020) proposed a model for scanpath classification that is capable of extracting expertise-specific gaze behavior during a study of dental radiograph inspection.However, as previously mentioned in the introduction, these approaches have some known limitations. One of the biggest limitations, next to the high computational cost of pairwise sequence comparison, is that similarity calculation is an essentially reductionist approach that reduces gaze behavior to a single cumulative score. While many measures of similarity can be used to determine behavioral differences between groups (Fahimi & Bruce, 2020), they do not allow one to infer which gaze sequences are developed during the evolution from novice to expert. Measuring similarity over time would indicate whether individuals come to behave more similarly to experts, but the question of which gaze sequences changed during training would remain.Therefore, a metric is sought that, firstly, retains the contextual temporal and spatial information of a specific task or domain, secondly, allows quantitative measurement of gaze patterns, and, thirdly, enables one to infer the level of expertise development. Here, we propose to apply k-mer analysis to object-related, or AOI-related, gaze sequence patterns.k-mersIn the field of computational biology, identifying similarity relationships between DNA sequences is a common approach, with the goal of gaining a fundamental understanding of how biological organisms function (Liu et al., 2013). Alongside the Needleman–Wunsch algorithm, k-mer analysis has established itself as a simple but effective sequence analysis method (Ren et al., 2017). In contrast to sequence alignment and string-edit approaches, the k-mer method segments each sequence into subsequences of length k and counts their occurrences within the sequence. Hence, sequences can be compared based on the k-mer count of each pattern, while the individual components contained within each subsequence are conserved. In DNA analysis, each DNA sequence is regarded as a string with four letters (A, G, C, T), and the choice of k determines the number of possible combinations, with no. of combinations = (no. of AOIs)^k, i.e., 4^k for DNA (Manekar & Sathe, 2018). Because k-mers can be computed for any sequence in character representation, they can also be applied to gaze sequences in the commonly used string-edit form. Bulling et al. (2013) have applied k-mers to electrooculography (EOG) signals to recognize high-level contextual cues, and Elbattah et al. (2020) have used k-mers to describe sequence patterns of fixations and saccades to assist the automated diagnosis of autism.In the present study, we have used higher-level ET data that was created using fixation-to-object mapping. Each dwell on an AOI was assigned a specific letter. Consequently, each k-mer pattern both preserves the sequence of k successive looked-at AOIs and allows us to compare different expertise levels by evaluating the appearance count of frequently appearing patterns (a short illustrative k-mer counting sketch is included after the reference data below). | [
"25540126",
"25084012",
"20805591",
"29399620",
"22648695",
"28502407",
"25677013",
"25641371",
"27322975",
"23819461",
"27443354",
"5538847",
"28683828",
"29033865",
"25125094",
"27436353",
"25529829",
"28961443",
"21671125",
"22940835"
] | [
{
"pmid": "25540126",
"title": "A comparison of scanpath comparison methods.",
"abstract": "Interest has flourished in studying both the spatial and temporal aspects of eye movement behavior. This has sparked the development of a large number of new methods to compare scanpaths. In the present work, we present a detailed overview of common scanpath comparison measures. Each of these measures was developed to solve a specific problem, but quantifies different aspects of scanpath behavior and requires different data-processing techniques. To understand these differences, we applied each scanpath comparison method to data from an encoding and recognition experiment and compared their ability to reveal scanpath similarities within and between individuals looking at natural scenes. Results are discussed in terms of the unique aspects of scanpath behavior that the different methods quantify. We conclude by making recommendations for choosing an appropriate scanpath comparison measure."
},
{
"pmid": "25084012",
"title": "Eye movements as an index of pathologist visual expertise: a pilot study.",
"abstract": "A pilot study examined the extent to which eye movements occurring during interpretation of digitized breast biopsy whole slide images (WSI) can distinguish novice interpreters from experts, informing assessments of competency progression during training and across the physician-learning continuum. A pathologist with fellowship training in breast pathology interpreted digital WSI of breast tissue and marked the region of highest diagnostic relevance (dROI). These same images were then evaluated using computer vision techniques to identify visually salient regions of interest (vROI) without diagnostic relevance. A non-invasive eye tracking system recorded pathologists' (N = 7) visual behavior during image interpretation, and we measured differential viewing of vROIs versus dROIs according to their level of expertise. Pathologists with relatively low expertise in interpreting breast pathology were more likely to fixate on, and subsequently return to, diagnostically irrelevant vROIs relative to experts. Repeatedly fixating on the distracting vROI showed limited value in predicting diagnostic failure. These preliminary results suggest that eye movements occurring during digital slide interpretation can characterize expertise development by demonstrating differential attraction to diagnostically relevant versus visually distracting image regions. These results carry both theoretical implications and potential for monitoring and evaluating student progress and providing automated feedback and scanning guidance in educational settings."
},
{
"pmid": "20805591",
"title": "ScanMatch: a novel method for comparing fixation sequences.",
"abstract": "We present a novel approach to comparing saccadic eye movement sequences based on the Needleman-Wunsch algorithm used in bioinformatics to compare DNA sequences. In the proposed method, the saccade sequence is spatially and temporally binned and then recoded to create a sequence of letters that retains fixation location, time, and order information. The comparison of two letter sequences is made by maximizing the similarity score computed from a substitution matrix that provides the score for all letter pair substitutions and a penalty gap. The substitution matrix provides a meaningful link between each location coded by the individual letters. This link could be distance but could also encode any useful dimension, including perceptual or semantic space. We show, by using synthetic and behavioral data, the benefits of this method over existing methods. The ScanMatch toolbox for MATLAB is freely available online (www.scanmatch.co.uk)."
},
{
"pmid": "29399620",
"title": "Exploring the practicing-connections hypothesis: using gesture to support coordination of ideas in understanding a complex statistical concept.",
"abstract": "In this article, we begin to lay out a framework and approach for studying how students come to understand complex concepts in rich domains. Grounded in theories of embodied cognition, we advance the view that understanding of complex concepts requires students to practice, over time, the coordination of multiple concepts, and the connection of this system of concepts to situations in the world. Specifically, we explore the role that a teacher's gesture might play in supporting students' coordination of two concepts central to understanding in the domain of statistics: mean and standard deviation. In Study 1 we show that university students who have just taken a statistics course nevertheless have difficulty taking both mean and standard deviation into account when thinking about a statistical scenario. In Study 2 we show that presenting the same scenario with an accompanying gesture to represent variation significantly impacts students' interpretation of the scenario. Finally, in Study 3 we present evidence that instructional videos on the internet fail to leverage gesture as a means of facilitating understanding of complex concepts. Taken together, these studies illustrate an approach to translating current theories of cognition into principles that can guide instructional design."
},
{
"pmid": "22648695",
"title": "It depends on how you look at it: scanpath comparison in multiple dimensions with MultiMatch, a vector-based approach.",
"abstract": "Eye movement sequences-or scanpaths-vary depending on the stimulus characteristics and the task (Foulsham & Underwood Journal of Vision, 8(2), 6:1-17, 2008; Land, Mennie, & Rusted, Perception, 28, 1311-1328, 1999). Common methods for comparing scanpaths, however, are limited in their ability to capture both the spatial and temporal properties of which a scanpath consists. Here, we validated a new method for scanpath comparison based on geometric vectors, which compares scanpaths over multiple dimensions while retaining positional and sequential information (Jarodzka, Holmqvist, & Nyström, Symposium on Eye-Tracking Research and Applications (pp. 211-218), 2010). \"MultiMatch\" was tested in two experiments and pitted against ScanMatch (Cristino, Mathôt, Theeuwes, & Gilchrist, Behavior Research Methods, 42, 692-700, 2010), the most comprehensive adaptation of the popular Levenshtein method. In Experiment 1, we used synthetic data, demonstrating the greater sensitivity of MultiMatch to variations in spatial position. In Experiment 2, real eye movement recordings were taken from participants viewing sequences of dots, designed to elicit scanpath pairs with commonalities known to be problematic for algorithms (e.g., when one scanpath is shifted in locus or when fixations fall on either side of an AOI boundary). The results illustrate the advantages of a multidimensional approach, revealing how two scanpaths differ. For instance, if one scanpath is the reverse copy of another, the difference is in the direction but not the positions of fixations; or if a scanpath is scaled down, the difference is in the length of the saccadic vectors but not in the overall shape. As well as having enormous potential for any task in which consistency in eye movements is important (e.g., learning), MultiMatch is particularly relevant for \"eye movements to nothing\" in mental imagery and embodiment-of-cognition research, where satisfactory scanpath comparison algorithms are lacking."
},
{
"pmid": "28502407",
"title": "I spy with my little eye: Analysis of airline pilots' gaze patterns in a manual instrument flight scenario.",
"abstract": "The aim of this study was to analyze pilots' visual scanning in a manual approach and landing scenario. Manual flying skills suffer from increasing use of automation. In addition, predominantly long-haul pilots with only a few opportunities to practice these skills experience this decline. Airline pilots representing different levels of practice (short-haul vs. long-haul) had to perform a manual raw data precision approach while their visual scanning was recorded by an eye-tracking device. The analysis of gaze patterns, which are based on predominant saccades, revealed one main group of saccades among long-haul pilots. In contrast, short-haul pilots showed more balanced scanning using two different groups of saccades. Short-haul pilots generally demonstrated better manual flight performance and within this group, one type of scan pattern was found to facilitate the manual landing task more. Long-haul pilots tend to utilize visual scanning behaviors that are inappropriate for the manual ILS landing task. This lack of skills needs to be addressed by providing specific training and more practice."
},
{
"pmid": "25677013",
"title": "Expertise in clinical pathology: combining the visual and cognitive perspective.",
"abstract": "Expertise studies in the medical domain often focus on either visual or cognitive aspects of expertise. As a result, characteristics of expert behaviour are often described as either cognitive or visual abilities. This study focuses on both aspects of expertise and analyses them along three overarching constructs: (1) encapsulations, (2) efficiency, and (3) hypothesis testing. This study was carried out among clinical pathologists performing an authentic task: diagnosing microscopic slides. Participants were 13 clinical pathologists (experts), 12 residents in pathology (intermediates), and 13 medical students (novices). They all diagnosed seven cases in a virtual microscope and gave post hoc explanations for their diagnoses. The collected data included eye movements, microscope navigation, and verbal protocols. Results showed that experts used lower magnifications and verbalized their findings as diagnoses. Also, their diagnostic paths were more efficient, including fewer microscope movements and shorter reasoning chains. Experts entered relevant areas later in their diagnostic process, and visited fewer of them. Intermediates used relatively high magnifications and based their diagnoses on specific abnormalities. Also, they took longer to reach their diagnosis and checked more relevant areas. Novices searched in detail, described findings by their appearances, and uttered long reasoning chains. These results indicate that overarching constructs can justly be identified: encapsulations and efficiency are apparent in both visual and cognitive aspects of expertise."
},
{
"pmid": "25641371",
"title": "Humans have idiosyncratic and task-specific scanpaths for judging faces.",
"abstract": "Since Yarbus's seminal work, vision scientists have argued that our eye movement patterns differ depending upon our task. This has recently motivated the creation of multi-fixation pattern analysis algorithms that try to infer a person's task (or mental state) from their eye movements alone. Here, we introduce new algorithms for multi-fixation pattern analysis, and we use them to argue that people have scanpath routines for judging faces. We tested our methods on the eye movements of subjects as they made six distinct judgments about faces. We found that our algorithms could detect whether a participant is trying to distinguish angriness, happiness, trustworthiness, tiredness, attractiveness, or age. However, our algorithms were more accurate at inferring a subject's task when only trained on data from that subject than when trained on data gathered from other subjects, and we were able to infer the identity of our subjects using the same algorithms. These results suggest that (1) individuals have scanpath routines for judging faces, and that (2) these are diagnostic of that subject, but that (3) at least for the tasks we used, subjects do not converge on the same \"ideal\" scanpath pattern. Whether universal scanpath patterns exist for a task, we suggest, depends on the task's constraints and the level of expertise of the subject."
},
{
"pmid": "27322975",
"title": "The Development of Expertise in Radiology: In Chest Radiograph Interpretation, \"Expert\" Search Pattern May Predate \"Expert\" Levels of Diagnostic Accuracy for Pneumothorax Identification.",
"abstract": "Purpose To investigate the development of chest radiograph interpretation skill through medical training by measuring both diagnostic accuracy and eye movements during visual search. Materials and Methods An institutional exemption from full ethical review was granted for the study. Five consultant radiologists were deemed the reference expert group, and four radiology registrars, five senior house officers (SHOs), and six interns formed four clinician groups. Participants were shown 30 chest radiographs, 14 of which had a pneumothorax, and were asked to give their level of confidence as to whether a pneumothorax was present. Receiver operating characteristic (ROC) curve analysis was carried out on diagnostic decisions. Eye movements were recorded with a Tobii TX300 (Tobii Technology, Stockholm, Sweden) eye tracker. Four eye-tracking metrics were analyzed. Variables were compared to identify any differences between groups. All data were compared by using the Friedman nonparametric method. Results The average area under the ROC curve for the groups increased with experience (0.947 for consultants, 0.792 for registrars, 0.693 for SHOs, and 0.659 for interns; P = .009). A significant difference in diagnostic accuracy was found between consultants and registrars (P = .046). All four eye-tracking metrics decreased with experience, and there were significant differences between registrars and SHOs. Total reading time decreased with experience; it was significantly lower for registrars compared with SHOs (P = .046) and for SHOs compared with interns (P = .025). Conclusion Chest radiograph interpretation skill increased with experience, both in terms of diagnostic accuracy and visual search. The observed level of experience at which there was a significant difference was higher for diagnostic accuracy than for eye-tracking metrics. (©) RSNA, 2016 Online supplemental material is available for this article."
},
{
"pmid": "23819461",
"title": "Measuring the surgical 'learning curve': methods, variables and competency.",
"abstract": "OBJECTIVES\nTo describe how learning curves are measured and what procedural variables are used to establish a 'learning curve' (LC). To assess whether LCs are a valuable measure of competency.\n\n\nPATIENTS AND METHODS\nA review of the surgical literature pertaining to LCs was conducted using the Medline and OVID databases.\n\n\nRESULTS\nVariables should be fully defined and when possible, patient-specific variables should be used. Trainee's prior experience and level of supervision should be quantified; the case mix and complexity should ideally be constant. Logistic regression may be used to control for confounding variables. Ideally, a learning plateau should reach a predefined/expert-derived competency level, which should be fully defined. When the group splitting method is used, smaller cohorts should be used in order to narrow the range of the LC. Simulation technology and competence-based objective assessments may be used in training and assessment in LC studies.\n\n\nCONCLUSIONS\nMeasuring the surgical LC has potential benefits for patient safety and surgical education. However, standardisation in the methods and variables used to measure LCs is required. Confounding variables, such as participant's prior experience, case mix, difficulty of procedures and level of supervision, should be controlled. Competency and expert performance should be fully defined."
},
{
"pmid": "27443354",
"title": "SubsMatch 2.0: Scanpath comparison and classification based on subsequence frequencies.",
"abstract": "Our eye movements are driven by a continuous trade-off between the need for detailed examination of objects of interest and the necessity to keep an overview of our surrounding. In consequence, behavioral patterns that are characteristic for our actions and their planning are typically manifested in the way we move our eyes to interact with our environment. Identifying such patterns from individual eye movement measurements is however highly challenging. In this work, we tackle the challenge of quantifying the influence of experimental factors on eye movement sequences. We introduce an algorithm for extracting sequence-sensitive features from eye movements and for the classification of eye movements based on the frequencies of small subsequences. Our approach is evaluated against the state-of-the art on a novel and a very rich collection of eye movements data derived from four experimental settings, from static viewing tasks to highly dynamic outdoor settings. Our results show that the proposed method is able to classify eye movement sequences over a variety of experimental designs. The choice of parameters is discussed in detail with special focus on highlighting different aspects of general scanpath shape. Algorithms and evaluation data are available at: http://www.ti.uni-tuebingen.de/scanpathcomparison.html ."
},
{
"pmid": "5538847",
"title": "Scanpaths in eye movements during pattern perception.",
"abstract": "Subjects learned and recognized patterns which were marginally visible, requiring them to fixate directly each feature to which they wished to attend. Fixed \"scanpaths,\" specific to subject and pattern, appeared in their saccadic eye movements, both intermittently during learning and in initial eye movements during recognition. A proposed theory of pattern perception explains these results."
},
{
"pmid": "28683828",
"title": "VirFinder: a novel k-mer based tool for identifying viral sequences from assembled metagenomic data.",
"abstract": "BACKGROUND\nIdentifying viral sequences in mixed metagenomes containing both viral and host contigs is a critical first step in analyzing the viral component of samples. Current tools for distinguishing prokaryotic virus and host contigs primarily use gene-based similarity approaches. Such approaches can significantly limit results especially for short contigs that have few predicted proteins or lack proteins with similarity to previously known viruses.\n\n\nMETHODS\nWe have developed VirFinder, the first k-mer frequency based, machine learning method for virus contig identification that entirely avoids gene-based similarity searches. VirFinder instead identifies viral sequences based on our empirical observation that viruses and hosts have discernibly different k-mer signatures. VirFinder's performance in correctly identifying viral sequences was tested by training its machine learning model on sequences from host and viral genomes sequenced before 1 January 2014 and evaluating on sequences obtained after 1 January 2014.\n\n\nRESULTS\nVirFinder had significantly better rates of identifying true viral contigs (true positive rates (TPRs)) than VirSorter, the current state-of-the-art gene-based virus classification tool, when evaluated with either contigs subsampled from complete genomes or assembled from a simulated human gut metagenome. For example, for contigs subsampled from complete genomes, VirFinder had 78-, 2.4-, and 1.8-fold higher TPRs than VirSorter for 1, 3, and 5 kb contigs, respectively, at the same false positive rates as VirSorter (0, 0.003, and 0.006, respectively), thus VirFinder works considerably better for small contigs than VirSorter. VirFinder furthermore identified several recently sequenced virus genomes (after 1 January 2014) that VirSorter did not and that have no nucleotide similarity to previously sequenced viruses, demonstrating VirFinder's potential advantage in identifying novel viral sequences. Application of VirFinder to a set of human gut metagenomes from healthy and liver cirrhosis patients reveals higher viral diversity in healthy individuals than cirrhosis patients. We also identified contig bins containing crAssphage-like contigs with higher abundance in healthy patients and a putative Veillonella genus prophage associated with cirrhosis patients.\n\n\nCONCLUSIONS\nThis innovative k-mer based tool complements gene-based approaches and will significantly improve prokaryotic viral sequence identification, especially for metagenomic-based studies of viral ecology."
},
{
"pmid": "29033865",
"title": "The Holistic Processing Account of Visual Expertise in Medical Image Perception: A Review.",
"abstract": "In the field of medical image perception, the holistic processing perspective contends that experts can rapidly extract global information about the image, which can be used to guide their subsequent search of the image (Swensson, 1980; Nodine and Kundel, 1987; Kundel et al., 2007). In this review, we discuss the empirical evidence supporting three different predictions that can be derived from the holistic processing perspective: Expertise in medical image perception is domain-specific, experts use parafoveal and/or peripheral vision to process large regions of the image in parallel, and experts benefit from a rapid initial glimpse of an image. In addition, we discuss a pivotal recent study (Litchfield and Donovan, 2016) that seems to contradict the assumption that experts benefit from a rapid initial glimpse of the image. To reconcile this finding with the existing literature, we suggest that global processing may serve multiple functions that extend beyond the initial glimpse of the image. Finally, we discuss future research directions, and we highlight the connections between the holistic processing account and similar theoretical perspectives and findings from other domains of visual expertise."
},
{
"pmid": "25125094",
"title": "Differences in gaze behaviour of expert and junior surgeons performing open inguinal hernia repair.",
"abstract": "INTRODUCTION\nVarious fields have used gaze behaviour to evaluate task proficiency. This may also apply to surgery for the assessment of technical skill, but has not previously been explored in live surgery. The aim was to assess differences in gaze behaviour between expert and junior surgeons during open inguinal hernia repair.\n\n\nMETHODS\nGaze behaviour of expert and junior surgeons (defined by operative experience) performing the operation was recorded using eye-tracking glasses (SMI Eye Tracking Glasses 2.0, SensoMotoric Instruments, Germany). Primary endpoints were fixation frequency (steady eye gaze rate) and dwell time (fixation and saccades duration) and were analysed for designated areas of interest in the subject's visual field. Secondary endpoints were maximum pupil size, pupil rate of change (change frequency in pupil size) and pupil entropy (predictability of pupil change). NASA TLX scale measured perceived workload. Recorded metrics were compared between groups for the entire procedure and for comparable procedural segments.\n\n\nRESULTS\nTwenty-five cases were recorded, with 13 operations analysed, from 9 surgeons giving 630 min of data, recorded at 30 Hz. Experts demonstrated higher fixation frequency (median[IQR] 1.86 [0.3] vs 0.96 [0.3]; P = 0.006) and dwell time on the operative site during application of mesh (792 [159] vs 469 [109] s; P = 0.028), closure of the external oblique (1.79 [0.2] vs 1.20 [0.6]; P = 0.003) (625 [154] vs 448 [147] s; P = 0.032) and dwelled more on the sterile field during cutting of mesh (716 [173] vs 268 [297] s; P = 0.019). NASA TLX scores indicated experts found the procedure less mentally demanding than juniors (3 [2] vs 12 [5.2]; P = 0.038). No subjects reported problems with wearing of the device, or obstruction of view.\n\n\nCONCLUSION\nUse of portable eye-tracking technology in open surgery is feasible, without impinging surgical performance. Differences in gaze behaviour during open inguinal hernia repair can be seen between expert and junior surgeons and may have uses for assessment of surgical skill."
},
{
"pmid": "27436353",
"title": "How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.",
"abstract": "Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. Databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, and did not find a significant effect on perceptual performance. Eye tracking literature in radiology indicates several search patterns are related to high levels of expertise, but teaching novices to search as an expert may not be effective. Experimental research is needed to find out which search strategies can improve image perception in learners."
},
{
"pmid": "25529829",
"title": "Measuring dwell time percentage from head-mounted eye-tracking data--comparison of a frame-by-frame and a fixation-by-fixation analysis.",
"abstract": "Although analysing software for eye-tracking data has significantly improved in the past decades, the analysis of gaze behaviour recorded with head-mounted devices is still challenging and time-consuming. Therefore, new methods have to be tested to reduce the analysis workload while maintaining accuracy and reliability. In this article, dwell time percentages to six areas of interest (AOIs), of six participants cycling on four different roads, were analysed both frame-by-frame and in a 'fixation-by-fixation' manner. The fixation-based method is similar to the classic frame-by-frame method but instead of assigning frames, fixations are assigned to one of the AOIs. Although some considerable differences were found between the two methods, a Pearson correlation of 0.930 points out a good validity of the fixation-by-fixation method. For the analysis of gaze behaviour over an extended period of time, the fixation-based approach is a valuable and time-saving alternative for the classic frame-by-frame analysis."
},
{
"pmid": "28961443",
"title": "Eye tracking to evaluate evidence recognition in crime scene investigations.",
"abstract": "Crime scene analysts are the core of criminal investigations; decisions made at the scene greatly affect the speed of analysis and the quality of conclusions, thereby directly impacting the successful resolution of a case. If an examiner fails to recognize the pertinence of an item on scene, the analyst's theory regarding the crime will be limited. Conversely, unselective evidence collection will most likely include irrelevant material, thus increasing a forensic laboratory's backlog and potentially sending the investigation into an unproductive and costly direction. Therefore, it is critical that analysts recognize and properly evaluate forensic evidence that can assess the relative support of differing hypotheses related to event reconstruction. With this in mind, the aim of this study was to determine if quantitative eye tracking data and qualitative reconstruction accuracy could be used to distinguish investigator expertise. In order to assess this, 32 participants were successfully recruited and categorized as experts or trained novices based on their practical experiences and educational backgrounds. Each volunteer then processed a mock crime scene while wearing a mobile eye tracker, wherein visual fixations, durations, search patterns, and reconstruction accuracy were evaluated. The eye tracking data (dwell time and task percentage on areas of interest or AOIs) were compared using Earth Mover's Distance (EMD) and the Needleman-Wunsch (N-W) algorithm, revealing significant group differences for both search duration (EMD), as well as search sequence (N-W). More specifically, experts exhibited greater dissimilarity in search duration, but greater similarity in search sequences than their novice counterparts. In addition to the quantitative visual assessment of examiner variability, each participant's reconstruction skill was assessed using a 22-point binary scoring system, in which significant group differences were detected as a function of total reconstruction accuracy. This result, coupled with the fact that the study failed to detect a significant difference between the groups when evaluating the total time needed to complete the investigation, indicates that experts are more efficient and effective. Finally, the results presented here provide a basis for continued research in the use of eye trackers to assess expertise in complex and distributed environments, including suggestions for future work, and cautions regarding the degree to which visual attention can infer cognitive understanding."
},
{
"pmid": "21671125",
"title": "Gaze training enhances laparoscopic technical skill acquisition and multi-tasking performance: a randomized, controlled study.",
"abstract": "BACKGROUND\nThe operating room environment is replete with stressors and distractions that increase the attention demands of what are already complex psychomotor procedures. Contemporary research in other fields (e.g., sport) has revealed that gaze training interventions may support the development of robust movement skills. This current study was designed to examine the utility of gaze training for technical laparoscopic skills and to test performance under multitasking conditions.\n\n\nMETHODS\nThirty medical trainees with no laparoscopic experience were divided randomly into one of three treatment groups: gaze trained (GAZE), movement trained (MOVE), and discovery learning/control (DISCOVERY). Participants were fitted with a Mobile Eye gaze registration system, which measures eye-line of gaze at 25 Hz. Training consisted of ten repetitions of the \"eye-hand coordination\" task from the LAP Mentor VR laparoscopic surgical simulator while receiving instruction and video feedback (specific to each treatment condition). After training, all participants completed a control test (designed to assess learning) and a multitasking transfer test, in which they completed the procedure while performing a concurrent tone counting task.\n\n\nRESULTS\nNot only did the GAZE group learn more quickly than the MOVE and DISCOVERY groups (faster completion times in the control test), but the performance difference was even more pronounced when multitasking. Differences in gaze control (target locking fixations), rather than tool movement measures (tool path length), underpinned this performance advantage for GAZE training.\n\n\nCONCLUSIONS\nThese results suggest that although the GAZE intervention focused on training gaze behavior only, there were indirect benefits for movement behaviors and performance efficiency. Additionally, focusing on a single external target when learning, rather than on complex movement patterns, may have freed-up attentional resources that could be applied to concurrent cognitive tasks."
},
{
"pmid": "22940835",
"title": "Visual expertise in detecting and diagnosing skeletal fractures.",
"abstract": "OBJECTIVE\nFailure to identify fractures is the most common error in accident and emergency departments. Therefore, the current research aimed to understand more about the processes underlying perceptual expertise when interpreting skeletal radiographs.\n\n\nMATERIALS AND METHODS\nThirty participants, consisting of ten novices, ten intermediates, and ten experts were presented with ten clinical cases of normal and abnormal skeletal radiographs of varying difficulty (obvious or subtle) while wearing eye tracking equipment.\n\n\nRESULTS\nExperts were significantly more accurate, more confident, and faster in their diagnoses than intermediates or novices and this performance advantage was more pronounced for the subtle cases. Experts were also faster to fixate the site of the fracture and spent more relative time fixating the fracture than intermediates or novices and this was again most pronounced for subtle cases. Finally, a multiple linear regression analysis found that time to fixate the fracture was inversely related to diagnostic accuracy and explained 34 % of the variance in this variable.\n\n\nCONCLUSIONS\nThe results suggest that the performance advantage of expert radiologists is underpinned by superior pattern recognition skills, as evidenced by a quicker time to first fixate the pathology, and less time spent searching the image."
}
] |
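As noted in the related-work text of this record (before the reference data above), k-mer counting can be illustrated with a short sketch. This is a hypothetical Python example, not the authors' implementation: the function name kmer_counts and the gaze string are invented for illustration, the sequence is assumed to be in string-edit form with one letter per AOI dwell, and the final line echoes the (no. of AOIs)^k relation quoted in that section.

    # Sketch of k-mer counting on an AOI dwell string (one letter per looked-at AOI).
    from collections import Counter

    def kmer_counts(sequence, k):
        # Slide a window of length k over the sequence and count each subsequence.
        return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

    gaze = "ABACABDB"                      # hypothetical dwell sequence over four AOIs A-D
    counts = kmer_counts(gaze, k=3)
    print(counts.most_common(3))           # most frequent 3-mers in this sequence
    print(4 ** 3)                          # possible 3-mers with 4 AOIs: (no. of AOIs)^k = 64

Counting how often the most frequent expert k-mers appear in novice sequences across successive trials then gives the pattern-level view of expertise development described in the record's abstract.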
Frontiers in Psychology | null | PMC8866172 | 10.3389/fpsyg.2022.762701 | The Use of Deep Learning-Based Gesture Interactive Robot in the Treatment of Autistic Children Under Music Perception Education | The purpose of this study was to apply deep learning to music perception education. Music perception therapy for autistic children, using gesture interactive robots based on concepts from educational psychology and deep learning technology, is proposed. First, the experimental problems are defined and explained based on the relevant theories of pedagogy. Next, gesture interactive robots and music perception education classrooms are studied based on recurrent neural networks (RNNs). Then, autistic children are treated with music perception therapy, and an electroencephalogram (EEG) is used to collect data on the children's music perception effects and diagnostic results. Due to its significant advantages in signal feature extraction and classification, an RNN is used to analyze the EEG of autistic children receiving different music perception treatments to improve classification accuracy. The experimental results are as follows. The analysis of EEG signals shows that different people have different perceptions of music, but this difference fluctuates within a certain range. The classification accuracy of the designed model is about 72–94%, and the average classification accuracy is about 85%. The average accuracy of the model for EEG classification is 85% for autistic children and 84% for healthy children. Comparisons with similar models also demonstrate the excellent performance of the designed model. This exploration provides a reference for applying artificial intelligence (AI) technology in music perception education to diagnose and treat autistic children. | Recent Related WorkScholars worldwide have conducted extensive research on the application of AI technology to music perception education. Rodgers et al. (2021) studied the digital transformation effect of AI-based facial and music bioidentification technology on the cognitive and emotional states of customers, and how these effects shape their behavioral responses in value creation. The participants experimented with different music types (enhanced by music recognition and bioidentification technology). The results show that the emotion elicited by music recognition and bioidentification technology plays a mediating role between cognition and behavioral intention. This work can help in understanding how the relationship between cognition and emotion induced by AI-based facial and music bioidentification systems shapes customer behavior. Rahman et al. (2021) studied the relationship between music and emotion and introduced affective computing into physiological signal analysis. Data on skin electrical activity, blood volume pulse, skin temperature, and pupil dilation were collected. A neural network was used to identify and evaluate the subjective emotions of participants in order to verify the effect of music therapy in mental health nursing. Koempel (2020) studied the application of AI in the music industry, where intelligent identification of music content is used to determine whether copyright issues arise. Sharda et al. (2019) studied the application of music therapy for autistic children. In music therapy, trained therapists use musical experiences to promote health, which can improve the communication skills and other abilities of autistic children. Matokhniuk et al. 
(2021) studied the therapeutic effect of music therapy on the psychological characteristics of anxiety in adolescents. In conditions of increasing anxiety, music therapy was used to explore the changes in adolescents' anxiety levels across psychological situations. The results show that the psychological correction provided by music therapy is effective in the work of psychologists.Given the above analysis, AI technology can be used to build a more complete music curriculum system, and music therapy is an important avenue for the treatment of autistic children. However, there are few studies combining the two. Hence, an AI-based human-computer interactive music classroom is established to treat autistic children through gesture interactive robots combined with music perception therapy. The children's mental states and treatment effects are analyzed by monitoring their EEG signals, providing a reference for research in related fields. | [
"30131673",
"31649581",
"29335454",
"33285082",
"33421942",
"30741908",
"30410461",
"29593627",
"32547458",
"32038391",
"30446036",
"30846958",
"31214081",
"33159212"
] | [
{
"pmid": "30131673",
"title": "Auditory Event-Related Potentials Associated With Music Perception in Cochlear Implant Users.",
"abstract": "A short review of the literature on auditory event-related potentials and mismatch negativities (MMN) in cochlear implant users engaged in music-related auditory perception tasks is presented. Behavioral studies that have measured the fundamental aspects of music perception in CI users have found that they usually experience poor perception of melody, pitch, harmony as well as timbre (Limb and Roy, 2014). This is thought to occur not only because of the technological and acoustic limitations of the device, but also because of the biological alterations that usually accompany deafness. In order to improve music perception and appreciation in individuals with cochlear implants, it is essential to better understand how they perceive music. As suggested by recent studies, several different electrophysiological paradigms can be used to reliably and objectively measure normal-hearing individuals' perception of fundamental musical features. These techniques, when used with individuals with cochlear implants, might contribute to determine how their peripheral and central auditory systems analyze musical excerpts. The investigation of these cortical activations can moreover give important information on other aspects related to music appreciation, such as pleasantness and emotional perception. The studies reviewed suggest that cochlear implantation alters most fundamental musical features, including pitch, timbre, melody perception, complex rhythm, and duration (e.g., Koelsch et al., 2004b; Timm et al., 2012, 2014; Zhang et al., 2013a,b; Limb and Roy, 2014). A better understanding of how individuals with cochlear implants perform on these tasks not only makes it possible to compare their performance to that of their normal-hearing peers, but can also lead to better clinical intervention and rehabilitation."
},
{
"pmid": "31649581",
"title": "The Impact of Expatriates' Cross-Cultural Adjustment on Work Stress and Job Involvement in the High-Tech Industry.",
"abstract": "The personal traits of expatriates influence their work performance in a subsidiary. Nevertheless, organizations tend to hire candidates who are suitable from the technological dimension but ignore personal and family factors. Expatriates might not be familiar with a foreign place, and most organizations do not provide the so-called cultural adjustment training. The selected expatriates often accept the job without knowing the future prospects of their career, which can result in individual and family turmoil initially. Moreover, the unknown future career prospects and concern over when they will return to the parent company can affect expatriates' work. Cross-cultural competence refers to the ability of individuals to work effectively and live normally in different cultural contexts, and this ability requires expatriate employees to adopt adaptive thinking patterns and behaviors in the host country. To explore the effect of expatriates' cross-culture adjustment on their work stress and job involvement, this study therefore uses an empirical approach in which data are collected with a questionnaire survey and proposes specific suggestions, according to the results, to aid expatriates in their personal psychological adjustment. The results show that the challenges faced by expatriate employees are derived from assigned tasks, unknown environments, language barriers, and cultural differences. Excessive pressure will impose ideological and psychological burdens upon the expatriates and even lead to physical symptoms, however, the appropriate amount of pressure can play a driving role and promote the smooth progress of the work. High-tech industry employees who can adapt to the customs and cultures of foreign countries have higher work participation and are more likely to find ways to alleviate work stress. It has also been found that the stronger the cross-cultural competence of employees, the better their adjustment to the host country and the higher their corresponding job performance."
},
{
"pmid": "29335454",
"title": "Relationship between spectrotemporal modulation detection and music perception in normal-hearing, hearing-impaired, and cochlear implant listeners.",
"abstract": "The objective of this study was to examine the relationship between spectrotemporal modulation (STM) sensitivity and the ability to perceive music. Ten normal-nearing (NH) listeners, ten hearing aid (HA) users with moderate hearing loss, and ten cochlear Implant (CI) users participated in this study. Three different types of psychoacoustic tests including spectral modulation detection (SMD), temporal modulation detection (TMD), and STM were administered. Performances on these psychoacoustic tests were compared to music perception abilities. In addition, psychoacoustic mechanisms involved in the improvement of music perception through HA were evaluated. Music perception abilities in unaided and aided conditions were measured for HA users. After that, HA benefit for music perception was correlated with aided psychoacoustic performance. STM detection study showed that a combination of spectral and temporal modulation cues were more strongly correlated with music perception abilities than spectral or temporal modulation cues measured separately. No correlation was found between music perception performance and SMD threshold or TMD threshold in each group. Also, HA benefits for melody and timbre identification were significantly correlated with a combination of spectral and temporal envelope cues though HA."
},
{
"pmid": "33285082",
"title": "Quantitative Assessment of Learning and Retention in Virtual Vocal Function Exercises.",
"abstract": "Purpose Successful voice therapy requires the patient to learn new vocal behaviors, but little is currently known regarding how vocal motor skills are improved and retained. To quantitatively characterize the motor learning process in a clinically meaningful context, a virtual task was developed based on the Vocal Function Exercises. In the virtual task, subjects control a computational model of a ball floating on a column of airflow via modifications to mean airflow (L/s) and intensity (dB-C) to keep the ball within a target range representing a normative ratio (dB × s/L). Method One vocally healthy female and one female with nonphonotraumatic vocal hyperfunction practiced the task for 11 days and completed retention testing 1 and 6 months later. The mapping between the two execution variables (airflow and intensity) and one error measure (proximity to the normative ratio) was evaluated by quantifying distributional variability (tolerance cost and noise cost) and temporal variability (scaling index of detrended fluctuation analysis). Results Both subjects reduced their error over practice and retained their performance 6 months later. Tolerance cost and noise cost were positively correlated with decreases in error during early practice and late practice, respectively. After extended practice, temporal variability was modulated to align with the task's solution manifold. Conclusions These case studies illustrated, in a healthy control and a patient with nonphonotraumatic vocal hyperfunction, that the virtual floating ball task produces quantitative measures characterizing the learning process. Future work will further investigate the task's potential to enhance clinical assessment and treatments involving voice control. Supplemental Material https://doi.org/10.23641/asha.13322891."
},
{
"pmid": "33421942",
"title": "Frontotemporal dementia, music perception and social cognition share neurobiological circuits: A meta-analysis.",
"abstract": "Frontotemporal dementia (FTD) is a neurodegenerative disease that presents with profound changes in social cognition. Music might be a sensitive probe for social cognition abilities, but underlying neurobiological substrates are unclear. We performed a meta-analysis of voxel-based morphometry studies in FTD patients and functional MRI studies for music perception and social cognition tasks in cognitively normal controls to identify robust patterns of atrophy (FTD) or activation (music perception or social cognition). Conjunction analyses were performed to identify overlapping brain regions. In total 303 articles were included: 53 for FTD (n = 1153 patients, 42.5% female; 1337 controls, 53.8% female), 28 for music perception (n = 540, 51.8% female) and 222 for social cognition in controls (n = 5664, 50.2% female). We observed considerable overlap in atrophy patterns associated with FTD, and functional activation associated with music perception and social cognition, mostly encompassing the ventral language network. We further observed overlap across all three modalities in mesolimbic, basal forebrain and striatal regions. The results of our meta-analysis suggest that music perception and social cognition share neurobiological circuits that are affected in FTD. This supports the idea that music might be a sensitive probe for social cognition abilities with implications for diagnosis and monitoring."
},
{
"pmid": "30741908",
"title": "Hybrid Music Perception Outcomes: Implications for Melody and Timbre Recognition in Cochlear Implant Recipients.",
"abstract": "OBJECTIVE\nTo examine whether or not electric-acoustic music perception outcomes, observed in a recent Hybrid L24 clinical trial, were related to the availability of low-frequency acoustic cues not present in the electric domain.\n\n\nSTUDY DESIGN\nProspective, repeated-measures, within-subject design.\n\n\nSETTING\nAcademic research hospital.\n\n\nSUBJECTS\nNine normally hearing individuals.\n\n\nINTERVENTION\nSimulated electric-acoustic hearing in normally hearing individuals.\n\n\nMAIN OUTCOMES MEASURES\nAcutely measured melody and timbre recognition scores from the University of Washington Clinical Assessment of Music Perception (CAMP) test.\n\n\nRESULTS\nMelody recognition scores were consistently better for listening conditions that included low-frequency acoustic information. Mean scores for both acoustic (73.5%, S.D. = 15.5%) and electric-acoustic (67.9%, S.D. = 21.2%) conditions were significantly better (p < 0.001) than electric alone (39.2%, S.D. = 18.1%). This was not the case for timbre recognition for which scores were more variable across simulated listening modes with no significant differences found in mean scores across electric (36.1%, S.D. = 17.7%), acoustic (38.0%, S.D. = 20.4%), and electric-acoustic (40.7%, S.D. = 19.7%) conditions (p > 0.05).\n\n\nCONCLUSION\nRecipients of hybrid cochlear implants demonstrate music perception abilities superior to those observed in traditional cochlear implant recipients. Results from the present study support the notion that electric-acoustic stimulation confers advantages related to the availability of low-frequency acoustic hearing, most particularly for melody recognition. However, timbre recognition remains more limited for both hybrid and traditional cochlear implant users. Opportunities remain for new coding strategies to improve timbre perception."
},
{
"pmid": "30410461",
"title": "Linking Empowering Leadership to Task Performance, Taking Charge, and Voice: The Mediating Role of Feedback-Seeking.",
"abstract": "Drawing upon social exchange theory, the present study focuses on the role of feedback-seeking in linking empowering leadership to task performance, taking charge, and voice. We tested the hypothesized model using data from a sample of 32 supervisors and 197 their immediate subordinates. Performing CFA, SEM, and bootstrapping, the results revealed that: (1) empowering leadership was positively associated with followers' feedback-seeking; (2) employees' feedback-seeking was positively correlated with task performance, taking charge, and voice; and (3) employees' feedback-seeking mediated the positive relationships between empowering leadership and task performance, taking charge, and voice. We make conclusions by discussing the theoretical and practical implications of these findings, alongside a discussion of the present limitations and directions for future studies."
},
{
"pmid": "29593627",
"title": "The Bright, the Dark, and the Blue Face of Narcissism: The Spectrum of Narcissism in Its Relations to the Metatraits of Personality, Self-Esteem, and the Nomological Network of Shyness, Loneliness, and Empathy.",
"abstract": "Grandiose and vulnerable narcissism seem to be uncorrelated in empirical studies, yet they share at least some theoretical similarities. In the current study, we examine the relation between grandiose (conceptualized as admiration and rivalry) and vulnerable narcissism in the context of the Big Five personality traits and metatraits, self-esteem, and their nomological network. To this end, participants (N = 314) filled in a set of self-report measures via an online survey. Rivalry was positively linked with both admiration and vulnerable narcissism. We replicated the relations of admiration and rivalry with personality traits and metatraits-as well as extended existing knowledge by providing support for the theory that vulnerable narcissism is simultaneously negatively related to the Stability and Plasticity. Higher scores on vulnerable narcissism and rivalry predicted having fragile self-esteem, whereas high scores on admiration predicted having optimal self-esteem. The assumed relations with the nomological network were confirmed, i.e., vulnerable narcissism and admiration demonstrated a contradictory pattern of relation to shyness and loneliness, whilst rivalry predicted low empathy. Our results suggest that the rivalry is between vulnerable narcissism and admiration, which supports its localization in the self-importance dimension of the narcissism spectrum model. It was concluded that whereas admiration and rivalry represent the bright and dark face of narcissism, vulnerable narcissism represents its blue face."
},
{
"pmid": "32547458",
"title": "Audiovisual Modulation in Music Perception for Musicians and Non-musicians.",
"abstract": "In audiovisual music perception, visual information from a musical instrument being played is available prior to the onset of the corresponding musical sound and consequently allows a perceiver to form a prediction about the upcoming audio music. This prediction in audiovisual music perception, compared to auditory music perception, leads to lower N1 and P2 amplitudes and latencies. Although previous research suggests that audiovisual experience, such as previous musical experience may enhance this prediction, a remaining question is to what extent musical experience modifies N1 and P2 amplitudes and latencies. Furthermore, corresponding event-related phase modulations quantified as inter-trial phase coherence (ITPC) have not previously been reported for audiovisual music perception. In the current study, audio video recordings of a keyboard key being played were presented to musicians and non-musicians in audio only (AO), video only (VO), and audiovisual (AV) conditions. With predictive movements from playing the keyboard isolated from AV music perception (AV-VO), the current findings demonstrated that, compared to the AO condition, both groups had a similar decrease in N1 amplitude and latency, and P2 amplitude, along with correspondingly lower ITPC values in the delta, theta, and alpha frequency bands. However, while musicians showed lower ITPC values in the beta-band in AV-VO compared to the AO, non-musicians did not show this pattern. Findings indicate that AV perception may be broadly correlated with auditory perception, and differences between musicians and non-musicians further indicate musical experience to be a specific factor influencing AV perception. Predicting an upcoming sound in AV music perception may involve visual predictory processes, as well as beta-band oscillations, which may be influenced by years of musical training. This study highlights possible interconnectivity in AV perception as well as potential modulation with experience."
},
{
"pmid": "32038391",
"title": "Music Perception Testing Reveals Advantages and Continued Challenges for Children Using Bilateral Cochlear Implants.",
"abstract": "A modified version of the child's Montreal Battery of Evaluation of Amusia (cMBEA) was used to assess music perception in children using bilateral cochlear implants. Our overall aim was to promote better performance by children with CIs on the cMBEA by modifying the complement of instruments used in the test and adding pieces transposed in frequency. The 10 test trials played by piano were removed and two high and two low frequency trials added to each of five subtests (20 additional). The modified cMBEA was completed by 14 children using bilateral cochlear implants and 23 peers with normal hearing. Results were compared with performance on the original version of the cMBEA previously reported in groups of similar aged children: 2 groups with normal hearing (n = 23: Hopyan et al., 2012; n = 16: Polonenko et al., 2017), 1 group using bilateral cochlear implants (CIs) (n = 26: Polonenko et al., 2017), 1 group using bimodal (hearing aid and CI) devices (n = 8: Polonenko et al., 2017), and 1 group using unilateral CI (n = 23: Hopyan et al., 2012). Children with normal hearing had high scores on the modified version of the cMBEA and there were no significant score differences from children with normal hearing who completed the original cMBEA. Children with CIs showed no significant improvement in scores on the modified cMBEA compared to peers with CIs who completed the original version of the test. The group with bilateral CIs who completed the modified cMBEA showed a trend toward better abilities to remember music compared to children listening through a unilateral CI but effects were smaller than in previous cohorts of children with bilateral CIs and bimodal devices who completed the original cMBEA. Results confirmed that musical perception changes with the type of instrument and is better for music transposed to higher rather than lower frequencies for children with normal hearing but not for children using bilateral CIs. Overall, the modified version of the cMBEA revealed that modifications to music do not overcome the limitations of the CI to improve music perception for children."
},
{
"pmid": "30446036",
"title": "A Qualitative Study of the Effects of Hearing Loss and Hearing Aid Use on Music Perception in Performing Musicians.",
"abstract": "BACKGROUND\nHearing aids (HAs) are important for the rehabilitation of individuals with hearing loss. Although the rehabilitation of speech communication is well understood, less attention has been devoted to understanding hearing-impaired instrumentalists' needs to actively participate in music. Despite efforts to adjust HA settings for music acoustics, there lacks an understanding of instrumentalists' needs and if those HA adjustments satisfy their needs.\n\n\nPURPOSE\nThe purpose of the current study was to explore the challenges that adult HA-wearing instrumentalists face, which prevent them from listening, responding to, and performing music.\n\n\nRESEARCH DESIGN\nA qualitative methodology was employed with the use of semistructured interviews conducted with adult amateur instrumentalists.\n\n\nSTUDY SAMPLE\nTwelve HA users who were amateur ensemble instrumentalists (playing instruments from the percussion, wind, reed, brass, and string families) and between the ages of 55 and 83 years (seven men and five women) provided data for analysis in this study. Amateur in this context was defined as one who engaged mindfully in pursuit of an activity.\n\n\nDATA COLLECTION AND ANALYSIS\nSemistructured interviews were conducted using an open-ended interview guide. Interviews were recorded and transcribed verbatim. Transcripts were analyzed using conventional qualitative content analysis.\n\n\nRESULTS\nThree categories emerged from the data: (1) participatory needs, (2) effects of HA use, and (3) effects of hearing loss. Participants primarily used HAs to hear the conductor's instructions to meaningfully participate in music rehearsals. Effects of HA use fell within two subcategories: HA music sound quality and use of an HA music program. The effects of hearing loss fell within three subcategories: inability to identify missing information, affected music components, and nonauditory music perception strategies.\n\n\nCONCLUSIONS\nNot surprisingly, hearing-impaired instrumentalists face challenges participating in their music activities. However, although participants articulated ways in which HAs and hearing loss affect music perception, which in turn revealed perspectives toward listening using the auditory system and other sensory systems, the primary motivation for their HA use was the need to hear the conductor's directions. These findings suggest that providing hearing-impaired instrumentalists access to musical experience via participation should be prioritized above restoring the perception of musical descriptors. Future research is needed with instrumentalists who no longer listen to or perform music because of hearing loss, so that the relationship between musical auditory deficiencies and participation can be better explored."
},
{
"pmid": "30846958",
"title": "Effect of Narcissism, Psychopathy, and Machiavellianism on Entrepreneurial Intention-The Mediating of Entrepreneurial Self-Efficacy.",
"abstract": "The driving factors behind the exploration and search for entrepreneurial intention (EI) are critical to entrepreneurship education and entrepreneurial practice. To reveal in depth the influence of personality traits on EI, our study introduces the opposite of proactive personality-the dark triad that consists of narcissism, psychopathy and Machiavellianism. Our study used the MBA students of Tianjin University as a sample to analyze the relationship between the dark triad, entrepreneurial self-efficacy (ESE) and EI. A total of 334 MBA students aged 24-47 years participated and the participation rate is 95.71%. The data collection was largely concentrated in the period from May 15 to June 5, 2018. From the overall perspective of the dark triad, the results show that the dark triad positively predicts EI, and ESE has a partial mediating effect on the dark triad and EI. From the perspective of the three members of the dark triad, the study found that narcissism/psychopathy has a negative effect on ESE and EI; narcissism/psychopathy has a non-linear effect on EI; Machiavellianism has a positive effect on ESE and EI; and ESE has a mediating effect on the three members of the dark triad and EI. In short, our research reveals that the three members of the dark triad have different effects on EI in different cultural contexts, and the research findings have certain reference value for further improvement of entrepreneurship education and entrepreneurial practice."
},
{
"pmid": "31214081",
"title": "Gratifications for Social Media Use in Entrepreneurship Courses: Learners' Perspective.",
"abstract": "The purpose of this study is to explore uses and gratifications on social media in entrepreneurship courses from the learners' perspective. The respondents must have participated in government or private entrepreneurship courses and joined the online group of those courses. Respondents are not college students, but more entrepreneurs, and their multi-attribute makes the research results and explanatory more abundant. A total of 458 valid data was collected. The results of the survey revealed four gratification factors namely trust, profit, learning, and social in online entrepreneurial groups. It is also found that the structures and of the four gratification factors vary in three social media (Line, Facebook, and WeChat) and \"trust\" outranks other factors. Most of the entrepreneurs' business is \"networking business,\" and the business unit is mostly \"micro.\" In terms of the trust factor, there are significant differences among the three social media. In short, the two gratification factors of trust and profit can be seen as specific gratifications for online entrepreneurial groups, especially the trust factor, which deserves more attention in the further research of online entrepreneurial courses on social media."
},
{
"pmid": "33159212",
"title": "Cu2O-mediated assembly of electrodeposition of Au nanoparticles onto 2D metal-organic framework nanosheets for real-time monitoring of hydrogen peroxide released from living cells.",
"abstract": "The development of metal nanoparticles (MNP) combined with a metal-organic framework (MOF) has received more and more attention due to its excellent synergistic catalytic ability, which can effectively broaden the scope of catalytic reactions and enhance the catalytic ability. In this work, we developed a novel ternary nanocomposite named Cu2O-mediated Au nanoparticle (Au NP) grown on MIL-53(Fe) for real-time monitoring of hydrogen peroxide (H2O2) released from living cells. First, Cu2O-MIL-53(Fe) was prepared by redox assembly technology, which provided the growth template, and active sites for AuCl4-. Au@Cu2O-MIL-53(Fe)/GCE biosensor was prepared by further loading nano-Au uniformly on the surface of Cu2O by electrochemical deposition. Compared to individual components, the hybrid nanocomposite showed superior electrochemical properties as electrode materials due to the synergistic effect between AuNPs, Cu2O, and MIL-53(Fe). Electrochemical measurement showed that the Au@Cu2O-MIL-53(Fe)/GCE biosensor presented a satisfactory catalytic activity towards H2O2 with a low detection limit of 1.01 μM and sensitivity of 351.57 μA mM-1 cm-2 in the linear range of 10-1520 μM. Furthermore, this biosensor was successfully used for the real-time monitoring of dynamic H2O2 activated by PMA released from living cells. And the great results of confocal fluorescence microscopy of the co-culture cells with PMA and Au@Cu2O-MIL-53(Fe) verified the reliability of the biosensor, suggesting its potential application to the monitoring of critical pathological processes at the cellular level."
}
] |
Scientific Reports | 35197542 | PMC8866432 | 10.1038/s41598-022-07111-9 | Segmentation of pancreatic ductal adenocarcinoma (PDAC) and surrounding vessels in CT images using deep convolutional neural networks and texture descriptors | Fully automated and volumetric segmentation of critical tumors may play a crucial role in diagnosis and surgical planning. One of the most challenging tumor segmentation tasks is localization of pancreatic ductal adenocarcinoma (PDAC). Exclusive application of conventional methods does not appear promising. Deep learning approaches have achieved great success in computer-aided diagnosis, especially in biomedical image segmentation. This paper introduces a framework based on a convolutional neural network (CNN) for segmentation of the PDAC mass and surrounding vessels in CT images that also incorporates powerful classic features. First, a 3D-CNN architecture is used to localize the pancreas region from the whole CT volume using a 3D Local Binary Pattern (LBP) map of the original image. Segmentation of the PDAC mass is subsequently performed using a 2D attention U-Net and a Texture Attention U-Net (TAU-Net). TAU-Net is introduced by fusing dense Scale-Invariant Feature Transform (SIFT) and LBP descriptors into the attention U-Net. An ensemble model is then used to combine the advantages of both networks using a 3D-CNN. In addition, to reduce the effects of imbalanced data, a multi-objective loss function is proposed as a weighted combination of three classic losses: Generalized Dice Loss (GDL), Weighted Pixel-Wise Cross Entropy loss (WPCE) and boundary loss. Due to the insufficient sample size for vessel segmentation, we used the above-mentioned pre-trained networks and fine-tuned them. Experimental results show that the proposed method improves the Dice score for PDAC mass segmentation in the portal-venous phase by 7.52% compared to state-of-the-art methods. Besides, three-dimensional visualization of the tumor and surrounding vessels can facilitate the evaluation of PDAC treatment response. | Related works

Attention U-Net

The U-Net architecture has been shown to achieve accurate and robust performance in medical image segmentation19. Attention Gates (AG) were introduced that can be utilized in CNN frameworks for dense label prediction to focus on target structures without additional supervision17. Attention U-Net, an extension of the standard U-Net, applies a self-soft-attention technique in a feed-forward CNN model for medical image segmentation18. The sensitivity of the model to target pixels is improved without relying on complicated heuristics. Using the gating module, salient features are highlighted while noisy and irrelevant responses are suppressed. These operations are performed just before each skip connection to fuse the relevant features. High-level and low-level feature maps are fed into the AG module according to Eq. (1):

$${\varphi}_{AG} = f_{\varphi}(x) = \mathrm{Sigmoid}\left(W^{T} \cdot \mathrm{ReLU}\left(x_{l} \oplus x_{h}\right)\right) \qquad (1)$$

where $\varphi_{AG}$ is called the attention coefficient, $W^{T}$ denotes a linear transformation, $x_{l}$ and $x_{h}$ represent the low-level and high-level feature maps respectively, $\mathrm{Sigmoid}(x) = 1/(1+e^{-x})$, and $\oplus$ denotes matrix addition. Attention features are obtained from the element-wise product of the low-level feature map and the attention coefficients [Eq. (2)]:

$${\widehat{x}}_{l} = {\varphi}_{AG} \odot x_{l} \qquad (2)$$

where $\widehat{x}_{l}$ is the attention feature, $\odot$ represents the element-wise product and $\varphi_{AG}$ is the attention coefficient of the higher-level features. These attention features are subsequently concatenated with high-level features extracted from the network in the decoder path (a short illustrative code sketch of this gating operation is given after this entry's reference information).

Dense SIFT

U-Net and Attention U-Net, as classical semantic segmentation networks, are not sufficiently effective for the challenging pancreatic tumor segmentation task8. Because the labelling process is difficult and yields a somewhat small sample size, these networks may not benefit from a large number of layers and channels. To solve this issue, we proposed an approach that combines selected traditional salient features with high-level features in the decoder path.

One of the most popular feature space transforms, proposed by Lowe34, is the SIFT descriptor, which includes two stages: key-point detection and description. The SIFT algorithm is invariant to rotation, translation and scaling transformations. This property is obtained by characterizing local gradient information around a corresponding detected point of interest. The original SIFT is a sparse feature representation of an image, whereas dense feature representation is preferred in pixel classification. Liu et al.30 proposed the dense SIFT descriptor for object recognition and registration, which eliminates the feature detection stage while preserving the pixel-wise feature descriptor.

2D and 3D local binary pattern (LBP)

We can extend the previous idea of combining classical features with CNN-extracted features. Two-dimensional LBP, presented by Ojala et al.35, is an effective texture descriptor that characterizes local texture patterns by encoding pixel values, providing strong rotation and gray-level invariance. Banerjee et al.31 introduced a rotationally invariant three-dimensional LBP algorithm in which the invariants are constructed implicitly using spherical harmonics for increasing spatial support. A non-Gaussian statistical measure (kurtosis) was used due to the loss of phase information. The number of obtained variables was equivalent to the number of spherical harmonics plus the kurtosis term. This method can locally estimate the invariant features, which is useful in describing small patterns. | [
"24354378",
"27310171",
"33501273",
"27831881",
"23744670",
"25020069",
"21997252",
"31035060",
"29994628",
"30998461",
"31493112",
"33414495",
"16545965",
"30092410"
] | [
{
"pmid": "24354378",
"title": "Pancreatic ductal adenocarcinoma radiology reporting template: consensus statement of the Society of Abdominal Radiology and the American Pancreatic Association.",
"abstract": "Pancreatic ductal adenocarcinoma is an aggressive malignancy with a high mortality rate. Proper determination of the extent of disease on imaging studies at the time of staging is one of the most important steps in optimal patient management. Given the variability in expertise and definition of disease extent among different practitioners as well as frequent lack of complete reporting of pertinent imaging findings at radiologic examinations, adoption of a standardized template for radiology reporting, using universally accepted and agreed on terminology for solid pancreatic neoplasms, is needed. A consensus statement describing a standardized reporting template authored by a multi-institutional group of experts in pancreatic ductal adenocarcinoma that included radiologists, gastroenterologists, and hepatopancreatobiliary surgeons was developed under the joint sponsorship of the Society of Abdominal Radiologists and the American Pancreatic Association. Adoption of this standardized imaging reporting template should improve the decision-making process for the management of patients with pancreatic ductal adenocarcinoma by providing a complete, pertinent, and accurate reporting of disease staging to optimize treatment recommendations that can be offered to the patient. Standardization can also help to facilitate research and clinical trial design by using appropriate and consistent staging by means of resectability status, thus allowing for comparison of results among different institutions."
},
{
"pmid": "27310171",
"title": "Brain tumor segmentation with Deep Neural Networks.",
"abstract": "In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we've found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster."
},
{
"pmid": "33501273",
"title": "Improving CT Image Tumor Segmentation Through Deep Supervision and Attentional Gates.",
"abstract": "Computer Tomography (CT) is an imaging procedure that combines many X-ray measurements taken from different angles. The segmentation of areas in the CT images provides a valuable aid to physicians and radiologists in order to better provide a patient diagnose. The CT scans of a body torso usually include different neighboring internal body organs. Deep learning has become the state-of-the-art in medical image segmentation. For such techniques, in order to perform a successful segmentation, it is of great importance that the network learns to focus on the organ of interest and surrounding structures and also that the network can detect target regions of different sizes. In this paper, we propose the extension of a popular deep learning methodology, Convolutional Neural Networks (CNN), by including deep supervision and attention gates. Our experimental evaluation shows that the inclusion of attention and deep supervision results in consistent improvement of the tumor prediction accuracy across the different datasets and training sizes while adding minimal computational overhead."
},
{
"pmid": "27831881",
"title": "A Bottom-Up Approach for Pancreas Segmentation Using Cascaded Superpixels and (Deep) Image Patch Labeling.",
"abstract": "Robust organ segmentation is a prerequisite for computer-aided diagnosis, quantitative imaging analysis, pathology detection, and surgical assistance. For organs with high anatomical variability (e.g., the pancreas), previous segmentation approaches report low accuracies, compared with well-studied organs, such as the liver or heart. We present an automated bottom-up approach for pancreas segmentation in abdominal computed tomography (CT) scans. The method generates a hierarchical cascade of information propagation by classifying image patches at different resolutions and cascading (segments) superpixels. The system contains four steps: 1) decomposition of CT slice images into a set of disjoint boundary-preserving superpixels; 2) computation of pancreas class probability maps via dense patch labeling; 3) superpixel classification by pooling both intensity and probability features to form empirical statistics in cascaded random forest frameworks; and 4) simple connectivity based post-processing. Dense image patch labeling is conducted using two methods: efficient random forest classification on image histogram, location and texture features; and more expensive (but more accurate) deep convolutional neural network classification, on larger image windows (i.e., with more spatial contexts). Over-segmented 2-D CT slices by the simple linear iterative clustering approach are adopted through model/parameter calibration and labeled at the superpixel level for positive (pancreas) or negative (non-pancreas or background) classes. The proposed method is evaluated on a data set of 80 manually segmented CT volumes, using six-fold cross-validation. Its performance equals or surpasses other state-of-the-art methods (evaluated by \"leave-one-patient-out\"), with a dice coefficient of 70.7% and Jaccard index of 57.9%. In addition, the computational efficiency has improved significantly, requiring a mere 6 ~ 8 min per testing case, versus ≥ 10 h for other methods. The segmentation framework using deep patch labeling confidences is also more numerically stable, as reflected in the smaller performance metric standard deviations. Finally, we implement a multi-atlas label fusion (MALF) approach for pancreas segmentation using the same data set. Under six-fold cross-validation, our bottom-up segmentation method significantly outperforms its MALF counterpart: 70.7±13.0% versus 52.51±20.84% in dice coefficients."
},
{
"pmid": "23744670",
"title": "Automated abdominal multi-organ segmentation with subject-specific atlas generation.",
"abstract": "A robust automated segmentation of abdominal organs can be crucial for computer aided diagnosis and laparoscopic surgery assistance. Many existing methods are specialized to the segmentation of individual organs and struggle to deal with the variability of the shape and position of abdominal organs. We present a general, fully-automated method for multi-organ segmentation of abdominal computed tomography (CT) scans. The method is based on a hierarchical atlas registration and weighting scheme that generates target specific priors from an atlas database by combining aspects from multi-atlas registration and patch-based segmentation, two widely used methods in brain segmentation. The final segmentation is obtained by applying an automatically learned intensity model in a graph-cuts optimization step, incorporating high-level spatial knowledge. The proposed approach allows to deal with high inter-subject variation while being flexible enough to be applied to different organs. We have evaluated the segmentation on a database of 150 manually segmented CT images. The achieved results compare well to state-of-the-art methods, that are usually tailored to more specific questions, with Dice overlap values of 94%, 93%, 70%, and 92% for liver, kidneys, pancreas, and spleen, respectively."
},
{
"pmid": "25020069",
"title": "A generic approach to pathological lung segmentation.",
"abstract": "In this study, we propose a novel pathological lung segmentation method that takes into account neighbor prior constraints and a novel pathology recognition system. Our proposed framework has two stages; during stage one, we adapted the fuzzy connectedness (FC) image segmentation algorithm to perform initial lung parenchyma extraction. In parallel, we estimate the lung volume using rib-cage information without explicitly delineating lungs. This rudimentary, but intelligent lung volume estimation system allows comparison of volume differences between rib cage and FC based lung volume measurements. Significant volume difference indicates the presence of pathology, which invokes the second stage of the proposed framework for the refinement of segmented lung. In stage two, texture-based features are utilized to detect abnormal imaging patterns (consolidations, ground glass, interstitial thickening, tree-inbud, honeycombing, nodules, and micro-nodules) that might have been missed during the first stage of the algorithm. This refinement stage is further completed by a novel neighboring anatomy-guided segmentation approach to include abnormalities with weak textures, and pleura regions. We evaluated the accuracy and efficiency of the proposed method on more than 400 CT scans with the presence of a wide spectrum of abnormalities. To our best of knowledge, this is the first study to evaluate all abnormal imaging patterns in a single segmentation framework. The quantitative results show that our pathological lung segmentation method improves on current standards because of its high sensitivity and specificity and may have considerable potential to enhance the performance of routine clinical tasks."
},
{
"pmid": "21997252",
"title": "Supervoxel-based segmentation of mitochondria in em image stacks with learned shape features.",
"abstract": "It is becoming increasingly clear that mitochondria play an important role in neural function. Recent studies show mitochondrial morphology to be crucial to cellular physiology and synaptic function and a link between mitochondrial defects and neuro-degenerative diseases is strongly suspected. Electron microscopy (EM), with its very high resolution in all three directions, is one of the key tools to look more closely into these issues but the huge amounts of data it produces make automated analysis necessary. State-of-the-art computer vision algorithms designed to operate on natural 2-D images tend to perform poorly when applied to EM data for a number of reasons. First, the sheer size of a typical EM volume renders most modern segmentation schemes intractable. Furthermore, most approaches ignore important shape cues, relying only on local statistics that easily become confused when confronted with noise and textures inherent in the data. Finally, the conventional assumption that strong image gradients always correspond to object boundaries is violated by the clutter of distracting membranes. In this work, we propose an automated graph partitioning scheme that addresses these issues. It reduces the computational complexity by operating on supervoxels instead of voxels, incorporates shape features capable of describing the 3-D shape of the target objects, and learns to recognize the distinctive appearance of true boundaries. Our experiments demonstrate that our approach is able to segment mitochondria at a performance level close to that of a human annotator, and outperforms a state-of-the-art 3-D segmentation technique."
},
{
"pmid": "31035060",
"title": "Abdominal multi-organ segmentation with organ-attention networks and statistical fusion.",
"abstract": "Accurate and robust segmentation of abdominal organs on CT is essential for many clinical applications such as computer-aided diagnosis and computer-aided surgery. But this task is challenging due to the weak boundaries of organs, the complexity of the background, and the variable sizes of different organs. To address these challenges, we introduce a novel framework for multi-organ segmentation of abdominal regions by using organ-attention networks with reverse connections (OAN-RCs) which are applied to 2D views, of the 3D CT volume, and output estimates which are combined by statistical fusion exploiting structural similarity. More specifically, OAN is a two-stage deep convolutional network, where deep network features from the first stage are combined with the original image, in a second stage, to reduce the complex background and enhance the discriminative information for the target organs. Intuitively, OAN reduces the effect of the complex background by focusing attention so that each organ only needs to be discriminated from its local background. RCs are added to the first stage to give the lower layers more semantic information thereby enabling them to adapt to the sizes of different organs. Our networks are trained on 2D views (slices) enabling us to use holistic information and allowing efficient computation (compared to using 3D patches). To compensate for the limited cross-sectional information of the original 3D volumetric CT, e.g., the connectivity between neighbor slices, multi-sectional images are reconstructed from the three different 2D view directions. Then we combine the segmentation results from the different views using statistical fusion, with a novel term relating the structural similarity of the 2D views to the original 3D structure. To train the network and evaluate results, 13 structures were manually annotated by four human raters and confirmed by a senior expert on 236 normal cases. We tested our algorithm by 4-fold cross-validation and computed Dice-Sørensen similarity coefficients (DSC) and surface distances for evaluating our estimates of the 13 structures. Our experiments show that the proposed approach gives strong results and outperforms 2D- and 3D-patch based state-of-the-art methods in terms of DSC and mean surface distances."
},
{
"pmid": "29994628",
"title": "Automatic Multi-Organ Segmentation on Abdominal CT With Dense V-Networks.",
"abstract": "Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs that are relevant for navigation in endoscopic pancreatic and biliary procedures, including the pancreas, the gastrointestinal tract (esophagus, stomach, and duodenum) and surrounding organs (liver, spleen, left kidney, and gallbladder). We directly compared the segmentation accuracy of the proposed method to the existing deep learning and MALF methods in a cross-validation on a multi-centre data set with 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 versus 0.71, 0.74, and 0.74 for the pancreas, 0.90 versus 0.85, 0.87, and 0.83 for the stomach, and 0.76 versus 0.68, 0.69, and 0.66 for the esophagus. We conclude that the deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures."
},
{
"pmid": "30998461",
"title": "Deep Q Learning Driven CT Pancreas Segmentation With Geometry-Aware U-Net.",
"abstract": "The segmentation of pancreas is important for medical image analysis, yet it faces great challenges of class imbalance, background distractions, and non-rigid geometrical features. To address these difficulties, we introduce a deep Q network (DQN) driven approach with deformable U-Net to accurately segment the pancreas by explicitly interacting with contextual information and extract anisotropic features from pancreas. The DQN-based model learns a context-adaptive localization policy to produce a visually tightened and precise localization bounding box of the pancreas. Furthermore, deformable U-Net captures geometry-aware information of pancreas by learning geometrically deformable filters for feature extraction. The experiments on NIH dataset validate the effectiveness of the proposed framework in pancreas segmentation."
},
{
"pmid": "31493112",
"title": "Abdominal artery segmentation method from CT volumes using fully convolutional neural network.",
"abstract": "PURPOSE : The purpose of this paper is to present a fully automated abdominal artery segmentation method from a CT volume. Three-dimensional (3D) blood vessel structure information is important for diagnosis and treatment. Information about blood vessels (including arteries) can be used in patient-specific surgical planning and intra-operative navigation. Since blood vessels have large inter-patient variations in branching patterns and positions, a patient-specific blood vessel segmentation method is necessary. Even though deep learning-based segmentation methods provide good segmentation accuracy among large organs, small organs such as blood vessels are not well segmented. We propose a deep learning-based abdominal artery segmentation method from a CT volume. Because the artery is one of small organs that is difficult to segment, we introduced an original training sample generation method and a three-plane segmentation approach to improve segmentation accuracy. METHOD : Our proposed method segments abdominal arteries from an abdominal CT volume with a fully convolutional network (FCN). To segment small arteries, we employ a 2D patch-based segmentation method and an area imbalance reduced training patch generation (AIRTPG) method. AIRTPG adjusts patch number imbalances between patches with artery regions and patches without them. These methods improved the segmentation accuracies of small artery regions. Furthermore, we introduced a three-plane segmentation approach to obtain clear 3D segmentation results from 2D patch-based processes. In the three-plane approach, we performed three segmentation processes using patches generated on axial, coronal, and sagittal planes and combined the results to generate a 3D segmentation result. RESULTS : The evaluation results of the proposed method using 20 cases of abdominal CT volumes show that the averaged F-measure, precision, and recall rates were 87.1%, 85.8%, and 88.4%, respectively. This result outperformed our previous automated FCN-based segmentation method. Our method offers competitive performance compared to the previous blood vessel segmentation methods from 3D volumes. CONCLUSIONS : We developed an abdominal artery segmentation method using FCN. The 2D patch-based and AIRTPG methods effectively segmented the artery regions. In addition, the three-plane approach generated good 3D segmentation results."
},
{
"pmid": "33414495",
"title": "Plasma Hsp90 levels in patients with systemic sclerosis and relation to lung and skin involvement: a cross-sectional and longitudinal study.",
"abstract": "Our previous study demonstrated increased expression of Heat shock protein (Hsp) 90 in the skin of patients with systemic sclerosis (SSc). We aimed to evaluate plasma Hsp90 in SSc and characterize its association with SSc-related features. Ninety-two SSc patients and 92 age-/sex-matched healthy controls were recruited for the cross-sectional analysis. The longitudinal analysis comprised 30 patients with SSc associated interstitial lung disease (ILD) routinely treated with cyclophosphamide. Hsp90 was increased in SSc compared to healthy controls. Hsp90 correlated positively with C-reactive protein and negatively with pulmonary function tests: forced vital capacity and diffusing capacity for carbon monoxide (DLCO). In patients with diffuse cutaneous (dc) SSc, Hsp90 positively correlated with the modified Rodnan skin score. In SSc-ILD patients treated with cyclophosphamide, no differences in Hsp90 were found between baseline and after 1, 6, or 12 months of therapy. However, baseline Hsp90 predicts the 12-month change in DLCO. This study shows that Hsp90 plasma levels are increased in SSc patients compared to age-/sex-matched healthy controls. Elevated Hsp90 in SSc is associated with increased inflammatory activity, worse lung functions, and in dcSSc, with the extent of skin involvement. Baseline plasma Hsp90 predicts the 12-month change in DLCO in SSc-ILD patients treated with cyclophosphamide."
},
{
"pmid": "16545965",
"title": "User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability.",
"abstract": "Active contour segmentation and its robust implementation using level set methods are well-established theoretical approaches that have been studied thoroughly in the image analysis literature. Despite the existence of these powerful segmentation methods, the needs of clinical research continue to be fulfilled, to a large extent, using slice-by-slice manual tracing. To bridge the gap between methodological advances and clinical routine, we developed an open source application called ITK-SNAP, which is intended to make level set segmentation easily accessible to a wide range of users, including those with little or no mathematical expertise. This paper describes the methods and software engineering philosophy behind this new tool and provides the results of validation experiments performed in the context of an ongoing child autism neuroimaging study. The validation establishes SNAP intrarater and interrater reliability and overlap error statistics for the caudate nucleus and finds that SNAP is a highly reliable and efficient alternative to manual tracing. Analogous results for lateral ventricle segmentation are provided."
},
{
"pmid": "30092410",
"title": "A systematic study of the class imbalance problem in convolutional neural networks.",
"abstract": "In this study, we systematically investigate the impact of class imbalance on classification performance of convolutional neural networks (CNNs) and compare frequently used methods to address the issue. Class imbalance is a common problem that has been comprehensively studied in classical machine learning, yet very limited systematic research is available in the context of deep learning. In our study, we use three benchmark datasets of increasing complexity, MNIST, CIFAR-10 and ImageNet, to investigate the effects of imbalance on classification and perform an extensive comparison of several methods to address the issue: oversampling, undersampling, two-phase training, and thresholding that compensates for prior class probabilities. Our main evaluation metric is area under the receiver operating characteristic curve (ROC AUC) adjusted to multi-class tasks since overall accuracy metric is associated with notable difficulties in the context of imbalanced data. Based on results from our experiments we conclude that (i) the effect of class imbalance on classification performance is detrimental; (ii) the method of addressing class imbalance that emerged as dominant in almost all analyzed scenarios was oversampling; (iii) oversampling should be applied to the level that completely eliminates the imbalance, whereas the optimal undersampling ratio depends on the extent of imbalance; (iv) as opposed to some classical machine learning models, oversampling does not cause overfitting of CNNs; (v) thresholding should be applied to compensate for prior class probabilities when overall number of properly classified cases is of interest."
}
] |
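The additive attention gate summarized in Eqs. (1)-(2) of the preceding entry's related-work text can be made concrete with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the 1x1 convolutions standing in for the linear transformation $W^{T}$, the channel sizes, and the assumption that the low-level and high-level feature maps have already been resampled to the same spatial resolution are all choices made here for brevity.

```python
# Minimal sketch of an additive attention gate (Eqs. (1)-(2) above), written in PyTorch.
# Assumes x_low and x_high share the same spatial size; channel counts are illustrative.
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    def __init__(self, low_ch: int, high_ch: int, inter_ch: int):
        super().__init__()
        # 1x1 convolutions play the role of the per-pixel linear transformation W^T
        self.w_low = nn.Conv2d(low_ch, inter_ch, kernel_size=1)
        self.w_high = nn.Conv2d(high_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)  # collapse to one attention map

    def forward(self, x_low: torch.Tensor, x_high: torch.Tensor) -> torch.Tensor:
        # Eq. (1): phi_AG = Sigmoid(W^T . ReLU(x_l (+) x_h))
        phi = torch.sigmoid(self.psi(torch.relu(self.w_low(x_low) + self.w_high(x_high))))
        # Eq. (2): x_hat_l = phi_AG (.) x_l; the single-channel gate broadcasts over channels
        return phi * x_low


if __name__ == "__main__":
    gate = AttentionGate(low_ch=64, high_ch=128, inter_ch=32)
    x_l = torch.randn(1, 64, 32, 32)   # skip-connection (low-level) features
    x_h = torch.randn(1, 128, 32, 32)  # gating (high-level) features, resampled to 32x32
    print(gate(x_l, x_h).shape)        # torch.Size([1, 64, 32, 32])
```

The 2D LBP map that TAU-Net is described as fusing into the attention U-Net can be computed with an off-the-shelf routine such as skimage.feature.local_binary_pattern(image, P=8, R=1, method="uniform"); the rotation-invariant 3D LBP of Banerjee et al. is not part of standard libraries and would need a dedicated implementation.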
Frontiers in Psychology | null | PMC8866447 | 10.3389/fpsyg.2021.755039 | Interactive Design Psychology and Artificial Intelligence-Based Innovative Exploration of Anglo-American Traumatic Narrative Literature | The advent of the intelligence age has injected new elements into the development of literature. The synergic modification of Anglo-American (AAL) traumatic narrative (TN) literature by artificial intelligence (AI) technology and interactive design (ID) psychology will produce new possibilities in literary creation. First, by studying natural language processing (NLP) technology, this study proposes a modification language model (LM) based on the double-layered recurrent neural network (RNN) algorithm and constructs an intelligent language modification system based on the improved LM model. The results show that the performance of the proposed model is excellent; only about 30% of the respondents like AAL literature; the lack of common cultural background, appreciation difficulties, and language barriers have become the main reasons for the decline of reading willingness of AAL literature. Finally, AI technology and ID psychology are used to modify a famous TN work respectively and synergically, and the modified work is appreciated by respondents to collect their comments. The results corroborate that 62% of the respondents like original articles, but their likability scores have decreased for individually modified work by AI or ID psychology. In comparison, under the synergic modification efforts of AI and ID psychology, the popularity of the modified work has increased slightly, with 65% of the respondents showing a likability to read. Therefore, it is concluded that literary modification by single ID psychology or AI technology will reduce the reading threshold by trading off the literary value of the original work. The core of literary creation depends on human intelligence, and AI might still not be able to generate high-standard literary works independently because human minds and thoughts cannot be controlled and predicted by machines. The research results provide new ideas and improvement directions for the field of AI-assisted writing. | Related WorkThis section is divided into two parts, namely, the current trend of digital reading and the research status of the synergy of AI technology in ID psychology.With the development of Mobile Internet (MI) technology and the widespread intelligent terminals, digital reading has gradually become a popular reading style. Many researchers have studied the digital reading status of different groups. For example, Banni et al. (2019) analyzed the demand and usage of digital cultural resources of migrant workers based on QS and interviews, revealing their major digital readings as current political news, e-books, music, movies, entertainment, and sports (Banni et al., 2019). Nazari et al. (2021) investigated the current situation of digital reading of teenagers and uncovered that the overall extracurricular reading time of students was decreasing, and the time spent on digital reading was increasing remarkably; the most favorite online activities of students were entertainment; reading habits of students had been reshaped by the current MI technology. The reading methods of people had been gradually changed into digital reading, and word-by-word reading had been shifted into hypertext-based skimming (Nazari et al., 2021).Norqulovna et al. 
(2021) studied the contents related to ID psychology, sociology, and technical science and explored human psychology and machine capability (Norqulovna et al., 2021). As for research on AI, as early as the 1950s some researchers predicted that AI would gradually permeate human production and life over the following 50 years, a prediction that gave rise to differing opinions and factions; these factions, and the era of AI they anticipate, have been described in earlier work. Mu (2021) discussed the impact of the continuous development of Science and Technology (S&T) on human society; applied some AI algorithms in real environments; studied AI from the perspectives of philosophy, sociology, psychology, and neurology; and put forward a new perspective on the trend toward the "singularity." That work also examined the impact of several AI-inspired S&T phenomena worldwide (Mu, 2021). To sum up, research both in China and abroad offers a variety of work on AI in informatics and philosophy, whereas little research has been conducted on the combination of AI and ID. It is believed that, as S&T matures further, AI will inevitably intervene in ID. This study confines its investigation and speculation to a limited scope, because the technological revolution is a dynamic process that will not remain within the current range of human thinking for long. | [
"33286673",
"32800305",
"32668290",
"31649581",
"33800724",
"34013342",
"33569767",
"31074642",
"34027198",
"34609344",
"29027871",
"29792561",
"33882053",
"30410461",
"31257586",
"29593627",
"30846958",
"31214081",
"33237926"
] | [
{
"pmid": "33286673",
"title": "Classification of Literary Works: Fractality and Complexity of the Narrative, Essay, and Research Article.",
"abstract": "A complex network as an abstraction of a language system has attracted much attention during the last decade. Linguistic typological research using quantitative measures is a current research topic based on the complex network approach. This research aims at showing the node degree, betweenness, shortest path length, clustering coefficient, and nearest neighbourhoods' degree, as well as more complex measures such as: the fractal dimension, the complexity of a given network, the Area Under Box-covering, and the Area Under the Robustness Curve. The literary works of Mexican writers were classify according to their genre. Precisely 87% of the full word co-occurrence networks were classified as a fractal. Also, empirical evidence is presented that supports the conjecture that lemmatisation of the original text is a renormalisation process of the networks that preserve their fractal property and reveal stylistic attributes by genre."
},
{
"pmid": "32800305",
"title": "Predictors of Dropout in Cognitive Processing Therapy for PTSD: An Examination of Trauma Narrative Content.",
"abstract": "Dropout rates in trauma-focused treatments for adult posttraumatic stress disorder (PTSD) are high. Most research has focused on demographic and pretreatment predictors of dropout, but findings have been inconsistent. We examined predictors of dropout in cognitive processing therapy (CPT) by coding the content of trauma narratives written in early sessions of CPT. Data are from a randomized controlled noninferiority trial of CPT and written exposure therapy (WET) in which CPT showed significantly higher dropout rates than WET (39.7% CPT vs. 6.4% WET). Participants were 51 adults with a primary diagnosis of PTSD who were receiving CPT and completed at least one of three narratives in the early sessions of CPT. Sixteen (31%) in this subsample were classified as dropouts and 35 as completers. An additional 9 participants dropped out but could not be included because they did not complete any narratives. Of the 11 participants who provided a reason for dropout, 82% reported that CPT was too distressing. The CHANGE coding system was used to code narratives for pathological trauma responses (cognitions, emotions, physiological responses) and maladaptive modes of processing (avoidance, ruminative processing, overgeneralization), each on a scale from 0 (absent) to 3 (high). Binary logistic regressions showed that, averaging across all available narratives, more negative emotions described during or around the time of the trauma predicted less dropout. More ruminative processing in the present time frame predicted lower rates of dropout, whereas more overgeneralized beliefs predicted higher rates. In the first impact statement alone, more negative emotions in the present time frame predicted lower dropout rates, but when emotional reactions had a physiological impact, dropout was higher. These findings suggest clinicians might attend to clients' written trauma narratives in CPT in order to identify indicators of dropout risk and to help increase engagement."
},
{
"pmid": "32668290",
"title": "Rhupus: a systematic literature review.",
"abstract": "\"Rhupus\" or \"rhupus syndrome\" is a poorly described and underdiagnosed disease in which features of both rheumatoid arthritis (RA) and systemic lupus erythematosus (SLE) appear in the same patient, most often sequentially. The SLE-related involvement is usually mild, dominated by hematological abnormalities and skin, serosal and renal involvement. The natural history of rhupus arthritis follows an RA-like pattern and can progress towards typical inflammatory erosions, deformations and disability. Despite the lack of consensus on the definition of rhupus and on its place in the spectrum of autoimmunity, a growing number of studies are pointing towards a true overlap between RA and SLE. However, the inclusion criteria employed in the literature during the last 4 decades are heterogeneous, making the already rare cohorts and case reports difficult to analyze. Because of this heterogeneity and due to the rarity of the disease, the prevalence, pathophysiology and natural history as well as the radiological and immunological profiles of rhupus are poorly described. Moreover, since there is no validated therapeutic strategy, treatment is based on clinicians' experience and on the results of a few studies. We herein present a systematic literature review to analyze the clinical and laboratory data of all reported rhupus patients and to provide up-to-date information about recent advances in the understanding of the pathophysiological mechanisms, diagnostic tools and treatment options."
},
{
"pmid": "31649581",
"title": "The Impact of Expatriates' Cross-Cultural Adjustment on Work Stress and Job Involvement in the High-Tech Industry.",
"abstract": "The personal traits of expatriates influence their work performance in a subsidiary. Nevertheless, organizations tend to hire candidates who are suitable from the technological dimension but ignore personal and family factors. Expatriates might not be familiar with a foreign place, and most organizations do not provide the so-called cultural adjustment training. The selected expatriates often accept the job without knowing the future prospects of their career, which can result in individual and family turmoil initially. Moreover, the unknown future career prospects and concern over when they will return to the parent company can affect expatriates' work. Cross-cultural competence refers to the ability of individuals to work effectively and live normally in different cultural contexts, and this ability requires expatriate employees to adopt adaptive thinking patterns and behaviors in the host country. To explore the effect of expatriates' cross-culture adjustment on their work stress and job involvement, this study therefore uses an empirical approach in which data are collected with a questionnaire survey and proposes specific suggestions, according to the results, to aid expatriates in their personal psychological adjustment. The results show that the challenges faced by expatriate employees are derived from assigned tasks, unknown environments, language barriers, and cultural differences. Excessive pressure will impose ideological and psychological burdens upon the expatriates and even lead to physical symptoms, however, the appropriate amount of pressure can play a driving role and promote the smooth progress of the work. High-tech industry employees who can adapt to the customs and cultures of foreign countries have higher work participation and are more likely to find ways to alleviate work stress. It has also been found that the stronger the cross-cultural competence of employees, the better their adjustment to the host country and the higher their corresponding job performance."
},
{
"pmid": "33800724",
"title": "Information Theory for Agents in Artificial Intelligence, Psychology, and Economics.",
"abstract": "This review looks at some of the central relationships between artificial intelligence, psychology, and economics through the lens of information theory, specifically focusing on formal models of decision-theory. In doing so we look at a particular approach that each field has adopted and how information theory has informed the development of the ideas of each field. A key theme is expected utility theory, its connection to information theory, the Bayesian approach to decision-making and forms of (bounded) rationality. What emerges from this review is a broadly unified formal perspective derived from three very different starting points that reflect the unique principles of each field. Each of the three approaches reviewed can, in principle at least, be implemented in a computational model in such a way that, with sufficient computational power, they could be compared with human abilities in complex tasks. However, a central critique that can be applied to all three approaches was first put forward by Savage in The Foundations of Statistics and recently brought to the fore by the economist Binmore: Bayesian approaches to decision-making work in what Savage called 'small worlds' but cannot work in 'large worlds'. This point, in various different guises, is central to some of the current debates about the power of artificial intelligence and its relationship to human-like learning and decision-making. Recent work on artificial intelligence has gone some way to bridging this gap but significant questions remain to be answered in all three fields in order to make progress in producing realistic models of human decision-making in the real world in which we live in."
},
{
"pmid": "34013342",
"title": "Extensive Cortical Connectivity of the Human Hippocampal Memory System: Beyond the \"What\" and \"Where\" Dual Stream Model.",
"abstract": "The human hippocampus is involved in forming new memories: damage impairs memory. The dual stream model suggests that object \"what\" representations from ventral stream temporal cortex project to the hippocampus via the perirhinal and then lateral entorhinal cortex, and spatial \"where\" representations from the dorsal parietal stream via the parahippocampal gyrus and then medial entorhinal cortex. The hippocampus can then associate these inputs to form episodic memories of what happened where. Diffusion tractography was used to reveal the direct connections of hippocampal system areas in humans. This provides evidence that the human hippocampus has extensive direct cortical connections, with connections that bypass the entorhinal cortex to connect with the perirhinal and parahippocampal cortex, with the temporal pole, with the posterior and retrosplenial cingulate cortex, and even with early sensory cortical areas. The connections are less hierarchical and segregated than in the dual stream model. This provides a foundation for a conceptualization for how the hippocampal memory system connects with the cerebral cortex and operates in humans. One implication is that prehippocampal cortical areas such as the parahippocampal TF and TH subregions and perirhinal cortices may implement specialized computations that can benefit from inputs from the dorsal and ventral streams."
},
{
"pmid": "33569767",
"title": "Hick's law equivalent for reaction time to individual stimuli.",
"abstract": "Hick's law, one of the few law-like relationships involving human performance, expresses choice reaction time as a linear function of the mutual information between the stimulus and response events. However, since this law was first proposed in 1952, its validity has been challenged by the fact that it only holds for the overall reaction time (RT) across all the stimuli, and does not hold for the reaction time (RTi ) for each individual stimulus. This paper introduces a new formulation in which RTi is a linear function of (1) the mutual information between the event that stimulus i occurs and the set of all potential response events and (2) the overall mutual information for all stimuli and responses. Then Hick's law for RT follows as the weighted mean of each side of the RTi equation using the stimulus probabilities as the weights. The new RTi equation incorporates the important speed-accuracy trade-off characteristic. When the performance is error-free, RTi becomes a linear function of two entropies as measures of stimulus uncertainty or unexpectancy. Reanalysis of empirical data from a variety of sources provide support for the new law-like relationship."
},
{
"pmid": "31074642",
"title": "An Empirical Study on the Artificial Intelligence Writing Evaluation System in China CET.",
"abstract": "The Artificial Intelligence Writing Evaluation system is widely used in China College English writing. It provides for both teachers and the English learners services of automated composition evaluation on the net in order that teacher's working load can be reduced and they can learn directly about the students' English writing level and that the students' English writing will be improved. Juku automated writing evaluation (AWE) is one of the most used systems among colleges and universities in China. The empirical study was conducted on the use of Juku AWE in college English teaching. Through the experiment with 114 students from 2 classes in Xi'an University and questionnaires and interview for both 30 teachers and 200 students using Juku AWE, the author finds that: (1) Using AWE does effectively help the students with their English writing; (2) Both teachers and students have positive attitude to the use of AWE in terms of immediate and clear feedback, time-saving, and arousing interests in English writing; and (3) AWE still needs to be perfected as it cannot provide proper evaluation on the text structure, content logic, and coherence. So both teachers and students should take the score from AWE objectively."
},
{
"pmid": "34027198",
"title": "Application of Artificial Intelligence powered digital writing assistant in higher education: randomized controlled trial.",
"abstract": "A major challenge in educational technology integration is to engage students with different affective characteristics. Also, how technology shapes attitude and learning behavior is still lacking. Findings from educational psychology and learning sciences have gained less traction in research. The present study was conducted to examine the efficacy of a group format of an Artificial Intelligence (AI) powered writing tool for English second postgraduate students in the English academic writing context. In the present study, (N = 120) students were randomly allocated to either the equipped AI (n = 60) or non-equipped AI (NEAI). The results of the parametric test of analyzing of covariance revealed that at post-intervention, students who participated in the AI intervention group demonstrated statistically significant improvement in the scores, of the behavioral engagement (Cohen's d = .75, 95% CI [0.38, 1.12]), of the emotional engagement Cohen's d = .82, 95% CI [0.45, 1.25], of the cognitive engagement, Cohen's d = .39,95% CI [0.04, .76], of the self-efficacy for writing, Cohen's d = .54, 95% CI [0.18, 0.91], of the positive emotions Cohen's d = . 44, 95% CI [0.08, 0.80], and of the negative emotions, Cohen's d = -.98, 95% CI [-1.36, -0.60], compared with NEAI. The results suggest that AI-powered writing tools could be an efficient tool to promote learning behavior and attitudinal technology acceptance through formative feedback and assessment for non-native postgraduate students in English academic writing."
},
{
"pmid": "29027871",
"title": "Weak evidence for increased motivated forgetting of trauma-related words in dissociated or traumatised individuals in a directed forgetting experiment.",
"abstract": "Motivated forgetting is the idea that people can block out, or forget, upsetting or traumatic memories, because there is a motivation to do so. Some researchers have cited directed forgetting studies using trauma-related words as evidence for the theory of motivated forgetting of trauma. In the current article subjects used the list method directed forgetting paradigm with both trauma-related words and positive words. After one list of words was presented subjects were directed to forget the words previously learned, and they then received another list of words. Each list was a mix of positive and trauma-related words, and the lists were counterbalanced. Later, subjects recalled as many of the words as they could, including the ones they were told to forget. Based on the theory that motivated forgetting would lead to recall deficits of trauma-related material, we created eight hypotheses. High dissociators, trauma-exposed, sexual trauma-exposed, and high dissociators with trauma-exposure participants were hypothesised to show enhanced forgetting of trauma words. Results indicated only one of eight hypotheses was supported: those higher on dissociation and trauma recalled fewer trauma words in the to-be-forgotten condition, compared to those low on dissociation and trauma. These results provide weak support for differential motivated forgetting."
},
{
"pmid": "29792561",
"title": "Testimonial Psychotherapy in Immigrant Survivors of Intimate Partner Violence: A Case Series.",
"abstract": "Testimonial psychotherapy is a therapeutic ritual for facilitating the recovery of survivors of human rights violations that focuses on sharing the trauma narrative. Originally developed in Chile as a method for collecting evidence during legal proceedings, testimonial therapy has been widely applied transculturally as a unique treatment modality for populations that are not amenable to traditional Western psychotherapy. In this case report, we first review the literature on testimonial therapy to this date. We go on to describe how testimonial therapy has been specifically adapted to facilitate recovery for immigrant survivors of intimate partner violence (IPV). We present three Latin American women who underwent testimonial psychotherapy while receiving psychiatric treatment at a Northern Virginia community clinic affiliated with the George Washington University. The therapy consisted of guided trauma narrative sessions and a Latin- American Catholic inspired reverential ceremony in a Spanish-speaking women's domestic violence group. In this case series we provide excerpts from the women's testimony and feedback from physicians who observed the ceremony. We found that testimonial psychotherapy was accepted by our three IPV survivors and logistically feasible in a small community clinic. We conceptualize testimonial psychotherapy as a humanistic therapy that focuses on strengthening the person. Our case report suggests testimonial psychotherapy as a useful adjunct to formal psychotherapy for post-traumatic stress symptoms."
},
{
"pmid": "33882053",
"title": "Artificial intelligence based writer identification generates new evidence for the unknown scribes of the Dead Sea Scrolls exemplified by the Great Isaiah Scroll (1QIsaa).",
"abstract": "The Dead Sea Scrolls are tangible evidence of the Bible's ancient scribal culture. This study takes an innovative approach to palaeography-the study of ancient handwriting-as a new entry point to access this scribal culture. One of the problems of palaeography is to determine writer identity or difference when the writing style is near uniform. This is exemplified by the Great Isaiah Scroll (1QIsaa). To this end, we use pattern recognition and artificial intelligence techniques to innovate the palaeography of the scrolls and to pioneer the microlevel of individual scribes to open access to the Bible's ancient scribal culture. We report new evidence for a breaking point in the series of columns in this scroll. Without prior assumption of writer identity, based on point clouds of the reduced-dimensionality feature-space, we found that columns from the first and second halves of the manuscript ended up in two distinct zones of such scatter plots, notably for a range of digital palaeography tools, each addressing very different featural aspects of the script samples. In a secondary, independent, analysis, now assuming writer difference and using yet another independent feature method and several different types of statistical testing, a switching point was found in the column series. A clear phase transition is apparent in columns 27-29. We also demonstrated a difference in distance variances such that the variance is higher in the second part of the manuscript. Given the statistically significant differences between the two halves, a tertiary, post-hoc analysis was performed using visual inspection of character heatmaps and of the most discriminative Fraglet sets in the script. Demonstrating that two main scribes, each showing different writing patterns, were responsible for the Great Isaiah Scroll, this study sheds new light on the Bible's ancient scribal culture by providing new, tangible evidence that ancient biblical texts were not copied by a single scribe only but that multiple scribes, while carefully mirroring another scribe's writing style, could closely collaborate on one particular manuscript."
},
{
"pmid": "30410461",
"title": "Linking Empowering Leadership to Task Performance, Taking Charge, and Voice: The Mediating Role of Feedback-Seeking.",
"abstract": "Drawing upon social exchange theory, the present study focuses on the role of feedback-seeking in linking empowering leadership to task performance, taking charge, and voice. We tested the hypothesized model using data from a sample of 32 supervisors and 197 their immediate subordinates. Performing CFA, SEM, and bootstrapping, the results revealed that: (1) empowering leadership was positively associated with followers' feedback-seeking; (2) employees' feedback-seeking was positively correlated with task performance, taking charge, and voice; and (3) employees' feedback-seeking mediated the positive relationships between empowering leadership and task performance, taking charge, and voice. We make conclusions by discussing the theoretical and practical implications of these findings, alongside a discussion of the present limitations and directions for future studies."
},
{
"pmid": "29593627",
"title": "The Bright, the Dark, and the Blue Face of Narcissism: The Spectrum of Narcissism in Its Relations to the Metatraits of Personality, Self-Esteem, and the Nomological Network of Shyness, Loneliness, and Empathy.",
"abstract": "Grandiose and vulnerable narcissism seem to be uncorrelated in empirical studies, yet they share at least some theoretical similarities. In the current study, we examine the relation between grandiose (conceptualized as admiration and rivalry) and vulnerable narcissism in the context of the Big Five personality traits and metatraits, self-esteem, and their nomological network. To this end, participants (N = 314) filled in a set of self-report measures via an online survey. Rivalry was positively linked with both admiration and vulnerable narcissism. We replicated the relations of admiration and rivalry with personality traits and metatraits-as well as extended existing knowledge by providing support for the theory that vulnerable narcissism is simultaneously negatively related to the Stability and Plasticity. Higher scores on vulnerable narcissism and rivalry predicted having fragile self-esteem, whereas high scores on admiration predicted having optimal self-esteem. The assumed relations with the nomological network were confirmed, i.e., vulnerable narcissism and admiration demonstrated a contradictory pattern of relation to shyness and loneliness, whilst rivalry predicted low empathy. Our results suggest that the rivalry is between vulnerable narcissism and admiration, which supports its localization in the self-importance dimension of the narcissism spectrum model. It was concluded that whereas admiration and rivalry represent the bright and dark face of narcissism, vulnerable narcissism represents its blue face."
},
{
"pmid": "30846958",
"title": "Effect of Narcissism, Psychopathy, and Machiavellianism on Entrepreneurial Intention-The Mediating of Entrepreneurial Self-Efficacy.",
"abstract": "The driving factors behind the exploration and search for entrepreneurial intention (EI) are critical to entrepreneurship education and entrepreneurial practice. To reveal in depth the influence of personality traits on EI, our study introduces the opposite of proactive personality-the dark triad that consists of narcissism, psychopathy and Machiavellianism. Our study used the MBA students of Tianjin University as a sample to analyze the relationship between the dark triad, entrepreneurial self-efficacy (ESE) and EI. A total of 334 MBA students aged 24-47 years participated and the participation rate is 95.71%. The data collection was largely concentrated in the period from May 15 to June 5, 2018. From the overall perspective of the dark triad, the results show that the dark triad positively predicts EI, and ESE has a partial mediating effect on the dark triad and EI. From the perspective of the three members of the dark triad, the study found that narcissism/psychopathy has a negative effect on ESE and EI; narcissism/psychopathy has a non-linear effect on EI; Machiavellianism has a positive effect on ESE and EI; and ESE has a mediating effect on the three members of the dark triad and EI. In short, our research reveals that the three members of the dark triad have different effects on EI in different cultural contexts, and the research findings have certain reference value for further improvement of entrepreneurship education and entrepreneurial practice."
},
{
"pmid": "31214081",
"title": "Gratifications for Social Media Use in Entrepreneurship Courses: Learners' Perspective.",
"abstract": "The purpose of this study is to explore uses and gratifications on social media in entrepreneurship courses from the learners' perspective. The respondents must have participated in government or private entrepreneurship courses and joined the online group of those courses. Respondents are not college students, but more entrepreneurs, and their multi-attribute makes the research results and explanatory more abundant. A total of 458 valid data was collected. The results of the survey revealed four gratification factors namely trust, profit, learning, and social in online entrepreneurial groups. It is also found that the structures and of the four gratification factors vary in three social media (Line, Facebook, and WeChat) and \"trust\" outranks other factors. Most of the entrepreneurs' business is \"networking business,\" and the business unit is mostly \"micro.\" In terms of the trust factor, there are significant differences among the three social media. In short, the two gratification factors of trust and profit can be seen as specific gratifications for online entrepreneurial groups, especially the trust factor, which deserves more attention in the further research of online entrepreneurial courses on social media."
},
{
"pmid": "33237926",
"title": "SE-stacking: Improving user purchase behavior prediction by information fusion and ensemble learning.",
"abstract": "Online shopping behavior has the characteristics of rich granularity dimension and data sparsity and presents a challenging task in e-commerce. Previous studies on user behavior prediction did not seriously discuss feature selection and ensemble design, which are important to improving the performance of machine learning algorithms. In this paper, we proposed an SE-stacking model based on information fusion and ensemble learning for user purchase behavior prediction. After successfully using the ensemble feature selection method to screen purchase-related factors, we used the stacking algorithm for user purchase behavior prediction. In our efforts to avoid the deviation of the prediction results, we optimized the model by selecting ten different types of models as base learners and modifying the relevant parameters specifically for them. Experiments conducted on a publicly available dataset show that the SE-stacking model can achieve a 98.40% F1 score, approximately 0.09% higher than the optimal base models. The SE-stacking model not only has a good application in the prediction of user purchase behavior but also has practical value when combined with the actual e-commerce scene. At the same time, this model has important significance in academic research and the development of this field."
}
] |
Nature Communications | 35197449 | PMC8866480 | 10.1038/s41467-022-28571-7 | Forecasting the outcome of spintronic experiments with Neural Ordinary Differential Equations | Deep learning has an increasing impact in assisting research, allowing, for example, the discovery of novel materials. Until now, however, these artificial intelligence techniques have fallen short of discovering the full differential equation of an experimental physical system. Here we show that a dynamical neural network, trained on a minimal amount of data, can predict the behavior of spintronic devices with high accuracy and an extremely efficient simulation time, compared to the micromagnetic simulations that are usually employed to model them. For this purpose, we re-frame the formalism of Neural Ordinary Differential Equations to the constraints of spintronics: few measured outputs, multiple inputs and internal parameters. We demonstrate with Neural Ordinary Differential Equations an acceleration factor of over 200 compared to micromagnetic simulations for a complex problem: the simulation of a reservoir computer made of magnetic skyrmions (20 minutes compared to three days). In a second realization, we show that we can predict the noisy response of experimental spintronic nano-oscillators to varying inputs after training Neural Ordinary Differential Equations on five milliseconds of their measured response to a different set of inputs. Neural Ordinary Differential Equations can therefore constitute a disruptive tool for developing spintronic applications in complement to micromagnetic simulations, which are time-consuming and cannot fit experiments when noise or imperfections are present. Our approach can also be generalized to other electronic devices involving dynamics. | Discussion and related works
Our approach allows learning the underlying dynamics of a physical system from time-dependent data samples. Many works today seek to use deep neural networks to predict results in physics. They are used to find abstract data representations20, recover unknown physical parameters22, or discover the specific terms of functions23–25. Other research uses recurrent neural network-based models26–28 to learn and make predictions. These methods usually incorporate prior knowledge about the physical system under consideration, such as molecular dynamics24,26, quantum mechanics20, geospatial statistics28, or kinematics25, to help their models train faster or generalize. Few of these discrete models manage to include the relevant driving series to make predictions. Neural ODEs hold many advantages over the conventional neural networks used in these works: backpropagation occurs naturally by solving a second, augmented ODE backward in time; stability is improved with the use of adaptive numerical integration methods for ODEs; constant memory cost can be achieved by not storing any intermediate quantities of the forward pass; and continuously-defined dynamics can naturally incorporate data that arrives at arbitrary times. However, until our work, two challenges remained before Neural ODEs could be applied to predicting the behavior of physical systems: the impossibility, in most practical cases, of acquiring the dynamics of the full set of state variables of the system, and the need to take into account the external inputs that affect those dynamics. Our work addresses both issues; before ours, other works have attempted to solve the first one.
One way is to introduce the inductive bias via the choice of computation graphs in a neural network25,53–59. For example, by incorporating the prior knowledge of Hamiltonian mechanics58,59 or Lagrangian mechanics55–57 into a deep learning framework, it is possible to train models that learn and respect exact conservation laws. These models were usually evaluated on systems where the conservation of energy is important. Similarly, another strategy to deal with a dataset with incomplete information is through augmentation of the original dynamics60,61: extensions of Neural ODEs to second order60 or higher order61 can learn the low-dimensional physical dynamics of the original system. However, nearly all the proposals mentioned above require the knowledge of additional dynamical information, such as higher-order derivatives, or extra processing of the original low-dimensional dynamics, which is not appropriate for dealing with noisy time series. Neural ODEs integrated with external inputs have also been studied in previous literature62–64. Augmented Neural ODEs63 solve the initial value problem in a higher-dimensional space, by concatenating each data point with a vector of zeros to lift points into the additional dimensions. This strategy avoids trajectories intersecting each other, and thus allows modeling more complex functions using simpler flows, while achieving lower losses, reducing computational cost, and improving stability and generalization. Parameterized Neural ODEs64 extend Neural ODEs to have a set of input parameters that specify the dynamics of the model, such that the dynamics of each trajectory are characterized by the input parameter instance.
We emphasize here that our idea is closely related to the classical theorem of time delay embedding for state space reconstruction, where the past and future of a time series containing the information about unobserved state variables can be used to define a state at the present time. The theorem has been widely applied for forecasting in many real-world engineering problems45,46,65, but was largely restricted to very short-term predictions, for lack of a modeling framework specifically designed for time series. The advent of Neural ODEs, whose formalism naturally incorporates time series, allows predictions of arbitrary length and high accuracy to be made by training a system equivalent to the original physical system. Additionally, until our work, Neural ODE-based methods for modeling time series had only been tested on a few classical physical systems, such as the ideal mass-spring system, the pendulum, and harmonic oscillators. Our work is the first to apply Neural ODEs to predict the behavior of nanodevices, by resolving the above issues.
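To make the delay-embedding idea above concrete, the sketch below (not taken from the paper; the embedding dimension, delay, and noisy test signal are arbitrary assumptions) builds surrogate state vectors from a single measured output, illustrating how past samples can stand in for unobserved state variables.

```python
# Minimal delay-embedding sketch (illustrative only; the delay "tau" and the
# embedding dimension "m" are arbitrary choices, not values from the paper).
import numpy as np


def delay_embed(x, m=4, tau=5):
    """Stack m time-delayed copies of a scalar time series x into state vectors."""
    n = len(x) - (m - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(m)], axis=1)  # shape (n, m)


t = np.linspace(0.0, 10.0, 2000)
x = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size)  # a single noisy measured output
states = delay_embed(x)                                     # surrogate state vectors
print(states.shape)                                         # (1985, 4)
```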
Our method also provides a significant improvement in time efficiency compared to conventional simulation platforms. For example, the Mackey-Glass prediction task with a reservoir of skyrmions takes only 20 min for the trained Neural ODE, while the micromagnetic simulations need 3 days (5 days) to run it on the one-skyrmion (multi-skyrmion) system. To model the dynamics of the spintronic oscillator using real experimental data, output time series of 5-ms duration are sufficient to train a complete Neural ODE model capable of predicting the system dynamics with any input. The Neural ODE simulation time does not depend directly on the size of the system, but only on the number of dimensions and the number of data points to be predicted; the possibility of training a Neural ODE is therefore mostly determined by the availability of appropriate training data. Constructing a reliable and accurate model is thus not the only purpose of Neural ODEs: they can also provide strong support for the fast evaluation, verification, and optimization of experiments. In this way, our work also paves the way toward Neural ODE-assisted, machine learning-integrated simulation platforms.
Last but not least, Neural ODEs can be used to model systems featuring different types of behaviors, provided that examples of the different behaviors were included in the training dataset. Supplementary Note 6 shows an example in which a single Neural ODE can model a device that, depending on the value of the external magnetic field, exhibits either a switching or a sustained oscillatory behavior. A limitation of Neural ODEs, however, is that they cannot be trained to model systems exhibiting profoundly stochastic behavior, as is sometimes observed in room-temperature switching of magnetic tunnel junctions66 or domain wall motion in some regimes67. Neural ODEs are adapted to systems obeying deterministic equations. Future work on modeling the stochastic behaviors of a physical system using Neural ODEs remains to be explored, and could rely on recent developments in stochastic Neural ODE theory68,69.
In conclusion, we have presented an efficient modeling approach for physical ODE-based systems and highlighted its excellent performance in modeling real-world physical dynamics. The training data can be a single observed variable, even if the system features higher-dimensional dynamics. We have shown that the method can not only be applied to model ideal data from simulations, but is also remarkably accurate for modeling real experimental measurements, including noise. The trained model shows a remarkable improvement (hundreds of times faster) in computational efficiency compared to standard micromagnetic simulations. We have shown that Neural ODEs are a strong support for making experimental predictions and dealing with complex computational tasks, such as Mackey-Glass time-series prediction and spoken digit recognition in reservoir computing. In particular, we demonstrate their use in modeling complex physical processes in the field of spintronics, which is considered one of the most promising future technologies for memory and computing. The proposed method is a promising tool for bridging the gap between modern machine learning and traditional research methods in experimental physics, and could be applied to a variety of physical systems. | [
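A minimal sketch of the ingredients summarized above (a Neural ODE whose right-hand side depends on an external drive, with extra hidden dimensions standing in for unobserved state variables and only one dimension compared to measurements) is given below. It assumes the torchdiffeq package; the network size, the placeholder drive u(t), and the dummy training target are illustrative choices, not the authors' implementation.

```python
# Illustrative sketch only: a Neural ODE driven by an external input u(t),
# with hidden state dimensions augmenting the single observed variable.
# Assumes the torchdiffeq package; sizes and the drive are placeholders.
import math
import torch
import torch.nn as nn
from torchdiffeq import odeint


class DrivenODEFunc(nn.Module):
    """dy/dt = f_theta(y, u(t)); only y[..., 0] is compared to the measurement."""

    def __init__(self, state_dim=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, t, y):
        u = torch.sin(2 * math.pi * t).reshape(1)  # placeholder external drive
        return self.net(torch.cat([y, u], dim=-1))


func = DrivenODEFunc()
t = torch.linspace(0.0, 1.0, 100)
y0 = torch.zeros(4)                       # hidden dimensions augment the observed one
trajectory = odeint(func, y0, t)          # shape (100, 4), adaptive ODE solver
observable = trajectory[:, 0]             # the single measured output
target = torch.zeros_like(observable)     # dummy target; measured data would go here
loss = ((observable - target) ** 2).mean()
loss.backward()                           # gradients flow through the ODE solver
```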
"31011220",
"31534247",
"28748930",
"30374193",
"28232747",
"28508076",
"32978161",
"24162000",
"2330029",
"31913322",
"28070023",
"18352422",
"17501604",
"28127051",
"30981085"
] | [
{
"pmid": "31011220",
"title": "Thermal skyrmion diffusion used in a reshuffler device.",
"abstract": "Magnetic skyrmions in thin films can be efficiently displaced with high speed by using spin-transfer torques1,2 and spin-orbit torques3-5 at low current densities. Although this favourable combination of properties has raised expectations for using skyrmions in devices6,7, only a few publications have studied the thermal effects on the skyrmion dynamics8-10. However, thermally induced skyrmion dynamics can be used for applications11 such as unconventional computing approaches12, as they have been predicted to be useful for probabilistic computing devices13. In our work, we uncover thermal diffusive skyrmion dynamics by a combined experimental and numerical study. We probed the dynamics of magnetic skyrmions in a specially tailored low-pinning multilayer material. The observed thermally excited skyrmion motion dominates the dynamics. Analysing the diffusion as a function of temperature, we found an exponential dependence, which we confirmed by means of numerical simulations. The diffusion of skyrmions was further used in a signal reshuffling device as part of a skyrmion-based probabilistic computing architecture. Owing to its inherent two-dimensional texture, the observation of a diffusive motion of skyrmions in thin-film systems may also yield insights in soft-matter-like characteristics (for example, studies of fluctuation theorems, thermally induced roughening and so on), which thus makes it highly desirable to realize and study thermal effects in experimentally accessible skyrmion systems."
},
{
"pmid": "31534247",
"title": "Integer factorization using stochastic magnetic tunnel junctions.",
"abstract": "Conventional computers operate deterministically using strings of zeros and ones called bits to represent information in binary code. Despite the evolution of conventional computers into sophisticated machines, there are many classes of problems that they cannot efficiently address, including inference, invertible logic, sampling and optimization, leading to considerable interest in alternative computing schemes. Quantum computing, which uses qubits to represent a superposition of 0 and 1, is expected to perform these tasks efficiently1-3. However, decoherence and the current requirement for cryogenic operation4, as well as the limited many-body interactions that can be implemented, pose considerable challenges. Probabilistic computing1,5-7 is another unconventional computation scheme that shares similar concepts with quantum computing but is not limited by the above challenges. The key role is played by a probabilistic bit (a p-bit)-a robust, classical entity fluctuating in time between 0 and 1, which interacts with other p-bits in the same system using principles inspired by neural networks8. Here we present a proof-of-concept experiment for probabilistic computing using spintronics technology, and demonstrate integer factorization, an illustrative example of the optimization class of problems addressed by adiabatic9 and gated2 quantum computing. Nanoscale magnetic tunnel junctions showing stochastic behaviour are developed by modifying market-ready magnetoresistive random-access memory technology10,11 and are used to implement three-terminal p-bits that operate at room temperature. The p-bits are electrically connected to form a functional asynchronous network, to which a modified adiabatic quantum computing algorithm that implements three- and four-body interactions is applied. Factorization of integers up to 945 is demonstrated with this rudimentary asynchronous probabilistic computer using eight correlated p-bits, and the results show good agreement with theoretical predictions, thus providing a potentially scalable hardware approach to the difficult problems of optimization and sampling."
},
{
"pmid": "28748930",
"title": "Neuromorphic computing with nanoscale spintronic oscillators.",
"abstract": "Neurons in the brain behave as nonlinear oscillators, which develop rhythmic activity and interact to process information. Taking inspiration from this behaviour to realize high-density, low-power neuromorphic computing will require very large numbers of nanoscale nonlinear oscillators. A simple estimation indicates that to fit 108 oscillators organized in a two-dimensional array inside a chip the size of a thumb, the lateral dimension of each oscillator must be smaller than one micrometre. However, nanoscale devices tend to be noisy and to lack the stability that is required to process data in a reliable way. For this reason, despite multiple theoretical proposals and several candidates, including memristive and superconducting oscillators, a proof of concept of neuromorphic computing using nanoscale oscillators has yet to be demonstrated. Here we show experimentally that a nanoscale spintronic oscillator (a magnetic tunnel junction) can be used to achieve spoken-digit recognition with an accuracy similar to that of state-of-the-art neural networks. We also determine the regime of magnetization dynamics that leads to the greatest performance. These results, combined with the ability of the spintronic oscillators to interact with each other, and their long lifetime and low energy consumption, open up a path to fast, parallel, on-chip computation based on networks of oscillators."
},
{
"pmid": "30374193",
"title": "Vowel recognition with four coupled spin-torque nano-oscillators.",
"abstract": "In recent years, artificial neural networks have become the flagship algorithm of artificial intelligence1. In these systems, neuron activation functions are static, and computing is achieved through standard arithmetic operations. By contrast, a prominent branch of neuroinspired computing embraces the dynamical nature of the brain and proposes to endow each component of a neural network with dynamical functionality, such as oscillations, and to rely on emergent physical phenomena, such as synchronization2-6, for solving complex problems with small networks7-11. This approach is especially interesting for hardware implementations, because emerging nanoelectronic devices can provide compact and energy-efficient nonlinear auto-oscillators that mimic the periodic spiking activity of biological neurons12-16. The dynamical couplings between oscillators can then be used to mediate the synaptic communication between the artificial neurons. One challenge for using nanodevices in this way is to achieve learning, which requires fine control and tuning of their coupled oscillations17; the dynamical features of nanodevices can be difficult to control and prone to noise and variability18. Here we show that the outstanding tunability of spintronic nano-oscillators-that is, the possibility of accurately controlling their frequency across a wide range, through electrical current and magnetic field-can be used to address this challenge. We successfully train a hardware network of four spin-torque nano-oscillators to recognize spoken vowels by tuning their frequencies according to an automatic real-time learning rule. We show that the high experimental recognition rates stem from the ability of these oscillators to synchronize. Our results demonstrate that non-trivial pattern classification tasks can be achieved with small hardware neural networks by endowing them with nonlinear dynamical features such as oscillations and synchronization."
},
{
"pmid": "28232747",
"title": "In situ click chemistry generation of cyclooxygenase-2 inhibitors.",
"abstract": "Cyclooxygenase-2 isozyme is a promising anti-inflammatory drug target, and overexpression of this enzyme is also associated with several cancers and neurodegenerative diseases. The amino-acid sequence and structural similarity between inducible cyclooxygenase-2 and housekeeping cyclooxygenase-1 isoforms present a significant challenge to design selective cyclooxygenase-2 inhibitors. Herein, we describe the use of the cyclooxygenase-2 active site as a reaction vessel for the in situ generation of its own highly specific inhibitors. Multi-component competitive-binding studies confirmed that the cyclooxygenase-2 isozyme can judiciously select most appropriate chemical building blocks from a pool of chemicals to build its own highly potent inhibitor. Herein, with the use of kinetic target-guided synthesis, also termed as in situ click chemistry, we describe the discovery of two highly potent and selective cyclooxygenase-2 isozyme inhibitors. The in vivo anti-inflammatory activity of these two novel small molecules is significantly higher than that of widely used selective cyclooxygenase-2 inhibitors.Traditional inflammation and pain relief drugs target both cyclooxygenase 1 and 2 (COX-1 and COX-2), causing severe side effects. Here, the authors use in situ click chemistry to develop COX-2 specific inhibitors with high in vivo anti-inflammatory activity."
},
{
"pmid": "28508076",
"title": "Machine learning of accurate energy-conserving molecular force fields.",
"abstract": "Using conservation of energy-a fundamental property of closed classical and quantum mechanical systems-we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol-1 for energies and 1 kcal mol-1 Å̊-1 for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods."
},
{
"pmid": "32978161",
"title": "Magnetic Hamiltonian parameter estimation using deep learning techniques.",
"abstract": "Understanding spin textures in magnetic systems is extremely important to the spintronics and it is vital to extrapolate the magnetic Hamiltonian parameters through the experimentally determined spin. It can provide a better complementary link between theories and experimental results. We demonstrate deep learning can quantify the magnetic Hamiltonian from magnetic domain images. To train the deep neural network, we generated domain configurations with Monte Carlo method. The errors from the estimations was analyzed with statistical methods and confirmed the network was successfully trained to relate the Hamiltonian parameters with magnetic structure characteristics. The network was applied to estimate experimentally observed domain images. The results are consistent with the reported results, which verifies the effectiveness of our methods. On the basis of our study, we anticipate that the deep learning techniques make a bridge to connect the experimental and theoretical approaches not only in magnetism but also throughout any scientific research."
},
{
"pmid": "24162000",
"title": "Nucleation, stability and current-induced motion of isolated magnetic skyrmions in nanostructures.",
"abstract": "Magnetic skyrmions are topologically stable spin configurations, which usually originate from chiral interactions known as Dzyaloshinskii-Moriya interactions. Skyrmion lattices were initially observed in bulk non-centrosymmetric crystals, but have more recently been noted in ultrathin films, where their existence is explained by interfacial Dzyaloshinskii-Moriya interactions induced by the proximity to an adjacent layer with strong spin-orbit coupling. Skyrmions are promising candidates as information carriers for future information-processing devices due to their small size (down to a few nanometres) and to the very small current densities needed to displace skyrmion lattices. However, any practical application will probably require the creation, manipulation and detection of isolated skyrmions in magnetic thin-film nanostructures. Here, we demonstrate by numerical investigations that an isolated skyrmion can be a stable configuration in a nanostructure, can be locally nucleated by injection of spin-polarized current, and can be displaced by current-induced spin torques, even in the presence of large defects."
},
{
"pmid": "2330029",
"title": "Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series.",
"abstract": "An approach is presented for making short-term predictions about the trajectories of chaotic dynamical systems. The method is applied to data on measles, chickenpox, and marine phytoplankton populations, to show how apparent noise associated with deterministic chaos can be distinguished from sampling error and other sources of externally induced environmental noise."
},
{
"pmid": "31913322",
"title": "Re-epithelialization and immune cell behaviour in an ex vivo human skin model.",
"abstract": "A large body of literature is available on wound healing in humans. Nonetheless, a standardized ex vivo wound model without disruption of the dermal compartment has not been put forward with compelling justification. Here, we present a novel wound model based on application of negative pressure and its effects for epidermal regeneration and immune cell behaviour. Importantly, the basement membrane remained intact after blister roof removal and keratinocytes were absent in the wounded area. Upon six days of culture, the wound was covered with one to three-cell thick K14+Ki67+ keratinocyte layers, indicating that proliferation and migration were involved in wound closure. After eight to twelve days, a multi-layered epidermis was formed expressing epidermal differentiation markers (K10, filaggrin, DSG-1, CDSN). Investigations about immune cell-specific manners revealed more T cells in the blister roof epidermis compared to normal epidermis. We identified several cell populations in blister roof epidermis and suction blister fluid that are absent in normal epidermis which correlated with their decrease in the dermis, indicating a dermal efflux upon negative pressure. Together, our model recapitulates the main features of epithelial wound regeneration, and can be applied for testing wound healing therapies and investigating underlying mechanisms."
},
{
"pmid": "28070023",
"title": "Magnetic skyrmion-based synaptic devices.",
"abstract": "Magnetic skyrmions are promising candidates for next-generation information carriers, owing to their small size, topological stability, and ultralow depinning current density. A wide variety of skyrmionic device concepts and prototypes have recently been proposed, highlighting their potential applications. Furthermore, the intrinsic properties of skyrmions enable new functionalities that may be inaccessible to conventional electronic devices. Here, we report on a skyrmion-based artificial synapse device for neuromorphic systems. The synaptic weight of the proposed device can be strengthened/weakened by positive/negative stimuli, mimicking the potentiation/depression process of a biological synapse. Both short-term plasticity and long-term potentiation functionalities have been demonstrated with micromagnetic simulations. This proposal suggests new possibilities for synaptic devices in neuromorphic systems with adaptive learning function."
},
{
"pmid": "18352422",
"title": "Single-shot time-resolved measurements of nanosecond-scale spin-transfer induced switching: stochastic versus deterministic aspects.",
"abstract": "Using high bandwidth resistance measurements, we study the single-shot response of tunnel junctions subjected to spin torque pulses. After the pulse onset, the switching proceeds by a ns-scale incubation delay during which the resistance is quiet, followed by a 400 ps transition terminated by a large ringing that is damped progressively. While the incubation delay fluctuates significantly, the resistance traces are reproducible once this delay is passed. After switching, the time-resolved resistance traces indicate micromagnetic configurations that are rather spatially coherent."
},
{
"pmid": "17501604",
"title": "Direct imaging of stochastic domain-wall motion driven by nanosecond current pulses.",
"abstract": "Magnetic transmission x-ray microscopy is used to directly visualize the influence of a spin-polarized current on domain walls in curved permalloy wires. Pulses of nanosecond duration and of high current density up to 1.0x10(12) A/m(2) are used to move and to deform the domain wall. The current pulse drives the wall either undisturbed, i.e., as composite particle through the wire, or causes structural changes of the magnetization. Repetitive pulse measurements reveal the stochastic nature of current-induced domain-wall motion."
},
{
"pmid": "28127051",
"title": "Ror2 signaling regulates Golgi structure and transport through IFT20 for tumor invasiveness.",
"abstract": "Signaling through the Ror2 receptor tyrosine kinase promotes invadopodia formation for tumor invasion. Here, we identify intraflagellar transport 20 (IFT20) as a new target of this signaling in tumors that lack primary cilia, and find that IFT20 mediates the ability of Ror2 signaling to induce the invasiveness of these tumors. We also find that IFT20 regulates the nucleation of Golgi-derived microtubules by affecting the GM130-AKAP450 complex, which promotes Golgi ribbon formation in achieving polarized secretion for cell migration and invasion. Furthermore, IFT20 promotes the efficiency of transport through the Golgi complex. These findings shed new insights into how Ror2 signaling promotes tumor invasiveness, and also advance the understanding of how Golgi structure and transport can be regulated."
},
{
"pmid": "30981085",
"title": "Recent advances in physical reservoir computing: A review.",
"abstract": "Reservoir computing is a computational framework suited for temporal/sequential data processing. It is derived from several recurrent neural network models, including echo state networks and liquid state machines. A reservoir computing system consists of a reservoir for mapping inputs into a high-dimensional space and a readout for pattern analysis from the high-dimensional states in the reservoir. The reservoir is fixed and only the readout is trained with a simple method such as linear regression and classification. Thus, the major advantage of reservoir computing compared to other recurrent neural networks is fast learning, resulting in low training cost. Another advantage is that the reservoir without adaptive updating is amenable to hardware implementation using a variety of physical systems, substrates, and devices. In fact, such physical reservoir computing has attracted increasing attention in diverse fields of research. The purpose of this review is to provide an overview of recent advances in physical reservoir computing by classifying them according to the type of the reservoir. We discuss the current issues and perspectives related to physical reservoir computing, in order to further expand its practical applications and develop next-generation machine learning systems."
}
] |
Scientific Reports | 35197504 | PMC8866496 | 10.1038/s41598-022-06931-z | Cascaded 3D UNet architecture for segmenting the COVID-19 infection from lung CT volume | The World Health Organization (WHO) declared COVID-19 (COronaVIrus Disease 2019) a pandemic on March 11, 2020. Ever since then, the virus has been undergoing different mutations, with a high rate of dissemination. The diagnosis and prognosis of COVID-19 are critical in bringing the situation under control. The COVID-19 virus replicates in the lungs after entering the upper respiratory system, causing pneumonia and mortality. Deep learning has a significant role in detecting infections from Computed Tomography (CT). With the help of basic image processing techniques and deep learning, we have developed a two-stage cascaded 3D UNet to segment the contaminated area from the lungs. The first 3D UNet extracts the lung parenchyma from the input CT volume after preprocessing and augmentation. Since the CT volume is small, we apply appropriate post-processing to the lung parenchyma and input these volumes into the second 3D UNet. The second 3D UNet extracts the infected 3D volumes. With this method, clinicians can input the complete CT volume of the patient and analyze the contaminated area without having to label the lung parenchyma for each new patient. For lung parenchyma segmentation, the proposed method obtained a sensitivity of 93.47%, a specificity of 98.64%, an accuracy of 98.07%, and a dice score of 92.46%. We have achieved a sensitivity of 83.33%, a specificity of 99.84%, an accuracy of 99.20%, and a dice score of 82% for lung infection segmentation. |
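For reference, the evaluation metrics quoted in this abstract follow their standard voxel-wise definitions (stated here for clarity; they are not spelled out in the source), with P the predicted mask, G the ground-truth mask, and TP, TN, FP, FN the true/false positive/negative counts:

```latex
\mathrm{Dice}(P,G)=\frac{2\,|P\cap G|}{|P|+|G|},\quad
\mathrm{Sensitivity}=\frac{TP}{TP+FN},\quad
\mathrm{Specificity}=\frac{TN}{TN+FP},\quad
\mathrm{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN}
```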
Related works
Many works have been reported on segmenting the COVID-19 lung infection using deep learning architectures. Zheng et al.9 developed weakly supervised software to identify COVID-19. A pre-trained UNet generates the lung mask, which is input into the deep learning architecture DeCoVNet. DeCoVNet consists of three stages: a network stem, residual blocks, and a progressive classifier that predicts the likelihood of COVID-19 infection. They achieve a specificity and sensitivity of 0.911 and 0.907, using a probability threshold of 0.5 to classify whether a case is COVID positive or negative. Zhou et al.10 proposed a UNet segmentation network based on an attention mechanism. The feature representations from the encoder are given as input to the attention mechanism, where channel-wise and space-wise reweighting of these features is performed to retain the most prominent features, which are then projected to the decoder. To address small lesion segmentation, the focal Tversky loss function is used. Jin et al.11 suggested a classification and segmentation system for COVID-19; they used a combined 3D UNet++ and ResNet-50 model to achieve a better result. Amayar et al.12 explored an encoder-decoder architecture for detecting and segmenting the infected lesions from chest CT. This work follows a multi-task learning approach to COVID-19 classification, segmentation, and reconstruction.
The joint classification and segmentation system proposed by Wu et al.13 consists of an explainable classification system to detect COVID-19 opacifications and another pipeline to segment the opacification areas. Fan et al.14 proposed an architecture called Inf-Net, which consists of an edge attention module, a parallel partial decoder, and a reverse attention module. They then proposed a semi-supervised Inf-Net to address the limited number of training samples. Yan et al.15 designed a deep CNN, COVID-SegNet, for segmenting COVID-19-infected lung regions. COVID-SegNet has two parts: an encoder and a decoder. Feature variation and progressive atrous spatial pyramid pooling are added to capture important features, and residual blocks are used to avoid the vanishing gradient problem. Wang et al.16 proposed a COVID-19 Pneumonia Lesion segmentation network (COPLE-Net) to learn from noisy images. Aswathy et al.17 developed a transfer learning method to diagnose COVID-19 and determine its severity from CT images. Nature-inspired optimization techniques have been explored for COVID-19 classification and severity prediction by Suma et al.18. Feature extraction with different pre-trained networks and various classifiers has also been investigated by Aswathy et al.19. Pang et al.20 devised a mathematical model for the intervention and monitoring of COVID-19, an infectious disease. The work presents a collaborative-city Digital Twin with federated learning, which can learn a shared model while retaining the training data locally, and can lead to a better solution by acquiring knowledge from multiple data sources. The analysis and prediction of the COVID-19 spread in India were studied by Kumari et al.21. Singh et al.22 proposed a method to diagnose COVID-19 from chest X-ray images. The method incorporates wavelet decomposition to obtain a multiresolution representation of the input image, and it considers three classes: normal, viral pneumonia, and COVID-19. Cascaded Generative Adversarial Networks are also used for detection problems in many computer vision algorithms23. | [
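The two-stage cascade described in this record can be summarized by the following minimal sketch (an illustration under assumptions, not the paper's code: the tiny stand-in networks, the 0.5 thresholds, and the toy volume size replace the full 3D UNets, post-processing, and real CT data).

```python
# Minimal sketch of the two-stage cascade: stage 1 segments the lung parenchyma
# from the full CT volume, the volume is masked to the lungs, and stage 2
# segments the infection inside the masked volume. Placeholder networks only.
import torch
import torch.nn as nn


def tiny_3d_segmenter():
    """Stand-in for a full 3D UNet; any volumetric segmentation network fits here."""
    return nn.Sequential(
        nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv3d(8, 1, kernel_size=3, padding=1),
    )


lung_net = tiny_3d_segmenter()       # stage 1: lung parenchyma
infection_net = tiny_3d_segmenter()  # stage 2: infected regions

ct = torch.randn(1, 1, 64, 128, 128)                      # (batch, channel, D, H, W), toy size
lung_mask = (torch.sigmoid(lung_net(ct)) > 0.5).float()   # simple post-processing threshold
masked_ct = ct * lung_mask                                # keep only the lung parenchyma
infection_mask = (torch.sigmoid(infection_net(masked_ct)) > 0.5).float()
print(infection_mask.shape)                               # torch.Size([1, 1, 64, 128, 128])
```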
"32081636",
"32282863",
"32105641",
"33065387",
"33600316",
"32730213",
"32730215"
] | [
{
"pmid": "32081636",
"title": "Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and coronavirus disease-2019 (COVID-19): The epidemic and the challenges.",
"abstract": "The emergence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2; previously provisionally named 2019 novel coronavirus or 2019-nCoV) disease (COVID-19) in China at the end of 2019 has caused a large global outbreak and is a major public health issue. As of 11 February 2020, data from the World Health Organization (WHO) have shown that more than 43 000 confirmed cases have been identified in 28 countries/regions, with >99% of cases being detected in China. On 30 January 2020, the WHO declared COVID-19 as the sixth public health emergency of international concern. SARS-CoV-2 is closely related to two bat-derived severe acute respiratory syndrome-like coronaviruses, bat-SL-CoVZC45 and bat-SL-CoVZXC21. It is spread by human-to-human transmission via droplets or direct contact, and infection has been estimated to have mean incubation period of 6.4 days and a basic reproduction number of 2.24-3.58. Among patients with pneumonia caused by SARS-CoV-2 (novel coronavirus pneumonia or Wuhan pneumonia), fever was the most common symptom, followed by cough. Bilateral lung involvement with ground-glass opacity was the most common finding from computed tomography images of the chest. The one case of SARS-CoV-2 pneumonia in the USA is responding well to remdesivir, which is now undergoing a clinical trial in China. Currently, controlling infection to prevent the spread of SARS-CoV-2 is the primary intervention being used. However, public health authorities should keep monitoring the situation closely, as the more we can learn about this novel virus and its associated outbreak, the better we can respond."
},
{
"pmid": "32282863",
"title": "Molecular immune pathogenesis and diagnosis of COVID-19.",
"abstract": "Coronavirus disease 2019 (COVID-19) is a kind of viral pneumonia which is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The emergence of SARS-CoV-2 has been marked as the third introduction of a highly pathogenic coronavirus into the human population after the severe acute respiratory syndrome coronavirus (SARS-CoV) and the Middle East respiratory syndrome coronavirus (MERS-CoV) in the twenty-first century. In this minireview, we provide a brief introduction of the general features of SARS-CoV-2 and discuss current knowledge of molecular immune pathogenesis, diagnosis and treatment of COVID-19 on the base of the present understanding of SARS-CoV and MERS-CoV infections, which may be helpful in offering novel insights and potential therapeutic targets for combating the SARS-CoV-2 infection."
},
{
"pmid": "33065387",
"title": "Multi-task deep learning based CT imaging analysis for COVID-19 pneumonia: Classification and segmentation.",
"abstract": "This paper presents an automatic classification segmentation tool for helping screening COVID-19 pneumonia using chest CT imaging. The segmented lesions can help to assess the severity of pneumonia and follow-up the patients. In this work, we propose a new multitask deep learning model to jointly identify COVID-19 patient and segment COVID-19 lesion from chest CT images. Three learning tasks: segmentation, classification and reconstruction are jointly performed with different datasets. Our motivation is on the one hand to leverage useful information contained in multiple related tasks to improve both segmentation and classification performances, and on the other hand to deal with the problems of small data because each task can have a relatively small dataset. Our architecture is composed of a common encoder for disentangled feature representation with three tasks, and two decoders and a multi-layer perceptron for reconstruction, segmentation and classification respectively. The proposed model is evaluated and compared with other image segmentation techniques using a dataset of 1369 patients including 449 patients with COVID-19, 425 normal ones, 98 with lung cancer and 397 of different kinds of pathology. The obtained results show very encouraging performance of our method with a dice coefficient higher than 0.88 for the segmentation and an area under the ROC curve higher than 97% for the classification."
},
{
"pmid": "33600316",
"title": "JCS: An Explainable COVID-19 Diagnosis System by Joint Classification and Segmentation.",
"abstract": "Recently, the coronavirus disease 2019 (COVID-19) has caused a pandemic disease in over 200 countries, influencing billions of humans. To control the infection, identifying and separating the infected people is the most crucial step. The main diagnostic tool is the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. Still, the sensitivity of the RT-PCR test is not high enough to effectively prevent the pandemic. The chest CT scan test provides a valuable complementary tool to the RT-PCR test, and it can identify the patients in the early-stage with high sensitivity. However, the chest CT scan test is usually time-consuming, requiring about 21.5 minutes per case. This paper develops a novel Joint Classification and Segmentation (JCS) system to perform real-time and explainable COVID- 19 chest CT diagnosis. To train our JCS system, we construct a large scale COVID- 19 Classification and Segmentation (COVID-CS) dataset, with 144,167 chest CT images of 400 COVID- 19 patients and 350 uninfected cases. 3,855 chest CT images of 200 patients are annotated with fine-grained pixel-level labels of opacifications, which are increased attenuation of the lung parenchyma. We also have annotated lesion counts, opacification areas, and locations and thus benefit various diagnosis aspects. Extensive experiments demonstrate that the proposed JCS diagnosis system is very efficient for COVID-19 classification and segmentation. It obtains an average sensitivity of 95.0% and a specificity of 93.0% on the classification test set, and 78.5% Dice score on the segmentation test set of our COVID-CS dataset. The COVID-CS dataset and code are available at https://github.com/yuhuan-wu/JCS."
},
{
"pmid": "32730213",
"title": "Inf-Net: Automatic COVID-19 Lung Infection Segmentation From CT Images.",
"abstract": "Coronavirus Disease 2019 (COVID-19) spread globally in early 2020, causing the world to face an existential health crisis. Automated detection of lung infections from computed tomography (CT) images offers a great potential to augment the traditional healthcare strategy for tackling COVID-19. However, segmenting infected regions from CT slices faces several challenges, including high variation in infection characteristics, and low intensity contrast between infections and normal tissues. Further, collecting a large amount of data is impractical within a short time period, inhibiting the training of a deep model. To address these challenges, a novel COVID-19 Lung Infection Segmentation Deep Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices. In our Inf-Net, a parallel partial decoder is used to aggregate the high-level features and generate a global map. Then, the implicit reverse attention and explicit edge-attention are utilized to model the boundaries and enhance the representations. Moreover, to alleviate the shortage of labeled data, we present a semi-supervised segmentation framework based on a randomly selected propagation strategy, which only requires a few labeled images and leverages primarily unlabeled data. Our semi-supervised framework can improve the learning ability and achieve a higher performance. Extensive experiments on our COVID-SemiSeg and real CT volumes demonstrate that the proposed Inf-Net outperforms most cutting-edge segmentation models and advances the state-of-the-art performance."
},
{
"pmid": "32730215",
"title": "A Noise-Robust Framework for Automatic Segmentation of COVID-19 Pneumonia Lesions From CT Images.",
"abstract": "Segmentation of pneumonia lesions from CT scans of COVID-19 patients is important for accurate diagnosis and follow-up. Deep learning has a potential to automate this task but requires a large set of high-quality annotations that are difficult to collect. Learning from noisy training labels that are easier to obtain has a potential to alleviate this problem. To this end, we propose a novel noise-robust framework to learn from noisy labels for the segmentation task. We first introduce a noise-robust Dice loss that is a generalization of Dice loss for segmentation and Mean Absolute Error (MAE) loss for robustness against noise, then propose a novel COVID-19 Pneumonia Lesion segmentation network (COPLE-Net) to better deal with the lesions with various scales and appearances. The noise-robust Dice loss and COPLE-Net are combined with an adaptive self-ensembling framework for training, where an Exponential Moving Average (EMA) of a student model is used as a teacher model that is adaptively updated by suppressing the contribution of the student to EMA when the student has a large training loss. The student model is also adaptive by learning from the teacher only when the teacher outperforms the student. Experimental results showed that: (1) our noise-robust Dice loss outperforms existing noise-robust loss functions, (2) the proposed COPLE-Net achieves higher performance than state-of-the-art image segmentation networks, and (3) our framework with adaptive self-ensembling significantly outperforms a standard training process and surpasses other noise-robust training approaches in the scenario of learning from noisy labels for COVID-19 pneumonia lesion segmentation."
}
] |
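The segmentation results quoted in the record above are reported as sensitivity, specificity, accuracy, and Dice score, all of which are standard voxel-overlap measures between a predicted mask and a ground-truth mask. Below is a minimal NumPy sketch of how such metrics are typically computed for 3D binary masks; the toy volumes and the function name are illustrative assumptions, not code from the paper.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Voxel-overlap metrics for two binary masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    eps = 1e-8  # guards against division by zero for empty masks
    return {
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
        "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
    }

# Toy 3D example: a synthetic "infection" blob and a slightly shifted prediction.
truth = np.zeros((32, 32, 32), dtype=bool)
truth[10:20, 10:20, 10:20] = True
pred = np.zeros_like(truth)
pred[12:22, 10:20, 10:20] = True
print(segmentation_metrics(pred, truth))
```

In a cascaded setup such as the one described in the record, these metrics would typically be computed twice: once for the lung-parenchyma mask produced by the first network and once for the infection mask produced by the second.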
Frontiers in Big Data | null | PMC8866947 | 10.3389/fdata.2022.704203 | Defense Against Explanation Manipulation | Explainable machine learning attracts increasing attention as it improves the transparency of models, which is helpful for machine learning to be trusted in real applications. However, explanation methods have recently been demonstrated to be vulnerable to manipulation, where we can easily change a model's explanation while keeping its prediction constant. To tackle this problem, some efforts have been paid to use more stable explanation methods or to change model configurations. In this work, we tackle the problem from the training perspective, and propose a new training scheme called Adversarial Training on EXplanations (ATEX) to improve the internal explanation stability of a model regardless of the specific explanation method being applied. Instead of directly specifying explanation values over data instances, ATEX only puts constraints on model predictions which avoids involving second-order derivatives in optimization. As a further discussion, we also find that explanation stability is closely related to another property of the model, i.e., the risk of being exposed to adversarial attack. Through experiments, besides showing that ATEX improves model robustness against manipulation targeting explanation, it also brings additional benefits including smoothing explanations and improving the efficacy of adversarial training if applied to the model. | 2. Related Work: Model explanations can generally be defined as information that helps people understand model behavior. Typically, this information takes the form of the significant features that contribute most to model predictions. There are two major methodologies for extracting explanations from models: the first category is based on instance perturbation (Ribeiro et al., 2016a) and the second on gradient information (Ancona et al., 2017). In the first category, LIME (Ribeiro et al., 2016a) is a representative method, utilizing shallow linear models to approximate a model's local behavior with feature importance scores. Further, SHAP (Lundberg and Lee, 2017) unifies and generalizes the perturbation-based methods with the aid of cooperative game theory, where each feature is assigned a Shapley value for explanation purposes. Other important methods within this category can be found in Bach et al. (2015), Datta et al. (2016), and Ribeiro et al. (2018). In the second category, explanations are extracted and calculated from model gradients. Representative methods can be found in Selvaraju et al. (2017), Shrikumar et al. (2017), Smilkov et al. (2017b), Sundararajan et al. (2017b), and Chattopadhay et al. (2018), where gradients are used as an indicator of feature sensitivity toward model predictions. In this work, we specifically focus on the second category of methods for generating explanations, and aim to make explanations more robust and stable. Although model explanations are useful, they can be fragile and easy to manipulate under certain circumstances. In Ghorbani et al. (2019a), the authors showed that gradient-based explanations can be sensitive to imperceptible perturbations of images, which can lead to unstructured changes in the generated salience maps.
Some preliminary work has been proposed to regularize interpretation variation (Plumb et al., 2020; Wu et al., 2020), but its experimental validation is limited to tabular or grid data. The work in Ross and Doshi-Velez (2018) also tries to regularize explanations, but it focuses on constraining gradient magnitude rather than stability. The approach in Kindermans et al. (2019) applies a constant shift to the target instance to manipulate the explanation salience map, while the biases of the neural network are also changed to preserve the original prediction. Besides, parameter randomization (Adebayo et al., 2018) and network fine-tuning (Heo et al., 2019) are also effective approaches for manipulating explanations. To handle this issue effectively, robust and stable explanations are preferred for model interpretability. In Yeh et al. (2019), the authors rigorously define two concepts for generating smooth explanations (i.e., fidelity and sensitivity), and further propose to optimize these metrics for robust explanation generation. The authors in Dombrowski et al. (2019) and Ghorbani et al. (2019b) replace the common ReLU activation function with the softplus function, aiming to smooth the explanations during the model training process. Moreover, utilizing the Lipschitz constant of the explanations to locally lower the sensitivity to small perturbations is another valid methodology for improving explanation robustness (Alvarez-Melis and Jaakkola, 2018; Melis and Jaakkola, 2018). Our work specifically focuses on the model training perspective for explanation stability under a relatively general setting. Besides manipulation of interpretations, a more well-studied domain of machine learning security is adversarial attack and defense on model predictions. An adversarial attack on a model prediction perturbs the input in order to change the model's prediction, even though most such perturbations cannot be perceived by humans (Szegedy et al., 2013; Goodfellow et al., 2014). Adversarial attacks can be categorized according to the threat model, including untargeted vs. targeted attacks (Carlini and Wagner, 2017), one-shot vs. iterative attacks (Kurakin et al., 2016), data-dependent vs. universal attacks (Moosavi-Dezfooli et al., 2017), and perturbation vs. replacement attacks (Thys et al., 2019). Considering this relation between model explanation and adversarial attack, our work also discusses the potential benefit to the target model of improved explanation stability. (A minimal sketch of a gradient-based saliency map and its sensitivity to small input perturbations follows this record's reference entries below.) | [
"26161953",
"33894501"
] | [
{
"pmid": "26161953",
"title": "On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.",
"abstract": "Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are solving very successfully a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package."
},
{
"pmid": "33894501",
"title": "Visual interpretability in 3D brain tumor segmentation network.",
"abstract": "Medical image segmentation is a complex yet one of the most essential tasks for diagnostic procedures such as brain tumor detection. Several 3D Convolutional Neural Network (CNN) architectures have achieved remarkable results in brain tumor segmentation. However, due to the black-box nature of CNNs, the integration of such models to make decisions about diagnosis and treatment is high-risk in the domain of healthcare. It is difficult to explain the rationale behind the model's predictions due to the lack of interpretability. Hence, the successful deployment of deep learning models in the medical domain requires accurate as well as transparent predictions. In this paper, we generate 3D visual explanations to analyze the 3D brain tumor segmentation model by extending a post-hoc interpretability technique. We explore the advantages of a gradient-free interpretability approach over gradient-based approaches. Moreover, we interpret the behavior of the segmentation model with respect to the input Magnetic Resonance Imaging (MRI) images and investigate the prediction strategy of the model. We also evaluate the interpretability methodology quantitatively for medical image segmentation tasks. To deduce that our visual explanations do not represent false information, we validate the extended methodology quantitatively. We learn that the information captured by the model is coherent with the domain knowledge of human experts, making it more trustworthy. We use the BraTS-2018 dataset to train the 3D brain tumor segmentation network and perform interpretability experiments to generate visual explanations."
}
] |
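The record above is concerned with gradient-based explanations and how easily they can shift under small input perturbations while the prediction stays essentially the same. The PyTorch sketch below shows the two quantities involved: a plain input-gradient saliency vector, and a comparison of that saliency before and after a small random perturbation. The stand-in model, input, and perturbation scale are placeholders for illustration; manipulation attacks of the kind discussed in the record search for perturbations that maximize the explanation change while keeping the prediction (nearly) fixed, rather than drawing them at random.

```python
import torch

torch.manual_seed(0)

# A small stand-in classifier; any differentiable model would do here.
model = torch.nn.Sequential(
    torch.nn.Linear(20, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)

def saliency(x, target_class=0):
    """Plain input-gradient explanation: gradient of the target logit w.r.t. the input."""
    x = x.clone().requires_grad_(True)
    logit = model(x)[target_class]
    grad, = torch.autograd.grad(logit, x)
    return grad.detach()

x = torch.randn(20)
base = saliency(x)

# Perturb the input slightly and recompute the explanation.
x_pert = x + 0.01 * torch.randn(20)
pert = saliency(x_pert)

# Compare the prediction shift and the explanation shift. Manipulation attacks
# look for perturbations where the first stays near zero while the second is large.
pred_shift = (model(x) - model(x_pert)).abs().max().item()
expl_sim = torch.nn.functional.cosine_similarity(base, pert, dim=0).item()
print(f"max prediction shift: {pred_shift:.4f}")
print(f"explanation cosine similarity: {expl_sim:.4f}")
```

Gradient-based saliency is only one of the explanation families mentioned in the record; perturbation-based explainers such as LIME or SHAP would require their own, analogous stability check.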
Frontiers in Big Data | null | PMC8866955 | 10.3389/fdata.2022.770585 | Automated Detection of Vaping-Related Tweets on Twitter During the 2019 EVALI Outbreak Using Machine Learning Classification | There are increasingly strict regulations surrounding the purchase and use of combustible tobacco products (i.e., cigarettes); simultaneously, the use of other tobacco products, including e-cigarettes (i.e., vaping products), has dramatically increased. However, public attitudes toward vaping vary widely, and the health effects of vaping are still largely unknown. As a popular social media, Twitter contains rich information shared by users about their behaviors and experiences, including opinions on vaping. It is very challenging to identify vaping-related tweets to source useful information manually. In the current study, we proposed to develop a detection model to accurately identify vaping-related tweets using machine learning and deep learning methods. Specifically, we applied seven popular machine learning and deep learning algorithms, including Naïve Bayes, Support Vector Machine, Random Forest, XGBoost, Multilayer Perception, Transformer Neural Network, and stacking and voting ensemble models to build our customized classification model. We extracted a set of sample tweets during an outbreak of e-cigarette or vaping-related lung injury (EVALI) in 2019 and created an annotated corpus to train and evaluate these models. After comparing the performance of each model, we found that the stacking ensemble learning achieved the highest performance with an F1-score of 0.97. All models could achieve 0.90 or higher after tuning hyperparameters. The ensemble learning model has the best average performance. Our study findings provide informative guidelines and practical implications for the automated detection of themed social media data for public opinions and health surveillance purposes. | Related WorkSocial media platforms have become an essential part of public life. Previous literature has demonstrated that social media can be used to analyze public opinions on vaping and vaping-related behaviors, including their opinions between vaping and cannabis legalization (Adhikari et al., 2021), and perception of smoking behavior and emerging tobacco products (Myslín et al., 2013). Deploying predictive models with features extracted from Twitter, including tweet text, user profile information, geographic information, and sentiment, has been proven feasible in identifying vaping-related tweets in previous studies (Martinez et al., 2018). Extracted features can be considered as input variables in the standard machine learning algorithms, including SVM, Naïve Bayesian, and Random Forest, and have also been used successfully for topic analysis and detection (Aphinyanaphongs et al., 2016; Han and Kavuluru, 2016). Aphinyanaphongs et al. (2016) compared the performance of Naïve Bayes, Liblinear, Logistic Regression, and Random Forest classifiers to test the automatic detection of e-cigarette use (including e-cigarette use for smoking cessation) from tweet content (Aphinyanaphongs et al., 2016). Logistic Regression achieved the best performance (90% accuracy) for e-cigarette use detection, and Random Forest achieved the best performance (94% accuracy) for smoking cessation detection. For their Tweet sentiment analysis, positive sentiment indicates users' intention to use, the act of using, or sequel from use. Benson et al. 
(2020) investigated sentiment surrounding JUUL (i.e., an electronic nicotine delivery system) and vaping among youth and young adults by applying Logistic Regression, Naïve Bayes, and Random Forest for the detection of JUUL use and for sentiment analysis. The Random Forest classifier achieved the best performance among these classifiers, with 91% average detection accuracy. Moreover, due to their ability to learn complex non-linear functions, deep learning models that take vectorized tweet content as input have gained popularity for detection tasks (Visweswaran et al., 2020). To design and justify our study, we reviewed relevant studies on vaping-related tweet analysis and cross-compared the scale of their datasets, their settings, and the performance of various machine learning and deep learning classifiers. The comparison results are presented in Table 1.
Table 1. Summary and cross-comparison of vaping-related Twitter studies (columns: study; subject; scale of the dataset; size of annotation; classifiers applied; classifier setting, where applicable; best performance, accuracy unless noted otherwise).
- Adhikari et al. (2021) — Subject: public opinion analysis about cannabis and JUUL on tweets. Scale: Dj: 597,000 tweets from 2016 to 2018; Dc: 3.28M tweets from 2014 to 2018. Annotation: 500 tweets annotated from Dj and 500 tweets annotated from Dc. Classifiers: Logistic Regression (LR), Support Vector Machine (SVM), LSTM-based Deep Neural Network (DNN). Setting: hyperparameters were tuned for each classifier. Best performance (public opinions about cannabis and JUUL, micro-AUC): e-cigarette 0.93, cannabis 0.75.
- Myslín et al. (2013) — Subject: tobacco-relevance tweet detection, positive and negative sentiment. Scale: 7,362 tweets at 15-day intervals from December 2011 to July 2012, collected by keywords. Annotation: each of the 7,362 tweets was manually classified. Classifiers: Naïve Bayes (NB), K-Nearest Neighbors (K-NN), SVM. Setting: Rainbow toolkit, 10-fold cross-validation. Best performance (tobacco-relevance tweet detection): NB 0.77, K-NN 0.73, SVM 0.82.
- Martinez et al. (2018) — Subject: public opinion about vaping investigated using sentiment analysis. Scale: 973 tweets selected from 193,051 geocoded tweets within the U.S., collected between October 28, 2015 and February 6, 2016 by keywords. Annotation: 100 tweets were manually coded by two coders; the other tweets were single-coded according to the codebook classifications. Classifiers, setting, and best performance: not listed.
- Aphinyanaphongs et al. (2016) — Subject: vaping use and detection of vaping use for smoking cessation in tweets. Scale: 13,146 tweets selected from 228,145 tweets collected between January 2010 and January 2015 by keywords. Annotation: each of the 13,146 selected tweets was labeled by the classifiers. Classifiers: NB, SVM, LR, Random Forests (RF). Setting: NB default; SVM default; LR auto search to optimize the regularization parameter; RF default. Best performance: vaping use detection — NB 0.82, SVM 0.87, LR 0.90, RF 0.89; vaping use for smoking cessation — NB 0.60, SVM 0.80, LR 0.89, RF 0.94.
- Han and Kavuluru (2016) — Subject: marketing e-cigarette tweet detection and theme analysis. Scale: 1,000 tweets selected from 1,166,494 tweets obtained from April 2015 to June 2016 by keywords. Annotation: both authors independently annotated the 1,000 tweets. Classifiers: SVM, LR, Convolutional Neural Network (CNN). Setting: ten models were run for each classifier on 10 different 80–20% train-test splits of the dataset. Best performance (e-cigarette tweet detection): SVM 0.87 ± 0.01, LR 0.88 ± 0.01, CNN 0.88.
- Benson et al. (2020) — Subject: detection of JUUL tweets and sentiment analysis for adolescents and young adults. Scale: 4,000 tweets selected from 11,556 unique tweets containing a JUUL-related keyword. Annotation: 4,000 tweets manually annotated for JUUL-related themes of use and sentiment. Classifiers: LR, NB, RF. Setting: grid search to optimize hyperparameters, 10-fold cross-validation. Best performance (teen JUUL use tweet detection): LR 0.94, NB 0.78, RF 0.99.
- Visweswaran et al. (2020) — Subject: detection of relevant and commercial vaping-related tweets, and sentiment analysis. Scale: 4,000 tweets selected from 810,600 tweets extracted from August 2018 to October 2018 by vaping-related keywords. Annotation: each of the 4,000 tweets was manually annotated. Classifiers: LR, RF, SVM, NB, CNN, LSTM, LSTM-CNN, BiLSTM. Setting: default parameters for LR, RF, SVM; tuned hyperparameters for CNN, LSTM, LSTM-CNN, BiLSTM. Best performance (vaping tweet relevance detection based on vaping-related word vectors, AUC): LR 0.84, RF 0.95, SVM 0.92, NB 0.88, CNN 0.94, LSTM 0.91, LSTM-CNN 0.89, BiLSTM 0.89.
As shown in Table 1, Logistic Regression, Random Forest, SVM, and Naïve Bayes are the most commonly used supervised machine learning classifiers in vaping-related Twitter studies, and deep neural networks (DNN) can also perform well in tweet classification tasks. Hyperparameter tuning is necessary to improve performance when building the classifiers. The way the training and testing sets are split and the validation method are also important when building the classifiers; the typical approach of an 80% training set, a 20% testing set, and cross-validation was applied in the previous studies. Since most of the previous research collected tweets over a long period (6 months or longer), their results cannot reflect the impact of specific events or changing tendencies in public opinion. In this study, we collected vaping-related and non-vaping-related tweets from July 2019 to September 2019. We focused only on this three-month peak period of the 2019 EVALI outbreak to avoid the ambiguity of long-period tweet analysis. Our clinical team also cross-checked these tweets to ensure that no misclassified tweets remained in our dataset. We then built a detection model for vaping-related tweets by leveraging various machine learning and deep learning classifiers and cross-compared their detection performance metrics after tuning hyperparameters for each classifier. We also used ensemble learning models and compared their performance with the baseline classifiers to identify the models with the highest performance. (A minimal sketch of a stacked text-classification pipeline in this spirit follows this record's reference entries below.) | [
"31905322",
"32876579",
"34798506",
"29933828",
"32565882",
"27795703",
"23467656",
"24029168",
"29482506",
"28334848",
"2020",
"29979920",
"23989137",
"20075479",
"27295638",
"31272518",
"32784184",
"29157442"
] | [
{
"pmid": "31905322",
"title": "Social Media- and Internet-Based Disease Surveillance for Public Health.",
"abstract": "Disease surveillance systems are a cornerstone of public health tracking and prevention. This review addresses the use, promise, perils, and ethics of social media- and Internet-based data collection for public health surveillance. Our review highlights untapped opportunities for integrating digital surveillance in public health and current applications that could be improved through better integration, validation, and clarity on rules surrounding ethical considerations. Promising developments include hybrid systems that couple traditional surveillance data with data from search queries, social media posts, and crowdsourcing. In the future, it will be important to identify opportunities for public and private partnerships, train public health experts in data science, reduce biases related to digital data (gathered from Internet use, wearable devices, etc.), and address privacy. We are on the precipice of an unprecedented opportunity to track, predict, and prevent global disease burdens in the population using digital data."
},
{
"pmid": "32876579",
"title": "Investigating the Attitudes of Adolescents and Young Adults Towards JUUL: Computational Study Using Twitter Data.",
"abstract": "BACKGROUND\nIncreases in electronic nicotine delivery system (ENDS) use among high school students from 2017 to 2019 appear to be associated with the increasing popularity of the ENDS device JUUL.\n\n\nOBJECTIVE\nWe employed a content analysis approach in conjunction with natural language processing methods using Twitter data to understand salient themes regarding JUUL use on Twitter, sentiment towards JUUL, and underage JUUL use.\n\n\nMETHODS\nBetween July 2018 and August 2019, 11,556 unique tweets containing a JUUL-related keyword were collected. We manually annotated 4000 tweets for JUUL-related themes of use and sentiment. We used 3 machine learning algorithms to classify positive and negative JUUL sentiments as well as underage JUUL mentions.\n\n\nRESULTS\nOf the annotated tweets, 78.80% (3152/4000) contained a specific mention of JUUL. Only 1.43% (45/3152) of tweets mentioned using JUUL as a method of smoking cessation, and only 6.85% (216/3152) of tweets mentioned the potential health effects of JUUL use. Of the machine learning methods used, the random forest classifier was the best performing algorithm among all 3 classification tasks (ie, positive sentiment, negative sentiment, and underage JUUL mentions).\n\n\nCONCLUSIONS\nOur findings suggest that a vast majority of Twitter users are not using JUUL to aid in smoking cessation nor do they mention the potential health benefits or detriments of JUUL use. Using machine learning algorithms to identify tweets containing underage JUUL mentions can support the timely surveillance of JUUL habits and opinions, further assisting youth-targeted public health intervention strategies."
},
{
"pmid": "34798506",
"title": "Optimized stacking, a new method for constructing ensemble surrogate models applied to DNAPL-contaminated aquifer remediation.",
"abstract": "Surfactant-enhanced aquifer remediation (SEAR) is an appropriate method for DNAPL-contaminated aquifer remediation; However, due to the high cost of the SEAR method, finding the optimal remediation scenario is usually essential. Embedding numerical simulation models of DNAPL remediation within the optimization routines are computationally expensive, and in this situation, using surrogate models instead of numerical models is a proper alternative. Ensemble methods are also utilized to enhance the accuracy of surrogate models, and in this study, the Stacking ensemble method was applied and compared with conventional methods. First, Six machine learning methods were used as surrogate models, and various feature scaling techniques were employed, and their impact on the models' performance was evaluated. Also, Bagging and Boosting homogeneous ensemble methods were used to improve the base models' accuracy. A total of six stand-alone surrogate models and 12 homogeneous ensemble models were used as the base input models of the Stacking ensemble model. Due to the large size of the Stacking model, Bayesian hyper-parameter optimization method was used to find its optimal hyper-parameters. The results showed that the Bayesian hyper-parameter optimization method had better performance than common methods such as random search and grid search. The artificial neural network model, whose input data was scaled by the power transformer method, had the best performance with a cross-validation RMSE of 0.065. The Boosting method increased the base models' accuracy more than other homogeneous methods, and the best Boosting model had a test RMSE of 0.039. The Stacking ensemble method significantly increased the base models' accuracy and performed better than other ensemble methods. The best ensemble surrogate model constructed with Stacking had a cross-validation RMSE of 0.016. Finally, a differential evolution optimization model was used by substituting the Stacking ensemble model with the numerical model, and the optimal remediation strategy was obtained at a total cost of $ 72,706."
},
{
"pmid": "29933828",
"title": "Weighing the Risks and Benefits of Electronic Cigarette Use in High-Risk Populations.",
"abstract": "This article reviews the current evidence on electronic cigarette (e-cigarette) safety and efficacy for smoking cessation, with a focus on smokers with cardiovascular disease, pulmonary disease, or serious mental illness. In the United States, adult smokers use e-cigarettes primarily to quit or reduce cigarette smoking. An understanding of the potential risks and benefits of e-cigarette use may help clinicians counsel smokers about the potential impact of e-cigarettes on health."
},
{
"pmid": "32565882",
"title": "Modeling the Spread of COVID-19 Infection Using a Multilayer Perceptron.",
"abstract": "Coronavirus (COVID-19) is a highly infectious disease that has captured the attention of the worldwide public. Modeling of such diseases can be extremely important in the prediction of their impact. While classic, statistical, modeling can provide satisfactory models, it can also fail to comprehend the intricacies contained within the data. In this paper, authors use a publicly available dataset, containing information on infected, recovered, and deceased patients in 406 locations over 51 days (22nd January 2020 to 12th March 2020). This dataset, intended to be a time-series dataset, is transformed into a regression dataset and used in training a multilayer perceptron (MLP) artificial neural network (ANN). The aim of training is to achieve a worldwide model of the maximal number of patients across all locations in each time unit. Hyperparameters of the MLP are varied using a grid search algorithm, with a total of 5376 hyperparameter combinations. Using those combinations, a total of 48384 ANNs are trained (16128 for each patient group-deceased, recovered, and infected), and each model is evaluated using the coefficient of determination (R2). Cross-validation is performed using K-fold algorithm with 5-folds. Best models achieved consists of 4 hidden layers with 4 neurons in each of those layers, and use a ReLU activation function, with R2 scores of 0.98599 for confirmed, 0.99429 for deceased, and 0.97941 for recovered patient models. When cross-validation is performed, these scores drop to 0.94 for confirmed, 0.781 for recovered, and 0.986 for deceased patient models, showing high robustness of the deceased patient model, good robustness for confirmed, and low robustness for recovered patient model."
},
{
"pmid": "27795703",
"title": "Improving Feature Representation Based on a Neural Network for Author Profiling in Social Media Texts.",
"abstract": "We introduce a lexical resource for preprocessing social media data. We show that a neural network-based feature representation is enhanced by using this resource. We conducted experiments on the PAN 2015 and PAN 2016 author profiling corpora and obtained better results when performing the data preprocessing using the developed lexical resource. The resource includes dictionaries of slang words, contractions, abbreviations, and emoticons commonly used in social media. Each of the dictionaries was built for the English, Spanish, Dutch, and Italian languages. The resource is freely available."
},
{
"pmid": "23467656",
"title": "Levels of selected carcinogens and toxicants in vapour from electronic cigarettes.",
"abstract": "SIGNIFICANCE\nElectronic cigarettes, also known as e-cigarettes, are devices designed to imitate regular cigarettes and deliver nicotine via inhalation without combusting tobacco. They are purported to deliver nicotine without other toxicants and to be a safer alternative to regular cigarettes. However, little toxicity testing has been performed to evaluate the chemical nature of vapour generated from e-cigarettes. The aim of this study was to screen e-cigarette vapours for content of four groups of potentially toxic and carcinogenic compounds: carbonyls, volatile organic compounds, nitrosamines and heavy metals.\n\n\nMATERIALS AND METHODS\nVapours were generated from 12 brands of e-cigarettes and the reference product, the medicinal nicotine inhaler, in controlled conditions using a modified smoking machine. The selected toxic compounds were extracted from vapours into a solid or liquid phase and analysed with chromatographic and spectroscopy methods.\n\n\nRESULTS\nWe found that the e-cigarette vapours contained some toxic substances. The levels of the toxicants were 9-450 times lower than in cigarette smoke and were, in many cases, comparable with trace amounts found in the reference product.\n\n\nCONCLUSIONS\nOur findings are consistent with the idea that substituting tobacco cigarettes with e-cigarettes may substantially reduce exposure to selected tobacco-specific toxicants. E-cigarettes as a harm reduction strategy among smokers unwilling to quit, warrants further study. (To view this abstract in Polish and German, please see the supplementary files online.)."
},
{
"pmid": "29482506",
"title": "Effect of the sequence data deluge on the performance of methods for detecting protein functional residues.",
"abstract": "BACKGROUND\nThe exponential accumulation of new sequences in public databases is expected to improve the performance of all the approaches for predicting protein structural and functional features. Nevertheless, this was never assessed or quantified for some widely used methodologies, such as those aimed at detecting functional sites and functional subfamilies in protein multiple sequence alignments. Using raw protein sequences as only input, these approaches can detect fully conserved positions, as well as those with a family-dependent conservation pattern. Both types of residues are routinely used as predictors of functional sites and, consequently, understanding how the sequence content of the databases affects them is relevant and timely.\n\n\nRESULTS\nIn this work we evaluate how the growth and change with time in the content of sequence databases affect five sequence-based approaches for detecting functional sites and subfamilies. We do that by recreating historical versions of the multiple sequence alignments that would have been obtained in the past based on the database contents at different time points, covering a period of 20 years. Applying the methods to these historical alignments allows quantifying the temporal variation in their performance. Our results show that the number of families to which these methods can be applied sharply increases with time, while their ability to detect potentially functional residues remains almost constant.\n\n\nCONCLUSIONS\nThese results are informative for the methods' developers and final users, and may have implications in the design of new sequencing initiatives."
},
{
"pmid": "28334848",
"title": "Systematic review of surveillance by social media platforms for illicit drug use.",
"abstract": "Background\nThe use of social media (SM) as a surveillance tool of global illicit drug use is limited. To address this limitation, a systematic review of literature focused on the ability of SM to better recognize illicit drug use trends was addressed.\n\n\nMethods\nA search was conducted in databases: PubMed, CINAHL via Ebsco, PsychINFO via Ebsco, Medline via Ebsco, ERIC, Cochrane Library, Science Direct, ABI/INFORM Complete and Communication and Mass Media Complete. Included studies were original research published in peer-reviewed journals between January 2005 and June 2015 that primarily focused on collecting data from SM platforms to track trends in illicit drug use. Excluded were studies focused on purchasing prescription drugs from illicit online pharmacies.\n\n\nResults\nSelected studies used a range of SM tools/applications, including message boards, Twitter and blog/forums/platform discussions. Limitations included relevance, a lack of standardized surveillance systems and a lack of efficient algorithms to isolate relevant items.\n\n\nConclusion\nIllicit drug use is a worldwide problem, and the rise of global social networking sites has led to the evolution of a readily accessible surveillance tool. Systematic approaches need to be developed to efficiently extract and analyze illicit drug content from social networks to supplement effective prevention programs."
},
{
"pmid": "29979920",
"title": "\"Okay, We Get It. You Vape\": An Analysis of Geocoded Content, Context, and Sentiment regarding E-Cigarettes on Twitter.",
"abstract": "The current study examined conversations on Twitter related to use and perceptions of e-cigarettes in the United States. We employed the Social Media Analytic and Research Testbed (SMART) dashboard, which was used to identify and download (via a public API) e-cigarette-related geocoded tweets. E-cigarette-related tweets were collected continuously using customized geo-targeted Twitter APIs. A total of 193,051 tweets were collected between October 2015 and February 2016. Of these tweets, a random sample of 973 geocoded tweets were selected and manually coded for information regarding source, context, and message characteristics. Our findings reveal that although over half of tweets were positive, a sizeable portion was negative or neutral. We also found that, among those tweets mentioning a stigma of e-cigarettes, most confirmed that a stigma does exist. Conversely, among tweets mentioning the harmfulness of e-cigarettes, most denied that e-cigarettes were a health hazard. These results suggest that current efforts have left the public with ambiguity regarding the potential dangers of e-cigarettes. Consequently, it is critical to communicate the public health stance on this issue to inform the public and provide counterarguments to the positive sentiments presently dominating conversations about e-cigarettes on social media. The lack of awareness and need to voice a public health position on e-cigarettes represents a vital opportunity to continue winning gains for tobacco control and prevention efforts through health communication interventions targeting e-cigarettes."
},
{
"pmid": "23989137",
"title": "Using twitter to examine smoking behavior and perceptions of emerging tobacco products.",
"abstract": "BACKGROUND\nSocial media platforms such as Twitter are rapidly becoming key resources for public health surveillance applications, yet little is known about Twitter users' levels of informedness and sentiment toward tobacco, especially with regard to the emerging tobacco control challenges posed by hookah and electronic cigarettes.\n\n\nOBJECTIVE\nTo develop a content and sentiment analysis of tobacco-related Twitter posts and build machine learning classifiers to detect tobacco-relevant posts and sentiment towards tobacco, with a particular focus on new and emerging products like hookah and electronic cigarettes.\n\n\nMETHODS\nWe collected 7362 tobacco-related Twitter posts at 15-day intervals from December 2011 to July 2012. Each tweet was manually classified using a triaxial scheme, capturing genre, theme, and sentiment. Using the collected data, machine-learning classifiers were trained to detect tobacco-related vs irrelevant tweets as well as positive vs negative sentiment, using Naïve Bayes, k-nearest neighbors, and Support Vector Machine (SVM) algorithms. Finally, phi contingency coefficients were computed between each of the categories to discover emergent patterns.\n\n\nRESULTS\nThe most prevalent genres were first- and second-hand experience and opinion, and the most frequent themes were hookah, cessation, and pleasure. Sentiment toward tobacco was overall more positive (1939/4215, 46% of tweets) than negative (1349/4215, 32%) or neutral among tweets mentioning it, even excluding the 9% of tweets categorized as marketing. Three separate metrics converged to support an emergent distinction between, on one hand, hookah and electronic cigarettes corresponding to positive sentiment, and on the other hand, traditional tobacco products and more general references corresponding to negative sentiment. These metrics included correlations between categories in the annotation scheme (phihookah-positive=0.39; phi(e-cigs)-positive=0.19); correlations between search keywords and sentiment (χ²₄=414.50, P<.001, Cramer's V=0.36), and the most discriminating unigram features for positive and negative sentiment ranked by log odds ratio in the machine learning component of the study. In the automated classification tasks, SVMs using a relatively small number of unigram features (500) achieved best performance in discriminating tobacco-related from unrelated tweets (F score=0.85).\n\n\nCONCLUSIONS\nNovel insights available through Twitter for tobacco surveillance are attested through the high prevalence of positive sentiment. This positive sentiment is correlated in complex ways with social image, personal experience, and recently popular products such as hookah and electronic cigarettes. Several apparent perceptual disconnects between these products and their health effects suggest opportunities for tobacco control education. Finally, machine classification of tobacco-related posts shows a promising edge over strictly keyword-based approaches, yielding an improved signal-to-noise ratio in Twitter data and paving the way for automated tobacco surveillance applications."
},
{
"pmid": "20075479",
"title": "Sensitivity analysis of kappa-fold cross validation in prediction error estimation.",
"abstract": "In the machine learning field, the performance of a classifier is usually measured in terms of prediction error. In most real-world problems, the error cannot be exactly calculated and it must be estimated. Therefore, it is important to choose an appropriate estimator of the error. This paper analyzes the statistical properties, bias and variance, of the kappa-fold cross-validation classification error estimator (kappa-cv). Our main contribution is a novel theoretical decomposition of the variance of the kappa-cv considering its sources of variance: sensitivity to changes in the training set and sensitivity to changes in the folds. The paper also compares the bias and variance of the estimator for different values of kappa. The experimental study has been performed in artificial domains because they allow the exact computation of the implied quantities and we can rigorously specify the conditions of experimentation. The experimentation has been performed for two classifiers (naive Bayes and nearest neighbor), different numbers of folds, sample sizes, and training sets coming from assorted probability distributions. We conclude by including some practical recommendation on the use of kappa-fold cross validation."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "31272518",
"title": "A Systematic Review of Techniques Employed for Determining Mental Health Using Social Media in Psychological Surveillance During Disasters.",
"abstract": "During disasters, people share their thoughts and emotions on social media and also provide information about the event. Mining the social media messages and updates can be helpful in understanding the emotional state of people during such unforeseen events as they are real-time data. The objective of this review is to explore the feasibility of using social media data for mental health surveillance as well as the techniques used for determining mental health using social media data during disasters. PubMed, PsycINFO, and PsycARTICLES databases were searched from 2009 to November 2018 for primary research studies. After screening and analyzing the records, 18 studies were included in this review. Twitter was the widely researched social media platform for understanding the mental health of people during a disaster. Psychological surveillance was done by identifying the sentiments expressed by people or the emotions they displayed in their social media posts. Classification of sentiments and emotions were done using lexicon-based or machine learning methods. It is not possible to conclude that a particular technique is the best performing one, because the performance of any method depends upon factors such as the disaster size, the volume of data, disaster setting, and the disaster web environment."
},
{
"pmid": "32784184",
"title": "Machine Learning Classifiers for Twitter Surveillance of Vaping: Comparative Machine Learning Study.",
"abstract": "BACKGROUND\nTwitter presents a valuable and relevant social media platform to study the prevalence of information and sentiment on vaping that may be useful for public health surveillance. Machine learning classifiers that identify vaping-relevant tweets and characterize sentiments in them can underpin a Twitter-based vaping surveillance system. Compared with traditional machine learning classifiers that are reliant on annotations that are expensive to obtain, deep learning classifiers offer the advantage of requiring fewer annotated tweets by leveraging the large numbers of readily available unannotated tweets.\n\n\nOBJECTIVE\nThis study aims to derive and evaluate traditional and deep learning classifiers that can identify tweets relevant to vaping, tweets of a commercial nature, and tweets with provape sentiments.\n\n\nMETHODS\nWe continuously collected tweets that matched vaping-related keywords over 2 months from August 2018 to October 2018. From this data set of tweets, a set of 4000 tweets was selected, and each tweet was manually annotated for relevance (vape relevant or not), commercial nature (commercial or not), and sentiment (provape or not). Using the annotated data, we derived traditional classifiers that included logistic regression, random forest, linear support vector machine, and multinomial naive Bayes. In addition, using the annotated data set and a larger unannotated data set of tweets, we derived deep learning classifiers that included a convolutional neural network (CNN), long short-term memory (LSTM) network, LSTM-CNN network, and bidirectional LSTM (BiLSTM) network. The unannotated tweet data were used to derive word vectors that deep learning classifiers can leverage to improve performance.\n\n\nRESULTS\nLSTM-CNN performed the best with the highest area under the receiver operating characteristic curve (AUC) of 0.96 (95% CI 0.93-0.98) for relevance, all deep learning classifiers including LSTM-CNN performed better than the traditional classifiers with an AUC of 0.99 (95% CI 0.98-0.99) for distinguishing commercial from noncommercial tweets, and BiLSTM performed the best with an AUC of 0.83 (95% CI 0.78-0.89) for provape sentiment. Overall, LSTM-CNN performed the best across all 3 classification tasks.\n\n\nCONCLUSIONS\nWe derived and evaluated traditional machine learning and deep learning classifiers to identify vaping-related relevant, commercial, and provape tweets. Overall, deep learning classifiers such as LSTM-CNN had superior performance and had the added advantage of requiring no preprocessing. The performance of these classifiers supports the development of a vaping surveillance system."
},
{
"pmid": "29157442",
"title": "A deep learning-based multi-model ensemble method for cancer prediction.",
"abstract": "BACKGROUND AND OBJECTIVE\nCancer is a complex worldwide health problem associated with high mortality. With the rapid development of the high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no one method outperforms all the others.\n\n\nMETHODS\nIn this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers.\n\n\nRESULTS\nThe proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm.\n\n\nCONCLUSIONS\nBy taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction."
}
] |
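The record above compares individual classifiers and ensembles trained on annotated tweets. As a generic illustration of that kind of pipeline, the scikit-learn sketch below stacks two base learners over TF-IDF features; the toy tweets, labels, choice of base learners, and hyperparameters are invented for illustration and are not the study's data, feature set, or settings.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

# Tiny illustrative corpus; real work would use thousands of annotated tweets.
tweets = [
    "trying a new vape juice flavor today", "my juul pod ran out again",
    "quit smoking thanks to vaping", "e-cig clouds everywhere at the party",
    "lung injury cases linked to vaping in the news", "vaping ban debate continues",
    "great coffee this morning", "traffic was terrible on the highway",
    "watching the game tonight", "new phone battery lasts forever",
    "rainy weather all week", "finished reading a great novel",
]
labels = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # 1 = vaping-related

base_learners = [
    ("nb", MultinomialNB()),
    ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
]
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    StackingClassifier(
        estimators=base_learners,
        final_estimator=LogisticRegression(max_iter=1000),
        cv=3,  # small cv only because the toy corpus is tiny
    ),
)
clf.fit(tweets, labels)
print(clf.predict(["vape shop opened downtown", "lovely sunset at the beach"]))
```

Swapping StackingClassifier for VotingClassifier gives the voting variant mentioned in the record; which of the two performs better is an empirical question settled by the kind of cross-validated comparison the record describes.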
BMC Medical Informatics and Decision Making | null | PMC8867891 | 10.1186/s12911-022-01771-3 | Accurate training of the Cox proportional hazards model on vertically-partitioned data while preserving privacy | BackgroundAnalysing distributed medical data is challenging because of data sensitivity and various regulations to access and combine data. Some privacy-preserving methods are known for analyzing horizontally-partitioned data, where different organisations have similar data on disjoint sets of people. Technically more challenging is the case of vertically-partitioned data, dealing with data on overlapping sets of people. We use an emerging technology based on cryptographic techniques called secure multi-party computation (MPC), and apply it to perform privacy-preserving survival analysis on vertically-distributed data by means of the Cox proportional hazards (CPH) model. Both MPC and CPH are explained.MethodsWe use a Newton-Raphson solver to securely train the CPH model with MPC, jointly with all data holders, without revealing any sensitive data. In order to securely compute the log-partial likelihood in each iteration, we run into several technical challenges to preserve the efficiency and security of our solution. To tackle these technical challenges, we generalize a cryptographic protocol for securely computing the inverse of the Hessian matrix and develop a new method for securely computing exponentiations. A theoretical complexity estimate is given to get insight into the computational and communication effort that is needed.ResultsOur secure solution is implemented in a setting with three different machines, each presenting a different data holder, which can communicate through the internet. The MPyC platform is used for implementing this privacy-preserving solution to obtain the CPH model. We test the accuracy and computation time of our methods on three standard benchmark survival datasets. We identify future work to make our solution more efficient.ConclusionsOur secure solution is comparable with the standard, non-secure solver in terms of accuracy and convergence speed. The computation time is considerably larger, although the theoretical complexity is still cubic in the number of covariates and quadratic in the number of subjects. We conclude that this is a promising way of performing parametric survival analysis on vertically-distributed medical data, while realising high level of security and privacy. | Related workIn 2016, Shi et al. [13] presented a solution for grid logistic regression on horizontally-partitioned data. While using MPC they ran into problems of securely inverting the Hessian matrix and computing natural exponentiation, but they were able to find workarounds. As our situation is more complex, due to increased algorithm complexity and different data partitioning, we had to find different solutions for these challenges, which are described in “Secure exponentiation protocol” and “Matrix inverse protocol” sections respectively.Several publications describe approaches for privacy-preserving Cox regression. The works by Yu et al. [14] and Lu et al. [15] consider horizontally-partitioned data, whereas the recent work of Dai et al. [16] assumes vertically-partitioned data. The work by Domadiya and Rao [17] also considers vertically-partitioned healthcare data, for which they present a privacy-preserving association rule mining technique.Yu et al. preserve privacy by mapping the data to a lower dimensional space [14]. 
They construct their affine, sparse mapping by solving a linear program that optimizes the map in such a way that certain properties are maintained (e.g. ordering imposed by survival time, covariate selection) and thereby improve on earlier works that use random mappings. The Cox model is publicly trained on the lower-dimensional data and achieves near-optimal performance.
Lu et al. design and implement a Web service, WebDISCO, for joint training of a Cox regression model [15]. Based on federated learning ideology, they achieve privacy-preservation by sharing only aggregated information instead of individual data records. The obtained model is mathematically equivalent to a model that is trained directly on the joint data.
Dai et al. consider vertically-partitioned data and leverage the alternating direction method of multipliers (ADMM) [18] to directly train the model to its optimum [16]. Note that the ADMM method itself is iterative. The authors present their work in a client-server setting where each client only transmits aggregated intermediary results to the server in each iteration. The server performs heavier computations than the client. The subject-level data never leaves the client’s organization, although all parties must know which subjects experienced an event (not the event time). The final model is equivalent to the model that is trained directly on the joint data.
Our work also assumes vertically-partitioned data, but otherwise follows a different approach from Dai et al. [16]. Firstly, instead of a direct approach, we leverage the Newton–Raphson method for iterative training of the CPH model. Secondly, we perform all computations in the encrypted domain using secure multi-party computation instead of computations in the plain where privacy is preserved through aggregation. Aggregation may provide a solid preservation of privacy; however, in practice it is hard to make this precise and obtain mathematical guarantees on the security that is provided.
Our contributions are the following:
A novel protocol for training a privacy-preserving CPH model in the encrypted domain. The model is trained in an iterative fashion using the Newton–Raphson method for optimizing Breslow’s partial likelihood function.
Fundamental and widely-applicable protocols for computing exponentiations in the secure domain. That is, we securely compute $a^x$ for known $a > 0$ and encrypted $x$.
A new protocol for securely inverting a non-integer matrix. We use a known approach for integer matrices, and adjust it to our needs.
A recursive approach for accurately computing the gradients without using floating point arithmetic.
Privacy-preservation of input data during computation is an important aspect of privacy-preserving machine learning. However, preserving privacy during computation by means of aggregation or encryption does not prevent a malicious user from deducing sensitive information from the output of the computation. Although we did not look into this aspect in our work, we do want to mention some works that consider this aspect. O’Keefe et al. [19] describe several methods for what they call “confidentialising” the CPH output. For example, they suggest that using a random 95% of the training data, robust estimators and rounded or binned outputs can reduce the information leakage of the CPH output while preserving the most important characteristics. Although some of the techniques seem to improve privacy preservation, one should note that no mathematical guarantees of the effectiveness of the presented techniques are given.
Another approach is pursued by Nguyên and Hui [20] and Nguyên [21], who design differentially private methods for generalized linear models and the CPH model. Differential privacy is a mathematical framework ensuring that an adversary is not able to deduce the exact private information of a targeted subject from the trained model [22]. This is achieved by adding noise to the data, the penalty function or the trained model and usually results in an accuracy-privacy trade-off. The work of Nguyên [21] does not consider distributed data. In contrast, we consider distributed data and no noise is added anywhere in the process. Both works may yield interesting and partially orthogonal complements to our work. | [
"29158232",
"22454078",
"33239719",
"31911366",
"27454168",
"26159465",
"8783440",
"26754574",
"24879473",
"21211015",
"3768846",
"9193322",
"1655202"
] | [
{
"pmid": "29158232",
"title": "Development and validation of QDiabetes-2018 risk prediction algorithm to estimate future risk of type 2 diabetes: cohort study.",
"abstract": "Objectives To derive and validate updated QDiabetes-2018 prediction algorithms to estimate the 10 year risk of type 2 diabetes in men and women, taking account of potential new risk factors, and to compare their performance with current approaches.Design Prospective open cohort study.Setting Routinely collected data from 1457 general practices in England contributing to the QResearch database: 1094 were used to develop the scores and a separate set of 363 were used to validate the scores.Participants 11.5 million people aged 25-84 and free of diabetes at baseline: 8.87 million in the derivation cohort and 2.63 million in the validation cohort.Methods Cox proportional hazards models were used in the derivation cohort to derive separate risk equations in men and women for evaluation at 10 years. Risk factors considered included those already in QDiabetes (age, ethnicity, deprivation, body mass index, smoking, family history of diabetes in a first degree relative, cardiovascular disease, treated hypertension, and regular use of corticosteroids) and new risk factors: atypical antipsychotics, statins, schizophrenia or bipolar affective disorder, learning disability, gestational diabetes, and polycystic ovary syndrome. Additional models included fasting blood glucose and glycated haemoglobin (HBA1c). Measures of calibration and discrimination were determined in the validation cohort for men and women separately and for individual subgroups by age group, ethnicity, and baseline disease status.Main outcome measure Incident type 2 diabetes recorded on the general practice record.Results In the derivation cohort, 178 314 incident cases of type 2 diabetes were identified during follow-up arising from 42.72 million person years of observation. In the validation cohort, 62 326 incident cases of type 2 diabetes were identified from 14.32 million person years of observation. All new risk factors considered met our model inclusion criteria. Model A included age, ethnicity, deprivation, body mass index, smoking, family history of diabetes in a first degree relative, cardiovascular disease, treated hypertension, and regular use of corticosteroids, and new risk factors: atypical antipsychotics, statins, schizophrenia or bipolar affective disorder, learning disability, and gestational diabetes and polycystic ovary syndrome in women. Model B included the same variables as model A plus fasting blood glucose. Model C included HBA1c instead of fasting blood glucose. All three models had good calibration and high levels of explained variation and discrimination. In women, model B explained 63.3% of the variation in time to diagnosis of type 2 diabetes (R2), the D statistic was 2.69 and the Harrell's C statistic value was 0.89. The corresponding values for men were 58.4%, 2.42, and 0.87. Model B also had the highest sensitivity compared with current recommended practice in the National Health Service based on bands of either fasting blood glucose or HBA1c. However, only 16% of patients had complete data for blood glucose measurements, smoking, and body mass index.Conclusions Three updated QDiabetes risk models to quantify the absolute risk of type 2 diabetes were developed and validated: model A does not require a blood test and can be used to identify patients for fasting blood glucose (model B) or HBA1c (model C) testing. 
Model B had the best performance for predicting 10 year risk of type 2 diabetes to identify those who need interventions and more intensive follow-up, improving on current approaches. Additional external validation of models B and C in datasets with more completely collected data on blood glucose would be valuable before the models are used in clinical practice."
},
{
"pmid": "22454078",
"title": "Use of aspirin postdiagnosis improves survival for colon cancer patients.",
"abstract": "BACKGROUND\nThe preventive role of non-steroid anti-inflammatory drugs (NSAIDs) and aspirin, in particular, on colorectal cancer is well established. More recently, it has been suggested that aspirin may also have a therapeutic role. Aim of the present observational population-based study was to assess the therapeutic effect on overall survival of aspirin/NSAIDs as adjuvant treatment used after the diagnosis of colorectal cancer patients.\n\n\nMETHODS\nData concerning prescriptions were obtained from PHARMO record linkage systems and all patients diagnosed with colorectal cancer (1998-2007) were selected from the Eindhoven Cancer Registry (population-based cancer registry). Aspirin/NSAID use was classified as none, prediagnosis and postdiagnosis and only postdiagnosis. Patients were defined as non-user of aspirin/NSAIDs from the date of diagnosis of the colorectal cancer to the date of first use of aspirin or NSAIDs and user from first use to the end of follow-up. Poisson regression was performed with user status as time-varying exposure.\n\n\nRESULTS\nIn total, 1176 (26%) patients were non-users, 2086 (47%) were prediagnosis and postdiagnosis users and 1219 (27%) were only postdiagnosis users (total n=4481). Compared with non-users, a survival gain was observed for aspirin users; the adjusted rate ratio (RR) was 0.77 (95% confidence interval (CI) 0.63-0.95; P=0.015). Stratified for colon and rectal, the survival gain was only present in colon cancer (adjusted RR 0.65 (95%CI 0.50-0.84; P=0.001)). For frequent users survival gain was larger (adjusted RR 0.61 (95%CI 0.46-0.81; P=0.001). In rectal cancer, aspirin use was not associated with survival (adjusted RR 1.10 (95%CI 0.79-1.54; P=0.6). The NSAIDs use was associated with decreased survival (adjusted RR 1.93 (95%CI 1.70-2.20; P<0.001).\n\n\nCONCLUSION\nAspirin use initiated or continued after diagnosis of colon cancer is associated with a lower risk of overall mortality. These findings strongly support initiation of a placebo-controlled trial that investigates the role of aspirin as adjuvant treatment in colon cancer patients."
},
{
"pmid": "33239719",
"title": "Prognostic factors analysis for oral cavity cancer survival in the Netherlands and Taiwan using a privacy-preserving federated infrastructure.",
"abstract": "The difference in incidence of oral cavity cancer (OCC) between Taiwan and the Netherlands is striking. Different risk factors and treatment expertise may result in survival differences between the two countries. However due to regulatory restrictions, patient-level analyses of combined data from the Netherlands and Taiwan are infeasible. We implemented a software infrastructure for federated analyses on data from multiple organisations. We included 41,633 patients with single-tumour OCC between 2004 and 2016, undergoing surgery, from the Taiwan Cancer Registry and Netherlands Cancer Registry. Federated Cox Proportional Hazard was used to analyse associations between patient and tumour characteristics, country, treatment and hospital volume with survival. Five factors showed differential effects on survival of OCC patients in the Netherlands and Taiwan: age at diagnosis, stage, grade, treatment and hospital volume. The risk of death for OCC patients younger than 60 years, with advanced stage, higher grade or receiving adjuvant therapy after surgery was lower in the Netherlands than in Taiwan; but patients older than 70 years, with early stage, lower grade and receiving surgery alone in the Netherlands were at higher risk of death than those in Taiwan. The mortality risk of OCC in Taiwanese patients treated in hospitals with higher hospital volume (≥ 50 surgeries per year) was lower than in Dutch patients. We conducted analyses without exchanging patient-level information, overcoming barriers for sharing privacy sensitive information. The outcomes of patients treated in the Netherlands and Taiwan were slightly different after controlling for other prognostic factors."
},
{
"pmid": "31911366",
"title": "Distributed learning on 20 000+ lung cancer patients - The Personal Health Train.",
"abstract": "BACKGROUND AND PURPOSE\nAccess to healthcare data is indispensable for scientific progress and innovation. Sharing healthcare data is time-consuming and notoriously difficult due to privacy and regulatory concerns. The Personal Health Train (PHT) provides a privacy-by-design infrastructure connecting FAIR (Findable, Accessible, Interoperable, Reusable) data sources and allows distributed data analysis and machine learning. Patient data never leaves a healthcare institute.\n\n\nMATERIALS AND METHODS\nLung cancer patient-specific databases (tumor staging and post-treatment survival information) of oncology departments were translated according to a FAIR data model and stored locally in a graph database. Software was installed locally to enable deployment of distributed machine learning algorithms via a central server. Algorithms (MATLAB, code and documentation publicly available) are patient privacy-preserving as only summary statistics and regression coefficients are exchanged with the central server. A logistic regression model to predict post-treatment two-year survival was trained and evaluated by receiver operating characteristic curves (ROC), root mean square prediction error (RMSE) and calibration plots.\n\n\nRESULTS\nIn 4 months, we connected databases with 23 203 patient cases across 8 healthcare institutes in 5 countries (Amsterdam, Cardiff, Maastricht, Manchester, Nijmegen, Rome, Rotterdam, Shanghai) using the PHT. Summary statistics were computed across databases. A distributed logistic regression model predicting post-treatment two-year survival was trained on 14 810 patients treated between 1978 and 2011 and validated on 8 393 patients treated between 2012 and 2015.\n\n\nCONCLUSION\nThe PHT infrastructure demonstrably overcomes patient privacy barriers to healthcare data sharing and enables fast data analyses across multiple institutes from different countries with different regulatory regimens. This infrastructure promotes global evidence-based medicine while prioritizing patient privacy."
},
{
"pmid": "27454168",
"title": "Secure Multi-pArty Computation Grid LOgistic REgression (SMAC-GLORE).",
"abstract": "BACKGROUND\nIn biomedical research, data sharing and information exchange are very important for improving quality of care, accelerating discovery, and promoting the meaningful secondary use of clinical data. A big concern in biomedical data sharing is the protection of patient privacy because inappropriate information leakage can put patient privacy at risk.\n\n\nMETHODS\nIn this study, we deployed a grid logistic regression framework based on Secure Multi-party Computation (SMAC-GLORE). Unlike our previous work in GLORE, SMAC-GLORE protects not only patient-level data, but also all the intermediary information exchanged during the model-learning phase.\n\n\nRESULTS\nThe experimental results demonstrate the feasibility of secure distributed logistic regression across multiple institutions without sharing patient-level data.\n\n\nCONCLUSIONS\nIn this study, we developed a circuit-based SMAC-GLORE framework. The proposed framework provides a practical solution for secure distributed logistic regression model learning."
},
{
"pmid": "26159465",
"title": "WebDISCO: a web service for distributed cox model learning without patient-level data sharing.",
"abstract": "OBJECTIVE\nThe Cox proportional hazards model is a widely used method for analyzing survival data. To achieve sufficient statistical power in a survival analysis, it usually requires a large amount of data. Data sharing across institutions could be a potential workaround for providing this added power.\n\n\nMETHODS AND MATERIALS\nThe authors develop a web service for distributed Cox model learning (WebDISCO), which focuses on the proof-of-concept and algorithm development for federated survival analysis. The sensitive patient-level data can be processed locally and only the less-sensitive intermediate statistics are exchanged to build a global Cox model. Mathematical derivation shows that the proposed distributed algorithm is identical to the centralized Cox model.\n\n\nRESULTS\nThe authors evaluated the proposed framework at the University of California, San Diego (UCSD), Emory, and Duke. The experimental results show that both distributed and centralized models result in near-identical model coefficients with differences in the range [Formula: see text] to [Formula: see text]. The results confirm the mathematical derivation and show that the implementation of the distributed model can achieve the same results as the centralized implementation.\n\n\nLIMITATION\nThe proposed method serves as a proof of concept, in which a publicly available dataset was used to evaluate the performance. The authors do not intend to suggest that this method can resolve policy and engineering issues related to the federated use of institutional data, but they should serve as evidence of the technical feasibility of the proposed approach.Conclusions WebDISCO (Web-based Distributed Cox Regression Model; https://webdisco.ucsd-dbmi.org:8443/cox/) provides a proof-of-concept web service that implements a distributed algorithm to conduct distributed survival analysis without sharing patient level data."
},
{
"pmid": "8783440",
"title": "A robust method for proportional hazards regression.",
"abstract": "In this paper we give an informal introduction to a robust method for survival analysis which is based on a modification of the usual partial likelihood estimator (PLE). Large sample results lead us to expect reduced bias for this robust estimator compared with the PLE whenever there are even slight violations of the model. In this paper we investigate three types of violation: (a) varying dependency structure of survival time and covariates over the sample; (b) omission of influential covariates, and (c) errors in the covariates. The simulations presented support the above expectation. Analyses of data sets from cancer epidemiology and from a clinical trial in lung cancer illustrate that a better fit and additional insights may be gained using robust estimators."
},
{
"pmid": "26754574",
"title": "The value of structured data elements from electronic health records for identifying subjects for primary care clinical trials.",
"abstract": "BACKGROUND\nAn increasing number of clinical trials are conducted in primary care settings. Making better use of existing data in the electronic health records to identify eligible subjects can improve efficiency of such studies. Our study aims to quantify the proportion of eligibility criteria that can be addressed with data in electronic health records and to compare the content of eligibility criteria in primary care with previous work.\n\n\nMETHODS\nEligibility criteria were extracted from primary care studies downloaded from the UK Clinical Research Network Study Portfolio. Criteria were broken into elemental statements. Two expert independent raters classified each statement based on whether or not structured data items in the electronic health record can be used to determine if the statement was true for a specific patient. Disagreements in classification were discussed until 100 % agreement was reached. Statements were also classified based on content and the percentages of each category were compared to two similar studies reported in the literature.\n\n\nRESULTS\nEligibility criteria were retrieved from 228 studies and decomposed into 2619 criteria elemental statements. 74 % of the criteria elemental statements were considered likely associated with structured data in an electronic health record. 79 % of the studies had at least 60 % of their criteria statements addressable with structured data likely to be present in an electronic health record. Based on clinical content, most frequent categories were: \"disease, symptom, and sign\", \"therapy or surgery\", and \"medication\" (36 %, 13 %, and 10 % of total criteria statements respectively). We also identified new criteria categories related to provider and caregiver attributes (2.6 % and 1 % of total criteria statements respectively).\n\n\nCONCLUSIONS\nElectronic health records readily contain much of the data needed to assess patients' eligibility for clinical trials enrollment. Eligibility criteria content categories identified by our study can be incorporated as data elements in electronic health records to facilitate their integration with clinical trial management systems."
},
{
"pmid": "24879473",
"title": "Do Sociodemographic Factors Influence Outcome in Prostate Cancer Patients Treated With External Beam Radiation Therapy?",
"abstract": "OBJECTIVES\nThe purpose of this study was to analyze the prognostic significance of sociodemographic factors on biochemical control (bNED) and overall survival (OS) in patients with prostate cancer.\n\n\nMETHODS\nProstate cancer patients treated with definitive external beam radiation therapy (EBRT)±hormone therapy from 1997 to 2006 were analyzed in this IRB-approved study. Patient demographics, treatment (Tx), and clinical outcome were obtained from electronic medical records. Median household income (mHHI) at the census block group level was obtained from the 2000 census data. Data on disease and Tx parameters included Gleason score, pre-Tx prostate-specific antigen (PSA), T stage, year of Tx, EBRT dose, and use of hormone therapy. Patients were categorized as having low-risk, intermediate-risk, or high-risk disease. Sociodemographic factors included age, race, marital status, and mHHI. Biochemical failure was defined as nadir PSA+2 ng/mL. OS was based on death from any cause.\n\n\nRESULTS\nA total of 788 consecutive patients were studied with a median follow-up of 7 years (range, 0.4 to 15 y). African Americans comprised 48% of the patients, whereas 46% of patients were white and 6% were other races. Whites had an average mHHI of $60,190 compared with $36,917 for African Americans (P<0.001). After multivariable modeling, only radiation dose was predictive for bNED (P=0.004) or OS (P=0.008). No sociodemographic factors were predictive for either outcome. Higher radiation dose predicted for better biochemical control and OS.\n\n\nCONCLUSIONS\nThis analysis suggests that sociodemographic factors are not important prognostic factors in determining outcome after EBRT for prostate cancer."
},
{
"pmid": "21211015",
"title": "A scenario analysis of the future residential requirements for people with mental health problems in Eindhoven.",
"abstract": "BACKGROUND\nDespite large-scale investments in mental health care in the community since the 1990 s, a trend towards reinstitutionalization has been visible since 2002. Since many mental health care providers regard this as an undesirable trend, the question arises: In the coming 5 years, what types of residence should be organized for people with mental health problems? The purpose of this article is to provide mental health care providers, public housing corporations, and local government with guidelines for planning organizational strategy concerning types of residence for people with mental health problems.\n\n\nMETHODS\nA scenario analysis was performed in four steps: 1) an exploration of the external environment; 2) the identification of key uncertainties; 3) the development of scenarios; 4) the translation of scenarios into guidelines for planning organizational strategy. To explore the external environment a document study was performed, and 15 semi-structured interviews were conducted. During a workshop, a panel of experts identified two key uncertainties in the external environment, and formulated four scenarios.\n\n\nRESULTS\nThe study resulted in four scenarios: 1) Integrated and independent living in the community with professional care; 2) Responsible healthcare supported by society; 3) Differentiated provision within the walls of the institution; 4) Residence in large-scale institutions but unmet need for care. From the range of aspects within the different scenarios, the panel was able to work out concrete guidelines for planning organizational strategy.\n\n\nCONCLUSIONS\nIn the context of residence for people with mental health problems, the focus should be on investment in community care and their re-integration into society. A joint effort is needed to achieve this goal. This study shows that scenario analysis leads to useful guidelines for planning organizational strategy in mental health care."
},
{
"pmid": "3768846",
"title": "Ewing's sarcoma of bone. Experience with 140 patients.",
"abstract": "The records of 140 patients with histologically verified Ewing's sarcoma of bone treated between 1969 and 1982 were studied retrospectively. Various factors thought to be relevant to prognosis were analyzed. Three statistically significant factors were found: presence of metastatic disease, elevation of the sedimentation rate, and location of the tumor in the pelvis. In addition, patients who underwent complete surgical excision of the primary lesion had a better survival rate (74% at 5 years) than those who did not (34% at 5 years). It is concluded that patients with surgically accessible lesions should undergo treatment consisting of surgery, chemotherapy, and, in selected cases, radiation."
},
{
"pmid": "9193322",
"title": "High-dose chemotherapy with autologous transplantation for persistent/relapsed ovarian cancer: a multivariate analysis of survival for 100 consecutively treated patients.",
"abstract": "PURPOSE\nTo examine the prognostic factors associated with prolonged progression-free survival (PFS) and overall survival (OS) in 100 consecutively treated women undergoing autologous stem-cell transplant for advanced ovarian cancer.\n\n\nPATIENTS AND METHODS\nFrom October 1989 to February 1996, we transplanted 100 patients with ovarian cancer following chemotherapy with high-dose carboplatin, mitoxantrone, and cyclophosphamide with or without cyclosporine (n = 70); melphalan and mitoxantrone with or without paclitaxel (n = 25); or other regimens (n = 5). Their median age was 48 years (range, 23 to 65), 70% had papillary serous histology, 72% had grade III tumors, 66% were platinum-resistant, and 61% had > or = 1 cm bulk. The median number of prior regimens was two (range, one to six). Univariate and multivariate analyses were performed to examine age (< v > or = mean), stage, initial bulk, histology, grade, response to initial therapy, number of prior regimens, time from diagnosis to transplant, transplant regimen, platinum sensitivity, and bulk (< v > or = 1 cm) at transplant.\n\n\nRESULTS\nThe median PFS and OS times for the 100 patients were 7 and 13 months. A stepwise Cox proportional hazards model identified tumor bulk (P = .0001), and cisplatin sensitivity (P = .0249) as the best predictors of PFS. Age (P = .0017), bulk at transplant (P = .0175), and platinum sensitivity (P = .0330) provided the best prediction of OS. The median PFS and OS times for the 20 patients with platinum-sensitive, < or = 1-cm disease were 19 and 30 months. No differences in OS were seen when chemotherapy or surgery was used to achieve a minimal disease state.\n\n\nCONCLUSION\nBefore consideration of high-dose therapy for recurrent/persistent advanced ovarian cancer, patients should undergo debulking surgery or chemotherapy to achieve a minimal disease state. Patients with platinum-resistant, bulky disease should not be transplanted. The optimal patients for this therapy may be those with minimal disease responsive to initial chemotherapy."
},
{
"pmid": "1655202",
"title": "Effect of repeated transcatheter arterial embolization on the survival time in patients with hepatocellular carcinoma. An analysis by the Cox proportional hazard model.",
"abstract": "One hundred fifty-eight patients with hepatocellular carcinoma (HCC) were treated by transcatheter arterial embolization (TAE) as repeatedly as possible. Survival rates at the end of the first, second, and third year were 76.5%, 54.5%, and 41.1%, respectively. In 142 patients with repeated TAE, a significantly increased number of patients with complete necrosis of tumor was observed after repetition of the therapy. Adjusting the imbalance in prognostic factors among patients by using Cox proportional hazard model, it proved that the best response during the repeated therapy, rather than the first response, was significantly associated with survival period of the patients. Aside from the factor of response to the treatment, tumor size was the worst prognostic factor at the time when diagnosis was made. Other significant factors were portal vein invasion by HCC and bilirubin. The survival period of patients with HCC treated by repeated TAE was, therefore, affected by cancer factors, liver cirrhosis factors, and therapy-responsiveness factors. It is concluded that even if complete necrosis of tumor is not obtained after the first trial, repetition of TAE is an effective measure for prolonging of survival time in patients with HCC."
}
] |
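The record above trains the Cox proportional hazards (CPH) model with the Newton–Raphson method on Breslow’s partial likelihood, evaluated securely with multi-party computation. As a minimal plaintext sketch of the quantities each iteration must compute (exponentials of the linear predictor, risk-set sums, gradient, Hessian, and a matrix inverse), assuming our own NumPy interface rather than the paper’s protocol:

```python
import numpy as np

def newton_step_breslow(beta, X, time, event):
    """One plaintext Newton-Raphson step for the Cox model under Breslow's
    partial likelihood. Hypothetical helper for illustration only; a secure
    solver would evaluate the same sums on encrypted or secret-shared values.

    beta  : (p,) current coefficients
    X     : (n, p) covariates
    time  : (n,) observed follow-up times
    event : (n,) 1 if the event occurred, 0 if censored
    """
    n, p = X.shape
    eta = X @ beta          # linear predictors x_i^T beta
    w = np.exp(eta)         # risk scores exp(x_i^T beta)

    grad = np.zeros(p)
    hess = np.zeros((p, p))
    for i in range(n):
        if event[i] != 1:
            continue
        risk = time >= time[i]            # risk set R(t_i)
        Xr, wr = X[risk], w[risk]
        s0 = wr.sum()                     # sum_j w_j
        s1 = Xr.T @ wr                    # sum_j w_j x_j
        s2 = (Xr * wr[:, None]).T @ Xr    # sum_j w_j x_j x_j^T
        grad += X[i] - s1 / s0
        hess -= s2 / s0 - np.outer(s1, s1) / s0 ** 2
    # Maximisation step: beta <- beta - H^{-1} g (H is negative definite near the optimum).
    return beta - np.linalg.solve(hess, grad)

# Toy usage with random data (for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
time = rng.exponential(size=50)
event = rng.integers(0, 2, size=50)
beta = np.zeros(3)
for _ in range(5):
    beta = newton_step_breslow(beta, X, time, event)
print(beta)
```

In the secure setting described above, the exponentials, the risk-set sums, and the Hessian inversion are exactly the steps for which the record's authors report dedicated protocols (secure exponentiation, secure matrix inversion, and a recursive gradient computation).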
Biomedicines | null | PMC8869455 | 10.3390/biomedicines10020223 | Brain Tumor Classification Using a Combination of Variational Autoencoders and Generative Adversarial Networks | Brain tumors are a pernicious cancer with one of the lowest five-year survival rates. Neurologists often use magnetic resonance imaging (MRI) to diagnose the type of brain tumor. Automated computer-assisted tools can help them speed up the diagnosis process and reduce the burden on the health care systems. Recent advances in deep learning for medical imaging have shown remarkable results, especially in the automatic and instant diagnosis of various cancers. However, we need a large amount of data (images) to train the deep learning models in order to obtain good results. Large public datasets are rare in medicine. This paper proposes a framework based on unsupervised deep generative neural networks to solve this limitation. We combine two generative models in the proposed framework: variational autoencoders (VAEs) and generative adversarial networks (GANs). We swap the encoder–decoder network after initially training it on the training set of available MR images. The output of this swapped network is a noise vector that has information of the image manifold, and the cascaded generative adversarial network samples the input from this informative noise vector instead of random Gaussian noise. The proposed method helps the GAN to avoid mode collapse and generate realistic-looking brain tumor magnetic resonance images. These artificially generated images could solve the limitation of small medical datasets up to a reasonable extent and help the deep learning models perform acceptably. We used the ResNet50 as a classifier, and the artificially generated brain tumor images are used to augment the real and available images during the classifier training. We compared the classification results with several existing studies and state-of-the-art machine learning models. Our proposed methodology noticeably achieved better results. By using brain tumor images generated artificially by our proposed method, the classification average accuracy improved from 72.63% to 96.25%. For the most severe class of brain tumor, glioma, we achieved 0.769, 0.837, 0.833, and 0.80 values for recall, specificity, precision, and F1-score, respectively. The proposed generative model framework could be used to generate medical images in any domain, including PET (positron emission tomography) and MRI scans of various parts of the body, and the results show that it could be a useful clinical tool for medical experts. | 1.1. Related WorkIn the process of developing a machine learning-based intelligent system for the classification of brain tumors, researchers usually first perform segmentation of brain tumors by using various methods and then classify them [30]. This method improves the accuracy, but it is time consuming and takes one extra step before putting the network into the training. However, many researchers used CNNs to classify brain tumors directly without segmentation.Justin et al. [31] used three classifiers (i.e., random forest (RF), a fully connected neural network (FCNN), and a CNN) to improve the classification accuracy. The CNN attained the highest rate of accuracy, i.e., 90.26%. Tahir et al. [30] investigated various preprocessing techniques in order to improve the classification results. They used three preprocessing techniques: noise reduction, contrast enhancement, and edge detection. 
The various combinations of these preprocessing techniques are tested on various test sets. They assert that employing a variety of such schemes is more advantageous than relying on any single preprocessing scheme. They used the Figshare dataset and tested the SVM classifier on it, which achieved 86% accuracy.
Ismael et al. [32] combined statistical features with neural networks. They extracted statistical features from the MR images for classification and used 2D discrete wavelet transforms (DWT) and Gabor filters for feature selection. They feed the segmented MR images to their proposed algorithm and obtain an average accuracy of 91.9%.
Another project that sought to categorize multi-grade brain tumors can be found in [33]. A previously trained CNN model is utilized along with segmented images to implement the method. They use three different datasets to validate the model. Data augmentation was performed using various techniques to handle the class imbalance and improve accuracy. Original and augmented datasets are tested on the proposed technique. In comparison to previous works, the presented results are convincing.
Nayoman et al. [34] investigated the use of CNNs and constructed seven different neural networks. One of the lightweight models performed best. Without any prior segmentation, this simple model achieves a test accuracy of 84.19%.
Guo et al. [35] propose an Alzheimer's disease classifier. In Alzheimer's disease, abnormal protein grows in and around the brain cells. The author uses graph convolutional neural networks (GCNNs) to classify Alzheimer's disease into 2 and 3 categories. They used the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The proposed graph nets achieved 93% for 2 class classification compared to 95% for ResNet architecture and 69% for SVM classifier. The proposed graph CNN achieved 77% in the three-class classification, ResNet 65%, and SVM 57%.
Ayadi et al. [36] used two different datasets, Figshare and Radiopaedia. One is used to classify brain tumor class, and the other is related to the classification of the stage of the brain tumor. For the classification of the main class of the tumor, they used a simple, lightweight CNN architecture.
Zhou et al. [37] used only axial slices from the dataset to classify the brain tumor. They also used a simple CNN classifier.
Pashaei et al. [38] proposed a method based on extreme learning machines in their study to classify the brain tumor. First, they extracted the features using CNN and used them in a kernel extreme learning machine (KELM) to build a classifier. KELM is famous for increasing the robustness of the classification task.
GAN-based networks for producing synthetic medical images have gained popularity in recent years due to their exceptional performance. A variation of Cycle GAN is proposed by Liu et al. [39] that generates Computed Tomography (CT) images using the domain control module (DCM) and Pseudo Cycle Consistent module (PCCM). The DCM adds additional domain information, while the PCCM maintains the consistency of created images. Shen et al. created mass images using GANs and then filled them with contextual information by incorporating the synthetic lesions into healthy mammograms. They asserted that their suggested network can learn real-world images' shape, context, and distribution [40].
Chenjie et al. proposed a multi-stream CNN architecture for glioma tumor grading/subcategory grading that captures and integrates data from several sensors [41].
Navid et al. [29] proposed a new model for brain tumor classification using a CNN on the Figshare dataset. They extracted the features by using the model as a discriminator of a GAN. Then a SoftMax classifier was added to the last fully connected layer to classify the three tumor types. They used data augmentation to improve the results and achieved 93.01% accuracy on the random split.
Other researchers have applied GANs to a variety of problems in medicine: Shin et al. [42] utilized a two-step GAN to generate MR images of brain parts with and without tumors [43], Ahmad used TED-GAN [44] to classify skin cancer images, and Nie [45] generated pelvic CT images.
GANs have gained the attention of researchers and are extensively used in a variety of medical imaging fields these days. Researchers attempt to improve results by utilizing complex and deep architectures. All these GAN-based studies contribute in various ways, but all of them used random Gaussian noise as the input to the generator of the GAN. In the generative medical imaging field, manipulating the input noise of GANs is still unexplored. | [
"31980109",
"26960222",
"30999261",
"29371158",
"34678239",
"31575409",
"28387340",
"33414495",
"30407589",
"34257410",
"34213667",
"31919633",
"30327856",
"31613783",
"29313301",
"34201827",
"33374377",
"33669816",
"33542422",
"34411966",
"30768835",
"34242852",
"33640650",
"34829494",
"29993445",
"32315932",
"26629992",
"28577131"
] | [
{
"pmid": "31980109",
"title": "An enhanced deep learning approach for brain cancer MRI images classification using residual networks.",
"abstract": "Cancer is the second leading cause of death after cardiovascular diseases. Out of all types of cancer, brain cancer has the lowest survival rate. Brain tumors can have different types depending on their shape, texture, and location. Proper diagnosis of the tumor type enables the doctor to make the correct treatment choice and help save the patient's life. There is a high need in the Artificial Intelligence field for a Computer Assisted Diagnosis (CAD) system to assist doctors and radiologists with the diagnosis and classification of tumors. Over recent years, deep learning has shown an optimistic performance in computer vision systems. In this paper, we propose an enhanced approach for classifying brain tumor types using Residual Networks. We evaluate the proposed model on a benchmark dataset containing 3064 MRI images of 3 brain tumor types (Meningiomas, Gliomas, and Pituitary tumors). We have achieved the highest accuracy of 99% outperforming the other previous work on the same dataset."
},
{
"pmid": "26960222",
"title": "Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images.",
"abstract": "Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. Thus, treatment planning is a key stage to improve the quality of life of oncological patients. Magnetic resonance imaging (MRI) is a widely used imaging technique to assess these tumors, but the large amount of data produced by MRI prevents manual segmentation in a reasonable time, limiting the use of precise quantitative measurements in the clinical practice. So, automatic and reliable segmentation methods are required; however, the large spatial and structural variability among brain tumors make automatic segmentation a challenging problem. In this paper, we propose an automatic segmentation method based on Convolutional Neural Networks (CNN), exploring small 3 ×3 kernels. The use of small kernels allows designing a deeper architecture, besides having a positive effect against overfitting, given the fewer number of weights in the network. We also investigated the use of intensity normalization as a pre-processing step, which though not common in CNN-based segmentation methods, proved together with data augmentation to be very effective for brain tumor segmentation in MRI images. Our proposal was validated in the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013), obtaining simultaneously the first position for the complete, core, and enhancing regions in Dice Similarity Coefficient metric (0.88, 0.83, 0.77) for the Challenge data set. Also, it obtained the overall first position by the online evaluation platform. We also participated in the on-site BRATS 2015 Challenge using the same model, obtaining the second place, with Dice Similarity Coefficient metric of 0.78, 0.65, and 0.75 for the complete, core, and enhancing regions, respectively."
},
{
"pmid": "30999261",
"title": "Meningioma and psychiatric symptoms: An individual patient data analysis.",
"abstract": "Meningioma is a slow-growing benign tumor arising from meninges and is usually asymptomatic. Though neuropsychiatric symptoms are common in patients with brain tumors, they often can be the only manifestation in cases of meningioma. Meningiomas might present with mood symptoms, psychosis, memory disturbances, personality changes, anxiety, or anorexia nervosa. The diagnosis of meningioma could be delayed where only psychiatric symptoms are seen. A comprehensive review of the literature and individual patient data analysis was conducted, which included all case reports, and case series on meningioma and psychiatric symptoms till September 2018 with the search terms \"meningioma\" and \"psychiatric symptoms/ depression/ bipolar disorder/mania/ psychosis/ obsessive-compulsive disorder\". Search engines used included PubMed, MEDLINE, PsycINFO, Cochrane database and Google Scholar. Studies reported varied psychiatric symptoms in cases with meningioma of differing tumor site, size and lateralization. Factors which led to a neuroimaging work-up included the occurrence of sudden new or atypical psychiatric symptoms, a lack of response to typical line of treatment and the presence of neurological signs or symptoms such as headache, seizures, diplopia, urinary incontinence etc. This review emphasizes on the need of neurological examination and neuroimaging in the patients presenting to psychiatry especially with atypical symptoms."
},
{
"pmid": "29371158",
"title": "Brain Tumors.",
"abstract": "Brain tumors are common, requiring general medical providers to have a basic understanding of their diagnosis and management. The most prevalent brain tumors are intracranial metastases from systemic cancers, meningiomas, and gliomas, specifically, glioblastoma. Central nervous system metastases may occur anywhere along the neuroaxis, and require complex multidisciplinary care with neurosurgery, radiation oncology, and medical oncology. Meningiomas are tumors of the meninges, mostly benign and often managed by surgical resection, with radiation therapy and chemotherapy reserved for high-risk or refractory disease. Glioblastoma is the most common and aggressive malignant primary brain tumor, with a limited response to standard-of-care concurrent chemoradiation. The new classification of gliomas relies on molecular features, as well as histology, to arrive at an \"integrated diagnosis\" that better captures prognosis. This manuscript will review the most common brain tumors with an emphasis on their diagnosis, oncologic management, and management of medical complications."
},
{
"pmid": "34678239",
"title": "Comprehensive pharmacogenomics characterization of temozolomide response in gliomas.",
"abstract": "Recent developments in pharmacogenomics have created opportunities for predicting temozolomide response in gliomas. Temozolomide is the main first-line alkylating chemotherapeutic drug together with radiotherapy as standard treatments of high-risk gliomas after surgery. However, there are great individual differences in temozolomide response. Besides the heterogeneity of gliomas, pharmacogenomics relevant genetic polymorphisms can not only affect pharmacokinetics of temozolomide but also change anti-tumor effects of temozolomide. This review will summarize pharmacogenomic studies of temozolomide in gliomas which can lay the foundation to personalized chemotherapy."
},
{
"pmid": "31575409",
"title": "Comparison of CT and MRI images for the prediction of soft-tissue sarcoma grading and lung metastasis via a convolutional neural networks model.",
"abstract": "AIM\nTo realise the automated prediction of soft-tissue sarcoma (STS) grading and lung metastasis based on computed tomography (CT), T1-weighted (T1W) magnetic resonance imaging (MRI), and fat-suppressed T2-weighted MRI (FST2W) via the convolutional neural networks (CNN) model.\n\n\nMATERIALS AND METHODS\nMRI and CT images of 51 patients diagnosed with STS were analysed retrospectively. The patients could be divided into three groups based on disease grading: high-grade group (n=28), intermediate-grade group (n=15), low-grade group (n=8). Among these patients, 32 had lung metastasis, while the remaining 19 had no lung metastasis. The data were divided into the training, validation, and testing groups according to the ratio of 5:2:3. The receiver operating characteristic (ROC) curves and accuracy values were acquired using the testing dataset to evaluate the performance of the CNN model.\n\n\nRESULTS\nFor STS grading, the accuracy of the T1W, FST2W, CT, and the fusion of T1W and FST2W testing data were 0.86, 0.89, 0.86, and 0.85, respectively. In addition, Area Under Curve (AUC) were 0.96, 0.97, 0.97, and 0.94 respectively. For the prediction of lung metastasis, the accuracy of the T1W, FST2W, CT, and the fusion of T1W and FST2W test data were 0.92, 0.93, 0.88, and 0.91, respectively. The corresponding AUC values were 0.97, 0.96, 0.95, and 0.95, respectively. FST2W MRI performed best for predicting STS grading and lung metastasis.\n\n\nCONCLUSION\nMRI and CT images combined with the CNN model can be useful for making predictions regarding STS grading and lung metastasis, thus providing help for patient diagnosis and treatment."
},
{
"pmid": "28387340",
"title": "Advances in neuro-oncology imaging.",
"abstract": "Despite the fact that MRI has evolved to become the standard method for diagnosis and monitoring of patients with brain tumours, conventional MRI sequences have two key limitations: the inability to show the full extent of the tumour and the inability to differentiate neoplastic tissue from nonspecific, treatment-related changes after surgery, radiotherapy, chemotherapy or immunotherapy. In the past decade, PET involving the use of radiolabelled amino acids has developed into an important diagnostic tool to overcome some of the shortcomings of conventional MRI. The Response Assessment in Neuro-Oncology working group - an international effort to develop new standardized response criteria for clinical trials in brain tumours - has recommended the additional use of amino acid PET imaging for brain tumour management. Concurrently, a number of advanced MRI techniques such as magnetic resonance spectroscopic imaging and perfusion weighted imaging are under clinical evaluation to target the same diagnostic problems. This Review summarizes the clinical role of amino acid PET in relation to advanced MRI techniques for differential diagnosis of brain tumours; delineation of tumour extent for treatment planning and biopsy guidance; post-treatment differentiation between tumour progression or recurrence versus treatment-related changes; and monitoring response to therapy. An outlook for future developments in PET and MRI techniques is also presented."
},
{
"pmid": "33414495",
"title": "Plasma Hsp90 levels in patients with systemic sclerosis and relation to lung and skin involvement: a cross-sectional and longitudinal study.",
"abstract": "Our previous study demonstrated increased expression of Heat shock protein (Hsp) 90 in the skin of patients with systemic sclerosis (SSc). We aimed to evaluate plasma Hsp90 in SSc and characterize its association with SSc-related features. Ninety-two SSc patients and 92 age-/sex-matched healthy controls were recruited for the cross-sectional analysis. The longitudinal analysis comprised 30 patients with SSc associated interstitial lung disease (ILD) routinely treated with cyclophosphamide. Hsp90 was increased in SSc compared to healthy controls. Hsp90 correlated positively with C-reactive protein and negatively with pulmonary function tests: forced vital capacity and diffusing capacity for carbon monoxide (DLCO). In patients with diffuse cutaneous (dc) SSc, Hsp90 positively correlated with the modified Rodnan skin score. In SSc-ILD patients treated with cyclophosphamide, no differences in Hsp90 were found between baseline and after 1, 6, or 12 months of therapy. However, baseline Hsp90 predicts the 12-month change in DLCO. This study shows that Hsp90 plasma levels are increased in SSc patients compared to age-/sex-matched healthy controls. Elevated Hsp90 in SSc is associated with increased inflammatory activity, worse lung functions, and in dcSSc, with the extent of skin involvement. Baseline plasma Hsp90 predicts the 12-month change in DLCO in SSc-ILD patients treated with cyclophosphamide."
},
{
"pmid": "30407589",
"title": "Radiological Characteristics and Natural History of Adult IDH-Wildtype Astrocytomas with TERT Promoter Mutations.",
"abstract": "BACKGROUND\nAdult IDH-wildtype astrocytomas with TERT promoter mutations (TERTp) are associated with a poor prognosis.\n\n\nOBJECTIVE\nTo analyze the radiological presentation and natural history of adult IDH-wildtype astrocytomas with TERTp.\n\n\nMETHODS\nWe retrospectively reviewed the characteristics of 40 IDH-wildtype TERTp-mutant astrocytomas (grade II n = 19, grade III n = 21) and compared them to those of 114 IDH-mutant lower grade gliomas (LGG), of 92 IDH-wildtype TERTp-mutant glioblastomas, and of 15 IDH-wildtype TERTp-wildtype astrocytomas.\n\n\nRESULTS\nMost cases of IDH-wildtype TERTp-mutant astrocytomas occurred in patients aged >50 yr (88%) and presented as infiltrative lesions without contrast enhancement (73%) that were localized in the temporal and/or insular lobes (37.5%) or corresponded to a gliomatosis cerebri (43%). Thalamic involvement (33%) and extension to the brainstem (27%) were frequently observed, as was gyriform infiltration (33%). This radiological presentation was different from that of IDH-mutant LGG, IDH-wildtype TERTp-mutant glioblastomas, and IDH-wildtype TERTp-wildtype astrocytomas. Tumor evolution before treatment initiation was assessable in 17 cases. Ten cases demonstrated a rapid growth characterized by the apparition of a ring-like contrast enhancement and/or a median velocity of diametric expansion (VDE) ≥8 mm/yr but 7 cases displayed a slow growth (VDE <8 mm/yr) that could last several years before anaplastic transformation. Median overall survival of IDH-wildtype TERTp-mutant astrocytomas was 27 mo.\n\n\nCONCLUSION\nIDH-wildtype TERTp-mutant astrocytomas typically present as nonenhancing temporo-insular infiltrative lesions or as gliomatosis cerebri in patients aged >50 yr. In the absence of treatment, although rapid tumor growth is frequent, an initial falsely reassuring, slow growth can be observed."
},
{
"pmid": "34257410",
"title": "Prognostic stratification for IDH-wild-type lower-grade astrocytoma by Sanger sequencing and copy-number alteration analysis with MLPA.",
"abstract": "The characteristics of IDH-wild-type lower-grade astrocytoma remain unclear. According to cIMPACT-NOW update 3, IDH-wild-type astrocytomas with any of the following factors show poor prognosis: combination of chromosome 7 gain and 10 loss (+ 7/- 10), and/or EGFR amplification, and/or TERT promoter (TERTp) mutation. Multiplex ligation-dependent probe amplification (MLPA) can detect copy number alterations at reasonable cost. The purpose of this study was to identify a precise, cost-effective method for stratifying the prognosis of IDH-wild-type astrocytoma. Sanger sequencing, MLPA, and quantitative methylation-specific PCR were performed for 42 IDH-wild-type lower-grade astrocytomas surgically treated at Kyoto University Hospital, and overall survival was analysed for 40 patients who underwent first surgery. Of the 42 IDH-wild-type astrocytomas, 21 were classified as grade 4 using cIMPACT-NOW update 3 criteria and all had either TERTp mutation or EGFR amplification. Kaplan-Meier analysis confirmed the prognostic significance of cIMPACT-NOW criteria, and World Health Organization grade was also prognostic. Cox regression hazard model identified independent significant prognostic indicators of PTEN loss (risk ratio, 9.75; p < 0.001) and PDGFRA amplification (risk ratio, 13.9; p = 0.002). The classification recommended by cIMPACT-NOW update 3 could be completed using Sanger sequencing and MLPA. Survival analysis revealed PTEN and PDGFRA were significant prognostic factors for IDH-wild-type lower-grade astrocytoma."
},
{
"pmid": "34213667",
"title": "Clinical value of 3'-deoxy-3'-[18F]fluorothymidine-positron emission tomography for diagnosis, staging and assessing therapy response in lung cancer.",
"abstract": "Lung cancer has the highest mortality rate of any tumour type. The main driver of lung tumour growth and development is uncontrolled cellular proliferation. Poor patient outcomes are partly the result of the limited range of effective anti-cancer therapies available and partly due to the limited accuracy of biomarkers to report on cell proliferation rates in patients. Accordingly, accurate methods of diagnosing, staging and assessing response to therapy are crucial to improve patient outcomes. One effective way of assessing cell proliferation is to employ non-invasive evaluation using 3'-deoxy-3'-[18F]fluorothymidine ([18F]FLT) positron emission tomography [18F]FLT-PET. [18F]FLT, unlike the most commonly used PET tracer [18F]fluorodeoxyglucose ([18F]FDG), can specifically report on cell proliferation and does not accumulate in inflammatory cells. Therefore, this radiotracer could exhibit higher specificity in diagnosis and staging, along with more accurate monitoring of therapy response at early stages in the treatment cycle. This review summarises and evaluates published studies on the clinical use of [18F]FLT to diagnose, stage and assess response to therapy in lung cancer."
},
{
"pmid": "31919633",
"title": "Simultaneous FET-PET and contrast-enhanced MRI based on hybrid PET/MR improves delineation of tumor spatial biodistribution in gliomas: a biopsy validation study.",
"abstract": "PURPOSE\nGlioma treatment planning requires precise tumor delineation, which is typically performed with contrast-enhanced (CE) MRI. However, CE MRI fails to reflect the entire extent of glioma. O-(2-18F-fluoroethyl)-L-tyrosine (18F-FET) PET may detect tumor volumes missed by CE MRI. We investigated the clinical value of simultaneous FET-PET and CE MRI in delineating tumor extent before treatment planning. Guided stereotactic biopsy was used to validate the findings.\n\n\nMETHODS\nConventional MRI and 18F-FET PET were performed simultaneously on a hybrid PET/MR in 33 patients with histopathologically confirmed glioma. Tumor volumes were quantified using a tumor-to-brain ratio ≥ 1.6 (VPET) and a visual threshold (VCE). We visually assessed abnormal areas on FLAIR images and calculated Dice's coefficient (DSC), overlap volume (OV), discrepancy-PET, and discrepancy-CE. Additionally, several stereotactic biopsy samples were taken from \"matched\" or \"mismatched\" FET-PET and CE MRI regions.\n\n\nRESULTS\nAmong 31 patients (93.94%), FET-PET delineated significantly larger tumor volumes than CE MRI (77.84 ± 51.74 cm3 vs. 34.59 ± 27.07 cm3, P < 0.05). Of the 21 biopsy samples obtained from regions with increased FET uptake, all were histopathologically confirmed as glioma tissue or tumor infiltration, whereas only 13 showed enhancement on CE MRI. Among all patients, the spatial similarity between VPET and VCE was low (average DSC 0.56 ± 0.22), while the overlap was high (average OV 0.95 ± 0.08). The discrepancy-CE and discrepancy-PET were lower than 10% in 28 and 0 patients, respectively. Eleven patients showed VPET partially beyond abnormal signal areas on FLAIR images.\n\n\nCONCLUSION\nThe metabolically active biodistribution of gliomas delineated with FET-PET significantly exceeds tumor volume on CE MRI, and histopathology confirms these findings. Our preliminary results indicate that combining the anatomic and molecular information obtained from conventional MRI and FET-PET would reveal a more accurate glioma extent, which is critical for individualized treatment planning."
},
{
"pmid": "30327856",
"title": "FET PET reveals considerable spatial differences in tumour burden compared to conventional MRI in newly diagnosed glioblastoma.",
"abstract": "PURPOSE\nAreas of contrast enhancement (CE) on MRI are usually the target for resection or radiotherapy target volume definition in glioblastomas. However, the solid tumour mass may extend beyond areas of CE. Amino acid PET can detect parts of the tumour that show no CE. We systematically investigated tumour volumes delineated by amino acid PET and MRI in patients with newly diagnosed, untreated glioblastoma.\n\n\nMETHODS\nPreoperatively, 50 patients with neuropathologically confirmed glioblastoma underwent O-(2-[18F]-fluoroethyl)-L-tyrosine (FET) PET, and fluid-attenuated inversion recovery (FLAIR) and contrast-enhanced MRI. Areas of CE were manually segmented. FET PET tumour volumes were segmented using a tumour-to-brain ratio of ≥1.6. The percentage overlap volumes, and Dice and Jaccard spatial similarity coefficients (DSC, JSC) were calculated. FLAIR images were evaluated visually.\n\n\nRESULTS\nIn 43 patients (86%), the FET tumour volume was significantly larger than the CE volume (21.5 ± 14.3 mL vs. 9.4 ± 11.3 mL; P < 0.001). Forty patients (80%) showed both increased uptake of FET and CE. In these 40 patients, the spatial similarity between FET uptake and CE was low (mean DSC 0.39 ± 0.21, mean JSC 0.26 ± 0.16). Ten patients (20%) showed no CE, and one of these patients showed no FET uptake. In five patients (10%), increased FET uptake was present outside areas of FLAIR hyperintensity.\n\n\nCONCLUSION\nOur results show that the metabolically active tumour volume delineated by FET PET is significantly larger than tumour volume delineated by CE. Furthermore, the results strongly suggest that the information derived from both imaging modalities should be integrated into the management of patients with newly diagnosed glioblastoma."
},
{
"pmid": "31613783",
"title": "Automated Brain Tumor Segmentation Using Multimodal Brain Scans: A Survey Based on Models Submitted to the BraTS 2012-2018 Challenges.",
"abstract": "Reliable brain tumor segmentation is essential for accurate diagnosis and treatment planning. Since manual segmentation of brain tumors is a highly time-consuming, expensive and subjective task, practical automated methods for this purpose are greatly appreciated. But since brain tumors are highly heterogeneous in terms of location, shape, and size, developing automatic segmentation methods has remained a challenging task over decades. This paper aims to review the evolution of automated models for brain tumor segmentation using multimodal MR images. In order to be able to make a just comparison between different methods, the proposed models are studied for the most famous benchmark for brain tumor segmentation, namely the BraTS challenge [1]. The BraTS 2012-2018 challenges and the state-of-the-art automated models employed each year are analysed. The changing trend of these automated methods since 2012 are studied and the main parameters that affect the performance of different models are analysed."
},
{
"pmid": "29313301",
"title": "Identification and classification of brain tumor MRI images with feature extraction using DWT and probabilistic neural network.",
"abstract": "The identification, segmentation and detection of infecting area in brain tumor MRI images are a tedious and time-consuming task. The different anatomy structure of human body can be visualized by an image processing concepts. It is very difficult to have vision about the abnormal structures of human brain using simple imaging techniques. Magnetic resonance imaging technique distinguishes and clarifies the neural architecture of human brain. MRI technique contains many imaging modalities that scans and capture the internal structure of human brain. In this study, we have concentrated on noise removal technique, extraction of gray-level co-occurrence matrix (GLCM) features, DWT-based brain tumor region growing segmentation to reduce the complexity and improve the performance. This was followed by morphological filtering which removes the noise that can be formed after segmentation. The probabilistic neural network classifier was used to train and test the performance accuracy in the detection of tumor location in brain MRI images. The experimental results achieved nearly 100% accuracy in identifying normal and abnormal tissues from brain MR images demonstrating the effectiveness of the proposed technique."
},
{
"pmid": "34201827",
"title": "Towards Clinical Application of Artificial Intelligence in Ultrasound Imaging.",
"abstract": "Artificial intelligence (AI) is being increasingly adopted in medical research and applications. Medical AI devices have continuously been approved by the Food and Drug Administration in the United States and the responsible institutions of other countries. Ultrasound (US) imaging is commonly used in an extensive range of medical fields. However, AI-based US imaging analysis and its clinical implementation have not progressed steadily compared to other medical imaging modalities. The characteristic issues of US imaging owing to its manual operation and acoustic shadows cause difficulties in image quality control. In this review, we would like to introduce the global trends of medical AI research in US imaging from both clinical and basic perspectives. We also discuss US image preprocessing, ingenious algorithms that are suitable for US imaging analysis, AI explainability for obtaining informed consent, the approval process of medical AI devices, and future perspectives towards the clinical application of AI-based US diagnostic support technologies."
},
{
"pmid": "33374377",
"title": "Using Artificial Neural Network to Discriminate Parkinson's Disease from Other Parkinsonisms by Focusing on Putamen of Dopamine Transporter SPECT Images.",
"abstract": "BACKGROUND\nThe challenge of differentiating, at an early stage, Parkinson's disease from parkinsonism caused by other disorders remains unsolved. We proposed using an artificial neural network (ANN) to process images of dopamine transporter single-photon emission computed tomography (DAT-SPECT).\n\n\nMETHODS\nAbnormal DAT-SPECT images of subjects with Parkinson's disease and parkinsonism caused by other disorders were divided into training and test sets. Striatal regions of the images were segmented by using an active contour model and were used as the data to perform transfer learning on a pre-trained ANN to discriminate Parkinson's disease from parkinsonism caused by other disorders. A support vector machine trained using parameters of semi-quantitative measurements including specific binding ratio and asymmetry index was used for comparison.\n\n\nRESULTS\nThe predictive accuracy of the ANN classifier (86%) was higher than that of the support vector machine classifier (68%). The sensitivity and specificity of the ANN classifier in predicting Parkinson's disease were 81.8% and 88.6%, respectively.\n\n\nCONCLUSIONS\nThe ANN classifier outperformed classical biomarkers in differentiating Parkinson's disease from parkinsonism caused by other disorders. This classifier can be readily included into standalone computer software for clinical application."
},
{
"pmid": "33669816",
"title": "Reinforcement Learning for Radiotherapy Dose Fractioning Automation.",
"abstract": "External beam radiotherapy cancer treatment aims to deliver dose fractions to slowly destroy a tumor while avoiding severe side effects in surrounding healthy tissues. To automate the dose fraction schedules, this paper investigates how deep reinforcement learning approaches (based on deep Q network and deep deterministic policy gradient) can learn from a model of a mixture of tumor and healthy cells. A 2D tumor growth simulation is used to simulate radiation effects on tissues and thus training an agent to automatically optimize dose fractionation. Results show that initiating treatment with large dose per fraction, and then gradually reducing it, is preferred to the standard approach of using a constant dose per fraction."
},
{
"pmid": "33542422",
"title": "Applying artificial intelligence to longitudinal imaging analysis of vestibular schwannoma following radiosurgery.",
"abstract": "Artificial intelligence (AI) has been applied with considerable success in the fields of radiology, pathology, and neurosurgery. It is expected that AI will soon be used to optimize strategies for the clinical management of patients based on intensive imaging follow-up. Our objective in this study was to establish an algorithm by which to automate the volumetric measurement of vestibular schwannoma (VS) using a series of parametric MR images following radiosurgery. Based on a sample of 861 consecutive patients who underwent Gamma Knife radiosurgery (GKRS) between 1993 and 2008, the proposed end-to-end deep-learning scheme with automated pre-processing pipeline was applied to a series of 1290 MR examinations (T1W+C, and T2W parametric MR images). All of which were performed under consistent imaging acquisition protocols. The relative volume difference (RVD) between AI-based volumetric measurements and clinical measurements performed by expert radiologists were + 1.74%, - 0.31%, - 0.44%, - 0.19%, - 0.01%, and + 0.26% at each follow-up time point, regardless of the state of the tumor (progressed, pseudo-progressed, or regressed). This study outlines an approach to the evaluation of treatment responses via novel volumetric measurement algorithm, and can be used longitudinally following GKRS for VS. The proposed deep learning AI scheme is applicable to longitudinal follow-up assessments following a variety of therapeutic interventions."
},
{
"pmid": "34411966",
"title": "FA-GAN: Fused attentive generative adversarial networks for MRI image super-resolution.",
"abstract": "High-resolution magnetic resonance images can provide fine-grained anatomical information, but acquiring such data requires a long scanning time. In this paper, a framework called the Fused Attentive Generative Adversarial Networks(FA-GAN) is proposed to generate the super- resolution MR image from low-resolution magnetic resonance images, which can reduce the scanning time effectively but with high resolution MR images. In the framework of the FA-GAN, the local fusion feature block, consisting of different three-pass networks by using different convolution kernels, is proposed to extract image features at different scales. And the global feature fusion module, including the channel attention module, the self-attention module, and the fusion operation, is designed to enhance the important features of the MR image. Moreover, the spectral normalization process is introduced to make the discriminator network stable. 40 sets of 3D magnetic resonance images (each set of images contains 256 slices) are used to train the network, and 10 sets of images are used to test the proposed method. The experimental results show that the PSNR and SSIM values of the super-resolution magnetic resonance image generated by the proposed FA-GAN method are higher than the state-of-the-art reconstruction methods."
},
{
"pmid": "30768835",
"title": "Feature enhancement framework for brain tumor segmentation and classification.",
"abstract": "Automatic medical image analysis is one of the key tasks being used by the medical community for disease diagnosis and treatment planning. Statistical methods are the major algorithms used and consist of few steps including preprocessing, feature extraction, segmentation, and classification. Performance of such statistical methods is an important factor for their successful adaptation. The results of these algorithms depend on the quality of images fed to the processing pipeline: better the images, higher the results. Preprocessing is the pipeline phase that attempts to improve the quality of images before applying the chosen statistical method. In this work, popular preprocessing techniques are investigated from different perspectives where these preprocessing techniques are grouped into three main categories: noise removal, contrast enhancement, and edge detection. All possible combinations of these techniques are formed and applied on different image sets which are then passed to a predefined pipeline of feature extraction, segmentation, and classification. Classification results are calculated using three different measures: accuracy, sensitivity, and specificity while segmentation results are calculated using dice similarity score. Statistics of five high scoring combinations are reported for each data set. Experimental results show that application of proper preprocessing techniques could improve the classification and segmentation results to a greater extent. However, the combinations of these techniques depend on the characteristics and type of data set used."
},
{
"pmid": "34242852",
"title": "CT synthesis from MRI using multi-cycle GAN for head-and-neck radiation therapy.",
"abstract": "Magnetic Resonance Imaging (MRI) guided Radiation Therapy is a hot topic in the current studies of radiotherapy planning, which requires using MRI to generate synthetic Computed Tomography (sCT). Despite recent progress in image-to-image translation, it remains challenging to apply such techniques to generate high-quality medical images. This paper proposes a novel framework named Multi-Cycle GAN, which uses the Pseudo-Cycle Consistent module to control the consistency of generation and the domain control module to provide additional identical constraints. Besides, we design a new generator named Z-Net to improve the accuracy of anatomy details. Extensive experiments show that Multi-Cycle GAN outperforms state-of-the-art CT synthesis methods such as Cycle GAN, which improves MAE to 0.0416, ME to 0.0340, PSNR to 39.1053."
},
{
"pmid": "33640650",
"title": "Mass Image Synthesis in Mammogram with Contextual Information Based on GANs.",
"abstract": "BACKGROUND AND OBJECTIVE\nIn medical imaging, the scarcity of labeled lesion data has hindered the application of many deep learning algorithms. To overcome this problem, the simulation of diverse lesions in medical images is proposed. However, synthesizing labeled mass images in mammograms is still challenging due to the lack of consistent patterns in shape, margin, and contextual information. Therefore, we aim to generate various labeled medical images based on contextual information in mammograms.\n\n\nMETHODS\nIn this paper, we propose a novel approach based on GANs to generate various mass images and then perform contextual infilling by inserting the synthetic lesions into healthy screening mammograms. Through incorporating features of both realistic mass images and corresponding masks into the adversarial learning scheme, the generator can not only learn the distribution of the real mass images but also capture the matching shape, margin and context information.\n\n\nRESULTS\nTo demonstrate the effectiveness of our proposed method, we conduct experiments on publicly available mammogram database of DDSM and a private database provided by Nanfang Hospital in China. Qualitative and quantitative evaluations validate the effectiveness of our approach. Additionally, through the data augmentation by image generation of the proposed method, an improvement of 5.03% in detection rate can be achieved over the same model trained on original real lesion images.\n\n\nCONCLUSIONS\nThe results show that the data augmentation based on our method increases the diversity of dataset. Our method can be viewed as one of the first steps toward generating labeled breast mass images for precise detection and can be extended in other medical imaging domains to solve similar problems."
},
{
"pmid": "34829494",
"title": "Improving Skin Cancer Classification Using Heavy-Tailed Student T-Distribution in Generative Adversarial Networks (TED-GAN).",
"abstract": "Deep learning has gained immense attention from researchers in medicine, especially in medical imaging. The main bottleneck is the unavailability of sufficiently large medical datasets required for the good performance of deep learning models. This paper proposes a new framework consisting of one variational autoencoder (VAE), two generative adversarial networks, and one auxiliary classifier to artificially generate realistic-looking skin lesion images and improve classification performance. We first train the encoder-decoder network to obtain the latent noise vector with the image manifold's information and let the generative adversarial network sample the input from this informative noise vector in order to generate the skin lesion images. The use of informative noise allows the GAN to avoid mode collapse and creates faster convergence. To improve the diversity in the generated images, we use another GAN with an auxiliary classifier, which samples the noise vector from a heavy-tailed student t-distribution instead of a random noise Gaussian distribution. The proposed framework was named TED-GAN, with T from the t-distribution and ED from the encoder-decoder network which is part of the solution. The proposed framework could be used in a broad range of areas in medical imaging. We used it here to generate skin lesion images and have obtained an improved classification performance on the skin lesion classification task, rising from 66% average accuracy to 92.5%. The results show that TED-GAN has a better impact on the classification task because of its diverse range of generated images due to the use of a heavy-tailed t-distribution."
},
{
"pmid": "29993445",
"title": "Medical Image Synthesis with Deep Convolutional Adversarial Networks.",
"abstract": "Medical imaging plays a critical role in various clinical applications. However, due to multiple considerations such as cost and radiation dose, the acquisition of certain image modalities may be limited. Thus, medical image synthesis can be of great benefit by estimating a desired imaging modality without incurring an actual scan. In this paper, we propose a generative adversarial approach to address this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate a target image given a source image. To better model a nonlinear mapping from source to target and to produce more realistic target images, we propose to use the adversarial learning strategy to better model the FCN. Moreover, the FCN is designed to incorporate an image-gradient-difference-based loss function to avoid generating blurry target images. Long-term residual unit is also explored to help the training of the network. We further apply Auto-Context Model to implement a context-aware deep convolutional adversarial network. Experimental results show that our method is accurate and robust for synthesizing target images from the corresponding source images. In particular, we evaluate our method on three datasets, to address the tasks of generating CT from MRI and generating 7T MRI from 3T MRI images. Our method outperforms the state-of-the-art methods under comparison in all datasets and tasks."
},
{
"pmid": "32315932",
"title": "Generative adversarial networks with decoder-encoder output noises.",
"abstract": "In recent years, research on image generation has been developing very fast. The generative adversarial network (GAN) emerges as a promising framework, which uses adversarial training to improve the generative ability of its generator. However, since GAN and most of its variants use randomly sampled noises as the input of their generators, they have to learn a mapping function from a whole random distribution to the image manifold. As the structures of the random distribution and the image manifold are generally different, this results in GAN and its variants difficult to train and converge. In this paper, we propose a novel deep model called generative adversarial networks with decoder-encoder output noises (DE-GANs), which take advantage of both the adversarial training and the variational Bayesian inference to improve GAN and its variants on image generation performances. DE-GANs use a pre-trained decoder-encoder architecture to map the random noise vectors to informative ones and feed them to the generator of the adversarial networks. Since the decoder-encoder architecture is trained with the same data set as the generator, its output vectors, as the inputs of the generator, could carry the intrinsic distribution information of the training images, which greatly improves the learnability of the generator and the quality of the generated images. Extensive experiments demonstrate the effectiveness of the proposed model, DE-GANs."
},
{
"pmid": "28577131",
"title": "Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions.",
"abstract": "Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends."
}
] |
Brain Sciences | null | PMC8870383 | 10.3390/brainsci12020270 | Semantic Feature Extraction Using SBERT for Dementia Detection | Dementia is a neurodegenerative disease that leads to the development of cognitive deficits, such as aphasia, apraxia, and agnosia. It is currently considered one of the most significant major medical problems worldwide, primarily affecting the elderly. This condition gradually impairs the patient’s cognition, eventually leading to the inability to perform everyday tasks without assistance. Since dementia is an incurable disease, early detection plays an important role in delaying its progression. Because of this, tools and methods have been developed to help accurately diagnose patients in their early stages. State-of-the-art methods have shown that the use of syntactic-type linguistic features provides a sensitive and noninvasive tool for detecting dementia in its early stages. However, these methods lack relevant semantic information. In this work, we propose a novel methodology, based on the semantic features approach, by using sentence embeddings computed by Siamese BERT networks (SBERT), along with support vector machine (SVM), K-nearest neighbors (KNN), random forest, and an artificial neural network (ANN) as classifiers. Our methodology extracted 17 features that provide demographic, lexical, syntactic, and semantic information from 550 oral production samples of elderly controls and people with Alzheimer’s disease, provided by the DementiaBank Pitt Corpus database. To quantify the relevance of the extracted features for the dementia classification task, we calculated the mutual information score, which demonstrates a dependence between our features and the MMSE score. The experimental classification performance metrics, such as the accuracy, precision, recall, and F1 score (77, 80, 80, and 80%, respectively), validate that our methodology performs better than syntax-based methods and the BERT approach when only the linguistic features are used. | 2. Related WorkNLP algorithms based on deep learning present a vast area of opportunity to make forays into the healthcare domain because of their ability to analyze large amounts of multimodal data through computational processing [17,18]. However, these NLP methods require a large amount of data to perform well. Unfortunately, clinical databases in the study of dementia present scarcities and limited access to this type of information. Faced with this drawback, Masrani et al. [19] constructed a corpus of several thousand blog entries, some from people with dementia, and some from control persons. They used this dataset to design an AD classifier on the basis of random forest and KNN, and the neural networks achieved accuracies of 84%. The linguistic features of AD patients have been used for the classification of this disease. It is important to mention that the datasets are formed by the speech data in the works described below. However, the analysis is text-based since, during the evaluation, the oral utterances are recorded and then transcribed under a protocol in which most of the information about the subject’s oral performance, such as pauses and cadence, is maintained. The state-of-the-art methods carried out can be divided into two stages: the first stage requires human intervention to perform the evaluations and preprocess the data; and the second stage consists of automatic methods, with the corpus as input, and the classification labels as output.Roak et al. 
[20] have explored pause counting and syntactic complexity analyses extracted from audio transcripts built manually from 74 recordings created during neuropsychological examinations of participants assigned as healthy or diagnosed with mild cognitive impairment. From these features, they built a classification model based on a support vector machine (SVM) with a second-order polynomial kernel, obtaining an accuracy of 86%. Other studies propose strengthening the classification models by adding demographic information [21,22], electronic medical records (EMR) [23], and medical measures, such as MMSE scores [24]. Karlekar et al. [25] propose a classification method using three artificial neural models based on convolutional neural networks (CNNs), long short-term memory networks (LSTM-RNNs), and their combination to distinguish between the language samples of AD and control patients, achieving an 84.9% accuracy on the Pitt Corpus database, provided by DementiaBank, composed of 243 controls and 307 AD patients. Solis-Rosas et al. [26] performed an in-depth syntactic analysis, including pauses, filler words, formulated words, restarts, repetitions, incomplete utterances, and fuzzy speech. They applied two automatic classification methods, one using a 3-layer ANN and a second using an SVM with a polynomial kernel. Their maximum classification accuracy reached 86.42% on the Carolina Conversations Collection, which contains 256 samples from conversations of dementia patients and healthy people [27]. In another study, Eyigoz et al. [28] use linguistic variables together with clinical and demographic variables in their prediction models. Using data extracted from the Framingham Heart Study database [29], their study achieved 70% accuracy. In recent years, some research has proposed using acoustic features extracted from the audio recordings of the neuropsychological tests in addition to the linguistic features [30,31,32,33]. These works use the ADReSS Challenge database [34], which was created for two main tasks: the first is an MMSE score regression task, in which a model must infer the subject’s MMSE score from the speech produced during a neuropsychological assessment; the second is an AD classification task, in which a model must predict the label “AD” or “non-AD” for a speech session. The dataset contains samples from 78 non-AD subjects and 78 AD subjects. SVM models, random forest models, and several neural network architectures were mostly employed for these tasks, achieving an accuracy of 77%, a precision of 77%, a recall of 76%, and an F1 score of 77%. Balagopalan et al. [35] explore, for the first time (to the best of the authors’ knowledge), the use of semantic features, such as the average cosine distance between utterances and the average cosine distance between the 300-dimensional word2vec representations of the utterances and the picture content units, in addition to the acoustic and linguistic features and text classification with BERT. After the classification stage, they obtained an accuracy of 81%, a precision of 83%, a recall of 79%, and an F1 score of 81%.
In the case of the classification by BERT, the metrics achieved were as follows: 83% accuracy; 86% precision; 79% recall; and 83% F1 score. Through this brief overview of some of the projects that search for dementia patterns using NLP techniques, we can observe that mainly lexical and syntactic linguistic features have been deployed. However, it remains to be explored how semantic elements can provide helpful information for the automated detection of dementia. In this work, the extraction and study of semantic features to evaluate the presence of dementia is carried out. Hence, lexical, syntactic, and semantic analyses, based on NLP, were performed (a minimal illustrative sketch of this kind of embedding-plus-classifier pipeline is given after this record). | [
"30772072",
"19797896",
"26539107",
"29213912",
"22199464",
"29986849",
"30833698",
"31276468",
"33294808",
"8198470"
] | [
{
"pmid": "30772072",
"title": "[Prevalence of dementia in the elderly in Latin America: A systematic review].",
"abstract": "BACKGROUND AND OBJECTIVE\nDementia is a growing public health problem. It involves the impairment of several cognitive functions, generating mental and physical disability, and therefore greater functional dependence. There is limited epidemiological information which reveals an approximate prevalence in older adults from Latin America. The objective of this study was to determine the prevalence of dementia in the older adult population of Latin America, and its distribution according to geographic area and gender.\n\n\nMATERIALS AND METHODS\nA systematic review was carried out in databases: PubMed, Ovid, Lilacs, Cochrane, Scielo and Google Scholar, in order to identify studies that estimate the prevalence of dementia in urban and / or rural population over 65 years of age.\n\n\nRESULTS\nOn February 2018, the literature search yielded 357 publications. The overall prevalence of dementia in the older adult population of Latin America was found to be 11%, prevailing more in female gender and urban people.\n\n\nCONCLUSION\nThe prevalence of dementia in Latin America is higher than registered previously, and even than in other continents."
},
{
"pmid": "19797896",
"title": "Semantic markers in the diagnosis of neurodegenerative dementias.",
"abstract": "BACKGROUND\nThe search for early cognitive markers of Alzheimer's disease (AD) has focused on episodic memory and spatiotemporal orientation. However, recent research suggests that semantic memory is also impaired in the preclinical stages of AD.\n\n\nMETHODS\nAge- and education-matched groups of participants with AD, mild cognitive impairment, and subjective memory complaints and healthy controls were assessed with 16 cognitive tests encompassing attention, orientation, episodic and semantic memory, and language tasks.\n\n\nRESULTS\nThe battery correctly distinguished AD patients from healthy seniors in 92% of the cases. Three semantic memory-based tasks turned out to be particularly powerful for this purpose.\n\n\nCONCLUSION\nOur results suggest that semantic memory tasks should be included in the battery of tests for the evaluation of cognitively impaired patients."
},
{
"pmid": "26539107",
"title": "Speaking in Alzheimer's Disease, is That an Early Sign? Importance of Changes in Language Abilities in Alzheimer's Disease.",
"abstract": "It is known that Alzheimer's disease (AD) influences the temporal characteristics of spontaneous speech. These phonetical changes are present even in mild AD. Based on this, the question arises whether an examination based on language analysis could help the early diagnosis of AD and if so, which language and speech characteristics can identify AD in its early stage. The purpose of this article is to summarize the relation between prodromal and manifest AD and language functions and language domains. Based on our research, we are inclined to claim that AD can be more sensitively detected with the help of a linguistic analysis than with other cognitive examinations. The temporal characteristics of spontaneous speech, such as speech tempo, number of pauses in speech, and their length are sensitive detectors of the early stage of the disease, which enables an early simple linguistic screening for AD. However, knowledge about the unique features of the language problems associated with different dementia variants still has to be improved and refined."
},
{
"pmid": "29213912",
"title": "Analysis of word number and content in discourse of patients with mild to moderate Alzheimer's disease.",
"abstract": "Alzheimer's disease (AD) is characterized by impairments in memory and other cognitive functions such as language, which can be affected in all aspects including discourse. A picture description task is considered an effective way of obtaining a discourse sample whose key feature is the ability to retrieve appropriate lexical items. There is no consensus on findings showing that performance in content processing of spoken discourse deteriorates from the mildest phase of AD.\n\n\nOBJECTIVE\nTo compare the quantity and quality of discourse among patients with mild to moderate AD and controls.\n\n\nMETHODS\nA cross-sectional study was designed. Subjects aged 50 years and older of both sexes, with one year or more of education, were divided into three groups: control (CG), mild AD (ADG1) and moderate AD (ADG2). Participants were asked to describe the \"cookie theft\" picture. The total number of complete words spoken and information units (IU) were included in the analysis.\n\n\nRESULTS\nThere was no significant difference among groups in terms of age, schooling and sex. For number of words spoken, the CG performed significantly better than both the ADG 1 and ADG2, but no difference between the two latter groups was found. CG produced almost twice as many information units as the ADG1 and more than double that of the ADG2. Moreover, ADG2 patients had worse performance on IUs compared to the ADG1.\n\n\nCONCLUSION\nDecreased performance in quantity and content of discourse was evident in patients with AD from the mildest phase, but only content (IU) continued to worsen with disease progression."
},
{
"pmid": "22199464",
"title": "Spoken Language Derived Measures for Detecting Mild Cognitive Impairment.",
"abstract": "Spoken responses produced by subjects during neuropsychological exams can provide diagnostic markers beyond exam performance. In particular, characteristics of the spoken language itself can discriminate between subject groups. We present results on the utility of such markers in discriminating between healthy elderly subjects and subjects with mild cognitive impairment (MCI). Given the audio and transcript of a spoken narrative recall task, a range of markers are automatically derived. These markers include speech features such as pause frequency and duration, and many linguistic complexity measures. We examine measures calculated from manually annotated time alignments (of the transcript with the audio) and syntactic parse trees, as well as the same measures calculated from automatic (forced) time alignments and automatic parses. We show statistically significant differences between clinical subject groups for a number of measures. These differences are largely preserved with automation. We then present classification results, and demonstrate a statistically significant improvement in the area under the ROC curve (AUC) when using automatic spoken language derived features in addition to the neuropsychological test scores. Our results indicate that using multiple, complementary measures can aid in automatic detection of MCI."
},
{
"pmid": "29986849",
"title": "Unsupervised Machine Learning to Identify High Likelihood of Dementia in Population-Based Surveys: Development and Validation Study.",
"abstract": "BACKGROUND\nDementia is increasing in prevalence worldwide, yet frequently remains undiagnosed, especially in low- and middle-income countries. Population-based surveys represent an underinvestigated source to identify individuals at risk of dementia.\n\n\nOBJECTIVE\nThe aim is to identify participants with high likelihood of dementia in population-based surveys without the need of the clinical diagnosis of dementia in a subsample.\n\n\nMETHODS\nUnsupervised machine learning classification (hierarchical clustering on principal components) was developed in the Health and Retirement Study (HRS; 2002-2003, N=18,165 individuals) and validated in the Survey of Health, Ageing and Retirement in Europe (SHARE; 2010-2012, N=58,202 individuals).\n\n\nRESULTS\nUnsupervised machine learning classification identified three clusters in HRS: cluster 1 (n=12,231) without any functional or motor limitations, cluster 2 (N=4841) with walking/climbing limitations, and cluster 3 (N=1093) with both functional and walking/climbing limitations. Comparison of cluster 3 with previously published predicted probabilities of dementia in HRS showed that it identified high likelihood of dementia (probability of dementia >0.95; area under the curve [AUC]=0.91). Removing either cognitive or both cognitive and behavioral measures did not impede accurate classification (AUC=0.91 and AUC=0.90, respectively). Three clusters with similar profiles were identified in SHARE (cluster 1: n=40,223; cluster 2: n=15,644; cluster 3: n=2335). Survival rate of participants from cluster 3 reached 39.2% (n=665 deceased) in HRS and 62.2% (n=811 deceased) in SHARE after a 3.9-year follow-up. Surviving participants from cluster 3 in both cohorts worsened their functional and mobility performance over the same period.\n\n\nCONCLUSIONS\nUnsupervised machine learning identifies high likelihood of dementia in population-based surveys, even without cognitive and behavioral measures and without the need of clinical diagnosis of dementia in a subsample of the population. This method could be used to tackle the global challenge of dementia."
},
{
"pmid": "30833698",
"title": "Prediction of future cognitive impairment among the community elderly: A machine-learning based approach.",
"abstract": "The early detection of cognitive impairment is a key issue among the elderly. Although neuroimaging, genetic, and cerebrospinal measurements show promising results, high costs and invasiveness hinder their widespread use. Predicting cognitive impairment using easy-to-collect variables by non-invasive methods for community-dwelling elderly is useful prior to conducting such a comprehensive evaluation. This study aimed to develop a machine learning-based predictive model for future cognitive impairment. A total of 3424 community elderly without cognitive impairment were included from the nationwide dataset. The gradient boosting machine (GBM) was exploited to predict cognitive impairment after 2 years. The GBM performance was good (sensitivity = 0.967; specificity = 0.825; and AUC = 0.921). This study demonstrated that a machine learning-based predictive model might be used to screen future cognitive impairment using variables, which are commonly collected in community health care institutions. With efforts of enhancing the predictive performance, such a machine learning-based approach can further contribute to the improvement of the cognitive function in community elderly."
},
{
"pmid": "31276468",
"title": "Identifying incident dementia by applying machine learning to a very large administrative claims dataset.",
"abstract": "Alzheimer's disease and related dementias (ADRD) are highly prevalent conditions, and prior efforts to develop predictive models have relied on demographic and clinical risk factors using traditional logistical regression methods. We hypothesized that machine-learning algorithms using administrative claims data may represent a novel approach to predicting ADRD. Using a national de-identified dataset of more than 125 million patients including over 10,000 clinical, pharmaceutical, and demographic variables, we developed a cohort to train a machine learning model to predict ADRD 4-5 years in advance. The Lasso algorithm selected a 50-variable model with an area under the curve (AUC) of 0.693. Top diagnosis codes in the model were memory loss (780.93), Parkinson's disease (332.0), mild cognitive impairment (331.83) and bipolar disorder (296.80), and top pharmacy codes were psychoactive drugs. Machine learning algorithms can rapidly develop predictive models for ADRD with massive datasets, without requiring hypothesis-driven feature engineering."
},
{
"pmid": "33294808",
"title": "Linguistic markers predict onset of Alzheimer's disease.",
"abstract": "BACKGROUND\nThe aim of this study is to use classification methods to predict future onset of Alzheimer's disease in cognitively normal subjects through automated linguistic analysis.\n\n\nMETHODS\nTo study linguistic performance as an early biomarker of AD, we performed predictive modeling of future diagnosis of AD from a cognitively normal baseline of Framingham Heart Study participants. The linguistic variables were derived from written responses to the cookie-theft picture-description task. We compared the predictive performance of linguistic variables with clinical and neuropsychological variables. The study included 703 samples from 270 participants out of which a dataset consisting of a single sample from 80 participants was held out for testing. Half of the participants in the test set developed AD symptoms before 85 years old, while the other half did not. All samples in the test set were collected during the cognitively normal period (before MCI). The mean time to diagnosis of mild AD was 7.59 years.\n\n\nFINDINGS\nSignificant predictive power was obtained, with AUC of 0.74 and accuracy of 0.70 when using linguistic variables. The linguistic variables most relevant for predicting onset of AD have been identified in the literature as associated with cognitive decline in dementia.\n\n\nINTERPRETATION\nThe results suggest that language performance in naturalistic probes expose subtle early signs of progression to AD in advance of clinical diagnosis of impairment.\n\n\nFUNDING\nPfizer, Inc. provided funding to obtain data from the Framingham Heart Study Consortium, and to support the involvement of IBM Research in the initial phase of the study. The data used in this study was supported by Framingham Heart Study's National Heart, Lung, and Blood Institute contract (N01-HC-25195), and by grants from the National Institute on Aging grants (R01-AG016495, R01-AG008122) and the National Institute of Neurological Disorders and Stroke (R01-NS017950)."
},
{
"pmid": "8198470",
"title": "The natural history of Alzheimer's disease. Description of study cohort and accuracy of diagnosis.",
"abstract": "OBJECTIVE\nWe describe the sampling, initial evaluation, and final diagnostic classification of subjects enrolled in a natural history study of Alzheimer's disease (AD).\n\n\nDESIGN\nVolunteer cohort study.\n\n\nSETTING\nMultidisciplinary behavioral neurology research clinic.\n\n\nPATIENTS OR OTHER PARTICIPANTS\nThree-hundred nineteen individuals were enrolled in the Alzheimer Research Program between March 1983 and March 1988. Of these, 204 were originally classified with AD, 102 were normal elderly control subjects, and 13 were considered special cases.\n\n\nMAIN OUTCOME MEASURES\nFinal consensus clinical diagnosis, final neuropathologic diagnosis, and death.\n\n\nRESULTS\nOf the 204 patients enrolled in the study, re-review after as many as 5 years of follow-up resulted in a final clinical classification of 188 with probable AD. Seven patients were believed to have a significant vascular component to the dementia, three were found to have developed depression, and six were excluded on other clinical grounds. Neuropathologic examination of 50 brains indicated definite AD in 43. After removing these seven misdiagnosed patients, the final group of probable/definite AD totaled 181 individuals. Accuracy of the baseline clinical diagnosis relative to neuropathology was 86%, and when follow-up clinical data were considered, 91.4%. Detailed neuropsychological testing yielded high sensitivity (0.988) and specificity (0.983) to dementia. Analyses of survival time from study entry until death revealed that older patients were significantly more likely to die during follow-up, but neither sex, years of education, nor pattern of cognitive impairment were related to survival.\n\n\nCONCLUSIONS\nThese data provide the descriptive basis for future studies of this cohort. They indicate that longitudinal follow-up of demented cases increases accuracy of diagnosis, and that detailed cognitive testing aids in early classification."
}
] |
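The record above describes extracting SBERT sentence embeddings from picture-description transcripts and feeding them, together with other features, to classifiers such as an SVM, KNN, random forest, or ANN. The following is a minimal sketch of that kind of embedding-plus-classifier pipeline, not the authors' implementation: it assumes the publicly available sentence-transformers and scikit-learn packages; the toy transcripts, labels, checkpoint choice, and SVM settings are hypothetical placeholders; and the real study uses the DementiaBank Pitt Corpus, which is not reproduced here.

```python
from sentence_transformers import SentenceTransformer
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Hypothetical toy transcripts standing in for picture-description samples;
# the actual study uses DementiaBank Pitt Corpus transcripts (not included here).
transcripts = [
    "the boy is taking cookies from the jar while the stool tips over",
    "there is a uh a thing um and the water is going over the the sink",
    "the mother is drying dishes and the window is open to the garden",
    "um I see a a lady and uh some water um falling down I think",
]
labels = [0, 1, 0, 1]  # 0 = control, 1 = dementia (hypothetical labels)

# Any pretrained SBERT checkpoint could be plugged in; this is one small public model.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(transcripts)  # ndarray of shape (n_samples, 384)

# A polynomial-kernel SVM, one of the classifier families mentioned in the record above.
clf = SVC(kernel="poly", degree=2, C=1.0)
clf.fit(embeddings, labels)

# Resubstitution here only demonstrates the API flow; the cited studies report
# held-out or cross-validated accuracy, precision, recall, and F1 scores instead.
print(classification_report(labels, clf.predict(embeddings)))
```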
Brain Sciences | null | PMC8870633 | 10.3390/brainsci12020139 | Evaluation of the Effect of the Dynamic Behavior and Topology Co-Learning of Neurons and Synapses on the Small-Sample Learning Ability of Spiking Neural Network | Small sample learning ability is one of the most significant characteristics of the human brain. However, its mechanism is yet to be fully unveiled. In recent years, brain-inspired artificial intelligence has become a very hot research domain. Researchers explored brain-inspired technologies or architectures to construct neural networks that could achieve human-alike intelligence. In this work, we presented our effort at evaluation of the effect of dynamic behavior and topology co-learning of neurons and synapses on the small sample learning ability of spiking neural network. Results show that the dynamic behavior and topology co-learning mechanism of neurons and synapses presented in our work could significantly reduce the number of required samples, while maintaining a reasonable performance on the MNIST data-set, resulting in a very lightweight neural network structure. | 2. Related WorksSince SNN is more similar with the human brain in the aspect of biological characteristics, it has drawn a lot of researchers into this domain [11,12,13,14,15,16,17,18,19]. Many learning algorithms for SNN have already been presented.In 2013, Beyeler et al. presented their hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time [20]. Their SNN consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics. And the learning method for their SNN is a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization.In 2017, Srinivasan et al. [21] presented a neuronal potential and spike-count based enhanced learning scheme which additionally accounts for the spiking frequency to further the efficiency of synaptic learning. Also in 2017, Matsubara et al. proposed a SNN learning model which could adjust synaptic efficacy and axonal conduction delay in both unsupervised and supervised manners [22]. The same year, Iyer et al. [23] and Shrestha et al. [24] also presented their unsupervised learning algorithms for SNN.Cho et al. presented their 2048-neuron globally asynchronous locally synchronous (GALS) spiking neural network (SNN) chip in 2019 [25]. They allow neurons to specialize to excitatory or inhibitory, and apply distance-based pruning to cut communication and memory for the sake of scalability. Rathi et al. presented a sparse SNN topology where noncritical connections are pruned to reduce the network size, and the remaining critical synapses are weight quantized to accommodate for limited conductance states [26]. In their work, pruning is based on the power law weight-dependent spike timing dependent plasticity model; synapses between pre- and post-neuron with high spike correlation are retained, whereas synapses with low correlation or uncorrelated spiking activity are pruned. Also in that year, Zhao et al. presented an SNN with an energy-efficient and low-cost processor that was based on a mechanism with increased biological plausibility, i.e., a frequency adaptive neural model instead of a Poisson-spiking neural model [27].In 2020, Qi et al. 
proposed a hybrid framework named deep CovDenseSNN that combines convolutional neural networks (CNNs) and SNNs, such that the SNN can make use of the feature extraction ability of CNNs during the encoding stage while still processing features with the unsupervised learning rule of spiking neurons [28]. In 2021, Tsur proposed a gradient-descent-based method that uses the Neural Engineering Framework (NEF), which has also been evaluated with CNNs and DNNs [29]. | [
"31776490",
"2213141",
"12991237",
"20876015",
"23340243",
"27723607",
"10966623",
"23994510",
"31733521"
] | [
{
"pmid": "31776490",
"title": "Towards spike-based machine intelligence with neuromorphic computing.",
"abstract": "Guided by brain-like 'spiking' computational frameworks, neuromorphic computing-brain-inspired computing for machine intelligence-promises to realize artificial intelligence while reducing the energy requirements of computing platforms. This interdisciplinary field began with the implementation of silicon circuits for biological neural routines, but has evolved to encompass the hardware implementation of algorithms with spike-based encoding and event-driven representations. Here we provide an overview of the developments in neuromorphic computing for both algorithms and hardware and highlight the fundamentals of learning and hardware frameworks. We discuss the main challenges and the future prospects of neuromorphic computing, with emphasis on algorithm-hardware codesign."
},
{
"pmid": "2213141",
"title": "A circuit for detection of interaural time differences in the brain stem of the barn owl.",
"abstract": "Detection of interaural time differences underlies azimuthal sound localization in the barn owl Tyto alba. Axons of the cochlear nucleus magnocellularis, and their targets in the binaural nucleus laminaris, form the circuit responsible for encoding these interaural time differences. The nucleus laminaris receives bilateral inputs from the cochlear nucleus magnocellularis such that axons from the ipsilateral cochlear nucleus enter the nucleus laminaris dorsally, while contralateral axons enter from the ventral side. This interdigitating projection to the nucleus laminaris is tonotopic, and the afferents are both sharply tuned and matched in frequency to the neighboring afferents. Recordings of phase-locked spikes in the afferents show an orderly change in the arrival time of the spikes as a function of distance from the point of their entry into the nucleus laminaris. The same range of conduction time (160 mu sec) was found over the 700-mu m depth of the nucleus laminaris for all frequencies examined (4-7.5 kHz) and corresponds to the range of interaural time differences available to the barn owl. The estimated conduction velocity in the axons is low (3-5 m/sec) and may be regulated by short internodal distances (60 mu m) within the nucleus laminaris. Neurons of the nucleus laminaris have large somata and very short dendrites. These cells are frequency selective and phase-lock to both monaural and binaural stimuli. The arrival time of phase-locked spikes in many of these neurons differs between the ipsilateral and contralateral inputs. When this disparity is nullified by imposition of an appropriate interaural time difference, the neurons respond maximally. The number of spikes elicited in response to a favorable interaural time difference is roughly double that elicited by a monaural stimulus. Spike counts for unfavorable interaural time differences fall well below monaural response levels. These findings indicate that the magnocellular afferents work as delay lines, and the laminaris neurons work as co-incidence detectors. The orderly distribution of conduction times, the predictability of favorable interaural time differences from monaural phase responses, and the pattern of the anatomical projection from the nucleus laminaris to the central nucleus of the inferior colliculus suggest that interaural time differences and their phase equivalents are mapped in each frequency band along the dorsoventral axis of the nucleus laminaris."
},
{
"pmid": "20876015",
"title": "SWAT: a spiking neural network training algorithm for classification problems.",
"abstract": "This paper presents a synaptic weight association training (SWAT) algorithm for spiking neural networks (SNNs). SWAT merges the Bienenstock-Cooper-Munro (BCM) learning rule with spike timing dependent plasticity (STDP). The STDP/BCM rule yields a unimodal weight distribution where the height of the plasticity window associated with STDP is modulated causing stability after a period of training. The SNN uses a single training neuron in the training phase where data associated with all classes is passed to this neuron. The rule then maps weights to the classifying output neurons to reflect similarities in the data across the classes. The SNN also includes both excitatory and inhibitory facilitating synapses which create a frequency routing capability allowing the information presented to the network to be routed to different hidden layer neurons. A variable neuron threshold level simulates the refractory period. SWAT is initially benchmarked against the nonlinearly separable Iris and Wisconsin Breast Cancer datasets. Results presented show that the proposed training algorithm exhibits a convergence accuracy of 95.5% and 96.2% for the Iris and Wisconsin training sets, respectively, and 95.3% and 96.7% for the testing sets, noise experiments show that SWAT has a good generalization capability. SWAT is also benchmarked using an isolated digit automatic speech recognition (ASR) system where a subset of the TI46 speech corpus is used. Results show that with SWAT as the classifier, the ASR system provides an accuracy of 98.875% for training and 95.25% for testing."
},
{
"pmid": "23340243",
"title": "Dynamic evolving spiking neural networks for on-line spatio- and spectro-temporal pattern recognition.",
"abstract": "On-line learning and recognition of spatio- and spectro-temporal data (SSTD) is a very challenging task and an important one for the future development of autonomous machine learning systems with broad applications. Models based on spiking neural networks (SNN) have already proved their potential in capturing spatial and temporal data. One class of them, the evolving SNN (eSNN), uses a one-pass rank-order learning mechanism and a strategy to evolve a new spiking neuron and new connections to learn new patterns from incoming data. So far these networks have been mainly used for fast image and speech frame-based recognition. Alternative spike-time learning methods, such as Spike-Timing Dependent Plasticity (STDP) and its variant Spike Driven Synaptic Plasticity (SDSP), can also be used to learn spatio-temporal representations, but they usually require many iterations in an unsupervised or semi-supervised mode of learning. This paper introduces a new class of eSNN, dynamic eSNN, that utilise both rank-order learning and dynamic synapses to learn SSTD in a fast, on-line mode. The paper also introduces a new model called deSNN, that utilises rank-order learning and SDSP spike-time learning in unsupervised, supervised, or semi-supervised modes. The SDSP learning is used to evolve dynamically the network changing connection weights that capture spatio-temporal spike data clusters both during training and during recall. The new deSNN model is first illustrated on simple examples and then applied on two case study applications: (1) moving object recognition using address-event representation (AER) with data collected using a silicon retina device; (2) EEG SSTD recognition for brain-computer interfaces. The deSNN models resulted in a superior performance in terms of accuracy and speed when compared with other SNN models that use either rank-order or STDP learning. The reason is that the deSNN makes use of both the information contained in the order of the first input spikes (which information is explicitly present in input data streams and would be crucial to consider in some tasks) and of the information contained in the timing of the following spikes that is learned by the dynamic synapses as a whole spatio-temporal pattern."
},
{
"pmid": "27723607",
"title": "Mapping, Learning, Visualization, Classification, and Understanding of fMRI Data in the NeuCube Evolving Spatiotemporal Data Machine of Spiking Neural Networks.",
"abstract": "This paper introduces a new methodology for dynamic learning, visualization, and classification of functional magnetic resonance imaging (fMRI) as spatiotemporal brain data. The method is based on an evolving spatiotemporal data machine of evolving spiking neural networks (SNNs) exemplified by the NeuCube architecture [1]. The method consists of several steps: mapping spatial coordinates of fMRI data into a 3-D SNN cube (SNNc) that represents a brain template; input data transformation into trains of spikes; deep, unsupervised learning in the 3-D SNNc of spatiotemporal patterns from data; supervised learning in an evolving SNN classifier; parameter optimization; and 3-D visualization and model interpretation. Two benchmark case study problems and data are used to illustrate the proposed methodology-fMRI data collected from subjects when reading affirmative or negative sentences and another one-on reading a sentence or seeing a picture. The learned connections in the SNNc represent dynamic spatiotemporal relationships derived from the fMRI data. They can reveal new information about the brain functions under different conditions. The proposed methodology allows for the first time to analyze dynamic functional and structural connectivity of a learned SNN model from fMRI data. This can be used for a better understanding of brain activities and also for online generation of appropriate neurofeedback to subjects for improved brain functions. For example, in this paper, tracing the 3-D SNN model connectivity enabled us for the first time to capture prominent brain functional pathways evoked in language comprehension. We found stronger spatiotemporal interaction between left dorsolateral prefrontal cortex and left temporal while reading a negated sentence. This observation is obviously distinguishable from the patterns generated by either reading affirmative sentences or seeing pictures. The proposed NeuCube-based methodology offers also a superior classification accuracy when compared with traditional AI and statistical methods. The created NeuCube-based models of fMRI data are directly and efficiently implementable on high performance and low energy consumption neuromorphic platforms for real-time applications."
},
{
"pmid": "10966623",
"title": "Competitive Hebbian learning through spike-timing-dependent synaptic plasticity.",
"abstract": "Hebbian models of development and learning require both activity-dependent synaptic plasticity and a mechanism that induces competition between different synapses. One form of experimentally observed long-term synaptic plasticity, which we call spike-timing-dependent plasticity (STDP), depends on the relative timing of pre- and postsynaptic action potentials. In modeling studies, we find that this form of synaptic modification can automatically balance synaptic strengths to make postsynaptic firing irregular but more sensitive to presynaptic spike timing. It has been argued that neurons in vivo operate in such a balanced regime. Synapses modifiable by STDP compete for control of the timing of postsynaptic action potentials. Inputs that fire the postsynaptic neuron with short latency or that act in correlated groups are able to compete most successfully and develop strong synapses, while synapses of longer-latency or less-effective inputs are weakened."
},
{
"pmid": "23994510",
"title": "Categorization and decision-making in a neurobiologically plausible spiking network using a STDP-like learning rule.",
"abstract": "Understanding how the human brain is able to efficiently perceive and understand a visual scene is still a field of ongoing research. Although many studies have focused on the design and optimization of neural networks to solve visual recognition tasks, most of them either lack neurobiologically plausible learning rules or decision-making processes. Here we present a large-scale model of a hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time. The model consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics, a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization. The full network, which comprised 71,026 neurons and approximately 133 million synapses, ran in real-time on a single off-the-shelf graphics processing unit (GPU). The network was constructed on a publicly available SNN simulator that supports general-purpose neuromorphic computer chips. The network achieved 92% correct classifications on MNIST in 100 rounds of random sub-sampling, which is comparable to other SNN approaches and provides a conservative and reliable performance metric. Additionally, the model correctly predicted reaction times from psychophysical experiments. Because of the scalability of the approach and its neurobiological fidelity, the current model can be extended to an efficient neuromorphic implementation that supports more generalized object recognition and decision-making architectures found in the brain."
},
{
"pmid": "31733521",
"title": "Deep CovDenseSNN: A hierarchical event-driven dynamic framework with spiking neurons in noisy environment.",
"abstract": "Neurons in the brain use an event signal, termed spike, encode temporal information for neural computation. Spiking neural networks (SNNs) take this advantage to serve as biological relevant models. However, the effective encoding of sensory information and also its integration with downstream neurons of SNNs are limited by the current shallow structures and learning algorithms. To tackle this limitation, this paper proposes a novel hybrid framework combining the feature learning ability of continuous-valued convolutional neural networks (CNNs) and SNNs, named deep CovDenseSNN, such that SNNs can make use of feature extraction ability of CNNs during the encoding stage, but still process features with unsupervised learning rule of spiking neurons. We evaluate them on MNIST and its variations to show that our model can extract and transmit more important information than existing models, especially for anti-noise ability in the noisy environment. The proposed architecture provides efficient ways to perform feature representation and recognition in a consistent temporal learning framework, which is easily adapted to neuromorphic hardware implementations and bring more biological realism into modern image classification models, with the hope that the proposed framework can inform us how sensory information is transmitted and represented in the brain."
}
] |
Brain Sciences | null | PMC8870645 | 10.3390/brainsci12020281 | A Hebbian Approach to Non-Spatial Prelinguistic Reasoning | This research integrates key concepts of Computational Neuroscience, including the Bienenstock-Cooper-Munro (BCM) rule, Spike-Timing-Dependent Plasticity (STDP) rules, and the Temporal Difference Learning algorithm, with an important structure of Deep Learning (Convolutional Networks) to create an architecture with the potential of replicating observations of some cognitive experiments (particularly, those that provided some basis for sequential reasoning) while sharing the advantages already achieved by previous proposals. In particular, we present Ring Model B, which is capable of associating visual with auditory stimuli, performing sequential predictions, and predicting reward from experience. Despite its simplicity, we consider such abilities to be a first step towards the formulation of more general models of prelinguistic reasoning. | 2. Related Work
This research integrates two main concepts of Computational Neuroscience (BCM and STDP), one concept from Machine Learning (ConvNets, specifically Deep Learning), and a fourth concept, Temporal Difference Learning, which originated as a Reinforcement Learning algorithm but is nowadays also relevant in Computational Neuroscience as a model of dopamine reward prediction. This mixture is unusual in the literature, even though there are works that try to understand the exact relationship between the synaptic plasticity rules BCM and STDP [19,20], or that include both rules in the same context [21,22]. Other papers try to integrate BCM, STDP, and Reinforcement Learning [23].
Papers that implement Hebbian-based rules in a typical Machine Learning context have also been published. In [24], BCM theory, Competitive Hebbian Learning, and Stochastic Gradient Descent are considered together to derive a new learning rule. The integration of Hebbian-based learning with ConvNets has also been proposed [25,26,27,28], but BCM learning rules have rarely been considered [29]. In addition, some previous works have focused on improving the TDL algorithm in light of the results of [1]; these include [30,31,32,33].
Spiking Neural Networks (SNNs) are a bio-inspired approach to neural networks, although deep SNNs have not yet matched the results of deep Artificial Neural Networks (ANNs) [34]. STDP has usually been implemented in SNNs, in architectures such as the neuromorphic SpiNNaker [35] or TrueNorth [36]. Other neuromorphic implementations of STDP have also been proposed [37,38,39,40]. In [41], the authors presented a deep convolutional network with STDP learning. Some properties of STDP in SNNs have also been revealed; for example, [42] showed the emergence of Bayesian computation with STDP. One remarkable application of STDP to a Machine Learning problem was achieved by [43], which reached an accuracy of 95% on MNIST classification. Moreover, there has been an attempt to understand the Backpropagation algorithm in terms of STDP theory [44]. For a full review of applications of SNNs with Hebbian-based rules, including STDP and BCM, see [45].
2.1. Preliminary Work
In [29], a neural architecture based on a convolutional network was proposed. The convolutional network, with pre-trained weights, operates as a feature extractor, and a final layer with Hebbian learning enables real-time learning for image classification. This network can be used to teach the system to discriminate visual stimuli.
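The following is a minimal sketch of that kind of readout: feature vectors, assumed to come from a frozen pre-trained ConvNet, are fed to a final layer trained with a Hebbian rule, written here in its BCM form (the sliding-threshold variant central to this paper) rather than the exact rule of [29]. The feature dimension, number of readout units, learning rate, and time constant are illustrative assumptions.
```python
# Minimal sketch (not the authors' code) of a Hebbian/BCM readout trained on
# feature vectors assumed to come from a frozen, pre-trained ConvNet.
# All sizes and learning parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_units = 512, 10                     # assumed feature dimension / number of readout units


def bcm_step(W, theta, x, eta=1e-4, tau_theta=100.0):
    """One unsupervised BCM update for a single feature vector x."""
    y = W @ x                                     # postsynaptic activity of each readout unit
    # BCM rule: dW_ij ~ y_i * (y_i - theta_i) * x_j  (depression below threshold, potentiation above)
    W = W + eta * np.outer(y * (y - theta), x)
    theta = theta + (y ** 2 - theta) / tau_theta  # theta tracks a running average of y**2
    return W, theta, y


W = 0.01 * rng.standard_normal((n_units, n_features))
theta = np.ones(n_units)
for _ in range(1000):
    x = rng.random(n_features)                    # stand-in for a frozen ConvNet feature vector
    W, theta, y = bcm_step(W, theta, x)
```
The key property of the BCM form is the sliding threshold theta: readout activity below it is depressed and activity above it is potentiated, which keeps an unsupervised Hebbian readout from saturating.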
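Likewise, the Temporal Difference Learning algorithm referred to throughout this section (and revisited in Section 2.2 below) can be summarized by its reward-prediction error, delta_t = r_{t+1} + gamma * V(s_{t+1}) - V(s_t), the quantity usually identified with phasic dopamine activity. The toy state chain, reward placement, and parameter values in the sketch below are assumptions chosen only to show the update, not the setup of the cited experiments.
```python
# Minimal TD(0) sketch of the reward-prediction-error idea behind the dopamine
# interpretation of TDL. The chain, reward placement and parameters are toy assumptions.
import numpy as np

n_states, alpha, gamma = 5, 0.1, 0.95
V = np.zeros(n_states + 1)                       # learned value per state (+1 terminal state)

for episode in range(200):
    for s in range(n_states):                    # deterministic walk along the chain
        s_next = s + 1
        r = 1.0 if s_next == n_states else 0.0   # reward only on the final transition
        delta = r + gamma * V[s_next] - V[s]     # TD error: the putative dopaminergic signal
        V[s] += alpha * delta                    # values propagate backward from the reward

print(np.round(V[:n_states], 3))                 # approaches gamma**k, with k steps left before reward
```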
The usage of Convolutional Neural Networks (ConvNets) in cognitive architectures is controversial, as some researchers (such as [46]) do not consider them proper models of the Visual Cortex. However, ConvNets capture some of the basic principles of the Hierarchical Model of visual perception. In addition, ConvNets have achieved great success on large image recognition tasks. Moreover, ConvNets are the best model to explain the neural representations of the Inferior Temporal cortex [47], which has been labeled as the place where complex visual recognition occurs. For some authors, such as [48], these results show that deep neural networks can be admitted as cognitive models. In our case, we propose an artificial architecture able to simulate some cognitive experiments, while acknowledging that more bio-inspired systems remain an ambition that has not yet been reached.
2.2. Experimental Results on Animal Learning
One astonishing advance in the field of computational cognitive sciences was the development of the Temporal Difference Learning (TDL) algorithm, as well as its interpretation as a model of the Dopamine Reward System [49,50,51]. This model is particularly well suited to our purposes because it provides an explicit mechanism for predicting reward, which is relevant in the context of Reinforcement Learning. Nevertheless, the rat experiments of [1] revealed some of the limits of the TDL method by showing that some inferences do not require previous experience.
A more specific goal of this work is to integrate the computational models of ConvNets and TDL with the BCM and STDP learning rules to develop an architecture that roughly emulates the observations of articles such as [1]. This system might not only learn to differentiate complex visual stimuli but also perform inferences with the learned stimuli and (artificial) rewards. More details of the work of Sadacca et al. (2016) will be given in Section 3.4. | [
"26949249",
"4727084",
"7153672",
"7054394",
"28097513",
"23080416",
"9738497",
"9852584",
"17393292",
"21447233",
"32764728",
"22080608",
"12816564",
"25653580",
"31911652",
"17571943",
"31019458",
"34438195",
"31003893",
"27618944",
"31024137",
"29760527",
"30682710",
"31110470",
"23423540",
"29328958",
"23633941",
"26941637",
"25375136",
"30795896",
"8774460",
"9054347",
"8985014",
"18275283",
"29096115",
"23162000",
"6470767",
"27256552",
"33536211",
"33397941",
"28390862",
"25269553"
] | [
{
"pmid": "26949249",
"title": "Midbrain dopamine neurons compute inferred and cached value prediction errors in a common framework.",
"abstract": "Midbrain dopamine neurons have been proposed to signal reward prediction errors as defined in temporal difference (TD) learning algorithms. While these models have been extremely powerful in interpreting dopamine activity, they typically do not use value derived through inference in computing errors. This is important because much real world behavior - and thus many opportunities for error-driven learning - is based on such predictions. Here, we show that error-signaling rat dopamine neurons respond to the inferred, model-based value of cues that have not been paired with reward and do so in the same framework as they track the putative cached value of cues previously paired with reward. This suggests that dopamine neurons access a wider variety of information than contemplated by standard TD models and that, while their firing conforms to predictions of TD models in some cases, they may not be restricted to signaling errors from TD predictions."
},
{
"pmid": "4727084",
"title": "Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path.",
"abstract": "1. The after-effects of repetitive stimulation of the perforant path fibres to the dentate area of the hippocampal formation have been examined with extracellular micro-electrodes in rabbits anaesthetized with urethane.2. In fifteen out of eighteen rabbits the population response recorded from granule cells in the dentate area to single perforant path volleys was potentiated for periods ranging from 30 min to 10 hr after one or more conditioning trains at 10-20/sec for 10-15 sec, or 100/sec for 3-4 sec.3. The population response was analysed in terms of three parameters: the amplitude of the population excitatory post-synaptic potential (e.p.s.p.), signalling the depolarization of the granule cells, and the amplitude and latency of the population spike, signalling the discharge of the granule cells.4. All three parameters were potentiated in 29% of the experiments; in other experiments in which long term changes occurred, potentiation was confined to one or two of the three parameters. A reduction in the latency of the population spike was the commonest sign of potentiation, occurring in 57% of all experiments. The amplitude of the population e.p.s.p. was increased in 43%, and of the population spike in 40%, of all experiments.5. During conditioning at 10-20/sec there was massive potentiation of the population spike (;frequency potentiation'). The spike was suppressed during stimulation at 100/sec. Both frequencies produced long-term potentiation.6. The results suggest that two independent mechanisms are responsible for long-lasting potentiation: (a) an increase in the efficiency of synaptic transmission at the perforant path synapses; (b) an increase in the excitability of the granule cell population."
},
{
"pmid": "7054394",
"title": "Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex.",
"abstract": "The development of stimulus selectivity in the primary sensory cortex of higher vertebrates is considered in a general mathematical framework. A synaptic evolution scheme of a new kind is proposed in which incoming patterns rather than converging afferents compete. The change in the efficacy of a given synapse depends not only on instantaneous pre- and postsynaptic activities but also on a slowly varying time-averaged value of the postsynaptic activity. Assuming an appropriate nonlinear form for this dependence, development of selectivity is obtained under quite general conditions on the sensory environment. One does not require nonlinearity of the neuron's integrative power nor does one need to assume any particular form for intracortical circuitry. This is first illustrated in simple cases, e.g., when the environment consists of only two different stimuli presented alternately in a random manner. The following formal statement then holds: the state of the system converges with probability 1 to points of maximum selectivity in the state space. We next consider the problem of early development of orientation selectivity and binocular interaction in primary visual cortex. Giving the environment an appropriate form, we obtain orientation tuning curves and ocular dominance comparable to what is observed in normally reared adult cats or monkeys. Simulations with binocular input and various types of normal or altered environments show good agreement with the relevant experimental data. Experiments are suggested that could test our theory further."
},
{
"pmid": "28097513",
"title": "Stable Control of Firing Rate Mean and Variance by Dual Homeostatic Mechanisms.",
"abstract": "Homeostatic processes that provide negative feedback to regulate neuronal firing rates are essential for normal brain function. Indeed, multiple parameters of individual neurons, including the scale of afferent synapse strengths and the densities of specific ion channels, have been observed to change on homeostatic time scales to oppose the effects of chronic changes in synaptic input. This raises the question of whether these processes are controlled by a single slow feedback variable or multiple slow variables. A single homeostatic process providing negative feedback to a neuron's firing rate naturally maintains a stable homeostatic equilibrium with a characteristic mean firing rate; but the conditions under which multiple slow feedbacks produce a stable homeostatic equilibrium have not yet been explored. Here we study a highly general model of homeostatic firing rate control in which two slow variables provide negative feedback to drive a firing rate toward two different target rates. Using dynamical systems techniques, we show that such a control system can be used to stably maintain a neuron's characteristic firing rate mean and variance in the face of perturbations, and we derive conditions under which this happens. We also derive expressions that clarify the relationship between the homeostatic firing rate targets and the resulting stable firing rate mean and variance. We provide specific examples of neuronal systems that can be effectively regulated by dual homeostasis. One of these examples is a recurrent excitatory network, which a dual feedback system can robustly tune to serve as an integrator."
},
{
"pmid": "23080416",
"title": "The BCM theory of synapse modification at 30: interaction of theory with experiment.",
"abstract": "Thirty years have passed since the publication of Elie Bienenstock, Leon Cooper and Paul Munro's 'Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex', known as the BCM theory of synaptic plasticity. This theory has guided experimentalists to discover some fundamental properties of synaptic plasticity and has provided a mathematical structure that bridges molecular mechanisms and systems-level consequences of learning and memory storage."
},
{
"pmid": "9738497",
"title": "A critical window for cooperation and competition among developing retinotectal synapses.",
"abstract": "In the developing frog visual system, topographic refinement of the retinotectal projection depends on electrical activity. In vivo whole-cell recording from developing Xenopus tectal neurons shows that convergent retinotectal synapses undergo activity-dependent cooperation and competition following correlated pre- and postsynaptic spiking within a narrow time window. Synaptic inputs activated repetitively within 20 ms before spiking of the tectal neuron become potentiated, whereas subthreshold inputs activated within 20 ms after spiking become depressed. Thus both the initial synaptic strength and the temporal order of activation are critical for heterosynaptic interactions among convergent synaptic inputs during activity-dependent refinement of developing neural networks."
},
{
"pmid": "9852584",
"title": "Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type.",
"abstract": "In cultures of dissociated rat hippocampal neurons, persistent potentiation and depression of glutamatergic synapses were induced by correlated spiking of presynaptic and postsynaptic neurons. The relative timing between the presynaptic and postsynaptic spiking determined the direction and the extent of synaptic changes. Repetitive postsynaptic spiking within a time window of 20 msec after presynaptic activation resulted in long-term potentiation (LTP), whereas postsynaptic spiking within a window of 20 msec before the repetitive presynaptic activation led to long-term depression (LTD). Significant LTP occurred only at synapses with relatively low initial strength, whereas the extent of LTD did not show obvious dependence on the initial synaptic strength. Both LTP and LTD depended on the activation of NMDA receptors and were absent in cases in which the postsynaptic neurons were GABAergic in nature. Blockade of L-type calcium channels with nimodipine abolished the induction of LTD and reduced the extent of LTP. These results underscore the importance of precise spike timing, synaptic strength, and postsynaptic cell type in the activity-induced modification of central synapses and suggest that Hebb's rule may need to incorporate a quantitative consideration of spike timing that reflects the narrow and asymmetric window for the induction of synaptic modification."
},
{
"pmid": "17393292",
"title": "Formation of feedforward networks and frequency synchrony by spike-timing-dependent plasticity.",
"abstract": "Spike-timing-dependent plasticity (STDP) with asymmetric learning windows is commonly found in the brain and useful for a variety of spike-based computations such as input filtering and associative memory. A natural consequence of STDP is establishment of causality in the sense that a neuron learns to fire with a lag after specific presynaptic neurons have fired. The effect of STDP on synchrony is elusive because spike synchrony implies unitary spike events of different neurons rather than a causal delayed relationship between neurons. We explore how synchrony can be facilitated by STDP in oscillator networks with a pacemaker. We show that STDP with asymmetric learning windows leads to self-organization of feedforward networks starting from the pacemaker. As a result, STDP drastically facilitates frequency synchrony. Even though differences in spike times are lessened as a result of synaptic plasticity, the finite time lag remains so that perfect spike synchrony is not realized. In contrast to traditional mechanisms of large-scale synchrony based on mutual interaction of coupled neurons, the route to synchrony discovered here is enslavement of downstream neurons by upstream ones. Facilitation of such feedforward synchrony does not occur for STDP with symmetric learning windows."
},
{
"pmid": "21447233",
"title": "Why do humans reason? Arguments for an argumentative theory.",
"abstract": "Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing, but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found."
},
{
"pmid": "32764728",
"title": "Neuronal vector coding in spatial cognition.",
"abstract": "Several types of neurons involved in spatial navigation and memory encode the distance and direction (that is, the vector) between an agent and items in its environment. Such vectorial information provides a powerful basis for spatial cognition by representing the geometric relationships between the self and the external world. Here, we review the explicit encoding of vectorial information by neurons in and around the hippocampal formation, far from the sensory periphery. The parahippocampal, retrosplenial and parietal cortices, as well as the hippocampal formation and striatum, provide a plethora of examples of vector coding at the single neuron level. We provide a functional taxonomy of cells with vectorial receptive fields as reported in experiments and proposed in theoretical work. The responses of these neurons may provide the fundamental neural basis for the (bottom-up) representation of environmental layout and (top-down) memory-guided generation of visuospatial imagery and navigational planning."
},
{
"pmid": "22080608",
"title": "A triplet spike-timing-dependent plasticity model generalizes the Bienenstock-Cooper-Munro rule to higher-order spatiotemporal correlations.",
"abstract": "Synaptic strength depresses for low and potentiates for high activation of the postsynaptic neuron. This feature is a key property of the Bienenstock-Cooper-Munro (BCM) synaptic learning rule, which has been shown to maximize the selectivity of the postsynaptic neuron, and thereby offers a possible explanation for experience-dependent cortical plasticity such as orientation selectivity. However, the BCM framework is rate-based and a significant amount of recent work has shown that synaptic plasticity also depends on the precise timing of presynaptic and postsynaptic spikes. Here we consider a triplet model of spike-timing-dependent plasticity (STDP) that depends on the interactions of three precisely timed spikes. Triplet STDP has been shown to describe plasticity experiments that the classical STDP rule, based on pairs of spikes, has failed to capture. In the case of rate-based patterns, we show a tight correspondence between the triplet STDP rule and the BCM rule. We analytically demonstrate the selectivity property of the triplet STDP rule for orthogonal inputs and perform numerical simulations for nonorthogonal inputs. Moreover, in contrast to BCM, we show that triplet STDP can also induce selectivity for input patterns consisting of higher-order spatiotemporal correlations, which exist in natural stimuli and have been measured in the brain. We show that this sensitivity to higher-order correlations can be used to develop direction and speed selectivity."
},
{
"pmid": "12816564",
"title": "Relating STDP to BCM.",
"abstract": "We demonstrate that the BCM learning rule follows directly from STDP when pre- and postsynaptic neurons fire uncorrelated or weakly correlated Poisson spike trains, and only nearest-neighbor spike interactions are taken into account."
},
{
"pmid": "25653580",
"title": "A framework for plasticity implementation on the SpiNNaker neural architecture.",
"abstract": "Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing dependent plasticity rules. For simulating large portions of neural tissue, this has created an increasingly strong need for large scale simulations of plastic neural networks on special purpose hardware platforms, because synaptic transmissions and updates are badly matched to computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a great need for testing various hypotheses about plasticity before committing to one hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to process synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard Spike-Timing dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system."
},
{
"pmid": "31911652",
"title": "U1 snRNP regulates cancer cell migration and invasion in vitro.",
"abstract": "Stimulated cells and cancer cells have widespread shortening of mRNA 3'-untranslated regions (3'UTRs) and switches to shorter mRNA isoforms due to usage of more proximal polyadenylation signals (PASs) in introns and last exons. U1 snRNP (U1), vertebrates' most abundant non-coding (spliceosomal) small nuclear RNA, silences proximal PASs and its inhibition with antisense morpholino oligonucleotides (U1 AMO) triggers widespread premature transcription termination and mRNA shortening. Here we show that low U1 AMO doses increase cancer cells' migration and invasion in vitro by up to 500%, whereas U1 over-expression has the opposite effect. In addition to 3'UTR length, numerous transcriptome changes that could contribute to this phenotype are observed, including alternative splicing, and mRNA expression levels of proto-oncogenes and tumor suppressors. These findings reveal an unexpected role for U1 homeostasis (available U1 relative to transcription) in oncogenic and activated cell states, and suggest U1 as a potential target for their modulation."
},
{
"pmid": "17571943",
"title": "Reinforcement learning, spike-time-dependent plasticity, and the BCM rule.",
"abstract": "Learning agents, whether natural or artificial, must update their internal parameters in order to improve their behavior over time. In reinforcement learning, this plasticity is influenced by an environmental signal, termed a reward, that directs the changes in appropriate directions. We apply a recently introduced policy learning algorithm from machine learning to networks of spiking neurons and derive a spike-time-dependent plasticity rule that ensures convergence to a local optimum of the expected average reward. The approach is applicable to a broad class of neuronal models, including the Hodgkin-Huxley model. We demonstrate the effectiveness of the derived rule in several toy problems. Finally, through statistical analysis, we show that the synaptic plasticity rule established is closely related to the widely used BCM rule, for which good biological evidence exists."
},
{
"pmid": "31019458",
"title": "Deep Learning With Asymmetric Connections and Hebbian Updates.",
"abstract": "We show that deep networks can be trained using Hebbian updates yielding similar performance to ordinary back-propagation on challenging image datasets. To overcome the unrealistic symmetry in connections between layers, implicit in back-propagation, the feedback weights are separate from the feedforward weights. The feedback weights are also updated with a local rule, the same as the feedforward weights-a weight is updated solely based on the product of activity of the units it connects. With fixed feedback weights as proposed in Lillicrap et al. (2016) performance degrades quickly as the depth of the network increases. If the feedforward and feedback weights are initialized with the same values, as proposed in Zipser and Rumelhart (1990), they remain the same throughout training thus precisely implementing back-propagation. We show that even when the weights are initialized differently and at random, and the algorithm is no longer performing back-propagation, performance is comparable on challenging datasets. We also propose a cost function whose derivative can be represented as a local Hebbian update on the last layer. Convolutional layers are updated with tied weights across space, which is not biologically plausible. We show that similar performance is achieved with untied layers, also known as locally connected layers, corresponding to the connectivity implied by the convolutional layers, but where weights are untied and updated separately. In the linear case we show theoretically that the convergence of the error to zero is accelerated by the update of the feedback weights."
},
{
"pmid": "34438195",
"title": "Hebbian semi-supervised learning in a sample efficiency setting.",
"abstract": "We propose to address the issue of sample efficiency, in Deep Convolutional Neural Networks (DCNN), with a semi-supervised training strategy that combines Hebbian learning with gradient descent: all internal layers (both convolutional and fully connected) are pre-trained using an unsupervised approach based on Hebbian learning, and the last fully connected layer (the classification layer) is trained using Stochastic Gradient Descent (SGD). In fact, as Hebbian learning is an unsupervised learning method, its potential lies in the possibility of training the internal layers of a DCNN without labels. Only the final fully connected layer has to be trained with labeled examples. We performed experiments on various object recognition datasets, in different regimes of sample efficiency, comparing our semi-supervised (Hebbian for internal layers + SGD for the final fully connected layer) approach with end-to-end supervised backprop training, and with semi-supervised learning based on Variational Auto-Encoder (VAE). The results show that, in regimes where the number of available labeled samples is low, our semi-supervised approach outperforms the other approaches in almost all the cases."
},
{
"pmid": "31003893",
"title": "Reinforcement Learning, Fast and Slow.",
"abstract": "Deep reinforcement learning (RL) methods have driven impressive advances in artificial intelligence in recent years, exceeding human performance in domains ranging from Atari to Go to no-limit poker. This progress has drawn the attention of cognitive scientists interested in understanding human learning. However, the concern has been raised that deep RL may be too sample-inefficient - that is, it may simply be too slow - to provide a plausible model of how humans learn. In the present review, we counter this critique by describing recently developed techniques that allow deep RL to operate more nimbly, solving problems much more quickly than previous methods. Although these techniques were developed in an AI context, we propose that they may have rich implications for psychology and neuroscience. A key insight, arising from these AI methods, concerns the fundamental connection between fast RL and slower, more incremental forms of learning."
},
{
"pmid": "27618944",
"title": "Reinforcement Learning and Episodic Memory in Humans and Animals: An Integrative Framework.",
"abstract": "We review the psychology and neuroscience of reinforcement learning (RL), which has experienced significant progress in the past two decades, enabled by the comprehensive experimental study of simple learning and decision-making tasks. However, one challenge in the study of RL is computational: The simplicity of these tasks ignores important aspects of reinforcement learning in the real world: (a) State spaces are high-dimensional, continuous, and partially observable; this implies that (b) data are relatively sparse and, indeed, precisely the same situation may never be encountered twice; furthermore, (c) rewards depend on the long-term consequences of actions in ways that violate the classical assumptions that make RL tractable. A seemingly distinct challenge is that, cognitively, theories of RL have largely involved procedural and semantic memory, the way in which knowledge about action values or world models extracted gradually from many experiences can drive choice. This focus on semantic memory leaves out many aspects of memory, such as episodic memory, related to the traces of individual events. We suggest that these two challenges are related. The computational challenge can be dealt with, in part, by endowing RL systems with episodic memory, allowing them to (a) efficiently approximate value functions over complex state spaces, (b) learn with very little data, and (c) bridge long-term dependencies between actions and rewards. We review the computational theory underlying this proposal and the empirical evidence to support it. Our proposal suggests that the ubiquitous and diverse roles of memory in RL may function as part of an integrated learning system."
},
{
"pmid": "31024137",
"title": "The successor representation in human reinforcement learning.",
"abstract": "Theories of reward learning in neuroscience have focused on two families of algorithms thought to capture deliberative versus habitual choice. 'Model-based' algorithms compute the value of candidate actions from scratch, whereas 'model-free' algorithms make choice more efficient but less flexible by storing pre-computed action values. We examine an intermediate algorithmic family, the successor representation, which balances flexibility and efficiency by storing partially computed action values: predictions about future events. These pre-computation strategies differ in how they update their choices following changes in a task. The successor representation's reliance on stored predictions about future states predicts a unique signature of insensitivity to changes in the task's sequence of events, but flexible adjustment following changes to rewards. We provide evidence for such differential sensitivity in two behavioural studies with humans. These results suggest that the successor representation is a computational substrate for semi-flexible choice in humans, introducing a subtler, more cognitive notion of habit."
},
{
"pmid": "29760527",
"title": "Prefrontal cortex as a meta-reinforcement learning system.",
"abstract": "Over the past 20 years, neuroscience research on reward-based learning has converged on a canonical model, under which the neurotransmitter dopamine 'stamps in' associations between situations, actions and rewards by modulating the strength of synaptic connections between neurons. However, a growing number of recent findings have placed this standard model under strain. We now draw on recent advances in artificial intelligence to introduce a new theory of reward-based learning. Here, the dopamine system trains another part of the brain, the prefrontal cortex, to operate as its own free-standing learning system. This new perspective accommodates the findings that motivated the standard model, but also deals gracefully with a wider range of observations, providing a fresh foundation for future research."
},
{
"pmid": "30682710",
"title": "Deep learning in spiking neural networks.",
"abstract": "In recent years, deep learning has revolutionized the field of machine learning, for computer vision in particular. In this approach, a deep (multilayer) artificial neural network (ANN) is trained, most often in a supervised manner using backpropagation. Vast amounts of labeled training examples are required, but the resulting classification accuracy is truly impressive, sometimes outperforming humans. Neurons in an ANN are characterized by a single, static, continuous-valued activation. Yet biological neurons use discrete spikes to compute and transmit information, and the spike times, in addition to the spike rates, matter. Spiking neural networks (SNNs) are thus more biologically realistic than ANNs, and are arguably the only viable option if one wants to understand how the brain computes at the neuronal description level. The spikes of biological neurons are sparse in time and space, and event-driven. Combined with bio-plausible local learning rules, this makes it easier to build low-power, neuromorphic hardware for SNNs. However, training deep SNNs remains a challenge. Spiking neurons' transfer function is usually non-differentiable, which prevents using backpropagation. Here we review recent supervised and unsupervised methods to train deep SNNs, and compare them in terms of accuracy and computational cost. The emerging picture is that SNNs still lag behind ANNs in terms of accuracy, but the gap is decreasing, and can even vanish on some tasks, while SNNs typically require many fewer operations and are the better candidates to process spatio-temporal data."
},
{
"pmid": "31110470",
"title": "Memory-Efficient Synaptic Connectivity for Spike-Timing- Dependent Plasticity.",
"abstract": "Spike-Timing-Dependent Plasticity (STDP) is a bio-inspired local incremental weight update rule commonly used for online learning in spike-based neuromorphic systems. In STDP, the intensity of long-term potentiation and depression in synaptic efficacy (weight) between neurons is expressed as a function of the relative timing between pre- and post-synaptic action potentials (spikes), while the polarity of change is dependent on the order (causality) of the spikes. Online STDP weight updates for causal and acausal relative spike times are activated at the onset of post- and pre-synaptic spike events, respectively, implying access to synaptic connectivity both in forward (pre-to-post) and reverse (post-to-pre) directions. Here we study the impact of different arrangements of synaptic connectivity tables on weight storage and STDP updates for large-scale neuromorphic systems. We analyze the memory efficiency for varying degrees of density in synaptic connectivity, ranging from crossbar arrays for full connectivity to pointer-based lookup for sparse connectivity. The study includes comparison of storage and access costs and efficiencies for each memory arrangement, along with a trade-off analysis of the benefits of each data structure depending on application requirements and budget. Finally, we present an alternative formulation of STDP via a delayed causal update mechanism that permits efficient weight access, requiring no more than forward connectivity lookup. We show functional equivalence of the delayed causal updates to the original STDP formulation, with substantial savings in storage and access costs and efficiencies for networks with sparse synaptic connectivity as typically encountered in large-scale models in computational neuroscience."
},
{
"pmid": "23423540",
"title": "STDP and STDP variations with memristors for spiking neuromorphic learning systems.",
"abstract": "In this paper we review several ways of realizing asynchronous Spike-Timing-Dependent-Plasticity (STDP) using memristors as synapses. Our focus is on how to use individual memristors to implement synaptic weight multiplications, in a way such that it is not necessary to (a) introduce global synchronization and (b) to separate memristor learning phases from memristor performing phases. In the approaches described, neurons fire spikes asynchronously when they wish and memristive synapses perform computation and learn at their own pace, as it happens in biological neural systems. We distinguish between two different memristor physics, depending on whether they respond to the original \"moving wall\" or to the \"filament creation and annihilation\" models. Independent of the memristor physics, we discuss two different types of STDP rules that can be implemented with memristors: either the pure timing-based rule that takes into account the arrival time of the spikes from the pre- and the post-synaptic neurons, or a hybrid rule that takes into account only the timing of pre-synaptic spikes and the membrane potential and other state variables of the post-synaptic neuron. We show how to implement these rules in cross-bar architectures that comprise massive arrays of memristors, and we discuss applications for artificial vision."
},
{
"pmid": "29328958",
"title": "STDP-based spiking deep convolutional neural networks for object recognition.",
"abstract": "Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no label was needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousands spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundreds of such neurons contained robust category information, as demonstrated using a classifier on Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed and its low energy consumption. These mechanisms are also interesting for artificial vision systems, particularly for hardware solutions."
},
{
"pmid": "23633941",
"title": "Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity.",
"abstract": "The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore it suggests networks of Bayesian computation modules as a new model for distributed information processing in the cortex."
},
{
"pmid": "26941637",
"title": "Unsupervised learning of digit recognition using spike-timing-dependent plasticity.",
"abstract": "In order to understand how the mammalian neocortex is performing computations, two things are necessary; we need to have a good understanding of the available neuronal processing units and mechanisms, and we need to gain a better understanding of how those mechanisms are combined to build functioning systems. Therefore, in recent years there is an increasing interest in how spiking neural networks (SNN) can be used to perform complex computations or solve pattern recognition tasks. However, it remains a challenging task to design SNNs which use biologically plausible mechanisms (especially for learning new patterns), since most such SNN architectures rely on training in a rate-based network and subsequent conversion to a SNN. We present a SNN for digit recognition which is based on mechanisms with increased biological plausibility, i.e., conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold. Unlike most other systems, we do not use a teaching signal and do not present any class labels to the network. Using this unsupervised learning scheme, our architecture achieves 95% accuracy on the MNIST benchmark, which is better than previous SNN implementations without supervision. The fact that we used no domain-specific knowledge points toward the general applicability of our network design. Also, the performance of our network scales well with the number of neurons used and shows similar performance for four different learning rules, indicating robustness of the full combination of mechanisms, which suggests applicability in heterogeneous biological neural networks."
},
{
"pmid": "25375136",
"title": "Deep supervised, but not unsupervised, models may explain IT cortical representation.",
"abstract": "Inferior temporal (IT) cortex in human and nonhuman primates serves visual object recognition. Computational object-vision models, although continually improving, do not yet reach human performance. It is unclear to what extent the internal representations of computational models can explain the IT representation. Here we investigate a wide range of computational model representations (37 in total), testing their categorization performance and their ability to account for the IT representational geometry. The models include well-known neuroscientific object-recognition models (e.g. HMAX, VisNet) along with several models from computer vision (e.g. SIFT, GIST, self-similarity features, and a deep convolutional neural network). We compared the representational dissimilarity matrices (RDMs) of the model representations with the RDMs obtained from human IT (measured with fMRI) and monkey IT (measured with cell recording) for the same set of stimuli (not used in training the models). Better performing models were more similar to IT in that they showed greater clustering of representational patterns by category. In addition, better performing models also more strongly resembled IT in terms of their within-category representational dissimilarities. Representational geometries were significantly correlated between IT and many of the models. However, the categorical clustering observed in IT was largely unexplained by the unsupervised models. The deep convolutional network, which was trained by supervision with over a million category-labeled images, reached the highest categorization performance and also best explained IT, although it did not fully explain the IT data. Combining the features of this model with appropriate weights and adding linear combinations that maximize the margin between animate and inanimate objects and between faces and other objects yielded a representation that fully explained our IT data. Overall, our results suggest that explaining IT requires computational features trained through supervised learning to emphasize the behaviorally important categorical divisions prominently reflected in IT."
},
{
"pmid": "30795896",
"title": "Deep Neural Networks as Scientific Models.",
"abstract": "Artificial deep neural networks (DNNs) initially inspired by the brain enable computers to solve cognitive tasks at which humans excel. In the absence of explanations for such cognitive phenomena, in turn cognitive scientists have started using DNNs as models to investigate biological cognition and its neural basis, creating heated debate. Here, we reflect on the case from the perspective of philosophy of science. After putting DNNs as scientific models into context, we discuss how DNNs can fruitfully contribute to cognitive science. We claim that beyond their power to provide predictions and explanations of cognitive phenomena, DNNs have the potential to contribute to an often overlooked but ubiquitous and fundamental use of scientific models: exploration."
},
{
"pmid": "8774460",
"title": "A framework for mesencephalic dopamine systems based on predictive Hebbian learning.",
"abstract": "We develop a theoretical framework that shows how mesencephalic dopamine systems could distribute to their targets a signal that represents information about future expectations. In particular, we show how activity in the cerebral cortex can make predictions about future receipt of reward and how fluctuations in the activity levels of neurons in diffuse dopamine systems above and below baseline levels would represent errors in these predictions that are delivered to cortical and subcortical targets. We present a model for how such errors could be constructed in a real brain that is consistent with physiological results for a subset of dopaminergic neurons located in the ventral tegmental area and surrounding dopaminergic neurons. The theory also makes testable predictions about human choice behavior on a simple decision-making task. Furthermore, we show that, through a simple influence on synaptic plasticity, fluctuations in dopamine release can act to change the predictions in an appropriate manner."
},
{
"pmid": "9054347",
"title": "A neural substrate of prediction and reward.",
"abstract": "The capacity to predict future events permits a creature to detect, model, and manipulate the causal structure of its interactions with its environment. Behavioral experiments suggest that learning is driven by changes in the expectations about future salient events such as rewards and punishments. Physiological work has recently complemented these studies by identifying dopaminergic neurons in the primate whose fluctuating output apparently signals changes or errors in the predictions of future salient and rewarding events. Taken together, these findings can be understood through quantitative theories of adaptive optimizing control."
},
{
"pmid": "8985014",
"title": "Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs.",
"abstract": "Activity-driven modifications in synaptic connections between neurons in the neocortex may occur during development and learning. In dual whole-cell voltage recordings from pyramidal neurons, the coincidence of postsynaptic action potentials (APs) and unitary excitatory postsynaptic potentials (EPSPs) was found to induce changes in EPSPs. Their average amplitudes were differentially up- or down-regulated, depending on the precise timing of postsynaptic APs relative to EPSPs. These observations suggest that APs propagating back into dendrites serve to modify single active synaptic connections, depending on the pattern of electrical activity in the pre- and postsynaptic neurons."
},
{
"pmid": "18275283",
"title": "Spike timing-dependent plasticity: a Hebbian learning rule.",
"abstract": "Spike timing-dependent plasticity (STDP) as a Hebbian synaptic learning rule has been demonstrated in various neural circuits over a wide spectrum of species, from insects to humans. The dependence of synaptic modification on the order of pre- and postsynaptic spiking within a critical window of tens of milliseconds has profound functional implications. Over the past decade, significant progress has been made in understanding the cellular mechanisms of STDP at both excitatory and inhibitory synapses and of the associated changes in neuronal excitability and synaptic integration. Beyond the basic asymmetric window, recent studies have also revealed several layers of complexity in STDP, including its dependence on dendritic location, the nonlinear integration of synaptic modification induced by complex spike trains, and the modulation of STDP by inhibitory and neuromodulatory inputs. Finally, the functional consequences of STDP have been examined directly in an increasing number of neural circuits in vivo."
},
{
"pmid": "29096115",
"title": "Model-based predictions for dopamine.",
"abstract": "Phasic dopamine responses are thought to encode a prediction-error signal consistent with model-free reinforcement learning theories. However, a number of recent findings highlight the influence of model-based computations on dopamine responses, and suggest that dopamine prediction errors reflect more dimensions of an expected outcome than scalar reward value. Here, we review a selection of these recent results and discuss the implications and complications of model-based predictions for computational theories of dopamine and learning."
},
{
"pmid": "23162000",
"title": "Orbitofrontal cortex supports behavior and learning using inferred but not cached values.",
"abstract": "Computational and learning theory models propose that behavioral control reflects value that is both cached (computed and stored during previous experience) and inferred (estimated on the fly on the basis of knowledge of the causal structure of the environment). The latter is thought to depend on the orbitofrontal cortex. Yet some accounts propose that the orbitofrontal cortex contributes to behavior by signaling \"economic\" value, regardless of the associative basis of the information. We found that the orbitofrontal cortex is critical for both value-based behavior and learning when value must be inferred but not when a cached value is sufficient. The orbitofrontal cortex is thus fundamental for accessing model-based representations of the environment to compute value rather than for signaling value per se."
},
{
"pmid": "6470767",
"title": "Stimulus-selective properties of inferior temporal neurons in the macaque.",
"abstract": "Previous studies have reported that some neurons in the inferior temporal (IT) cortex respond selectively to highly specific complex objects. In the present study, we conducted the first systematic survey of the responses of IT neurons to both simple stimuli, such as edges and bars, and highly complex stimuli, such as models of flowers, snakes, hands, and faces. If a neuron responded to any of these stimuli, we attempted to isolate the critical stimulus features underlying the response. We found that many of the responsive neurons responded well to virtually every stimulus tested. The remaining, stimulus-selective cells were often selective along the dimensions of shape, color, or texture of a stimulus, and this selectivity was maintained throughout a large receptive field. Although most IT neurons do not appear to be \"detectors\" for complex objects, we did find a separate population of cells that responded selectively to faces. The responses of these cells were dependent on the configuration of specific face features, and their selectivity was maintained over changes in stimulus size and position. A particularly high incidence of such cells was found deep in the superior temporal sulcus. These results indicate that there may be specialized mechanisms for the analysis of faces in IT cortex."
},
{
"pmid": "27256552",
"title": "Over the river, through the woods: cognitive maps in the hippocampus and orbitofrontal cortex.",
"abstract": "The hippocampus and the orbitofrontal cortex (OFC) both have important roles in cognitive processes such as learning, memory and decision making. Nevertheless, research on the OFC and hippocampus has proceeded largely independently, and little consideration has been given to the importance of interactions between these structures. Here, evidence is reviewed that the hippocampus and OFC encode parallel, but interactive, cognitive 'maps' that capture complex relationships between cues, actions, outcomes and other features of the environment. A better understanding of the interactions between the OFC and hippocampus is important for understanding the neural bases of flexible, goal-directed decision making."
},
{
"pmid": "33536211",
"title": "Scalable representation of time in the hippocampus.",
"abstract": "Hippocampal \"time cells\" encode specific moments of temporally organized experiences that may support hippocampal functions for episodic memory. However, little is known about the reorganization of the temporal representation of time cells during changes in temporal structures of episodes. We investigated CA1 neuronal activity during temporal bisection tasks, in which the sets of time intervals to be discriminated were designed to be extended or contracted across the blocks of trials. Assemblies of neurons encoded elapsed time during the interval, and the representation was scaled when the set of interval times was varied. Theta phase precession and theta sequences of time cells were also scalable, and the fine temporal relationships were preserved between pairs in theta cycles. Moreover, theta sequences reflected the rats' decisions on the basis of their time estimation. These findings demonstrate that scalable features of time cells may support the capability of flexible temporal representation for memory formation."
},
{
"pmid": "33397941",
"title": "Anomalous collapses of Nares Strait ice arches leads to enhanced export of Arctic sea ice.",
"abstract": "The ice arches that usually develop at the northern and southern ends of Nares Strait play an important role in modulating the export of Arctic Ocean multi-year sea ice. The Arctic Ocean is evolving towards an ice pack that is younger, thinner, and more mobile and the fate of its multi-year ice is becoming of increasing interest. Here, we use sea ice motion retrievals from Sentinel-1 imagery to report on the recent behavior of these ice arches and the associated ice fluxes. We show that the duration of arch formation has decreased over the past 20 years, while the ice area and volume fluxes along Nares Strait have both increased. These results suggest that a transition is underway towards a state where the formation of these arches will become atypical with a concomitant increase in the export of multi-year ice accelerating the transition towards a younger and thinner Arctic ice pack."
},
{
"pmid": "28390862",
"title": "Phase precession: a neural code underlying episodic memory?",
"abstract": "In the hippocampal formation, the sequential activation of place-specific cells represents a conceptual model for the spatio-temporal events that assemble episodic memories. The imprinting of behavioral sequences in hippocampal networks might be achieved via spike-timing-dependent plasticity and phase precession of the spiking activity of neurons. It is unclear, however, whether phase precession plays an active role by enabling sequence learning via synaptic plasticity or whether phase precession passively reflects retrieval dynamics. Here we examine these possibilities in the context of potential mechanisms generating phase precession. Knowledge of these mechanisms would allow to selectively alter phase precession and test its role in episodic memory. We finally review the few successful approaches to degrade phase precession and the resulting impact on behavior."
},
{
"pmid": "25269553",
"title": "Time cells in the hippocampus: a new dimension for mapping memories.",
"abstract": "Recent studies have revealed the existence of hippocampal neurons that fire at successive moments in temporally structured experiences. Several studies have shown that such temporal coding is not attributable to external events, specific behaviours or spatial dimensions of an experience. Instead, these cells represent the flow of time in specific memories and have therefore been dubbed 'time cells'. The firing properties of time cells parallel those of hippocampal place cells; time cells thus provide an additional dimension that is integrated with spatial mapping. The robust representation of both time and space in the hippocampus suggests a fundamental mechanism for organizing the elements of experience into coherent memories."
}
] |
Diagnostics | null | PMC8870748 | 10.3390/diagnostics12020325 | Multi-Channel Based Image Processing Scheme for Pneumonia Identification | Pneumonia is a prevalent, severe respiratory infection that affects the distal airways and alveoli. Across the globe, it is a serious public health issue that has caused a high mortality rate among children below five years old and among elderly people with pre-existing chronic conditions. Pneumonia can be caused by a wide range of microorganisms, including viruses, fungi, and bacteria, whose prevalence varies greatly across the globe. The spread of the ailment has drawn the attention of computer-aided diagnosis (CAD) research. This paper presents a multi-channel-based image processing scheme to automatically extract features and identify pneumonia from chest X-ray (CXR) images. The proposed approach addresses the problem of low image quality while identifying pneumonia in CXR images. Three channels of CXR images, namely, the Local Binary Pattern (LBP), Contrast Enhanced Canny Edge Detection (CECED), and Contrast Limited Adaptive Histogram Equalization (CLAHE) CXR images, are processed by deep neural networks. CXR-related features of the LBP images are extracted using a shallow CNN, features of the CLAHE CXR images are extracted by a pre-trained Inception-V3, and features of the CECED CXR images are extracted using a pre-trained MobileNet-V3. The final feature weights of the three channels are concatenated, and softmax classification is used to determine the final identification result. According to the experimental results, the proposed network can accurately classify pneumonia. Tested on a publicly available dataset, the proposed method reports an accuracy of 98.3%, a sensitivity of 98.9%, and a specificity of 99.2%. Compared with the single models and state-of-the-art models, the proposed network achieves comparable performance. | 2. Related Works In recent years, several publications have applied state-of-the-art deep learning (DL) models to the automatic classification and detection of pneumonia from X-ray images. This section reviews the most up-to-date approaches for detecting and classifying pneumonia with DL. Various investigations based on artificial intelligence and deep learning have been conducted in the area of disease diagnosis using medical imagery such as chest X-rays (CXR), ultrasound scans, computed tomography (CT) scans, and MRI scans. Deep learning is perhaps the most widely used and reliable approach to medical image analysis; it is a quick and accurate way to diagnose a variety of ailments. Models have been explicitly trained to classify different categories of a specific ailment based on the disease type, and they have achieved satisfactory results in medical image analysis for the detection of cardiac abnormalities, tumors, cancer, and a variety of other conditions. Shah et al. [13] used deep learning to distinguish between CT scan images of COVID-19-infected and non-infected patients with a self-developed framework called CTnet-10, achieving an accuracy of 82.1%. To further enhance the accuracy, different pretrained models were applied to the CT scan dataset, and VGG-19 attained a better performance of 94.5%. Deep neural networks can also improve the output of instruments used for process analytical technology in pharmaceutical manufacturing. Maruthamuthu et al.
[14] developed a technique for identifying microbial contamination using DL strategies based on Raman spectroscopy. The classification of numerous samples consisting of microorganisms and bacteria mixed with Chinese Hamster Ovary (CHO) cells achieved an accuracy of 95% to 100% using a convolutional neural network (CNN). Considering situations with very large amounts of unlabeled data, Maruthamuthu et al. [15] demonstrate the superiority of a data-driven strategy over conventional data analytics and machine learning techniques. According to the authors, the data analytics strategy nonetheless has the potential to improve the development of efficient proteins and affinity ligands, affecting the downstream refinement and processing of mAb products. The authors describe a deep neural network approach, often regarded as a black box, for the construction of soft sensors; it uses accumulated historical data from process operation that is not captured by a mechanistic or phenomenological approach. The data are fitted to a polynomial expression that relates the product to inputs such as cell mass, substrate concentration, and initial product (streptokinase). Given a sufficiently large dataset collected from various sources under different conditions, an accurate data fit with automated error correction is used to compute the major outputs, such as the number of viable cells and the amount of product, from the calibrated input parameters such as inoculum and substrate. Even though the molecular relationship of these variables to cell metabolism is uncertain, the deep learning model accounts for the impacts of other conditions such as excitation, individual critical media properties, mixing rate, and variations in metabolic concentrations. Kermany et al. [16] applied the Inception-v3 model to classify different types of pneumonia infection in pediatric patients; this method extracts features and uses the area under the curve (AUC) as a metric to distinguish bacterial from viral pneumonia. Santosh et al. [17] suggested a method for detecting pulmonary abnormalities by comparing the symmetry of the lung regions in CXRs. Roux et al. [18] sought to determine the incidence, severity, and risk factors for pneumonia in infants during their first year of life in a South African cohort study; they set up pneumonia surveillance systems, recorded outpatient pneumonia and pneumonia requiring hospitalization, and used mixed-effects Poisson regression to calculate pneumonia incidence rate ratios. Table 1 gives a summary of the related works, with the following observations: (1) several DL techniques are thoroughly implemented for the classification task; (2) researchers in [27,28,31,32] use more than five DL techniques to evaluate performance; (3) the most popular imaging modality for the classification and detection of pneumonia-related ailments is the chest X-ray; (4) the publications focus on either binary or multi-class classification, although only a few considered the multi-class task; and (5) evaluation metrics such as accuracy, sensitivity, and specificity were used to evaluate the efficacy of the DL models, whereas precision, recall, and/or F1-score metrics were used in [25,26] to evaluate model performance. However, no prior work has considered low image quality as a challenge for improving pneumonia classification performance.
Therefore, we take this into consideration by preprocessing the images into LBP, CECED, and CLAHE channels for a multi-channel weighted fusion technique that uses neural networks to identify pneumonia (see the illustrative preprocessing sketch after the reference entries below). Additionally, we carried out a study comparing the performance of each single model with that of the weighted fusion model. | [
"33425044",
"31049186",
"33729998",
"33643764",
"27590198",
"33523309",
"33063423",
"32839030",
"29474911",
"29727280",
"25617203",
"27922974",
"32457819",
"30517102",
"31262537",
"32524445",
"32837749",
"33163973",
"32397844",
"33799220",
"33872157",
"32730210",
"33560995",
"33739926",
"33983881"
] | [
{
"pmid": "33425044",
"title": "Pneumonia Classification Using Deep Learning from Chest X-ray Images During COVID-19.",
"abstract": "The outbreak of the novel corona virus disease (COVID-19) in December 2019 has led to global crisis around the world. The disease was declared pandemic by World Health Organization (WHO) on 11th of March 2020. Currently, the outbreak has affected more than 200 countries with more than 37 million confirmed cases and more than 1 million death tolls as of 10 October 2020. Reverse-transcription polymerase chain reaction (RT-PCR) is the standard method for detection of COVID-19 disease, but it has many challenges such as false positives, low sensitivity, expensive, and requires experts to conduct the test. As the number of cases continue to grow, there is a high need for developing a rapid screening method that is accurate, fast, and cheap. Chest X-ray (CXR) scan images can be considered as an alternative or a confirmatory approach as they are fast to obtain and easily accessible. Though the literature reports a number of approaches to classify CXR images and detect the COVID-19 infections, the majority of these approaches can only recognize two classes (e.g., COVID-19 vs. normal). However, there is a need for well-developed models that can classify a wider range of CXR images belonging to the COVID-19 class itself such as the bacterial pneumonia, the non-COVID-19 viral pneumonia, and the normal CXR scans. The current work proposes the use of a deep learning approach based on pretrained AlexNet model for the classification of COVID-19, non-COVID-19 viral pneumonia, bacterial pneumonia, and normal CXR scans obtained from different public databases. The model was trained to perform two-way classification (i.e., COVID-19 vs. normal, bacterial pneumonia vs. normal, non-COVID-19 viral pneumonia vs. normal, and COVID-19 vs. bacterial pneumonia), three-way classification (i.e., COVID-19 vs. bacterial pneumonia vs. normal), and four-way classification (i.e., COVID-19 vs. bacterial pneumonia vs. non-COVID-19 viral pneumonia vs. normal). For non-COVID-19 viral pneumonia and normal (healthy) CXR images, the proposed model achieved 94.43% accuracy, 98.19% sensitivity, and 95.78% specificity. For bacterial pneumonia and normal CXR images, the model achieved 91.43% accuracy, 91.94% sensitivity, and 100% specificity. For COVID-19 pneumonia and normal CXR images, the model achieved 99.16% accuracy, 97.44% sensitivity, and 100% specificity. For classification CXR images of COVID-19 pneumonia and non-COVID-19 viral pneumonia, the model achieved 99.62% accuracy, 90.63% sensitivity, and 99.89% specificity. For the three-way classification, the model achieved 94.00% accuracy, 91.30% sensitivity, and 84.78%. Finally, for the four-way classification, the model achieved an accuracy of 93.42%, sensitivity of 89.18%, and specificity of 98.92%."
},
{
"pmid": "31049186",
"title": "An Efficient Deep Learning Approach to Pneumonia Classification in Healthcare.",
"abstract": "This study proposes a convolutional neural network model trained from scratch to classify and detect the presence of pneumonia from a collection of chest X-ray image samples. Unlike other methods that rely solely on transfer learning approaches or traditional handcrafted techniques to achieve a remarkable classification performance, we constructed a convolutional neural network model from scratch to extract features from a given chest X-ray image and classify it to determine if a person is infected with pneumonia. This model could help mitigate the reliability and interpretability challenges often faced when dealing with medical imagery. Unlike other deep learning classification tasks with sufficient image repository, it is difficult to obtain a large amount of pneumonia dataset for this classification task; therefore, we deployed several data augmentation algorithms to improve the validation and classification accuracy of the CNN model and achieved remarkable validation accuracy."
},
{
"pmid": "33729998",
"title": "Large-scale screening to distinguish between COVID-19 and community-acquired pneumonia using infection size-aware classification.",
"abstract": "The worldwide spread of coronavirus disease (COVID-19) has become a threat to global public health. It is of great importance to rapidly and accurately screen and distinguish patients with COVID-19 from those with community-acquired pneumonia (CAP). In this study, a total of 1,658 patients with COVID-19 and 1,027 CAP patients underwent thin-section CT and were enrolled. All images were preprocessed to obtain the segmentations of infections and lung fields. A set of handcrafted location-specific features was proposed to best capture the COVID-19 distribution pattern, in comparison to the conventional CT severity score (CT-SS) and radiomics features. An infection size-aware random forest method (iSARF) was proposed for discriminating COVID-19 from CAP. Experimental results show that the proposed method yielded its best performance when using the handcrafted features, with a sensitivity of 90.7%, a specificity of 87.2%, and an accuracy of 89.4% over state-of-the-art classifiers. Additional tests on 734 subjects, with thick slice images, demonstrates great generalizability. It is anticipated that our proposed framework could assist clinical decision making."
},
{
"pmid": "33643764",
"title": "Design ensemble deep learning model for pneumonia disease classification.",
"abstract": "With the recent spread of the SARS-CoV-2 virus, computer-aided diagnosis (CAD) has received more attention. The most important CAD application is to detect and classify pneumonia diseases using X-ray images, especially, in a critical period as pandemic of covid-19 that is kind of pneumonia. In this work, we aim to evaluate the performance of single and ensemble learning models for the pneumonia disease classification. The ensembles used are mainly based on fined-tuned versions of (InceptionResNet_V2, ResNet50 and MobileNet_V2). We collected a new dataset containing 6087 chest X-ray images in which we conduct comprehensive experiments. As a result, for a single model, we found out that InceptionResNet_V2 gives 93.52% of F1 score. In addition, ensemble of 3 models (ResNet50 with MobileNet_V2 with InceptionResNet_V2) shows more accurate than other ensembles constructed (94.84% of F1 score)."
},
{
"pmid": "27590198",
"title": "Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation.",
"abstract": "We present a novel method to segment retinal images using ensemble learning based convolutional neural network (CNN) architectures. An entropy sampling technique is used to select informative points thus reducing computational complexity while performing superior to uniform sampling. The sampled points are used to design a novel learning framework for convolutional filters based on boosting. Filters are learned in several layers with the output of previous layers serving as the input to the next layer. A softmax logistic classifier is subsequently trained on the output of all learned filters and applied on test images. The output of the classifier is subject to an unsupervised graph cut algorithm followed by a convex hull transformation to obtain the final segmentation. Our proposed algorithm for optic cup and disc segmentation outperforms existing methods on the public DRISHTI-GS data set on several metrics."
},
{
"pmid": "33523309",
"title": "Diagnosis of COVID-19 using CT scan images and deep learning techniques.",
"abstract": "Early diagnosis of the coronavirus disease in 2019 (COVID-19) is essential for controlling this pandemic. COVID-19 has been spreading rapidly all over the world. There is no vaccine available for this virus yet. Fast and accurate COVID-19 screening is possible using computed tomography (CT) scan images. The deep learning techniques used in the proposed method is based on a convolutional neural network (CNN). Our manuscript focuses on differentiating the CT scan images of COVID-19 and non-COVID 19 CT using different deep learning techniques. A self-developed model named CTnet-10 was designed for the COVID-19 diagnosis, having an accuracy of 82.1%. Also, other models that we tested are DenseNet-169, VGG-16, ResNet-50, InceptionV3, and VGG-19. The VGG-19 proved to be superior with an accuracy of 94.52% as compared to all other deep learning models. Automated diagnosis of COVID-19 from the CT scan pictures can be used by the doctors as a quick and efficient method for COVID-19 screening."
},
{
"pmid": "33063423",
"title": "Raman spectra-based deep learning: A tool to identify microbial contamination.",
"abstract": "Deep learning has the potential to enhance the output of in-line, on-line, and at-line instrumentation used for process analytical technology in the pharmaceutical industry. Here, we used Raman spectroscopy-based deep learning strategies to develop a tool for detecting microbial contamination. We built a Raman dataset for microorganisms that are common contaminants in the pharmaceutical industry for Chinese Hamster Ovary (CHO) cells, which are often used in the production of biologics. Using a convolution neural network (CNN), we classified the different samples comprising individual microbes and microbes mixed with CHO cells with an accuracy of 95%-100%. The set of 12 microbes spans across Gram-positive and Gram-negative bacteria as well as fungi. We also created an attention map for different microbes and CHO cells to highlight which segments of the Raman spectra contribute the most to help discriminate between different species. This dataset and algorithm provide a route for implementing Raman spectroscopy for detecting microbial contamination in the pharmaceutical industry."
},
{
"pmid": "32839030",
"title": "Process Analytical Technologies and Data Analytics for the Manufacture of Monoclonal Antibodies.",
"abstract": "Process analytical technology (PAT) for the manufacture of monoclonal antibodies (mAbs) is defined by an integrated set of advanced and automated methods that analyze the compositions and biophysical properties of cell culture fluids, cell-free product streams, and biotherapeutic molecules that are ultimately formulated into concentrated products. In-line or near-line probes and systems are remarkably well developed, although challenges remain in the determination of the absence of viral loads, detecting microbial or mycoplasma contamination, and applying data-driven deep learning to process monitoring and soft sensors. In this review, we address the current status of PAT for both batch and continuous processing steps and discuss its potential impact on facilitating the continuous manufacture of biotherapeutics."
},
{
"pmid": "29474911",
"title": "Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning.",
"abstract": "The implementation of clinical-decision support algorithms for medical imaging faces challenges with reliability and interpretability. Here, we establish a diagnostic tool based on a deep-learning framework for the screening of patients with common treatable blinding retinal diseases. Our framework utilizes transfer learning, which trains a neural network with a fraction of the data of conventional approaches. Applying this approach to a dataset of optical coherence tomography images, we demonstrate performance comparable to that of human experts in classifying age-related macular degeneration and diabetic macular edema. We also provide a more transparent and interpretable diagnosis by highlighting the regions recognized by the neural network. We further demonstrate the general applicability of our AI system for diagnosis of pediatric pneumonia using chest X-ray images. This tool may ultimately aid in expediting the diagnosis and referral of these treatable conditions, thereby facilitating earlier treatment, resulting in improved clinical outcomes. VIDEO ABSTRACT."
},
{
"pmid": "29727280",
"title": "Automated Chest X-Ray Screening: Can Lung Region Symmetry Help Detect Pulmonary Abnormalities?",
"abstract": "Our primary motivator is the need for screening HIV+ populations in resource-constrained regions for exposure to Tuberculosis, using posteroanterior chest radiographs (CXRs). The proposed method is motivated by the observation that radiological examinations routinely conduct bilateral comparisons of the lung field. In addition, the abnormal CXRs tend to exhibit changes in the lung shape, size, and content (textures), and in overall, reflection symmetry between them. We analyze the lung region symmetry using multi-scale shape features, and edge plus texture features. Shape features exploit local and global representation of the lung regions, while edge and texture features take internal content, including spatial arrangements of the structures. For classification, we have performed voting-based combination of three different classifiers: Bayesian network, multilayer perception neural networks, and random forest. We have used three CXR benchmark collections made available by the U.S. National Library of Medicine and the National Institute of Tuberculosis and Respiratory Diseases, India, and have achieved a maximum abnormality detection accuracy (ACC) of 91.00% and area under the ROC curve (AUC) of 0.96. The proposed method outperforms the previously reported methods by more than 5% in ACC and 3% in AUC."
},
{
"pmid": "25617203",
"title": "Incidence and severity of childhood pneumonia in the first year of life in a South African birth cohort: the Drakenstein Child Health Study.",
"abstract": "BACKGROUND\nChildhood pneumonia causes substantial mortality and morbidity. Accurate measurements of pneumonia incidence are scarce in low-income and middle-income countries, particularly after implementation of pneumococcal conjugate vaccine. We aimed to assess the incidence, severity, and risk factors for pneumonia in the first year of life in children enrolled in a South African birth cohort.\n\n\nMETHODS\nThis birth cohort study is being done at two sites in Paarl, a periurban area of South Africa. We enrolled pregnant women (>18 years) and followed up mother-infant pairs to 1 year of age. We obtained data for risk factors and respiratory symptoms. Children received 13-valent pneumococcal conjugate vaccine according to national immunisation schedules. We established pneumonia surveillance systems and documented episodes of ambulatory pneumonia and pneumonia warranting hospital admission. We calculated incidence rate ratios for pneumonia with mixed-effects Poisson regression.\n\n\nFINDINGS\nBetween May 29, 2012 and May 31, 2014, we enrolled 697 infants who accrued 513 child-years of follow-up. We recorded 141 pneumonia episodes, with an incidence of 0·27 episodes per child-year (95% CI 0·23-0·32). 32 (23%) pneumonia cases were severe pneumonia, with an incidence of 0·06 episodes per child-year (95% CI 0·04-0·08). Two (1%) of 141 pneumonia episodes led to death from pneumonia. Maternal HIV, maternal smoking, male sex, and malnutrition were associated with an increased incidence of pneumonia.\n\n\nINTERPRETATION\nPneumonia incidence was high in the first year of life, despite a strong immunisation programme including 13-valent pneumococcal conjugate vaccine. Incidence was associated with pneumonia risk factors that are amenable to interventions. Prevention of childhood pneumonia through public health interventions to address these risk factors should be strengthened.\n\n\nFUNDING\nBill & Melinda Gates Foundation, South African Thoracic Society, Federation of Infectious Diseases Societies of South Africa, and University of Cape Town."
},
{
"pmid": "27922974",
"title": "Training and Validating a Deep Convolutional Neural Network for Computer-Aided Detection and Classification of Abnormalities on Frontal Chest Radiographs.",
"abstract": "OBJECTIVES\nConvolutional neural networks (CNNs) are a subtype of artificial neural network that have shown strong performance in computer vision tasks including image classification. To date, there has been limited application of CNNs to chest radiographs, the most frequently performed medical imaging study. We hypothesize CNNs can learn to classify frontal chest radiographs according to common findings from a sufficiently large data set.\n\n\nMATERIALS AND METHODS\nOur institution's research ethics board approved a single-center retrospective review of 35,038 adult posterior-anterior chest radiographs and final reports performed between 2005 and 2015 (56% men, average age of 56, patient type: 24% inpatient, 39% outpatient, 37% emergency department) with a waiver for informed consent. The GoogLeNet CNN was trained using 3 graphics processing units to automatically classify radiographs as normal (n = 11,702) or into 1 or more of cardiomegaly (n = 9240), consolidation (n = 6788), pleural effusion (n = 7786), pulmonary edema (n = 1286), or pneumothorax (n = 1299). The network's performance was evaluated using receiver operating curve analysis on a test set of 2443 radiographs with the criterion standard being board-certified radiologist interpretation.\n\n\nRESULTS\nUsing 256 × 256-pixel images as input, the network achieved an overall sensitivity and specificity of 91% with an area under the curve of 0.964 for classifying a study as normal (n = 1203). For the abnormal categories, the sensitivity, specificity, and area under the curve, respectively, were 91%, 91%, and 0.962 for pleural effusion (n = 782), 82%, 82%, and 0.868 for pulmonary edema (n = 356), 74%, 75%, and 0.850 for consolidation (n = 214), 81%, 80%, and 0.875 for cardiomegaly (n = 482), and 78%, 78%, and 0.861 for pneumothorax (n = 167).\n\n\nCONCLUSIONS\nCurrent deep CNN architectures can be trained with modest-sized medical data sets to achieve clinically useful performance at detecting and excluding common pathology on chest radiographs."
},
{
"pmid": "32457819",
"title": "Visualization and Interpretation of Convolutional Neural Network Predictions in Detecting Pneumonia in Pediatric Chest Radiographs.",
"abstract": "Pneumonia affects 7% of the global population, resulting in 2 million pediatric deaths every year. Chest X-ray (CXR) analysis is routinely performed to diagnose the disease. Computer-aided diagnostic (CADx) tools aim to supplement decision-making. These tools process the handcrafted and/or convolutional neural network (CNN) extracted image features for visual recognition. However, CNNs are perceived as black boxes since their performance lack explanations. This is a serious bottleneck in applications involving medical screening/diagnosis since poorly interpreted model behavior could adversely affect the clinical decision. In this study, we evaluate, visualize, and explain the performance of customized CNNs to detect pneumonia and further differentiate between bacterial and viral types in pediatric CXRs. We present a novel visualization strategy to localize the region of interest (ROI) that is considered relevant for model predictions across all the inputs that belong to an expected class. We statistically validate the models' performance toward the underlying tasks. We observe that the customized VGG16 model achieves 96.2% and 93.6% accuracy in detecting the disease and distinguishing between bacterial and viral pneumonia respectively. The model outperforms the state-of-the-art in all performance metrics and demonstrates reduced bias and improved generalization."
},
{
"pmid": "30517102",
"title": "Automatic classification of pediatric pneumonia based on lung ultrasound pattern recognition.",
"abstract": "Pneumonia is one of the major causes of child mortality, yet with a timely diagnosis, it is usually curable with antibiotic therapy. In many developing regions, diagnosing pneumonia remains a challenge, due to shortages of medical resources. Lung ultrasound has proved to be a useful tool to detect lung consolidation as evidence of pneumonia. However, diagnosis of pneumonia by ultrasound has limitations: it is operator-dependent, and it needs to be carried out and interpreted by trained personnel. Pattern recognition and image analysis is a potential tool to enable automatic diagnosis of pneumonia consolidation without requiring an expert analyst. This paper presents a method for automatic classification of pneumonia using ultrasound imaging of the lungs and pattern recognition. The approach presented here is based on the analysis of brightness distribution patterns present in rectangular segments (here called \"characteristic vectors\") from the ultrasound digital images. In a first step we identified and eliminated the skin and subcutaneous tissue (fat and muscle) in lung ultrasound frames, and the \"characteristic vectors\"were analyzed using standard neural networks using artificial intelligence methods. We analyzed 60 lung ultrasound frames corresponding to 21 children under age 5 years (15 children with confirmed pneumonia by clinical examination and X-rays, and 6 children with no pulmonary disease) from a hospital based population in Lima, Peru. Lung ultrasound images were obtained using an Ultrasonix ultrasound device. A total of 1450 positive (pneumonia) and 1605 negative (normal lung) vectors were analyzed with standard neural networks, and used to create an algorithm to differentiate lung infiltrates from healthy lung. A neural network was trained using the algorithm and it was able to correctly identify pneumonia infiltrates, with 90.9% sensitivity and 100% specificity. This approach may be used to develop operator-independent computer algorithms for pneumonia diagnosis using ultrasound in young children."
},
{
"pmid": "31262537",
"title": "A transfer learning method with deep residual network for pediatric pneumonia diagnosis.",
"abstract": "BACKGROUND AND OBJECTIVE\nComputer aided diagnosis systems based on deep learning and medical imaging is increasingly becoming research hotspots. At the moment, the classical convolutional neural network generates classification results by hierarchically abstracting the original image. These abstract features are less sensitive to the position and orientation of the object, and this lack of spatial information limits the further improvement of image classification accuracy. Therefore, how to develop a suitable neural network framework and training strategy in practical clinical applications to avoid this problem is a topic that researchers need to continue to explore.\n\n\nMETHODS\nWe propose a deep learning framework that combines residual thought and dilated convolution to diagnose and detect childhood pneumonia. Specifically, based on an understanding of the nature of the child pneumonia image classification task, the proposed method uses the residual structure to overcome the over-fitting and the degradation problems of the depth model, and utilizes dilated convolution to overcome the problem of loss of feature space information caused by the increment in depth of the model. Furthermore, in order to overcome the problem of difficulty in training model due to insufficient data and the negative impact of the introduction of structured noise on the performance of the model, we use the model parameters learned on large-scale datasets in the same field to initialize our model through transfer learning.\n\n\nRESULTS\nOur proposed method has been evaluated for extracting texture features associated with pneumonia and for accurately identifying the performance of areas of the image that best indicate pneumonia. The experimental results of the test dataset show that the recall rate of the method on children pneumonia classification task is 96.7%, and the f1-score is 92.7%. Compared with the prior art methods, this approach can effectively solve the problem of low image resolution and partial occlusion of the inflammatory area in children chest X-ray images.\n\n\nCONCLUSIONS\nThe novel framework focuses on the application of advanced classification that directly performs lesion characterization, and has high reliability in the classification task of children pneumonia."
},
{
"pmid": "32524445",
"title": "Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks.",
"abstract": "In this study, a dataset of X-ray images from patients with common bacterial pneumonia, confirmed Covid-19 disease, and normal incidents, was utilized for the automatic detection of the Coronavirus disease. The aim of the study is to evaluate the performance of state-of-the-art convolutional neural network architectures proposed over the recent years for medical image classification. Specifically, the procedure called Transfer Learning was adopted. With transfer learning, the detection of various abnormalities in small medical image datasets is an achievable target, often yielding remarkable results. The datasets utilized in this experiment are two. Firstly, a collection of 1427 X-ray images including 224 images with confirmed Covid-19 disease, 700 images with confirmed common bacterial pneumonia, and 504 images of normal conditions. Secondly, a dataset including 224 images with confirmed Covid-19 disease, 714 images with confirmed bacterial and viral pneumonia, and 504 images of normal conditions. The data was collected from the available X-ray images on public medical repositories. The results suggest that Deep Learning with X-ray imaging may extract significant biomarkers related to the Covid-19 disease, while the best accuracy, sensitivity, and specificity obtained is 96.78%, 98.66%, and 96.46% respectively. Since by now, all diagnostic tests show failure rates such as to raise concerns, the probability of incorporating X-rays into the diagnosis of the disease could be assessed by the medical community, based on the findings, while more research to evaluate the X-ray approach from different aspects may be conducted."
},
{
"pmid": "32837749",
"title": "A Deep Learning System to Screen Novel Coronavirus Disease 2019 Pneumonia.",
"abstract": "The real-time reverse transcription-polymerase chain reaction (RT-PCR) detection of viral RNA from sputum or nasopharyngeal swab had a relatively low positive rate in the early stage of coronavirus disease 2019 (COVID-19). Meanwhile, the manifestations of COVID-19 as seen through computed tomography (CT) imaging show individual characteristics that differ from those of other types of viral pneumonia such as influenza-A viral pneumonia (IAVP). This study aimed to establish an early screening model to distinguish COVID-19 from IAVP and healthy cases through pulmonary CT images using deep learning techniques. A total of 618 CT samples were collected: 219 samples from 110 patients with COVID-19 (mean age 50 years; 63 (57.3%) male patients); 224 samples from 224 patients with IAVP (mean age 61 years; 156 (69.6%) male patients); and 175 samples from 175 healthy cases (mean age 39 years; 97 (55.4%) male patients). All CT samples were contributed from three COVID-19-designated hospitals in Zhejiang Province, China. First, the candidate infection regions were segmented out from the pulmonary CT image set using a 3D deep learning model. These separated images were then categorized into the COVID-19, IAVP, and irrelevant to infection (ITI) groups, together with the corresponding confidence scores, using a location-attention classification model. Finally, the infection type and overall confidence score for each CT case were calculated using the Noisy-OR Bayesian function. The experimental result of the benchmark dataset showed that the overall accuracy rate was 86.7% in terms of all the CT cases taken together. The deep learning models established in this study were effective for the early screening of COVID-19 patients and were demonstrated to be a promising supplementary diagnostic method for frontline clinical doctors."
},
{
"pmid": "33163973",
"title": "Ensemble of CheXNet and VGG-19 Feature Extractor with Random Forest Classifier for Pediatric Pneumonia Detection.",
"abstract": "Pneumonia, an acute respiratory infection, causes serious breathing hindrance by damaging lung/s. Recovery of pneumonia patients depends on the early diagnosis of the disease and proper treatment. This paper proposes an ensemble method-based pneumonia diagnosis from Chest X-ray images. The deep Convolutional Neural Networks (CNNs)-CheXNet and VGG-19 are trained and used to extract features from given X-ray images. These features are then ensembled for classification. To overcome data irregularity problem, Random Under Sampler (RUS), Random Over Sampler (ROS) and Synthetic Minority Oversampling Technique (SMOTE) are applied on the ensembled feature vector. The ensembled feature vector is then classified using several Machine Learning (ML) classification techniques (Random Forest, Adaptive Boosting, K-Nearest Neighbors). Among these methods, Random Forest got better performance metrics than others on the available standard dataset. Comparison with existing methods shows that the proposed method attains improved classification accuracy, AUC values and outperforms all other models providing 98.93% accurate prediction. The model also exhibits potential generalization capacity when tested on different dataset. Outcomes of this study can be great to use for pneumonia diagnosis from chest X-ray images."
},
{
"pmid": "32397844",
"title": "Using X-ray images and deep learning for automated detection of coronavirus disease.",
"abstract": "Coronavirus is still the leading cause of death worldwide. There are a set number of COVID-19 test units accessible in emergency clinics because of the expanding cases daily. Therefore, it is important to implement an automatic detection and classification system as a speedy elective finding choice to forestall COVID-19 spreading among individuals. Medical images analysis is one of the most promising research areas, it provides facilities for diagnosis and making decisions of a number of diseases such as Coronavirus. This paper conducts a comparative study of the use of the recent deep learning models (VGG16, VGG19, DenseNet201, Inception_ResNet_V2, Inception_V3, Resnet50, and MobileNet_V2) to deal with detection and classification of coronavirus pneumonia. The experiments were conducted using chest X-ray & CT dataset of 6087 images (2780 images of bacterial pneumonia, 1493 of coronavirus, 231 of Covid19, and 1583 normal) and confusion matrices are used to evaluate model performances. Results found out that the use of inception_Resnet_V2 and Densnet201 provide better results compared to other models used in this work (92.18% accuracy for Inception-ResNetV2 and 88.09% accuracy for Densnet201).Communicated by Ramaswamy H. Sarma."
},
{
"pmid": "33799220",
"title": "Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images.",
"abstract": "Computer-aided diagnosis for the reliable and fast detection of coronavirus disease (COVID-19) has become a necessity to prevent the spread of the virus during the pandemic to ease the burden on the healthcare system. Chest X-ray (CXR) imaging has several advantages over other imaging and detection techniques. Numerous works have been reported on COVID-19 detection from a smaller set of original X-ray images. However, the effect of image enhancement and lung segmentation of a large dataset in COVID-19 detection was not reported in the literature. We have compiled a large X-ray dataset (COVQU) consisting of 18,479 CXR images with 8851 normal, 6012 non-COVID lung infections, and 3616 COVID-19 CXR images and their corresponding ground truth lung masks. To the best of our knowledge, this is the largest public COVID positive database and the lung masks. Five different image enhancement techniques: histogram equalization (HE), contrast limited adaptive histogram equalization (CLAHE), image complement, gamma correction, and balance contrast enhancement technique (BCET) were used to investigate the effect of image enhancement techniques on COVID-19 detection. A novel U-Net model was proposed and compared with the standard U-Net model for lung segmentation. Six different pre-trained Convolutional Neural Networks (CNNs) (ResNet18, ResNet50, ResNet101, InceptionV3, DenseNet201, and ChexNet) and a shallow CNN model were investigated on the plain and segmented lung CXR images. The novel U-Net model showed an accuracy, Intersection over Union (IoU), and Dice coefficient of 98.63%, 94.3%, and 96.94%, respectively for lung segmentation. The gamma correction-based enhancement technique outperforms other techniques in detecting COVID-19 from the plain and the segmented lung CXR images. Classification performance from plain CXR images is slightly better than the segmented lung CXR images; however, the reliability of network performance is significantly improved for the segmented lung images, which was observed using the visualization technique. The accuracy, precision, sensitivity, F1-score, and specificity were 95.11%, 94.55%, 94.56%, 94.53%, and 95.59% respectively for the segmented lung images. The proposed approach with very reliable and comparable performance will boost the fast and robust COVID-19 detection using chest X-ray images."
},
{
"pmid": "33872157",
"title": "Convolutional Sparse Support Estimator-Based COVID-19 Recognition From X-Ray Images.",
"abstract": "Coronavirus disease (COVID-19) has been the main agenda of the whole world ever since it came into sight. X-ray imaging is a common and easily accessible tool that has great potential for COVID-19 diagnosis and prognosis. Deep learning techniques can generally provide state-of-the-art performance in many classification tasks when trained properly over large data sets. However, data scarcity can be a crucial obstacle when using them for COVID-19 detection. Alternative approaches such as representation-based classification [collaborative or sparse representation (SR)] might provide satisfactory performance with limited size data sets, but they generally fall short in performance or speed compared to the neural network (NN)-based methods. To address this deficiency, convolution support estimation network (CSEN) has recently been proposed as a bridge between representation-based and NN approaches by providing a noniterative real-time mapping from query sample to ideally SR coefficient support, which is critical information for class decision in representation-based techniques. The main premises of this study can be summarized as follows: 1) A benchmark X-ray data set, namely QaTa-Cov19, containing over 6200 X-ray images is created. The data set covering 462 X-ray images from COVID-19 patients along with three other classes; bacterial pneumonia, viral pneumonia, and normal. 2) The proposed CSEN-based classification scheme equipped with feature extraction from state-of-the-art deep NN solution for X-ray images, CheXNet, achieves over 98% sensitivity and over 95% specificity for COVID-19 recognition directly from raw X-ray images when the average performance of 5-fold cross validation over QaTa-Cov19 data set is calculated. 3) Having such an elegant COVID-19 assistive diagnosis performance, this study further provides evidence that COVID-19 induces a unique pattern in X-rays that can be discriminated with high accuracy."
},
{
"pmid": "32730210",
"title": "Prior-Attention Residual Learning for More Discriminative COVID-19 Screening in CT Images.",
"abstract": "We propose a conceptually simple framework for fast COVID-19 screening in 3D chest CT images. The framework can efficiently predict whether or not a CT scan contains pneumonia while simultaneously identifying pneumonia types between COVID-19 and Interstitial Lung Disease (ILD) caused by other viruses. In the proposed method, two 3D-ResNets are coupled together into a single model for the two above-mentioned tasks via a novel prior-attention strategy. We extend residual learning with the proposed prior-attention mechanism and design a new so-called prior-attention residual learning (PARL) block. The model can be easily built by stacking the PARL blocks and trained end-to-end using multi-task losses. More specifically, one 3D-ResNet branch is trained as a binary classifier using lung images with and without pneumonia so that it can highlight the lesion areas within the lungs. Simultaneously, inside the PARL blocks, prior-attention maps are generated from this branch and used to guide another branch to learn more discriminative representations for the pneumonia-type classification. Experimental results demonstrate that the proposed framework can significantly improve the performance of COVID-19 screening. Compared to other methods, it achieves a state-of-the-art result. Moreover, the proposed method can be easily extended to other similar clinical applications such as computer-aided detection and diagnosis of pulmonary nodules in CT images, glaucoma lesions in Retina fundus images, etc."
},
{
"pmid": "33560995",
"title": "Multiscale Attention Guided Network for COVID-19 Diagnosis Using Chest X-Ray Images.",
"abstract": "Coronavirus disease 2019 (COVID-19) is one of the most destructive pandemic after millennium, forcing the world to tackle a health crisis. Automated lung infections classification using chest X-ray (CXR) images could strengthen diagnostic capability when handling COVID-19. However, classifying COVID-19 from pneumonia cases using CXR image is a difficult task because of shared spatial characteristics, high feature variation and contrast diversity between cases. Moreover, massive data collection is impractical for a newly emerged disease, which limited the performance of data thirsty deep learning models. To address these challenges, Multiscale Attention Guided deep network with Soft Distance regularization (MAG-SD) is proposed to automatically classify COVID-19 from pneumonia CXR images. In MAG-SD, MA-Net is used to produce prediction vector and attention from multiscale feature maps. To improve the robustness of trained model and relieve the shortage of training data, attention guided augmentations along with a soft distance regularization are posed, which aims at generating meaningful augmentations and reduce noise. Our multiscale attention model achieves better classification performance on our pneumonia CXR image dataset. Plentiful experiments are proposed for MAG-SD which demonstrates its unique advantage in pneumonia classification over cutting-edge models. The code is available at https://github.com/JasonLeeGHub/MAG-SD."
},
{
"pmid": "33739926",
"title": "Lung Lesion Localization of COVID-19 From Chest CT Image: A Novel Weakly Supervised Learning Method.",
"abstract": "Chest computed tomography (CT) image data is necessary for early diagnosis, treatment, and prognosis of Coronavirus Disease 2019 (COVID-19). Artificial intelligence has been tried to help clinicians in improving the diagnostic accuracy and working efficiency of CT. Whereas, existing supervised approaches on CT image of COVID-19 pneumonia require voxel-based annotations for training, which take a lot of time and effort. This paper proposed a weakly-supervised method for COVID-19 lesion localization based on generative adversarial network (GAN) with image-level labels only. We first introduced a GAN-based framework to generate normal-looking CT slices from CT slices with COVID-19 lesions. We then developed a novel feature match strategy to improve the reality of generated images by guiding the generator to capture the complex texture of chest CT images. Finally, the localization map of lesions can be easily obtained by subtracting the output image from its corresponding input image. By adding a classifier branch to the GAN-based framework to classify localization maps, we can further develop a diagnosis system with improved classification accuracy. Three CT datasets from hospitals of Sao Paulo, Italian Society of Medical and Interventional Radiology, and China Medical University about COVID-19 were collected in this article for evaluation. Our weakly supervised learning method obtained AUC of 0.883, dice coefficient of 0.575, accuracy of 0.884, sensitivity of 0.647, specificity of 0.929, and F1-score of 0.640, which exceeded other widely used weakly supervised object localization methods by a significant margin. We also compared the proposed method with fully supervised learning methods in COVID-19 lesion segmentation task, the proposed weakly supervised method still leads to a competitive result with dice coefficient of 0.575. Furthermore, we also analyzed the association between illness severity and visual score, we found that the common severity cohort had the largest sample size as well as the highest visual score which suggests our method can help rapid diagnosis of COVID-19 patients, especially in massive common severity cohort. In conclusion, we proposed this novel method can serve as an accurate and efficient tool to alleviate the bottleneck of expert annotation cost and advance the progress of computer-aided COVID-19 diagnosis."
},
{
"pmid": "33983881",
"title": "Joint Learning of 3D Lesion Segmentation and Classification for Explainable COVID-19 Diagnosis.",
"abstract": "Given the outbreak of COVID-19 pandemic and the shortage of medical resource, extensive deep learning models have been proposed for automatic COVID-19 diagnosis, based on 3D computed tomography (CT) scans. However, the existing models independently process the 3D lesion segmentation and disease classification, ignoring the inherent correlation between these two tasks. In this paper, we propose a joint deep learning model of 3D lesion segmentation and classification for diagnosing COVID-19, called DeepSC-COVID, as the first attempt in this direction. Specifically, we establish a large-scale CT database containing 1,805 3D CT scans with fine-grained lesion annotations, and reveal 4 findings about lesion difference between COVID-19 and community acquired pneumonia (CAP). Inspired by our findings, DeepSC-COVID is designed with 3 subnets: a cross-task feature subnet for feature extraction, a 3D lesion subnet for lesion segmentation, and a classification subnet for disease diagnosis. Besides, the task-aware loss is proposed for learning the task interaction across the 3D lesion and classification subnets. Different from all existing models for COVID-19 diagnosis, our model is interpretable with fine-grained 3D lesion distribution. Finally, extensive experimental results show that the joint learning framework in our model significantly improves the performance of 3D lesion segmentation and disease classification in both efficiency and efficacy."
}
] |
Foods | null | PMC8870927 | 10.3390/foods11040602 | A Machine Learning Method for the Quantitative Detection of Adulterated Meat Using a MOS-Based E-Nose | Meat adulteration is a global problem which undermines market fairness and harms people with allergies or certain religious beliefs. In this study, a novel framework in which a one-dimensional convolutional neural network (1DCNN) serves as a backbone and a random forest regressor (RFR) serves as a regressor, named 1DCNN-RFR, is proposed for the quantitative detection of beef adulterated with pork using electronic nose (E-nose) data. The 1DCNN backbone extracted a sufficient number of features from a multichannel input matrix converted from the raw E-nose data. The RFR improved the regression performance due to its strong prediction ability. The effectiveness of the 1DCNN-RFR framework was verified by comparing it with four other models (support vector regression model (SVR), RFR, backpropagation neural network (BPNN), and 1DCNN). The proposed 1DCNN-RFR framework performed best in the quantitative detection of beef adulterated with pork. This study indicated that the proposed 1DCNN-RFR framework could be used as an effective tool for the quantitative detection of meat adulteration. | 2.4. Related Works2.4.1. Principal Component Analysis (PCA)Principal component analysis (PCA) is a commonly utilized method for feature extraction [25]. PCA is mathematically defined as an orthogonal linear transformation that transforms data into a new coordinate system such that the greatest variance by some projection of the data comes to lie along the first coordinate (the first principal component), the second greatest variance along the second coordinate, and so on [26]. Generally, the first few principal components must contribute at least 85% of the variance, or else the PCA method would be considered unsuitable because too much of the original information would be lost [27]. The first few principal components to make up a cumulative contribution exceeding 95% contain nearly all the information of the original data [24]. PCA is arguably the most popular multivariate statistical technique and has been applied in nearly all scientific disciplines [28].2.4.2. Convolutional Neural Network (CNN)The CNN is a supervised feed-forward deep learning network designed to process data that come in the form of multiple arrays [29]. Basically, CNN are composed of three types of layers: the convolutional, pooling, and fully-connected layers [30,31]. The convolutional layer is composed of several convolutional kernels which are used to compute different feature maps and the pooling layer merges semantically similar features into one and prevents overfitting. After several convolutional and pooling layers, there will be one or more fully connected layers which take all neurons from the previous layer and connect them to every single neuron of the current layer to generate global semantic information. CNNs have provided excellent performance solutions to various problems in image classification, object detection, games and decisions, and natural language processing [32].2.4.3. Random Forest (RF)Random forest (RF) is a combination of tree predictors such that each tree depends on the values from a random vector sampled independently and where all the trees in the forest have the same distribution [33]. The aim of the RF is to create a large number of uncorrelated decision tree models to produce more accurate predictions [34]. 
According to the strong law of large numbers, increasing the number of decision trees leads to better generalization and helps prevent overfitting [35]. To construct an RF, N bootstrap samples are first drawn from the original training set (with replacement). Then, for each bootstrap sample, an unpruned classification or regression tree is grown with the following modification: at each node, a random subset of m predictors (m < M, where M is the total number of predictors) is drawn and the best split is chosen from among those m variables only. This splitting step is repeated until the node can no longer be split, and no pruning is applied. Finally, the resulting decision trees are combined into a random forest that is used to classify or regress new data [36,37]. Compared to other machine learning methods, RF has several advantages, including low complexity, fast computation, and reduced overfitting [38].
2.4.4. Evaluation Metrics
In this study, three evaluation metrics, namely the coefficient of determination (R2), the root mean square error (RMSE), and the mean absolute error (MAE), were used to evaluate the regression performance of the four models and the proposed framework. The R2 is usually presented as an estimate of the percentage of variance in the response variable explained by its (linear) relationship with the explanatory variables [39]. The RMSE represents the standard deviation of the differences between the predicted values and the observed values of the samples [19]. The MAE is defined as the average absolute difference between the predicted values and the observed values of the samples. The evaluation metrics are defined in Equations (1)–(3).
(1) $R^{2} = 1 - \dfrac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^{2}}$
(2) $\mathrm{RMSE} = \sqrt{\dfrac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}}{n}}$
(3) $\mathrm{MAE} = \dfrac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$
where n is the number of the samples in the training set or the test set; yi is the actual value of the ith sample; y^i is the predicted value of the ith sample; and y¯ is the mean of the actual value. | [
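The feature-extraction, regression, and evaluation steps summarized in Sections 2.4.1–2.4.4 can be illustrated with a short, self-contained sketch. The snippet below is only an illustration under stated assumptions: the feature matrix, target values, and all parameter settings are synthetic stand-ins rather than the paper's E-nose data or its 1DCNN-RFR implementation. It shows the 85%/95% cumulative-variance check used to judge PCA suitability, the bootstrap-plus-random-feature-subset construction of a random forest regressor, and the computation of R2, RMSE, and MAE as defined in Equations (1)–(3).

```python
# Minimal sketch (not the paper's 1DCNN-RFR pipeline): synthetic stand-in
# features illustrate the PCA variance criterion (Section 2.4.1), the
# random-forest construction of Section 2.4.3, and Equations (1)-(3).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))                               # hypothetical E-nose feature matrix
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=200)   # hypothetical adulteration ratio

# PCA check: how many components are needed to reach the 85% / 95%
# cumulative-variance thresholds mentioned in Section 2.4.1.
cum_var = np.cumsum(PCA().fit(X).explained_variance_ratio_)
print("components for 85%:", np.argmax(cum_var >= 0.85) + 1,
      "| components for 95%:", np.argmax(cum_var >= 0.95) + 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Random forest: bootstrap resampling plus a random subset of predictors at
# each split (m < M via max_features), as described in Section 2.4.3.
rfr = RandomForestRegressor(n_estimators=500, max_features="sqrt",
                            bootstrap=True, random_state=0)
rfr.fit(X_tr, y_tr)
y_hat = rfr.predict(X_te)

# Evaluation metrics of Equations (1)-(3) on the test set.
ss_res = np.sum((y_te - y_hat) ** 2)
ss_tot = np.sum((y_te - y_te.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                      # Eq. (1)
rmse = np.sqrt(np.mean((y_te - y_hat) ** 2))    # Eq. (2)
mae = np.mean(np.abs(y_te - y_hat))             # Eq. (3)
print(f"R2={r2:.3f}  RMSE={rmse:.3f}  MAE={mae:.3f}")
```

Choosing max_features smaller than the total number of predictors is what keeps the individual trees weakly correlated, which is the property that the strong-law-of-large-numbers argument above relies on.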
"31453165",
"32029865",
"30651163",
"32932080",
"29054141",
"33670564",
"21420795",
"34818744",
"31390746",
"33917735",
"30583545",
"26017442",
"33398067",
"30306349"
] | [
{
"pmid": "31453165",
"title": "The employment of Fourier transform infrared spectroscopy coupled with chemometrics techniques for traceability and authentication of meat and meat products.",
"abstract": "Meat-based food such as meatball and sausages are important sources of protein needed for the human body. Due to different prices, some unethical producers try to adulterate high-price meat such as beef with lower priced meat like pork and rat meat to gain economical profits, therefore, reliable and fast analytical techniques should be developed, validated, and applied for meat traceability and authenticity. Some instrumental techniques have been applied for the detection of meat adulteration, mainly relied on DNA and protein using polymerase chain reaction and chromatographic methods, respectively. But, this method is time-consuming, needs a sophisticated instrument, involves complex sample preparation which make the method is not suitable for routine analysis. As a consequence, a simpler method based on spectroscopic principles should be continuously developed. Food samples are sometimes complex which resulted in complex chemical responses. Fortunately, a statistical method called with chemometrics could solve the problems related to complex chemical data. This mini-review highlights the application of Fourier-transform infrared spectroscopy coupled with numerous chemometrics techniques for authenticity and traceability of meat and meat-based products."
},
{
"pmid": "32029865",
"title": "Development of a real-time PCR assay for the identification and quantification of bovine ingredient in processed meat products.",
"abstract": "In order to find fraudulent species substitution in meat products, a highly sensitive and rapid assay for meat species identification and quantification is urgently needed. In this study, species-specific primers and probes were designed from the mitochondrial cytb (cytochrome b) fragment for identification and quantification of bovine ingredient in commercial meat products. Bovine samples and non-bovine ones were used to identify the specificity, sensitivity, and applicability of established assay. Results showed that the primers and probes were highly specific for bovine ingredient in meat products. The absolute detection limit of the real-time PCR method was 0.025 ng DNA, and the relative detection limit was 0.002% (w/w) of positive samples. The quantitative real-time PCR assay was validated on simulated meat samples and high in the precision and accuracy. In order to demonstrate the applicability and reliability of the proposed assay in practical products, the 22 commercial meat products including salted, jerkies, and meatball were used. The results indicated the established method has a good stability in detection of bovine ingredient in real food. The established method in this study showed specificity and sensitivity in identification and quantification of beef meat in processed meat products."
},
{
"pmid": "30651163",
"title": "Specific Identification of the Adulterated Components in Beef or Mutton Meats Using Multiplex PCR.",
"abstract": "Background: The fraudulent substitution of cheap and low-quality meat for expensive and good-quality meats to gain profit is a common practice in industries worldwide. Adulteration of fox, raccoon, or mink in commercial beef and mutton meat in the supermarket has become a serious problem. Objective: To ensure the meat quality and safety, we have developed a multiplex PCR method to identify the fox, mink, and raccoon components adulterated in beef or mutton with very low detection limits. Methods: PCR primers were designed and tested by examining the size of PCR product, the nuclease digestion products, and DNA sequencing. Results: After primer interference tests, we have established a double PCR method that can clearly identify fox, mink, or raccoon components in beef meat and mutton meat at the 1% (w/w) level. Triplex PCR and quadruple PCR have been also developed, which are able to identify any two types of components or three mixed components in beef meat unambiguously. Conclusions: We have developed multiplex PCR systems. The duplex PCR systems can identify one component (fox, raccoon, or mink) adulterated in beef meat or mutton meat without question, and triplex PCR and quadruple PCR can discriminate two components and three components adulterated in beef meat. Highlights: These methods are convenient, low-cost, highly specific and reliable, and of a great value for meat quality control and food safety quarantine."
},
{
"pmid": "32932080",
"title": "A novel duplex SYBR Green real-time PCR with melting curve analysis method for beef adulteration detection.",
"abstract": "An efficient and reliable duplex SYBR Green real-time quantitative PCR (qPCR) method for beef products adulteration detection was developed based on bovine specific and vertebrate universal primers. By analyzing the numbers, positions (Tm value) of melting curve peaks of the duplex PCR products, we simultaneously identified bovine and preliminary screened non-bovine in samples, and also semi-quantified the bovine percentage according to the area ratios of peaks. All of these were necessary for adulteration determination. The specific and universal primers were designed based on mitochondrial genes ND4 and 16S rRNA respectively, their amplicons Tm values were 72.6 ± 0.5 °C and 79-81 °C. There might be some other peaks at 74-78 °C and above 81 °C if non-bovine components existed. Thelimit of detectionwas 1 pgforbovineDNA, and1 - 30 pg fornon-bovineDNAbasedon differentspecies."
},
{
"pmid": "29054141",
"title": "Quantitative Detection of Horse Contamination in Cooked Meat Products by ELISA.",
"abstract": "Concerns about the contamination of meat products with horse meat and new regulations for the declaration of meat adulterants have highlighted the need for a rapid test to detect horse meat adulteration. To address this need, Microbiologique, Inc., has developed a sandwich ELISA that can quantify the presence of horse meat down to 0.1% (w/w) in cooked pork, beef, chicken, goat, and lamb meats. This horse meat authentication ELISA has an analytical sensitivity of 0.000030 and 0.000046% (w/v) for cooked and autoclaved horse meat, respectively, and an analytical range of quantitation of 0.05-0.8% (w/v) in the absence of other meats. The assay is rapid and can be completed in 1 h and 10 min. Moreover, the assay is specific for cooked horse meat and does not demonstrate any cross-reactivity with xenogeneic cooked meat sources."
},
{
"pmid": "33670564",
"title": "Non-Destructive Spectroscopic and Imaging Techniques for the Detection of Processed Meat Fraud.",
"abstract": "In recent years, meat authenticity awareness has increased and, in the fight to combat meat fraud, various analytical methods have been proposed and subsequently evaluated. Although these methods have shown the potential to detect low levels of adulteration with high reliability, they are destructive, time-consuming, labour-intensive, and expensive. Therefore, rendering them inappropriate for rapid analysis and early detection, particularly under the fast-paced production and processing environment of the meat industry. However, modern analytical methods could improve this process as the food industry moves towards methods that are non-destructive, non-invasive, simple, and on-line. This review investigates the feasibility of different non-destructive techniques used for processed meat authentication which could provide the meat industry with reliable and accurate real-time monitoring, in the near future."
},
{
"pmid": "21420795",
"title": "Rapid identification of pork for halal authentication using the electronic nose and gas chromatography mass spectrometer with headspace analyzer.",
"abstract": "The volatile compounds of pork, other meats and meat products were studied using an electronic nose and gas chromatography mass spectrometer with headspace analyzer (GCMS-HS) for halal verification. The zNose™ was successfully employed for identification and differentiation of pork and pork sausages from beef, mutton and chicken meats and sausages which were achieved using a visual odor pattern called VaporPrint™, derived from the frequency of the surface acoustic wave (SAW) detector of the electronic nose. GCMS-HS was employed to separate and analyze the headspace gasses from samples into peaks corresponding to individual compounds for the purpose of identification. Principal component analysis (PCA) was applied for data interpretation. Analysis by PCA was able to cluster and discriminate pork from other types of meats and sausages. It was shown that PCA could provide a good separation of the samples with 67% of the total variance accounted by PC1."
},
{
"pmid": "34818744",
"title": "Identification and quantification of fox meat in meat products by liquid chromatography-tandem mass spectrometry.",
"abstract": "Over the years, food adulteration has become an important global problem, threatening public health safety and the healthy development of food industry. This study established a liquid chromatography-tandem mass (LC-MS/MS) method for accurate identification and quantitative analysis of fox meat products. High-resolution mass was used for data collection, and Proteome Discoverer was used for data analysis to screen fox-specific peptides. Multivariate statistical analysis was conducted using the data obtained from the label-free analysis of different contents of simulated samples. Samples with different contents were distinguished without interfering with each other, suggesting the feasibility of quantitative analysis of fox meat content. The linear correlation coefficient and recovery rate were calculated to determine the fox peptides that can be used for accurate quantification. The established LC-MS/MS method can be used for the accurate identification and quantification of actual samples. In addition, this method can provide technical support for law enforcement departments."
},
{
"pmid": "31390746",
"title": "Rapid Identification of Rainbow Trout Adulteration in Atlantic Salmon by Raman Spectroscopy Combined with Machine Learning.",
"abstract": "This study intends to evaluate the utilization potential of the combined Raman spectroscopy and machine learning approach to quickly identify the rainbow trout adulteration in Atlantic salmon. The adulterated samples contained various concentrations (0-100% w/w at 10% intervals) of rainbow trout mixed into Atlantic salmon. Spectral preprocessing methods, such as first derivative, second derivative, multiple scattering correction (MSC), and standard normal variate, were employed. Unsupervised algorithms, such as recursive feature elimination, genetic algorithm (GA), and simulated annealing, and supervised K-means clustering (KM) algorithm were used for selecting important spectral bands to reduce the spectral complexity and improve the model stability. Finally, the performances of various machine learning models, including linear regression, nonlinear regression, regression tree, and rule-based models, were verified and compared. The results denoted that the developed GA-KM-Cubist machine learning model achieved satisfactory results based on MSC preprocessing. The determination coefficient (R2) and root mean square error of prediction sets (RMSEP) in the test sets were 0.87 and 10.93, respectively. These results indicate that Raman spectroscopy can be used as an effective Atlantic salmon adulteration identification method; further, the developed model can be used for quantitatively analyzing the rainbow trout adulteration in Atlantic salmon."
},
{
"pmid": "33917735",
"title": "A Machine Learning Method for the Fine-Grained Classification of Green Tea with Geographical Indication Using a MOS-Based Electronic Nose.",
"abstract": "Chinese green tea is known for its health-functional properties. There are many green tea categories, which have sub-categories with geographical indications (GTSGI). Several high-quality GTSGI planted in specific areas are labeled as famous GTSGI (FGTSGI) and are expensive. However, the subtle differences between the categories complicate the fine-grained classification of the GTSGI. This study proposes a novel framework consisting of a convolutional neural network backbone (CNN backbone) and a support vector machine classifier (SVM classifier), namely, CNN-SVM for the classification of Maofeng green tea categories (six sub-categories) and Maojian green tea categories (six sub-categories) using electronic nose data. A multi-channel input matrix was constructed for the CNN backbone to extract deep features from different sensor signals. An SVM classifier was employed to improve the classification performance due to its high discrimination ability for small sample sizes. The effectiveness of this framework was verified by comparing it with four other machine learning models (SVM, CNN-Shi, CNN-SVM-Shi, and CNN). The proposed framework had the best performance for classifying the GTSGI and identifying the FGTSGI. The high accuracy and strong robustness of the CNN-SVM show its potential for the fine-grained classification of multiple highly similar teas."
},
{
"pmid": "30583545",
"title": "Bionic Electronic Nose Based on MOS Sensors Array and Machine Learning Algorithms Used for Wine Properties Detection.",
"abstract": "In this study, a portable electronic nose (E-nose) prototype is developed using metal oxide semiconductor (MOS) sensors to detect odors of different wines. Odor detection facilitates the distinction of wines with different properties, including areas of production, vintage years, fermentation processes, and varietals. Four popular machine learning algorithms-extreme gradient boosting (XGBoost), random forest (RF), support vector machine (SVM), and backpropagation neural network (BPNN)-were used to build identification models for different classification tasks. Experimental results show that BPNN achieved the best performance, with accuracies of 94% and 92.5% in identifying production areas and varietals, respectively; and SVM achieved the best performance in identifying vintages and fermentation processes, with accuracies of 67.3% and 60.5%, respectively. Results demonstrate the effectiveness of the developed E-nose, which could be used to distinguish different wines based on their properties following selection of an optimal algorithm."
},
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "33398067",
"title": "Fast automated detection of COVID-19 from medical images using convolutional neural networks.",
"abstract": "Coronavirus disease 2019 (COVID-19) is a global pandemic posing significant health risks. The diagnostic test sensitivity of COVID-19 is limited due to irregularities in specimen handling. We propose a deep learning framework that identifies COVID-19 from medical images as an auxiliary testing method to improve diagnostic sensitivity. We use pseudo-coloring methods and a platform for annotating X-ray and computed tomography images to train the convolutional neural network, which achieves a performance similar to that of experts and provides high scores for multiple statistical indices (F1 scores > 96.72% (0.9307, 0.9890) and specificity >99.33% (0.9792, 1.0000)). Heatmaps are used to visualize the salient features extracted by the neural network. The neural network-based regression provides strong correlations between the lesion areas in the images and five clinical indicators, resulting in high accuracy of the classification framework. The proposed method represents a potential computer-aided diagnosis method for COVID-19 in clinical practice."
},
{
"pmid": "30306349",
"title": "Evaluating parameters for ligand-based modeling with random forest on sparse data sets.",
"abstract": "Ligand-based predictive modeling is widely used to generate predictive models aiding decision making in e.g. drug discovery projects. With growing data sets and requirements on low modeling time comes the necessity to analyze data sets efficiently to support rapid and robust modeling. In this study we analyzed four data sets and studied the efficiency of machine learning methods on sparse data structures, utilizing Morgan fingerprints of different radii and hash sizes, and compared with molecular signatures descriptor of different height. We specifically evaluated the effect these parameters had on modeling time, predictive performance, and memory requirements using two implementations of random forest; Scikit-learn as well as FEST. We also compared with a support vector machine implementation. Our results showed that unhashed fingerprints yield significantly better accuracy than hashed fingerprints ([Formula: see text]), with no pronounced deterioration in modeling time and memory usage. Furthermore, the fast execution and low memory usage of the FEST algorithm suggest that it is a good alternative for large, high dimensional sparse data. Both support vector machines and random forest performed equally well but results indicate that the support vector machine was better at using the extra information from larger values of the Morgan fingerprint's radius."
}
] |
Diagnostics | null | PMC8871002 | 10.3390/diagnostics12020414 | Automatic Left Ventricle Segmentation from Short-Axis Cardiac MRI Images Based on Fully Convolutional Neural Network | Background: Left ventricle (LV) segmentation using a cardiac magnetic resonance imaging (MRI) dataset is critical for evaluating global and regional cardiac functions and diagnosing cardiovascular diseases. LV clinical metrics such as LV volume, LV mass and ejection fraction (EF) are frequently extracted based on the LV segmentation from short-axis MRI images. Manual segmentation to assess such functions is tedious and time-consuming for medical experts to diagnose cardiac pathologies. Therefore, a fully automated LV segmentation technique is required to assist medical experts in working more efficiently. Method: This paper proposes a fully convolutional network (FCN) architecture for automatic LV segmentation from short-axis MRI images. Several experiments were conducted in the training phase to compare the performance of the network and the U-Net model with various hyper-parameters, including optimization algorithms, epochs, learning rate, and mini-batch size. In addition, a class weighting method was introduced to avoid having a high imbalance of pixels in the classes of image’s labels since the number of background pixels was significantly higher than the number of LV and myocardium pixels. Furthermore, effective image conversion with pixel normalization was applied to obtain exact features representing target organs (LV and myocardium). The segmentation models were trained and tested on a public dataset, namely the evaluation of myocardial infarction from the delayed-enhancement cardiac MRI (EMIDEC) dataset. Results: The dice metric, Jaccard index, sensitivity, and specificity were used to evaluate the network’s performance, with values of 0.93, 0.87, 0.98, and 0.94, respectively. Based on the experimental results, the proposed network outperforms the standard U-Net model and is an advanced fully automated method in terms of segmentation performance. Conclusion: This proposed method is applicable in clinical practice for doctors to diagnose cardiac diseases from short-axis MRI images. | 2. Related WorksIn recent years, segmentation and quantification of the LV from cardiac MRI images have received much attention to diagnose cardiovascular disease. Many studies have proposed semi-automatic segmentation methods to delineate the LV borders, such as active contour [17,18], level set [19,20,21], graph cut [22], dynamic programming, and atlas-based models. These traditional segmentation methods necessitate user intervention, which is a time-consuming and tedious task. The difference between semi-automatic and fully automatic segmentation is that the latter is better suited to process large batches of cardiac MRI images.For segmenting the LV and myocardium from CMR images, CNNs in various orders have been proposed. Dangi et al. [23] created a CNN-based multi-task learning (MTL) model for simultaneous LV segmentation and quantification. They used the U-net architecture [24], separating segmentation and regression at the final upsampling layer. This network is capable of learning feature representation while also improving generalization. Moradi et al. [25] developed a deep-learning-based method called MFP-U-net for LV segmentation from echocardiography images, and they designed a network with a feature pyramid that can detect and recognize the LV in MRI. Wu et al. 
[26] proposed an automatic segmentation model for the LV from cardiac MRI. They used a CNN model to locate the LV and the U-net model to segment it. Abdeltawab et al. [10] devised a framework that begins with FCN-based localization of the LV and extraction of the heart section’s ROI. The extracted ROIs are then fed into the FCN2 network, which segments the LV cavity and myocardium. Dong et al. [27] proposed a CNN-based model with two parallel subnetworks to detect endocardium and epicardium contours of the LV, incorporating the MTL concept. The FCN [28] is a CNN expansion with different last layers used for different tasks. Traditional CNN methods, for example, use fully connected layers for image classification to predict objects, whereas an FCN applies a deconvolution (transposed) layer instead of a fully connected layer in semantic segmentation. Several FCN-based models have been used to improve LV segmentation performance [29,30,31]. The network proposed by Cui et al. [32] was an attention U-Net model based on an FCN structure for cardiac short-axis MRI segmentation. U-Net [24] has been commonly applied in medical image segmentation, particularly in the segmentation of cardiac images [25,33,34].Some researchers used a hybrid model that combined deep learning methods with traditional models to achieve an optimal LV segmentation performance from short-axis cardiac MRI images. For example, Ngo et al. [35] used a deep learning model combined with a level set for automatic LV segmentation. Avendi et al. [36] developed a fully automatic segmentation model for the LV using deep learning algorithms and deformable models. Due to the strong correlation between sequential frames during the cardiac cycle, a 3D model with a recurrent neural network (RNN) has been proposed. Long short-term memory (LSTM) is a popular RNN [37] technique for detecting heart motion using spatiotemporal dynamics. Zhang et al. [38] created a multi-level LSTM model for LV segmentation that used low-resolution level features to train one model and high-resolution level features to train another. Additionally, due to the large slice thickness, Baumgartner et al. [39] found that segmentation by 2D CNN performed better than 3D CNN. Furthermore, due to significant morphological differences in LV shape across slices caused by heart movement, RNN models reproduce incorrect features and require high computational costs. Bernard et al. [40] conducted a benchmark study and discovered that FCNs are used in most advanced algorithms for LV segmentation from short-axis MRI images.In recent years, researchers have been paying more attention to the segmentation of LV boundaries (endo- and epicardium) from short-axis MRI images. Table 1 summarizes the most recent studies in LV segmentation from short-axis MRI using deep learning models. Furthermore, the LV segmentation challenges [40,41,42] and benchmark datasets with ground truth contours are provided. Deep learning methods have lately obtained excellent results in the segmentation of medical images. CNN is one of the most widely used methods in medical image analysis [23,43] among these approaches. Medical images are segmented at the pixel level, as opposed to image-level classification [27]. Traditional CNN methods must be improved in order to achieve robust semantic segmentation. Furthermore, according to recent research, image pixel class imbalance can affect CNN performance during classification and segmentation [44]. Buda et al. 
[45] provided a thorough analysis of the CNN class imbalance problem. Data-level methods and classifier methods are two types of solutions to this problem. Oversampling [46] and data augmentation [47] are data-level methods that work with training datasets, whereas classifier-level methods such as cost-sensitive learning [48], hard mining [49], and loss function work with model training options.Pixel imbalance between the target class and the background class has a significant effect on segmentation performance, which requires an effective solution. Hence, various methods have been proposed to deal with this issue; for example, the focal Tversky loss function (FTL) was introduced by Cui et al. [32], Dong et al. [27] applied cross-entropy loss function instead of the dice loss function, and Wang et al. [15] used dynamic pixel-wise (PW) weighting. In addition, the authors normalized the pixel intensity of the input images to improve the learning ability of the models. Cui et al. [32] used mean–variance normalization (MVN) to normalize the pixel intensity on an input image by subtracting the difference from its average value and dividing by its standard deviation, and Wang et al. [15] used min-max normalization. Based on the above literature, in this study we created a 2D FCN technique with fewer parameters for accurately segmenting the LV and myocardium from short-axis MRI images. After using appropriate normalization and conversion techniques, the input images were used to extract pixels. The 2D PNG images have some advantages compared with NIfTI images, such as flexible image visualization, augmentation (rotation, cropping, and rescaling), and efficient exclusion of unwanted images. | [
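The intensity-normalization and class-weighting ideas referenced above (MVN as applied by Cui et al. [32], min-max normalization as applied by Wang et al. [15], and weighting to counter the dominance of background pixels) reduce to a few lines of array arithmetic. The sketch below uses hypothetical arrays, a fake short-axis slice and a random three-class label map, together with an inverse-frequency weighting scheme chosen purely for illustration; it is not the implementation used in this paper.

```python
# Sketch only: mean-variance (MVN) and min-max pixel normalization, plus
# inverse-frequency class weights for background / LV cavity / myocardium.
# All arrays are hypothetical stand-ins, not the EMIDEC data.
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 4096, size=(256, 256)).astype(np.float32)        # fake MRI slice
labels = rng.choice([0, 1, 2], size=(256, 256), p=[0.90, 0.05, 0.05])  # 0=bg, 1=LV, 2=myo

# Mean-variance normalization: subtract the mean intensity, divide by the std.
mvn = (img - img.mean()) / (img.std() + 1e-8)

# Min-max normalization: rescale intensities to the [0, 1] range.
minmax = (img - img.min()) / (img.max() - img.min() + 1e-8)

# Inverse-frequency class weights: rare classes (LV, myocardium) get larger
# weights so a pixel-wise loss is not dominated by the background class.
counts = np.bincount(labels.ravel(), minlength=3).astype(np.float64)
freq = counts / counts.sum()
weights = freq.mean() / np.maximum(freq, 1e-8)
print("class frequencies:", np.round(freq, 3))
print("class weights    :", np.round(weights, 3))
```

With roughly 90% of the pixels labeled as background in the toy label map, the LV and myocardium classes receive weights several times larger than the background class, which is the intended effect of the class-weighting step.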
"25520374",
"31038407",
"28103561",
"27295650",
"30390512",
"32222684",
"32866695",
"33812305",
"31465788",
"33805558",
"27277021",
"26740057",
"28108373",
"31671333",
"32325284",
"27244717",
"30932820",
"30636630",
"34004500",
"27423113",
"26917105",
"29994302",
"24091241",
"30092410",
"31476360",
"29157240",
"26886969",
"28437634",
"33260108",
"30387757"
] | [
{
"pmid": "31038407",
"title": "Deep Learning for Diagnosis of Chronic Myocardial Infarction on Nonenhanced Cardiac Cine MRI.",
"abstract": "Background Renal impairment is common in patients with coronary artery disease and, if severe, late gadolinium enhancement (LGE) imaging for myocardial infarction (MI) evaluation cannot be performed. Purpose To develop a fully automatic framework for chronic MI delineation via deep learning on non-contrast material-enhanced cardiac cine MRI. Materials and Methods In this retrospective single-center study, a deep learning model was developed to extract motion features from the left ventricle and delineate MI regions on nonenhanced cardiac cine MRI collected between October 2015 and March 2017. Patients with chronic MI, as well as healthy control patients, had both nonenhanced cardiac cine (25 phases per cardiac cycle) and LGE MRI examinations. Eighty percent of MRI examinations were used for the training data set and 20% for the independent testing data set. Chronic MI regions on LGE MRI were defined as ground truth. Diagnostic performance was assessed by analysis of the area under the receiver operating characteristic curve (AUC). MI area and MI area percentage from nonenhanced cardiac cine and LGE MRI were compared by using the Pearson correlation, paired t test, and Bland-Altman analysis. Results Study participants included 212 patients with chronic MI (men, 171; age, 57.2 years ± 12.5) and 87 healthy control patients (men, 42; age, 43.3 years ± 15.5). Using the full cardiac cine MRI, the per-segment sensitivity and specificity for detecting chronic MI in the independent test set was 89.8% and 99.1%, respectively, with an AUC of 0.94. There were no differences between nonenhanced cardiac cine and LGE MRI analyses in number of MI segments (114 vs 127, respectively; P = .38), per-patient MI area (6.2 cm2 ± 2.8 vs 5.5 cm2 ± 2.3, respectively; P = .27; correlation coefficient, r = 0.88), and MI area percentage (21.5% ± 17.3 vs 18.5% ± 15.4; P = .17; correlation coefficient, r = 0.89). Conclusion The proposed deep learning framework on nonenhanced cardiac cine MRI enables the confirmation (presence), detection (position), and delineation (transmurality and size) of chronic myocardial infarction. However, future larger-scale multicenter studies are required for a full validation. Published under a CC BY 4.0 license. Online supplemental material is available for this article. See also the editorial by Leiner in this issue."
},
{
"pmid": "28103561",
"title": "Statistical shape modeling of the left ventricle: myocardial infarct classification challenge.",
"abstract": "Statistical shape modeling is a powerful tool for visualizing and quantifying geometric and functional patterns of the heart. After myocardial infarction (MI), the left ventricle typically remodels in response to physiological challenges. Several methods have been proposed in the literature to describe statistical shape changes. Which method best characterizes left ventricular remodeling after MI is an open research question. A better descriptor of remodeling is expected to provide a more accurate evaluation of disease status in MI patients. We therefore designed a challenge to test shape characterization in MI given a set of three-dimensional left ventricular surface points. The training set comprised 100 MI patients, and 100 asymptomatic volunteers (AV). The challenge was initiated in 2015 at the Statistical Atlases and Computational Models of the Heart workshop, in conjunction with the MICCAI conference. The training set with labels was provided to participants, who were asked to submit the likelihood of MI from a different (validation) set of 200 cases (100 AV and 100 MI). Sensitivity, specificity, accuracy and area under the receiver operating characteristic curve were used as the outcome measures. The goals of this challenge were to (1) establish a common dataset for evaluating statistical shape modeling algorithms in MI, and (2) test whether statistical shape modeling provides additional information characterizing MI patients over standard clinical measures. Eleven groups with a wide variety of classification and feature extraction approaches participated in this challenge. All methods achieved excellent classification results with accuracy ranges from 0.83 to 0.98. The areas under the receiver operating characteristic curves were all above 0.90. Four methods showed significantly higher performance than standard clinical measures. The dataset and software for evaluation are available from the Cardiac Atlas Project website1."
},
{
"pmid": "27295650",
"title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.",
"abstract": "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features-using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available."
},
{
"pmid": "30390512",
"title": "Fully convolutional multi-scale residual DenseNets for cardiac segmentation and automated cardiac diagnosis using ensemble of classifiers.",
"abstract": "Deep fully convolutional neural network (FCN) based architectures have shown great potential in medical image segmentation. However, such architectures usually have millions of parameters and inadequate number of training samples leading to over-fitting and poor generalization. In this paper, we present a novel DenseNet based FCN architecture for cardiac segmentation which is parameter and memory efficient. We propose a novel up-sampling path which incorporates long skip and short-cut connections to overcome the feature map explosion in conventional FCN based architectures. In order to process the input images at multiple scales and view points simultaneously, we propose to incorporate Inception module's parallel structures. We propose a novel dual loss function whose weighting scheme allows to combine advantages of cross-entropy and Dice loss leading to qualitative improvements in segmentation. We demonstrate computational efficacy of incorporating conventional computer vision techniques for region of interest detection in an end-to-end deep learning based segmentation framework. From the segmentation maps we extract clinically relevant cardiac parameters and hand-craft features which reflect the clinical diagnostic analysis and train an ensemble system for cardiac disease classification. We validate our proposed network architecture on three publicly available datasets, namely: (i) Automated Cardiac Diagnosis Challenge (ACDC-2017), (ii) Left Ventricular segmentation challenge (LV-2011), (iii) 2015 Kaggle Data Science Bowl cardiac challenge data. Our approach in ACDC-2017 challenge stood second place for segmentation and first place in automated cardiac disease diagnosis tasks with an accuracy of 100% on a limited testing set (n=50). In the LV-2011 challenge our approach attained 0.74 Jaccard index, which is so far the highest published result in fully automated algorithms. In the Kaggle challenge our approach for LV volume gave a Continuous Ranked Probability Score (CRPS) of 0.0127, which would have placed us tenth in the original challenge. Our approach combined both cardiac segmentation and disease diagnosis into a fully automated framework which is computationally efficient and hence has the potential to be incorporated in computer-aided diagnosis (CAD) tools for clinical application."
},
{
"pmid": "32222684",
"title": "A deep learning-based approach for automatic segmentation and quantification of the left ventricle from cardiac cine MR images.",
"abstract": "Cardiac MRI has been widely used for noninvasive assessment of cardiac anatomy and function as well as heart diagnosis. The estimation of physiological heart parameters for heart diagnosis essentially require accurate segmentation of the Left ventricle (LV) from cardiac MRI. Therefore, we propose a novel deep learning approach for the automated segmentation and quantification of the LV from cardiac cine MR images. We aim to achieve lower errors for the estimated heart parameters compared to the previous studies by proposing a novel deep learning segmentation method. Our framework starts by an accurate localization of the LV blood pool center-point using a fully convolutional neural network (FCN) architecture called FCN1. Then, a region of interest (ROI) that contains the LV is extracted from all heart sections. The extracted ROIs are used for the segmentation of LV cavity and myocardium via a novel FCN architecture called FCN2. The FCN2 network has several bottleneck layers and uses less memory footprint than conventional architectures such as U-net. Furthermore, a new loss function called radial loss that minimizes the distance between the predicted and true contours of the LV is introduced into our model. Following myocardial segmentation, functional and mass parameters of the LV are estimated. Automated Cardiac Diagnosis Challenge (ACDC-2017) dataset was used to validate our framework, which gave better segmentation, accurate estimation of cardiac parameters, and produced less error compared to other methods applied on the same dataset. Furthermore, we showed that our segmentation approach generalizes well across different datasets by testing its performance on a locally acquired dataset. To sum up, we propose a deep learning approach that can be translated into a clinical tool for heart diagnosis."
},
{
"pmid": "32866695",
"title": "Fully automatic segmentation of right and left ventricle on short-axis cardiac MRI images.",
"abstract": "Cardiac magnetic resonance imaging (CMR) is a widely used non-invasive imaging modality for evaluating cardiovascular diseases. CMR is the gold standard method for left and right ventricular functional assessment due to its ability to characterize myocardial structure and function and low intra- and inter-observer variability. However the post-processing segmentation during the functional evaluation is time-consuming and challenging. A fully automated segmentation method can assist the experts; therefore, they can do more efficient work. In this paper, a regression-based fully automated method is presented for the right- and left ventricle segmentation. For training and evaluation, our dataset contained MRI short-axis scans of 5570 patients, who underwent CMR examinations at Heart and Vascular Center, Semmelweis University Budapest. Our approach is novel and after training the state-of-the-art algorithm on our dataset, our algorithm proved to be superior on both of the ventricles. The evaluation metrics were the Dice index, Hausdorff distance and volume related parameters. We have achieved average Dice index for the left endocardium: 0.927, left epicardium: 0.940 and right endocardium: 0.873 on our dataset. We have also compared the performance of the algorithm to the human-level segmentation on both ventricles and it is similar to experienced readers for the left, and comparable for the right ventricle. We also evaluated the proposed algorithm on the ACDC dataset, which is publicly available, with and without transfer learning. The results on ACDC were also satisfying and similar to human observers. Our method is lightweight, fast to train and does not require more than 2 GB GPU memory for execution and training."
},
{
"pmid": "33812305",
"title": "Automated left and right ventricular chamber segmentation in cardiac magnetic resonance images using dense fully convolutional neural network.",
"abstract": "BACKGROUND AND OBJECTIVE\nSegmentation of the left ventricular (LV) myocardium (Myo) and RV endocardium on cine cardiac magnetic resonance (CMR) images represents an essential step for cardiac-function evaluation and diagnosis. In order to have a common reference for comparing segmentation algorithms, several CMR image datasets were made available, but in general they do not include the most apical and basal slices, and/or gold standard tracing is limited to only one of the two ventricles, thus not fully corresponding to real clinical practice. Our aim was to develop a deep learning (DL) approach for automated segmentation of both RV and LV chambers from short-axis (SAX) CMR images, reporting separately the performance for basal slices, together with the applied criterion of choice.\n\n\nMETHOD\nA retrospectively selected database (DB1) of 210 cine sequences (3 pathology groups) was considered: images (GE, 1.5 T) were acquired at Centro Cardiologico Monzino (Milan, Italy), and end-diastolic (ED) and end-systolic frames (ES) were manually segmented (gold standard, GS). Automatic ED and ES RV and LV segmentation were performed with a U-Net inspired architecture, where skip connections were redesigned introducing dense blocks to alleviate the semantic gap between the U-Net encoder and decoder. The proposed architecture was trained including: A) the basal slices where the Myo surrounded the LV for at least the 50% and all the other slice; B) all the slices where the Myo completely surrounded the LV. To evaluate the clinical relevance of the proposed architecture in a practical use case scenario, a graphical user interface was developed to allow clinicians to revise, and correct when needed, the automatic segmentation. Additionally, to assess generalizability, analysis of CMR images obtained in 12 healthy volunteers (DB2) with different equipment (Siemens, 3T) and settings was performed.\n\n\nRESULTS\nThe proposed architecture outperformed the original U-Net. Comparing the performance on DB1 between the two criteria, no significant differences were measured when considering all slices together, but were present when only basal slices were examined. Automatic and manually-adjusted segmentation performed similarly compared to the GS (bias±95%LoA): LVEDV -1±12 ml, LVESV -1±14 ml, RVEDV 6±12 ml, RVESV 6±14 ml, ED LV mass 6±26 g, ES LV mass 5±26 g). Also, generalizability showed very similar performance, with Dice scores of 0.944 (LV), 0.908 (RV) and 0.852 (Myo) on DB1, and 0.940 (LV), 0.880 (RV), and 0.856 (Myo) on DB2.\n\n\nCONCLUSIONS\nOur results support the potential of DL methods for accurate LV and RV contours segmentation and the advantages of dense skip connections in alleviating the semantic gap generated when high level features are concatenated with lower level feature. The evaluation on our dataset, considering separately the performance on basal and apical slices, reveals the potential of DL approaches for fast, accurate and reliable automated cardiac segmentation in a real clinical setting."
},
{
"pmid": "31465788",
"title": "Dynamic pixel-wise weighting-based fully convolutional neural networks for left ventricle segmentation in short-axis MRI.",
"abstract": "Left ventricle (LV) segmentation in cardiac MRI is an essential procedure for quantitative diagnosis of various cardiovascular diseases. In this paper, we present a novel fully automatic left ventricle segmentation approach based on convolutional neural networks. The proposed network fully takes advantages of the hierarchical architecture and integrate the multi-scale feature together for segmenting the myocardial region of LV. Moreover, we put forward a dynamic pixel-wise weighting strategy, which can dynamically adjust the weight of each pixel according to the segmentation accuracy of upper layer and force the pixel classifier to take more attention on the misclassified ones. By this way, the LV segmentation performance of our method can be improved a lot especially for the apical and basal slices in cine MR images. The experiments on the CAP database demonstrate that our method achieves a substantial improvement compared with other well-know deep learning methods. Beside these, we discussed two major limitations in convolutional neural networks-based semantic segmentation methods for LV segmentation."
},
{
"pmid": "33805558",
"title": "Edge-Sensitive Left Ventricle Segmentation Using Deep Reinforcement Learning.",
"abstract": "Deep reinforcement learning (DRL) has been utilized in numerous computer vision tasks, such as object detection, autonomous driving, etc. However, relatively few DRL methods have been proposed in the area of image segmentation, particularly in left ventricle segmentation. Reinforcement learning-based methods in earlier works often rely on learning proper thresholds to perform segmentation, and the segmentation results are inaccurate due to the sensitivity of the threshold. To tackle this problem, a novel DRL agent is designed to imitate the human process to perform LV segmentation. For this purpose, we formulate the segmentation problem as a Markov decision process and innovatively optimize it through DRL. The proposed DRL agent consists of two neural networks, i.e., First-P-Net and Next-P-Net. The First-P-Net locates the initial edge point, and the Next-P-Net locates the remaining edge points successively and ultimately obtains a closed segmentation result. The experimental results show that the proposed model has outperformed the previous reinforcement learning methods and achieved comparable performances compared with deep learning baselines on two widely used LV endocardium segmentation datasets, namely Automated Cardiac Diagnosis Challenge (ACDC) 2017 dataset, and Sunnybrook 2009 dataset. Moreover, the proposed model achieves higher F-measure accuracy compared with deep learning methods when training with a very limited number of samples."
},
{
"pmid": "27277021",
"title": "Simultaneous extraction of endocardial and epicardial contours of the left ventricle by distance regularized level sets.",
"abstract": "PURPOSE\nSegmentation of the cardiac left ventricle (LV) is still an open problem and is challenging due to the poor contrast between tissues around the epicardium and image artifacts. To extract the endocardium and epicardium of the cardiac left ventricle accurately, the authors propose a two-layer level set approach for segmentation of the LV from cardiac magnetic resonance short-axis images.\n\n\nMETHODS\nIn the proposed method, the endocardium and epicardium are represented by two specified level contours of a level set function. Segmentation of the LV is formulated as a problem of optimizing the level set function such that these two level contours best fit the epicardium and endocardium, subject to a distance regularization (DR) term to preserve a smoothly varying distance between them. The DR term introduces a desirable interaction between the two level contours of a single level set function, which contributes to preserve the anatomical geometry of the epicardium and endocardium of the LV. In addition, the proposed method has an intrinsic ability to deal with intensity inhomogeneity in MR images, which is a common image artifact in MRI.\n\n\nRESULTS\nTheir method is quantitatively validated by experiments on the datasets for the MICCAI 2009 grand challenge on left ventricular segmentation and the MICCAI 2013 challenge workshop on segmentation: algorithms, theory and applications (SATA). To overcome discontinuity of 2D segmentation results at some adjacent slices for a few cases, the authors extend distance regularized two-layer level set to 3D to refine the segmentation results. The corresponding metrics for their method are better than the methods in the MICCAI 2009 challenge. Their method was ranked at the first place in terms of Hausdorff distance and the second place in terms of Dice similarity coefficient in the MICCAI 2013 challenge.\n\n\nCONCLUSIONS\nExperimental results demonstrate the advantages of their method in terms of segmentation accuracy and consistency with the heart anatomy."
},
{
"pmid": "26740057",
"title": "Distance regularized two level sets for segmentation of left and right ventricles from cine-MRI.",
"abstract": "This paper presents a new level set method for segmentation of cardiac left and right ventricles. We extend the edge based distance regularized level set evolution (DRLSE) model in Li et al. (2010) to a two-level-set formulation, with the 0-level set and k-level set representing the endocardium and epicardium, respectively. The extraction of endocardium and epicardium is obtained as a result of the interactive curve evolution of the 0 and k level sets derived from the proposed variational level set formulation. The initialization of the level set function in the proposed two-level-set DRLSE model is generated from roughly located endocardium, which can be performed by applying the original DRLSE model. Experimental results have demonstrated the effectiveness of the proposed two-level-set DRLSE model."
},
{
"pmid": "28108373",
"title": "Left ventricle segmentation via two-layer level sets with circular shape constraint.",
"abstract": "This paper proposes a circular shape constraint and a novel two-layer level set method for the segmentation of the left ventricle (LV) from short-axis magnetic resonance images without training any shape models. Since the shape of LV throughout the apex-base axis is close to a ring shape, we propose a circle fitting term in the level set framework to detect the endocardium. The circle fitting term imposes a penalty on the evolving contour from its fitting circle, and thereby handles quite well with issues in LV segmentation, especially the presence of outflow track in basal slices and the intensity overlap between TPM and the myocardium. To extract the whole myocardium, the circle fitting term is incorporated into two-layer level set method. The endocardium and epicardium are respectively represented by two specified level contours of the level set function, which are evolved by an edge-based and a region-based active contour model. The proposed method has been quantitatively validated on the public data set from MICCAI 2009 challenge on the LV segmentation. Experimental results and comparisons with state-of-the-art demonstrate the accuracy and robustness of our method."
},
{
"pmid": "31671333",
"title": "MFP-Unet: A novel deep learning based approach for left ventricle segmentation in echocardiography.",
"abstract": "Segmentation of the Left ventricle (LV) is a crucial step for quantitative measurements such as area, volume, and ejection fraction. However, the automatic LV segmentation in 2D echocardiographic images is a challenging task due to ill-defined borders, and operator dependence issues (insufficient reproducibility). U-net, which is a well-known architecture in medical image segmentation, addressed this problem through an encoder-decoder path. Despite outstanding overall performance, U-net ignores the contribution of all semantic strengths in the segmentation procedure. In the present study, we have proposed a novel architecture to tackle this drawback. Feature maps in all levels of the decoder path of U-net are concatenated, their depths are equalized, and up-sampled to a fixed dimension. This stack of feature maps would be the input of the semantic segmentation layer. The performance of the proposed model was evaluated using two sets of echocardiographic images: one public dataset and one prepared dataset. The proposed network yielded significantly improved results when comparing with results from U-net, dilated U-net, Unet++, ACNN, SHG, and deeplabv3. An average Dice Metric (DM) of 0.953, Hausdorff Distance (HD) of 3.49, and Mean Absolute Distance (MAD) of 1.12 are achieved in the public dataset. The correlation graph, bland-altman analysis, and box plot showed a great agreement between automatic and manually calculated volume, area, and length."
},
{
"pmid": "32325284",
"title": "Left ventricle automatic segmentation in cardiac MRI using a combined CNN and U-net approach.",
"abstract": "Cardiovascular diseases can be effectively prevented from worsening through early diagnosis. To this end, various methods have been proposed to detect the disease source by analyzing cardiac magnetic resonance images (MRI), wherein left ventricular segmentation plays an indispensable role. However, since the left ventricle (LV) is easily confused with other regions in cardiac MRI, segmentation of the LV is a challenging problem. To address this issue, we propose a composite model combining CNN and U-net to accurately segment the LV. In our model, CNN is used to locate the region of interest (ROI) and the U-net network achieve segmentation of LV. We used the cardiac MRI datasets of the MICCAI 2009 left ventricular segmentation challenge to train and test our model and demonstrated the accuracy and robustness of the proposed model. The proposed model achieved state-of-the-art results. The metrics are Dice metric (DM), volumetric overlap error (VOE) and Hausdorff distance (HD), in which DM reaches 0.951, VOE reaches 0.053 and HD reaches 3.641."
},
{
"pmid": "27244717",
"title": "Fully Convolutional Networks for Semantic Segmentation.",
"abstract": "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, improve on the previous best result in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional networks achieve improved segmentation of PASCAL VOC (30% relative improvement to 67.2% mean IU on 2012), NYUDv2, SIFT Flow, and PASCAL-Context, while inference takes one tenth of a second for a typical image."
},
{
"pmid": "30932820",
"title": "Dilated-Inception Net: Multi-Scale Feature Aggregation for Cardiac Right Ventricle Segmentation.",
"abstract": "Segmentation of cardiac ventricle from magnetic resonance images is significant for cardiac disease diagnosis, progression assessment, and monitoring cardiac conditions. Manual segmentation is so time consuming, tedious, and subjective that automated segmentation methods are highly desired in practice. However, conventional segmentation methods performed poorly in cardiac ventricle, especially in the right ventricle. Compared with the left ventricle, whose shape is a simple thick-walled circle, the structure of the right ventricle is more complex due to ambiguous boundary, irregular cavity, and variable crescent shape. Hence, effective feature extractors and segmentation models are preferred. In this paper, we propose a dilated-inception net (DIN) to extract and aggregate multi-scale features for right ventricle segmentation. The DIN outperforms many state-of-the-art models on the benchmark database of right ventricle segmentation challenge. In addition, the experimental results indicate that the proposed model has potential to reach expert-level performance in right ventricular epicardium segmentation. More importantly, DIN behaves similarly to clinical expert with high correlation coefficients in four clinical cardiac indices. Therefore, the proposed DIN is promising for automated cardiac right ventricle segmentation in clinical applications."
},
{
"pmid": "30636630",
"title": "Automated analysis of cardiovascular magnetic resonance myocardial native T1 mapping images using fully convolutional neural networks.",
"abstract": "BACKGROUND\nCardiovascular magnetic resonance (CMR) myocardial native T1 mapping allows assessment of interstitial diffuse fibrosis. In this technique, the global and regional T1 are measured manually by drawing region of interest in motion-corrected T1 maps. The manual analysis contributes to an already lengthy CMR analysis workflow and impacts measurements reproducibility. In this study, we propose an automated method for combined myocardium segmentation, alignment, and T1 calculation for myocardial T1 mapping.\n\n\nMETHODS\nA deep fully convolutional neural network (FCN) was used for myocardium segmentation in T1 weighted images. The segmented myocardium was then resampled on a polar grid, whose origin is located at the center-of-mass of the segmented myocardium. Myocardium T1 maps were reconstructed from the resampled T1 weighted images using curve fitting. The FCN was trained and tested using manually segmented images for 210 patients (5 slices, 11 inversion times per patient). An additional image dataset for 455 patients (5 slices and 11 inversion times per patient), analyzed by an expert reader using a semi-automatic tool, was used to validate the automatically calculated global and regional T1 values. Bland-Altman analysis, Pearson correlation coefficient, r, and the Dice similarity coefficient (DSC) were used to evaluate the performance of the FCN-based analysis on per-patient and per-slice basis. Inter-observer variability was assessed using intraclass correlation coefficient (ICC) of the T1 values calculated by the FCN-based automatic method and two readers.\n\n\nRESULTS\nThe FCN achieved fast segmentation (< 0.3 s/image) with high DSC (0.85 ± 0.07). The automatically and manually calculated T1 values (1091 ± 59 ms and 1089 ± 59 ms, respectively) were highly correlated in per-patient (r = 0.82; slope = 1.01; p < 0.0001) and per-slice (r = 0.72; slope = 1.01; p < 0.0001) analyses. Bland-Altman analysis showed good agreement between the automated and manual measurements with 95% of measurements within the limits-of-agreement in both per-patient and per-slice analyses. The intraclass correllation of the T1 calculations by the automatic method vs reader 1 and reader 2 was respectively 0.86/0.56 and 0.74/0.49 in the per-patient/per-slice analyses, which were comparable to that between two expert readers (=0.72/0.58 in per-patient/per-slice analyses).\n\n\nCONCLUSION\nThe proposed FCN-based image processing platform allows fast and automatic analysis of myocardial native T1 mapping images mitigating the burden and observer-related variability of manual analysis."
},
{
"pmid": "34004500",
"title": "Multiscale attention guided U-Net architecture for cardiac segmentation in short-axis MRI images.",
"abstract": "BACKGROUND AND OBJECTIVE\nAutomatic cardiac segmentation plays an utmost role in the diagnosis and quantification of cardiovascular diseases.\n\n\nMETHODS\nThis paper proposes a new cardiac segmentation method in short-axis Magnetic Resonance Imaging (MRI) images, called attention U-Net architecture with input image pyramid and deep supervised output layers (AID), which can fully-automatically learn to pay attention to target structures of various sizes and shapes. During each training process, the model continues to learn how to emphasize the desired features and suppress irrelevant areas in the original images, effectively improving the accuracy of cardiac segmentation. At the same time, we introduce the Focal Tversky Loss (FTL), which can effectively solve the problem of high imbalance in the amount of data between the target class and the background class during cardiac image segmentation. In order to obtain a better representation of intermediate features, we add a multi-scale input pyramid to the attention network.\n\n\nRESULTS\nThe proposed cardiac segmentation technique is tested on the public Left Ventricle Segmentation Challenge (LVSC) dataset, which is shown to achieve 0.75, 0.87 and 0.92 for Jaccard Index, Sensitivity and Specificity, respectively. Experimental results demonstrate that the proposed method is able to improve the segmentation accuracy compared with the standard U-Net, and achieves comparable performance to the most advanced fully-automated methods.\n\n\nCONCLUSIONS\nGiven its effectiveness and advantages, the proposed method can facilitate cardiac segmentation in short-axis MRI images in clinical practice."
},
{
"pmid": "27423113",
"title": "Combining deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance.",
"abstract": "We introduce a new methodology that combines deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance (MR) data. This combination is relevant for segmentation problems, where the visual object of interest presents large shape and appearance variations, but the annotated training set is small, which is the case for various medical image analysis applications, including the one considered in this paper. In particular, level set methods are based on shape and appearance terms that use small training sets, but present limitations for modelling the visual object variations. Deep learning methods can model such variations using relatively small amounts of annotated training, but they often need to be regularised to produce good generalisation. Therefore, the combination of these methods brings together the advantages of both approaches, producing a methodology that needs small training sets and produces accurate segmentation results. We test our methodology on the MICCAI 2009 left ventricle segmentation challenge database (containing 15 sequences for training, 15 for validation and 15 for testing), where our approach achieves the most accurate results in the semi-automated problem and state-of-the-art results for the fully automated challenge."
},
{
"pmid": "26917105",
"title": "A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI.",
"abstract": "Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground true data. Convolutional networks are employed to automatically detect the LV chamber in MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively."
},
{
"pmid": "29994302",
"title": "Deep Learning Techniques for Automatic MRI Cardiac Multi-Structures Segmentation and Diagnosis: Is the Problem Solved?",
"abstract": "Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the \"Automatic Cardiac Diagnosis Challenge\" dataset (ACDC), the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains data from 150 multi-equipments CMRI recordings with reference measurements and classification from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean value of 0.97 correlation score for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of cardiac CMRI. We also identify scenarios for which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, while the platform will remain open for new submissions."
},
{
"pmid": "24091241",
"title": "A collaborative resource to build consensus for automated left ventricular segmentation of cardiac MR images.",
"abstract": "A collaborative framework was initiated to establish a community resource of ground truth segmentations from cardiac MRI. Multi-site, multi-vendor cardiac MRI datasets comprising 95 patients (73 men, 22 women; mean age 62.73±11.24years) with coronary artery disease and prior myocardial infarction, were randomly selected from data made available by the Cardiac Atlas Project (Fonseca et al., 2011). Three semi- and two fully-automated raters segmented the left ventricular myocardium from short-axis cardiac MR images as part of a challenge introduced at the STACOM 2011 MICCAI workshop (Suinesiaputra et al., 2012). Consensus myocardium images were generated based on the Expectation-Maximization principle implemented by the STAPLE algorithm (Warfield et al., 2004). The mean sensitivity, specificity, positive predictive and negative predictive values ranged between 0.63 and 0.85, 0.60 and 0.98, 0.56 and 0.94, and 0.83 and 0.92, respectively, against the STAPLE consensus. Spatial and temporal agreement varied in different amounts for each rater. STAPLE produced high quality consensus images if the region of interest was limited to the area of discrepancy between raters. To maintain the quality of the consensus, an objective measure based on the candidate automated rater performance distribution is proposed. The consensus segmentation based on a combination of manual and automated raters were more consistent than any particular rater, even those with manual input. The consensus is expected to improve with the addition of new automated contributions. This resource is open for future contributions, and is available as a test bed for the evaluation of new segmentation algorithms, through the Cardiac Atlas Project (www.cardiacatlas.org)."
},
{
"pmid": "30092410",
"title": "A systematic study of the class imbalance problem in convolutional neural networks.",
"abstract": "In this study, we systematically investigate the impact of class imbalance on classification performance of convolutional neural networks (CNNs) and compare frequently used methods to address the issue. Class imbalance is a common problem that has been comprehensively studied in classical machine learning, yet very limited systematic research is available in the context of deep learning. In our study, we use three benchmark datasets of increasing complexity, MNIST, CIFAR-10 and ImageNet, to investigate the effects of imbalance on classification and perform an extensive comparison of several methods to address the issue: oversampling, undersampling, two-phase training, and thresholding that compensates for prior class probabilities. Our main evaluation metric is area under the receiver operating characteristic curve (ROC AUC) adjusted to multi-class tasks since overall accuracy metric is associated with notable difficulties in the context of imbalanced data. Based on results from our experiments we conclude that (i) the effect of class imbalance on classification performance is detrimental; (ii) the method of addressing class imbalance that emerged as dominant in almost all analyzed scenarios was oversampling; (iii) oversampling should be applied to the level that completely eliminates the imbalance, whereas the optimal undersampling ratio depends on the extent of imbalance; (iv) as opposed to some classical machine learning models, oversampling does not cause overfitting of CNNs; (v) thresholding should be applied to compensate for prior class probabilities when overall number of properly classified cases is of interest."
},
{
"pmid": "31476360",
"title": "A data augmentation approach to train fully convolutional networks for left ventricle segmentation.",
"abstract": "Left ventricle (LV) segmentation plays an important role in the diagnosis of cardiovascular diseases. The cardiac contractile function can be quantified by measuring the segmentation results of LVs. Fully convolutional networks (FCNs) have been proven to be able to segment images. However, a large number of annotated images are required to train the network to avoid overfitting, which is a challenge for LV segmentation owing to the limited small number of available training samples. In this paper, we analyze the influence of augmenting training samples used in an FCN for LV segmentation, and propose a data augmentation approach based on shape models to train the FCN from a few samples. We show that the balanced training samples affect the performance of FCNs greatly. Experiments on four public datasets demonstrate that the FCN trained by our augmented data outperforms most existing automated segmentation methods with respect to several commonly used evaluation measures."
},
{
"pmid": "29157240",
"title": "Automatic diagnosis of imbalanced ophthalmic images using a cost-sensitive deep convolutional neural network.",
"abstract": "BACKGROUND\nOcular images play an essential role in ophthalmological diagnoses. Having an imbalanced dataset is an inevitable issue in automated ocular diseases diagnosis; the scarcity of positive samples always tends to result in the misdiagnosis of severe patients during the classification task. Exploring an effective computer-aided diagnostic method to deal with imbalanced ophthalmological dataset is crucial.\n\n\nMETHODS\nIn this paper, we develop an effective cost-sensitive deep residual convolutional neural network (CS-ResCNN) classifier to diagnose ophthalmic diseases using retro-illumination images. First, the regions of interest (crystalline lens) are automatically identified via twice-applied Canny detection and Hough transformation. Then, the localized zones are fed into the CS-ResCNN to extract high-level features for subsequent use in automatic diagnosis. Second, the impacts of cost factors on the CS-ResCNN are further analyzed using a grid-search procedure to verify that our proposed system is robust and efficient.\n\n\nRESULTS\nQualitative analyses and quantitative experimental results demonstrate that our proposed method outperforms other conventional approaches and offers exceptional mean accuracy (92.24%), specificity (93.19%), sensitivity (89.66%) and AUC (97.11%) results. Moreover, the sensitivity of the CS-ResCNN is enhanced by over 13.6% compared to the native CNN method.\n\n\nCONCLUSION\nOur study provides a practical strategy for addressing imbalanced ophthalmological datasets and has the potential to be applied to other medical images. The developed and deployed CS-ResCNN could serve as computer-aided diagnosis software for ophthalmologists in clinical application."
},
{
"pmid": "26886969",
"title": "Fast Convolutional Neural Network Training Using Selective Data Sampling: Application to Hemorrhage Detection in Color Fundus Images.",
"abstract": "Convolutional neural networks (CNNs) are deep learning network architectures that have pushed forward the state-of-the-art in a range of computer vision applications and are increasingly popular in medical image analysis. However, training of CNNs is time-consuming and challenging. In medical image analysis tasks, the majority of training examples are easy to classify and therefore contribute little to the CNN learning process. In this paper, we propose a method to improve and speed-up the CNN training for medical image analysis tasks by dynamically selecting misclassified negative samples during training. Training samples are heuristically sampled based on classification by the current status of the CNN. Weights are assigned to the training samples and informative samples are more likely to be included in the next CNN training iteration. We evaluated and compared our proposed method by training a CNN with (SeS) and without (NSeS) the selective sampling method. We focus on the detection of hemorrhages in color fundus images. A decreased training time from 170 epochs to 60 epochs with an increased performance-on par with two human experts-was achieved with areas under the receiver operating characteristics curve of 0.894 and 0.972 on two data sets. The SeS CNN statistically outperformed the NSeS CNN on an independent test set."
},
{
"pmid": "28437634",
"title": "Convolutional neural network regression for short-axis left ventricle segmentation in cardiac cine MR sequences.",
"abstract": "Automated left ventricular (LV) segmentation is crucial for efficient quantification of cardiac function and morphology to aid subsequent management of cardiac pathologies. In this paper, we parameterize the complete (all short axis slices and phases) LV segmentation task in terms of the radial distances between the LV centerpoint and the endo- and epicardial contours in polar space. We then utilize convolutional neural network regression to infer these parameters. Utilizing parameter regression, as opposed to conventional pixel classification, allows the network to inherently reflect domain-specific physical constraints. We have benchmarked our approach primarily against the publicly-available left ventricle segmentation challenge (LVSC) dataset, which consists of 100 training and 100 validation cardiac MRI cases representing a heterogeneous mix of cardiac pathologies and imaging parameters across multiple centers. Our approach attained a .77 Jaccard index, which is the highest published overall result in comparison to other automated algorithms. To test general applicability, we also evaluated against the Kaggle Second Annual Data Science Bowl, where the evaluation metric was the indirect clinical measures of LV volume rather than direct myocardial contours. Our approach attained a Continuous Ranked Probability Score (CRPS) of .0124, which would have ranked tenth in the original challenge. With this we demonstrate the effectiveness of convolutional neural network regression paired with domain-specific features in clinical segmentation."
},
{
"pmid": "33260108",
"title": "Automated left ventricular segmentation from cardiac magnetic resonance images via adversarial learning with multi-stage pose estimation network and co-discriminator.",
"abstract": "Left ventricular (LV) segmentation is essential for the early diagnosis of cardiovascular diseases, which has been reported as the leading cause of death all over the world. However, automated LV segmentation from cardiac magnetic resonance images (CMRI) using the traditional convolutional neural networks (CNNs) is still a challenging task due to the limited labeled CMRI data and low tolerances to irregular scales, shapes and deformations of LV. In this paper, we propose an automated LV segmentation method based on adversarial learning by integrating a multi-stage pose estimation network (MSPN) and a co-discrimination network. Different from existing CNNs, we use a MSPN with multi-scale dilated convolution (MDC) modules to enhance the ranges of receptive field for deep feature extraction. To fully utilize both labeled and unlabeled CMRI data, we propose a novel generative adversarial network (GAN) framework for LV segmentation by combining MSPN with co-discrimination networks. Specifically, the labeled CMRI are first used to initialize our segmentation network (MSPN) and co-discrimination network. Our GAN training includes two different kinds of epochs fed with both labeled and unlabeled CMRI data alternatively, which are different from the traditional CNNs only relied on the limited labeled samples to train the segmentation networks. As both ground truth and unlabeled samples are involved in guiding training, our method not only can converge faster but also obtain a better performance in LV segmentation. Our method is evaluated using MICCAI 2009 and 2017 challenge databases. Experimental results show that our method has obtained promising performance in LV segmentation, which also outperforms the state-of-the-art methods in terms of LV segmentation accuracy from the comparison results."
},
{
"pmid": "30387757",
"title": "Direct Segmentation-Based Full Quantification for Left Ventricle via Deep Multi-Task Regression Learning Network.",
"abstract": "Quantitative analysis of the heart is extremely necessary and significant for detecting and diagnosing heart disease, yet there are still some challenges. In this study, we propose a new end-to-end segmentation-based deep multi-task regression learning model (Indices-JSQ) to make a holonomic quantitative analysis of the left ventricle (LV), which contains a segmentation network (Img2Contour) and multi-task regression network (Contour2Indices). First, Img2Contour, which contains a deep convolutional encoder-decoder module, is designed to obtain the LV contour. Then, the predicted contour is fed as input to Contour2Indices for full quantification. On the whole, we take into account the relationship between different tasks, which can serve as a complementary advantage. Meanwhile, instead of using images directly from the original dataset, we creatively use the segmented contour of the original image to estimate the cardiac indices to achieve better and more accurate results. We make experiments on MR sequences of 145 subjects and gain the experimental results of 157 mm 2, 2.43 mm, 1.29 mm, and 0.87 on areas, dimensions, regional wall thicknesses, and Dice Metric, respectively. It intuitively shows that the proposed method outperforms the other state-of-the-art methods and demonstrates that our method has a great potential in cardiac MR images segmentation, comprehensive clinical assessment, and diagnosis."
}
] |
Diagnostics | null | PMC8871067 | 10.3390/diagnostics12020345 | Ensembles of Convolutional Neural Networks for Survival Time Estimation of High-Grade Glioma Patients from Multimodal MRI | Glioma is the most common type of primary malignant brain tumor. Accurate survival time prediction for glioma patients may positively impact treatment planning. In this paper, we develop an automatic survival time prediction tool for glioblastoma patients along with an effective solution to the limited availability of annotated medical imaging datasets. Ensembles of snapshots of three dimensional (3D) deep convolutional neural networks (CNN) are applied to Magnetic Resonance Image (MRI) data to predict survival time of high-grade glioma patients. Additionally, multi-sequence MRI images were used to enhance survival prediction performance. A novel way to leverage the potential of ensembles to overcome the limitation of labeled medical image availability is shown. This new classification method separates glioblastoma patients into long- and short-term survivors. The BraTS (Brain Tumor Image Segmentation) 2019 training dataset was used in this work. Each patient case consisted of three MRI sequences (T1CE, T2, and FLAIR). Our training set contained 163 cases while the test set included 46 cases. The best known prediction accuracy of 74% for this type of problem was achieved on the unseen test set. | 2. Related WorkThe task of survival prediction of glioma from MRI images is challenging. Several studies applying various methods and approaches using the BraTS dataset are reviewed in this section. The authors of [13,14,15] proposed using handcrafted and radiomics features extracted from automatically segmented volumes with region labels to train a random forest regression model to predict the survival time of GBM patients in days. These studies achieved accuracies of 52%, 51.7%, and 27.25%, respectively, on the validation set of the BraTS (Brain Tumor Image Segmentation) 2019 challenge. The authors of [16] performed a two-category (short- and long-term) survival classification task using a linear discriminant classifier trained with deep features extracted from a pre-trained convolutional neural network (CNN). The study achieved 68.8% accuracy under 5-fold cross-validation on the BraTS 2017 dataset. Ensemble learning was used by the authors of [17]. They extracted handcrafted and radiomics features from automatically segmented MRI images of high-grade gliomas and created an ensemble of multiple classifiers, including random forests, support vector machines, and multilayer perceptrons, to predict overall survival (OS) time on the BraTS 2018 testing set. They obtained an accuracy of 52%. The authors of [18] achieved first place in the BraTS 2020 challenge for the overall survival prediction task (61.7% accuracy). They extracted segmentation features along with patient age to classify the patients into three groups (long-, short-, and mid-term survivors) using an ensemble of a linear regression model and a random forest classifier. Over the last decade, there has been increasing interest in ensemble learning for tumor segmentation tasks as well. Ensemble learning was ubiquitous in the BraTS 2017–2020 challenges, being used in almost all of the top-ranked methods. The winner of the BraTS 2017 challenge for GBM tumor segmentation [19] was an ensemble of two fully convolutional network (FCN) models and a U-Net, each generating separate class confidence maps.
The final segmentation for each class was then obtained by averaging the confidence maps of the individual ensemble models at each voxel. This study reached Dice scores of 0.90, 0.82, and 0.75 for the whole tumor, tumor core, and enhancing tumor, respectively, on the BraTS 2017 validation set. The authors of [20] built an ensemble of U-Net-based deep networks trained in a multi-fold setting to segment brain tumors from T2 Fluid-Attenuated Inversion Recovery (T2-FLAIR) sequences, achieving a Dice score of 0.882 on the BraTS 2018 set. | [
"34185076",
"30445539",
"29260225",
"28982791",
"30195984",
"21339920",
"31637430",
"32116623",
"31980106",
"25494501",
"27326665",
"29531073",
"30277442",
"29993848",
"34296969",
"26188015",
"19269895",
"15279715",
"21548745",
"21088844",
"27686946",
"15758010",
"30412261",
"29660006",
"18847337",
"22258713"
] | [
{
"pmid": "34185076",
"title": "The 2021 WHO Classification of Tumors of the Central Nervous System: a summary.",
"abstract": "The fifth edition of the WHO Classification of Tumors of the Central Nervous System (CNS), published in 2021, is the sixth version of the international standard for the classification of brain and spinal cord tumors. Building on the 2016 updated fourth edition and the work of the Consortium to Inform Molecular and Practical Approaches to CNS Tumor Taxonomy, the 2021 fifth edition introduces major changes that advance the role of molecular diagnostics in CNS tumor classification. At the same time, it remains wedded to other established approaches to tumor diagnosis such as histology and immunohistochemistry. In doing so, the fifth edition establishes some different approaches to both CNS tumor nomenclature and grading and it emphasizes the importance of integrated diagnoses and layered reports. New tumor types and subtypes are introduced, some based on novel diagnostic technologies such as DNA methylome profiling. The present review summarizes the major general changes in the 2021 fifth edition classification and the specific changes in each taxonomic category. It is hoped that this summary provides an overview to facilitate more in-depth exploration of the entire fifth edition of the WHO Classification of Tumors of the Central Nervous System."
},
{
"pmid": "29260225",
"title": "Effect of Tumor-Treating Fields Plus Maintenance Temozolomide vs Maintenance Temozolomide Alone on Survival in Patients With Glioblastoma: A Randomized Clinical Trial.",
"abstract": "Importance\nTumor-treating fields (TTFields) is an antimitotic treatment modality that interferes with glioblastoma cell division and organelle assembly by delivering low-intensity alternating electric fields to the tumor.\n\n\nObjective\nTo investigate whether TTFields improves progression-free and overall survival of patients with glioblastoma, a fatal disease that commonly recurs at the initial tumor site or in the central nervous system.\n\n\nDesign, Setting, and Participants\nIn this randomized, open-label trial, 695 patients with glioblastoma whose tumor was resected or biopsied and had completed concomitant radiochemotherapy (median time from diagnosis to randomization, 3.8 months) were enrolled at 83 centers (July 2009-2014) and followed up through December 2016. A preliminary report from this trial was published in 2015; this report describes the final analysis.\n\n\nInterventions\nPatients were randomized 2:1 to TTFields plus maintenance temozolomide chemotherapy (n = 466) or temozolomide alone (n = 229). The TTFields, consisting of low-intensity, 200 kHz frequency, alternating electric fields, was delivered (≥ 18 hours/d) via 4 transducer arrays on the shaved scalp and connected to a portable device. Temozolomide was administered to both groups (150-200 mg/m2) for 5 days per 28-day cycle (6-12 cycles).\n\n\nMain Outcomes and Measures\nProgression-free survival (tested at α = .046). The secondary end point was overall survival (tested hierarchically at α = .048). Analyses were performed for the intent-to-treat population. Adverse events were compared by group.\n\n\nResults\nOf the 695 randomized patients (median age, 56 years; IQR, 48-63; 473 men [68%]), 637 (92%) completed the trial. Median progression-free survival from randomization was 6.7 months in the TTFields-temozolomide group and 4.0 months in the temozolomide-alone group (HR, 0.63; 95% CI, 0.52-0.76; P < .001). Median overall survival was 20.9 months in the TTFields-temozolomide group vs 16.0 months in the temozolomide-alone group (HR, 0.63; 95% CI, 0.53-0.76; P < .001). Systemic adverse event frequency was 48% in the TTFields-temozolomide group and 44% in the temozolomide-alone group. Mild to moderate skin toxicity underneath the transducer arrays occurred in 52% of patients who received TTFields-temozolomide vs no patients who received temozolomide alone.\n\n\nConclusions and Relevance\nIn the final analysis of this randomized clinical trial of patients with glioblastoma who had received standard radiochemotherapy, the addition of TTFields to maintenance temozolomide chemotherapy vs maintenance temozolomide alone, resulted in statistically significant improvement in progression-free survival and overall survival. These results are consistent with the previous interim analysis.\n\n\nTrial Registration\nclinicaltrials.gov Identifier: NCT00916409."
},
{
"pmid": "28982791",
"title": "Radiomics in Brain Tumor: Image Assessment, Quantitative Feature Descriptors, and Machine-Learning Approaches.",
"abstract": "Radiomics describes a broad set of computational methods that extract quantitative features from radiographic images. The resulting features can be used to inform imaging diagnosis, prognosis, and therapy response in oncology. However, major challenges remain for methodologic developments to optimize feature extraction and provide rapid information flow in clinical settings. Equally important, to be clinically useful, predictive radiomic properties must be clearly linked to meaningful biologic characteristics and qualitative imaging properties familiar to radiologists. Here we use a cross-disciplinary approach to highlight studies in radiomics. We review brain tumor radiologic studies (eg, imaging interpretation) through computational models (eg, computer vision and machine learning) that provide novel clinical insights. We outline current quantitative image feature extraction and prediction strategies with different levels of available clinical classes for supporting clinical decision-making. We further discuss machine-learning challenges and data opportunities to advance radiomic studies."
},
{
"pmid": "30195984",
"title": "Deep convolutional neural networks for brain image analysis on magnetic resonance imaging: a review.",
"abstract": "In recent years, deep convolutional neural networks (CNNs) have shown record-shattering performance in a variety of computer vision problems, such as visual object recognition, detection and segmentation. These methods have also been utilised in medical image analysis domain for lesion segmentation, anatomical segmentation and classification. We present an extensive literature review of CNN techniques applied in brain magnetic resonance imaging (MRI) analysis, focusing on the architectures, pre-processing, data-preparation and post-processing strategies available in these works. The aim of this study is three-fold. Our primary goal is to report how different CNN architectures have evolved, discuss state-of-the-art strategies, condense their results obtained using public datasets and examine their pros and cons. Second, this paper is intended to be a detailed reference of the research activity in deep CNN for brain MRI analysis. Finally, we present a perspective on the future of CNNs in which we hint some of the research directions in subsequent years."
},
{
"pmid": "21339920",
"title": "Prognostic factors for long-term survival after glioblastoma.",
"abstract": "Long-term survivors of glioblastoma (GB) are rare. Several variables besides tumor size and location determine a patient's survival chances: age at diagnosis, where younger patients often receive more aggressive treatment that is multimodal; functional status, which has a significant negative correlation with age; and histologic and genetic markers."
},
{
"pmid": "31637430",
"title": "A novel fully automated MRI-based deep-learning method for classification of IDH mutation status in brain gliomas.",
"abstract": "BACKGROUND\nIsocitrate dehydrogenase (IDH) mutation status has emerged as an important prognostic marker in gliomas. Currently, reliable IDH mutation determination requires invasive surgical procedures. The purpose of this study was to develop a highly accurate, MRI-based, voxelwise deep-learning IDH classification network using T2-weighted (T2w) MR images and compare its performance to a multicontrast network.\n\n\nMETHODS\nMultiparametric brain MRI data and corresponding genomic information were obtained for 214 subjects (94 IDH-mutated, 120 IDH wild-type) from The Cancer Imaging Archive and The Cancer Genome Atlas. Two separate networks were developed, including a T2w image-only network (T2-net) and a multicontrast (T2w, fluid attenuated inversion recovery, and T1 postcontrast) network (TS-net) to perform IDH classification and simultaneous single label tumor segmentation. The networks were trained using 3D Dense-UNets. Three-fold cross-validation was performed to generalize the networks' performance. Receiver operating characteristic analysis was also performed. Dice scores were computed to determine tumor segmentation accuracy.\n\n\nRESULTS\nT2-net demonstrated a mean cross-validation accuracy of 97.14% ± 0.04 in predicting IDH mutation status, with a sensitivity of 0.97 ± 0.03, specificity of 0.98 ± 0.01, and an area under the curve (AUC) of 0.98 ± 0.01. TS-net achieved a mean cross-validation accuracy of 97.12% ± 0.09, with a sensitivity of 0.98 ± 0.02, specificity of 0.97 ± 0.001, and an AUC of 0.99 ± 0.01. The mean whole tumor segmentation Dice scores were 0.85 ± 0.009 for T2-net and 0.89 ± 0.006 for TS-net.\n\n\nCONCLUSION\nWe demonstrate high IDH classification accuracy using only T2-weighted MR images. This represents an important milestone toward clinical translation."
},
{
"pmid": "32116623",
"title": "Segmenting Brain Tumor Using Cascaded V-Nets in Multimodal MR Images.",
"abstract": "In this work, we propose a novel cascaded V-Nets method to segment brain tumor substructures in multimodal brain magnetic resonance imaging. Although V-Net has been successfully used in many segmentation tasks, we demonstrate that its performance could be further enhanced by using a cascaded structure and ensemble strategy. Briefly, our baseline V-Net consists of four levels with encoding and decoding paths and intra- and inter-path skip connections. Focal loss is chosen to improve performance on hard samples as well as balance the positive and negative samples. We further propose three preprocessing pipelines for multimodal magnetic resonance images to train different models. By ensembling the segmentation probability maps obtained from these models, segmentation result is further improved. In other hand, we propose to segment the whole tumor first, and then divide it into tumor necrosis, edema, and enhancing tumor. Experimental results on BraTS 2018 online validation set achieve average Dice scores of 0.9048, 0.8364, and 0.7748 for whole tumor, tumor core and enhancing tumor, respectively. The corresponding values for BraTS 2018 online testing set are 0.8761, 0.7953, and 0.7364, respectively. We also evaluate the proposed method in two additional data sets from local hospitals comprising of 28 and 28 subjects, and the best results are 0.8635, 0.8036, and 0.7217, respectively. We further make a prediction of patient overall survival by ensembling multiple classifiers for long, mid and short groups, and achieve accuracy of 0.519, mean square error of 367240 and Spearman correlation coefficient of 0.168 for BraTS 2018 online testing set."
},
{
"pmid": "31980106",
"title": "Fully-automated deep learning-powered system for DCE-MRI analysis of brain tumors.",
"abstract": "Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays an important role in diagnosis and grading of brain tumors. Although manual DCE biomarker extraction algorithms boost the diagnostic yield of DCE-MRI by providing quantitative information on tumor prognosis and prediction, they are time-consuming and prone to human errors. In this paper, we propose a fully-automated, end-to-end system for DCE-MRI analysis of brain tumors. Our deep learning-powered technique does not require any user interaction, it yields reproducible results, and it is rigorously validated against benchmark and clinical data. Also, we introduce a cubic model of the vascular input function used for pharmacokinetic modeling which significantly decreases the fitting error when compared with the state of the art, alongside a real-time algorithm for determination of the vascular input region. An extensive experimental study, backed up with statistical tests, showed that our system delivers state-of-the-art results while requiring less than 3 min to process an entire input DCE-MRI study using a single GPU."
},
{
"pmid": "25494501",
"title": "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).",
"abstract": "In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients-manually annotated by up to four raters-and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource."
},
{
"pmid": "27326665",
"title": "Radiomic Profiling of Glioblastoma: Identifying an Imaging Predictor of Patient Survival with Improved Performance over Established Clinical and Radiologic Risk Models.",
"abstract": "Purpose To evaluate whether radiomic feature-based magnetic resonance (MR) imaging signatures allow prediction of survival and stratification of patients with newly diagnosed glioblastoma with improved accuracy compared with that of established clinical and radiologic risk models. Materials and Methods Retrospective evaluation of data was approved by the local ethics committee and informed consent was waived. A total of 119 patients (allocated in a 2:1 ratio to a discovery [n = 79] or validation [n = 40] set) with newly diagnosed glioblastoma were subjected to radiomic feature extraction (12 190 features extracted, including first-order, volume, shape, and texture features) from the multiparametric (contrast material-enhanced T1-weighted and fluid-attenuated inversion-recovery imaging sequences) and multiregional (contrast-enhanced and unenhanced) tumor volumes. Radiomic features of patients in the discovery set were subjected to a supervised principal component (SPC) analysis to predict progression-free survival (PFS) and overall survival (OS) and were validated in the validation set. The performance of a Cox proportional hazards model with the SPC analysis predictor was assessed with C index and integrated Brier scores (IBS, lower scores indicating higher accuracy) and compared with Cox models based on clinical (age and Karnofsky performance score) and radiologic (Gaussian normalized relative cerebral blood volume and apparent diffusion coefficient) parameters. Results SPC analysis allowed stratification based on 11 features of patients in the discovery set into a low- or high-risk group for PFS (hazard ratio [HR], 2.43; P = .002) and OS (HR, 4.33; P < .001), and the results were validated successfully in the validation set for PFS (HR, 2.28; P = .032) and OS (HR, 3.45; P = .004). The performance of the SPC analysis (OS: IBS, 0.149; C index, 0.654; PFS: IBS, 0.138; C index, 0.611) was higher compared with that of the radiologic (OS: IBS, 0.175; C index, 0.603; PFS: IBS, 0.149; C index, 0.554) and clinical risk models (OS: IBS, 0.161, C index, 0.640; PFS: IBS, 0.139; C index, 0.599). The performance of the SPC analysis model was further improved when combined with clinical data (OS: IBS, 0.142; C index, 0.696; PFS: IBS, 0.132; C index, 0.637). Conclusion An 11-feature radiomic signature that allows prediction of survival and stratification of patients with newly diagnosed glioblastoma was identified, and improved performance compared with that of established clinical and radiologic risk models was demonstrated. (©) RSNA, 2016 Online supplemental material is available for this article."
},
{
"pmid": "29531073",
"title": "Predicting cancer outcomes from histology and genomics using convolutional networks.",
"abstract": "Cancer histology reflects underlying molecular processes and disease progression and contains rich phenotypic information that is predictive of patient outcomes. In this study, we show a computational approach for learning patient outcomes from digital pathology images using deep learning to combine the power of adaptive machine learning algorithms with traditional survival models. We illustrate how these survival convolutional neural networks (SCNNs) can integrate information from both histology images and genomic biomarkers into a single unified framework to predict time-to-event outcomes and show prediction accuracy that surpasses the current clinical paradigm for predicting the overall survival of patients diagnosed with glioma. We use statistical sampling techniques to address challenges in learning survival from histology images, including tumor heterogeneity and the need for large training cohorts. We also provide insights into the prediction mechanisms of SCNNs, using heat map visualization to show that SCNNs recognize important structures, like microvascular proliferation, that are related to prognosis and that are used by pathologists in grading. These results highlight the emerging role of deep learning in precision medicine and suggest an expanding utility for computational analysis of histology in the future practice of pathology."
},
{
"pmid": "30277442",
"title": "Radiomic MRI Phenotyping of Glioblastoma: Improving Survival Prediction.",
"abstract": "Purpose To investigate whether radiomic features at MRI improve survival prediction in patients with glioblastoma multiforme (GBM) when they are integrated with clinical and genetic profiles. Materials and Methods Data in patients with a diagnosis of GBM between December 2009 and January 2017 (217 patients) were retrospectively reviewed up to May 2017 and allocated to training and test sets (3:1 ratio). Radiomic features (n = 796) were extracted from multiparametric MRI. A random survival forest (RSF) model was trained with the radiomic features along with clinical and genetic profiles (O-6-methylguanine-DNA-methyltransferase promoter methylation and isocitrate dehydrogenase 1 mutation statuses) to predict overall survival (OS) and progression-free survival (PFS). The RSF models were validated on the test set. The incremental values of radiomic features were evaluated by using the integrated area under the receiver operating characteristic curve (iAUC). Results The 217 patients had a mean age of 57.9 years, and there were 87 female patients (age range, 22-81 years) and 130 male patients (age range, 17-85 years). The median OS and PFS of patients were 352 days (range, 20-1809 days) and 264 days (range, 21-1809 days), respectively. The RSF radiomics models were successfully validated on the test set (iAUC, 0.652 [95% confidence interval {CI}, 0.524, 0.769] and 0.590 [95% CI: 0.502, 0.689] for OS and PFS, respectively). The addition of a radiomics model to clinical and genetic profiles improved survival prediction when compared with models containing clinical and genetic profiles alone (P = .04 and .03 for OS and PFS, respectively). Conclusion Radiomic MRI phenotyping can improve survival prediction when integrated with clinical and genetic profiles and thus has potential as a practical imaging biomarker. © RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Jain and Lui in this issue."
},
{
"pmid": "29993848",
"title": "Novel Radiomic Features Based on Joint Intensity Matrices for Predicting Glioblastoma Patient Survival Time.",
"abstract": "This paper presents a novel set of image texture features generalizing standard grey-level co-occurrence matrices (GLCM) to multimodal image data through joint intensity matrices (JIMs). These are used to predict the survival of glioblastoma multiforme (GBM) patients from multimodal MRI data. The scans of 73 GBM patients from the Cancer Imaging Archive are used in our study. Necrosis, active tumor, and edema/invasion subregions of GBM phenotypes are segmented using the coregistration of contrast-enhanced T1-weighted (CE-T1) images and its corresponding fluid-attenuated inversion recovery (FLAIR) images. Texture features are then computed from the JIM of these GBM subregions and a random forest model is employed to classify patients into short or long survival groups. Our survival analysis identified JIM features in necrotic (e.g., entropy and inverse-variance) and edema (e.g., entropy and contrast) subregions that are moderately correlated with survival time (i.e., Spearman rank correlation of 0.35). Moreover, nine features were found to be associated with GBM survival with a Hazard-ratio range of 0.38-2.1 and a significance level of p < 0.05 following Holm-Bonferroni correction. These features also led to the highest accuracy in a univariate analysis for predicting the survival group of patients, with AUC values in the range of 68-70%. Considering multiple features for this task, JIM features led to significantly higher AUC values than those based on standard GLCMs and gene expression. Furthermore, an AUC of 77.56% with p = 0.003 was achieved when combining JIM, GLCM, and gene expression features into a single radiogenomic signature. In summary, our study demonstrated the usefulness of modeling the joint intensity characteristics of CE-T1 and FLAIR images for predicting the prognosis of patients with GBM."
},
{
"pmid": "34296969",
"title": "Combining MRI and Histologic Imaging Features for Predicting Overall Survival in Patients with Glioma.",
"abstract": "Purpose To test the hypothesis that combined features from MR and digital histopathologic images more accurately predict overall survival (OS) in patients with glioma compared with MRI or histopathologic features alone. Materials and Methods Multiparametric MR and histopathologic images in patients with a diagnosis of glioma (high- or low-grade glioma [HGG or LGG]) were obtained from The Cancer Imaging Archive (original images acquired 1983-2008). An extensive set of engineered features such as intensity, histogram, and texture were extracted from delineated tumor regions in MR and histopathologic images. Cox proportional hazard regression and support vector machine classification (SVC) models were applied to (a) MRI features only (MRIcox/svc), histopathologic features only (HistoPathcox/svc), and (c) combined MRI and histopathologic features (MRI+HistoPathcox/svc) and evaluated in a split train-test configuration. Results A total of 171 patients (mean age, 51 years ± 15; 91 men) were included with HGG (n = 75) and LGG (n = 96). Median OS was 467 days (range, 3-4752 days) for all patients, 350 days (range, 15-1561 days) for HGG, and 595 days (range, 3-4752 days) for LGG. The MRI+HistoPathcox model demonstrated higher concordance index (C-index) compared with MRIcox and HistoPathcox models on all patients (C-index, 0.79 vs 0.70 [P = .02; MRIcox] and 0.67 [P = .01; HistoPathcox]), patients with HGG (C-index, 0.78 vs 0.68 [P = .03; MRIcox] and 0.64 [P = .01; HistoPathcox]), and patients with LGG (C-index, 0.88 vs 0.62 [P = .008; MRIcox] and 0.62 [P = .006; HistoPathcox]). In binary classification, the MRI+HistoPathsvc model (area under the receiver operating characteristic curve [AUC], 0.86 [95% CI: 0.80, 0.95]) had higher performance than the MRIsvc model (AUC, 0.68 [95% CI: 0.50, 0.81]; P = .01) and the HistoPathsvc model (AUC, 0.72 [95% CI: 0.60, 0.85]; P = .04). Conclusion The model combining features from MR and histopathologic images had higher accuracy in predicting OS compared with the models with MR or histopathologic images alone. Keywords: Survival Prediction, Gliomas, Digital Pathology Imaging, MR Imaging, Machine Learning Supplemental material is available for this article."
},
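The entry above evaluates Cox proportional hazards models with the concordance index. The sketch below illustrates that general workflow with the lifelines library; the dependency, the column names, and the toy data are assumptions of this sketch, not the study's actual features or software.

```python
# Minimal sketch: Cox proportional hazards survival model plus concordance index,
# in the spirit of the feature comparison described in the abstract above.
# The lifelines dependency and the column names below are assumptions of this sketch.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Hypothetical table: two imaging features, overall survival in days, and an event flag.
df = pd.DataFrame({
    "feature_mri":   [0.2, 1.4, 0.7, 2.1, 0.9, 1.8, 0.3, 1.1],
    "feature_histo": [1.0, 0.3, 0.8, 0.2, 1.5, 0.4, 1.2, 0.6],
    "os_days":       [350, 467, 595, 120, 800, 230, 640, 410],
    "event":         [1,   1,   0,   1,   0,   1,   1,   0],   # 1 = death observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_days", event_col="event")

# Higher partial hazard means higher predicted risk, so negate it for the C-index,
# which expects larger scores to indicate longer survival.
risk = cph.predict_partial_hazard(df)
c_index = concordance_index(df["os_days"], -risk, df["event"])
print(f"C-index: {c_index:.2f}")
```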
{
"pmid": "26188015",
"title": "Imaging patterns predict patient survival and molecular subtype in glioblastoma via machine learning techniques.",
"abstract": "BACKGROUND\nMRI characteristics of brain gliomas have been used to predict clinical outcome and molecular tumor characteristics. However, previously reported imaging biomarkers have not been sufficiently accurate or reproducible to enter routine clinical practice and often rely on relatively simple MRI measures. The current study leverages advanced image analysis and machine learning algorithms to identify complex and reproducible imaging patterns predictive of overall survival and molecular subtype in glioblastoma (GB).\n\n\nMETHODS\nOne hundred five patients with GB were first used to extract approximately 60 diverse features from preoperative multiparametric MRIs. These imaging features were used by a machine learning algorithm to derive imaging predictors of patient survival and molecular subtype. Cross-validation ensured generalizability of these predictors to new patients. Subsequently, the predictors were evaluated in a prospective cohort of 29 new patients.\n\n\nRESULTS\nSurvival curves yielded a hazard ratio of 10.64 for predicted long versus short survivors. The overall, 3-way (long/medium/short survival) accuracy in the prospective cohort approached 80%. Classification of patients into the 4 molecular subtypes of GB achieved 76% accuracy.\n\n\nCONCLUSIONS\nBy employing machine learning techniques, we were able to demonstrate that imaging patterns are highly predictive of patient survival. Additionally, we found that GB subtypes have distinctive imaging phenotypes. These results reveal that when imaging markers related to infiltration, cell density, microvascularity, and blood-brain barrier compromise are integrated via advanced pattern analysis methods, they form very accurate predictive biomarkers. These predictive markers used solely preoperative images, hence they can significantly augment diagnosis and treatment of GB patients."
},
{
"pmid": "19269895",
"title": "Effects of radiotherapy with concomitant and adjuvant temozolomide versus radiotherapy alone on survival in glioblastoma in a randomised phase III study: 5-year analysis of the EORTC-NCIC trial.",
"abstract": "BACKGROUND\nIn 2004, a randomised phase III trial by the European Organisation for Research and Treatment of Cancer (EORTC) and National Cancer Institute of Canada Clinical Trials Group (NCIC) reported improved median and 2-year survival for patients with glioblastoma treated with concomitant and adjuvant temozolomide and radiotherapy. We report the final results with a median follow-up of more than 5 years.\n\n\nMETHODS\nAdult patients with newly diagnosed glioblastoma were randomly assigned to receive either standard radiotherapy or identical radiotherapy with concomitant temozolomide followed by up to six cycles of adjuvant temozolomide. The methylation status of the methyl-guanine methyl transferase gene, MGMT, was determined retrospectively from the tumour tissue of 206 patients. The primary endpoint was overall survival. Analyses were by intention to treat. This trial is registered with Clinicaltrials.gov, number NCT00006353.\n\n\nFINDINGS\nBetween Aug 17, 2000, and March 22, 2002, 573 patients were assigned to treatment. 278 (97%) of 286 patients in the radiotherapy alone group and 254 (89%) of 287 in the combined-treatment group died during 5 years of follow-up. Overall survival was 27.2% (95% CI 22.2-32.5) at 2 years, 16.0% (12.0-20.6) at 3 years, 12.1% (8.5-16.4) at 4 years, and 9.8% (6.4-14.0) at 5 years with temozolomide, versus 10.9% (7.6-14.8), 4.4% (2.4-7.2), 3.0% (1.4-5.7), and 1.9% (0.6-4.4) with radiotherapy alone (hazard ratio 0.6, 95% CI 0.5-0.7; p<0.0001). A benefit of combined therapy was recorded in all clinical prognostic subgroups, including patients aged 60-70 years. Methylation of the MGMT promoter was the strongest predictor for outcome and benefit from temozolomide chemotherapy.\n\n\nINTERPRETATION\nBenefits of adjuvant temozolomide with radiotherapy lasted throughout 5 years of follow-up. A few patients in favourable prognostic categories survive longer than 5 years. MGMT methylation status identifies patients most likely to benefit from the addition of temozolomide.\n\n\nFUNDING\nEORTC, NCIC, Nélia and Amadeo Barletta Foundation, Schering-Plough."
},
{
"pmid": "15279715",
"title": "Prognostic factors for survival of patients with glioblastoma: recursive partitioning analysis.",
"abstract": "Survival for patients with glioblastoma multiforme is short, and current treatments provide limited benefit. Therefore, there is interest in conducting phase 2 trials of experimental treatments in newly diagnosed patients. However, this requires historical data with which to compare the experimental therapies. Knowledge of prognostic markers would also allow stratification into risk groups for phase 3 randomized trials. In this retrospective study of 832 glioblastoma multiforme patients enrolled into prospective clinical trials at the time of initial diagnosis, we evaluated several potential prognostic markers for survival to establish risk groups. Analyses were done using both Cox proportional hazards modeling and recursive partitioning analyses. Initially, patients from 8 clinical trials, 6 of which included adjuvant chemotherapy, were included. Subsequent analyses excluded trials with interstitial brachytherapy, and finally included only nonbrachytherapy trials with planned adjuvant chemotherapy. The initial analysis defined 4 risk groups. The 2 lower risk groups included patients under the age of 40, the lowest risk group being young patients with tumor in the frontal lobe only. An intermediate-risk group included patients with Karnofsky performance status (KPS) >70, subtotal or total resection, and age between 40 and 65. The highest risk group included all patients over 65 and patients between 40 and 65 with either KPS<80 or biopsy only. Subgroup analyses indicated that inclusion of adjuvant chemotherapy provides an increase in survival, although that improvement tends to be minimal for patients over age 65, for patients over age 40 with KPS less than 80, and for those treated with brachytherapy."
},
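The recursive partitioning analysis (RPA) described above is, at its core, a tree-based stratification into risk groups. The sketch below shows the same idea with a shallow scikit-learn decision tree on hypothetical age/KPS/resection variables; it is an illustration of the concept only, not the study's data or its exact RPA procedure.

```python
# Minimal sketch: tree-based risk grouping in the spirit of recursive partitioning
# analysis (RPA). The variables and labels below are hypothetical illustrations.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Columns: age (years), KPS score, resection performed (1) vs biopsy only (0).
X = np.array([
    [35, 90, 1], [38, 80, 1], [45, 90, 1], [55, 70, 1],
    [62, 80, 0], [68, 60, 0], [72, 50, 0], [75, 60, 1],
])
# 0 = longer-survival group, 1 = shorter-survival group (illustrative labels).
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["age", "kps", "resection"]))
```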
{
"pmid": "21548745",
"title": "Treatment outcomes for patients with glioblastoma multiforme and a low Karnofsky Performance Scale score on presentation to a tertiary care institution. Clinical article.",
"abstract": "OBJECT\nThe object of this study was to determine the benefit of surgery, radiation, and chemotherapy for patients with glioblastoma multiforme (GBM) and a low Karnofsky Performance Scale (KPS) score.\n\n\nMETHODS\nThe authors retrospectively evaluated the records of patients who underwent primary treatment for pathologically confirmed GBM and with a KPS score ≤ 50 on initial evaluation for radiation therapy at a tertiary care institution between 1977 and 2006. Seventy-four patients with a median age of 69 years (range 19-88 years) and a median KPS score of 50 (range 20-50) were retrospectively grouped into the Radiation Therapy Oncology Group (RTOG) recursive partitioning analysis (RPA) Classes IV (11 patients), V (15 patients), and VI (48 patients). Patients underwent biopsy (38 patients) or tumor resection (36 patients). Forty-seven patients received radiation. Nineteen patients also received chemotherapy (53% temozolomide), initiated concurrently (47%) or after radiotherapy.\n\n\nRESULTS\nThe median survival overall was 2.3 months (range 0.2-48 months). Median survival stratified by RPA Classes IV, V, and VI was 6.6, 6.6, and 1.8 months, respectively (p < 0.001, log-rank test). Median survival for patients receiving radiation (5.2 months) was greater than that for patients who declined radiation (1.6 months, p < 0.001). Patients in RPA Class VI appeared to benefit from radiotherapy only when tumor resection was also performed. The median survival from treatment initiation was greater for patients receiving chemotherapy concomitantly with radiotherapy (9.8 months) as compared with radiotherapy alone (1.7 months, p = 0.002). Of 20 patients seen for follow-up in the clinic at a median of 48 days (range 24-196 days) following radiotherapy, 70% were noted to have an improvement in the KPS score of between 10 and 30 points from the baseline score. On multivariate analysis, only RPA class (p = 0.01), resection (HR = 0.37, p = 0.001), and radiation therapy (HR = 0.39, p = 0.02) were significant predictors of a decreased mortality rate.\n\n\nCONCLUSIONS\nPatients with a KPS score ≤ 50 appear to have increased survival and functional status following tumor resection and radiation. The extent of benefit from concomitant chemotherapy is unclear. Future studies may benefit from reporting that utilizes a prognostic classification system such as the RTOG RPA class, which has been shown to be effective at separating outcomes even in patients with low performance status. Patients with GBMs and low KPS scores need to be evaluated in prospective studies to identify the extent to which different therapies improve outcomes."
},
{
"pmid": "21088844",
"title": "Patients with IDH1 wild type anaplastic astrocytomas exhibit worse prognosis than IDH1-mutated glioblastomas, and IDH1 mutation status accounts for the unfavorable prognostic effect of higher age: implications for classification of gliomas.",
"abstract": "WHO grading of human brain tumors extends beyond a strictly histological grading system by providing a basis predictive for the clinical behavior of the respective neoplasm. For example, patients with glioblastoma WHO grade IV usually show a less favorable clinical course and receive more aggressive first-line treatment than patients with anaplastic astrocytoma WHO grade III. Here we provide evidence that the IDH1 status is more prognostic for overall survival than standard histological criteria that differentiate high-grade astrocytomas. We sequenced the isocitrate dehydrogenase 1 gene (IDH1) at codon 132 in 382 patients with anaplastic astrocytoma and glioblastoma from the NOA-04 trial and from a prospective translational cohort study of the German Glioma Network. Patients with anaplastic astrocytomas carried IDH1 mutations in 60%, and patients with glioblastomas in 7.2%. IDH1 was the most prominent single prognostic factor (RR 2.7; 95% CI 1.6-4.5) followed by age, diagnosis and MGMT. The sequence from more favorable to poorer outcome was (1) anaplastic astrocytoma with IDH1 mutation, (2) glioblastoma with IDH1 mutation, (3) anaplastic astrocytoma without IDH1 mutation and (4) glioblastoma without IDH1 mutation (p < 0.0001). In this combined set of anaplastic astrocytomas and glioblastomas both, IDH1 mutation and IDH1 expression status were of greater prognostic relevance than histological diagnosis according to the current WHO classification system. Our data indicate that much of the prognostic significance of patient age is due to the predominant occurrence of IDH1 mutations in younger patients. Immunohistochemistry using a mutation-specific antibody recognizing the R132H mutation yielded similar results. We propose to complement the current WHO classification and grading of high-grade astrocytic gliomas by the IDH1 mutation status and to use this combined histological and molecular classification in future clinical trials."
},
{
"pmid": "27686946",
"title": "Temozolomide chemotherapy versus radiotherapy in high-risk low-grade glioma (EORTC 22033-26033): a randomised, open-label, phase 3 intergroup study.",
"abstract": "BACKGROUND\nOutcome of low-grade glioma (WHO grade II) is highly variable, reflecting molecular heterogeneity of the disease. We compared two different, single-modality treatment strategies of standard radiotherapy versus primary temozolomide chemotherapy in patients with low-grade glioma, and assessed progression-free survival outcomes and identified predictive molecular factors.\n\n\nMETHODS\nFor this randomised, open-label, phase 3 intergroup study (EORTC 22033-26033), undertaken in 78 clinical centres in 19 countries, we included patients aged 18 years or older who had a low-grade (WHO grade II) glioma (astrocytoma, oligoastrocytoma, or oligodendroglioma) with at least one high-risk feature (aged >40 years, progressive disease, tumour size >5 cm, tumour crossing the midline, or neurological symptoms), and without known HIV infection, chronic hepatitis B or C virus infection, or any condition that could interfere with oral drug administration. Eligible patients were randomly assigned (1:1) to receive either conformal radiotherapy (up to 50·4 Gy; 28 doses of 1·8 Gy once daily, 5 days per week for up to 6·5 weeks) or dose-dense oral temozolomide (75 mg/m2 once daily for 21 days, repeated every 28 days [one cycle], for a maximum of 12 cycles). Random treatment allocation was done online by a minimisation technique with prospective stratification by institution, 1p deletion (absent vs present vs undetermined), contrast enhancement (yes vs no), age (<40 vs ≥40 years), and WHO performance status (0 vs ≥1). Patients, treating physicians, and researchers were aware of the assigned intervention. A planned analysis was done after 216 progression events occurred. Our primary clinical endpoint was progression-free survival, analysed by intention-to-treat; secondary outcomes were overall survival, adverse events, neurocognitive function (will be reported separately), health-related quality of life and neurological function (reported separately), and correlative analyses of progression-free survival by molecular markers (1p/19q co-deletion, MGMT promoter methylation status, and IDH1/IDH2 mutations). This trial is closed to accrual but continuing for follow-up, and is registered at the European Trials Registry, EudraCT 2004-002714-11, and at ClinicalTrials.gov, NCT00182819.\n\n\nFINDINGS\nBetween Sept 23, 2005, and March 26, 2010, 707 patients were registered for the study. Between Dec 6, 2005, and Dec 21, 2012, we randomly assigned 477 patients to receive either radiotherapy (n=240) or temozolomide chemotherapy (n=237). At a median follow-up of 48 months (IQR 31-56), median progression-free survival was 39 months (95% CI 35-44) in the temozolomide group and 46 months (40-56) in the radiotherapy group (unadjusted hazard ratio [HR] 1·16, 95% CI 0·9-1·5, p=0·22). Median overall survival has not been reached. Exploratory analyses in 318 molecularly-defined patients confirmed the significantly different prognosis for progression-free survival in the three recently defined molecular low-grade glioma subgroups (IDHmt, with or without 1p/19q co-deletion [IDHmt/codel], or IDH wild type [IDHwt]; p=0·013). Patients with IDHmt/non-codel tumours treated with radiotherapy had a longer progression-free survival than those treated with temozolomide (HR 1·86 [95% CI 1·21-2·87], log-rank p=0·0043), whereas there were no significant treatment-dependent differences in progression-free survival for patients with IDHmt/codel and IDHwt tumours. 
Grade 3-4 haematological adverse events occurred in 32 (14%) of 236 patients treated with temozolomide and in one (<1%) of 228 patients treated with radiotherapy, and grade 3-4 infections occurred in eight (3%) of 236 patients treated with temozolomide and in two (1%) of 228 patients treated with radiotherapy. Moderate to severe fatigue was recorded in eight (3%) patients in the radiotherapy group (grade 2) and 16 (7%) in the temozolomide group. 119 (25%) of all 477 patients had died at database lock. Four patients died due to treatment-related causes: two in the temozolomide group and two in the radiotherapy group.\n\n\nINTERPRETATION\nOverall, there was no significant difference in progression-free survival in patients with low-grade glioma when treated with either radiotherapy alone or temozolomide chemotherapy alone. Further data maturation is needed for overall survival analyses and evaluation of the full predictive effects of different molecular subtypes for future individualised treatment choices.\n\n\nFUNDING\nMerck Sharpe & Dohme-Merck & Co, Canadian Cancer Society, Swiss Cancer League, UK National Institutes of Health, Australian National Health and Medical Research Council, US National Cancer Institute, European Organisation for Research and Treatment of Cancer Cancer Research Fund."
},
{
"pmid": "15758010",
"title": "MGMT gene silencing and benefit from temozolomide in glioblastoma.",
"abstract": "BACKGROUND\nEpigenetic silencing of the MGMT (O6-methylguanine-DNA methyltransferase) DNA-repair gene by promoter methylation compromises DNA repair and has been associated with longer survival in patients with glioblastoma who receive alkylating agents.\n\n\nMETHODS\nWe tested the relationship between MGMT silencing in the tumor and the survival of patients who were enrolled in a randomized trial comparing radiotherapy alone with radiotherapy combined with concomitant and adjuvant treatment with temozolomide. The methylation status of the MGMT promoter was determined by methylation-specific polymerase-chain-reaction analysis.\n\n\nRESULTS\nThe MGMT promoter was methylated in 45 percent of 206 assessable cases. Irrespective of treatment, MGMT promoter methylation was an independent favorable prognostic factor (P<0.001 by the log-rank test; hazard ratio, 0.45; 95 percent confidence interval, 0.32 to 0.61). Among patients whose tumor contained a methylated MGMT promoter, a survival benefit was observed in patients treated with temozolomide and radiotherapy; their median survival was 21.7 months (95 percent confidence interval, 17.4 to 30.4), as compared with 15.3 months (95 percent confidence interval, 13.0 to 20.9) among those who were assigned to only radiotherapy (P=0.007 by the log-rank test). In the absence of methylation of the MGMT promoter, there was a smaller and statistically insignificant difference in survival between the treatment groups.\n\n\nCONCLUSIONS\nPatients with glioblastoma containing a methylated MGMT promoter benefited from temozolomide, whereas those who did not have a methylated MGMT promoter did not have such a benefit."
},
{
"pmid": "30412261",
"title": "Updates in prognostic markers for gliomas.",
"abstract": "Gliomas are the most common primary malignant brain tumor in adults. The traditional classification of gliomas has been based on histologic features and tumor grade. The advent of sophisticated molecular diagnostic techniques has led to a deeper understanding of genomic drivers implicated in gliomagenesis, some of which have important prognostic implications. These advances have led to an extensive revision of the World Health Organization classification of diffuse gliomas to include molecular markers such as isocitrate dehydrogenase mutation, 1p/19q codeletion, and histone mutations as integral components of brain tumor classification. Here, we report a comprehensive analysis of molecular prognostic factors for patients with gliomas, including those mentioned above, but also extending to others such as telomerase reverse transcriptase promoter mutations, O6-methylguanine-DNA methyltransferase promoter methylation, glioma cytosine-phosphate-guanine island methylator phenotype DNA methylation, and epidermal growth factor receptor alterations."
},
{
"pmid": "29660006",
"title": "Validation of postoperative residual contrast-enhancing tumor volume as an independent prognostic factor for overall survival in newly diagnosed glioblastoma.",
"abstract": "Background\nIn the current study, we pooled imaging data in newly diagnosed glioblastoma (GBM) patients from international multicenter clinical trials, single institution databases, and multicenter clinical trial consortiums to identify the relationship between postoperative residual enhancing tumor volume and overall survival (OS).\n\n\nMethods\nData from 1511 newly diagnosed GBM patients from 5 data sources were included in the current study: (i) a single institution database from UCLA (N = 398; Discovery); (ii) patients from the Ben and Cathy Ivy Foundation for Early Phase Clinical Trials Network Radiogenomics Database (N = 262 from 8 centers; Confirmation); (iii) the chemoradiation placebo arm from an international phase III trial (AVAglio; N = 394 from 120 locations in 23 countries; Validation); (iv) the experimental arm from AVAglio examining chemoradiation plus bevacizumab (N = 404 from 120 locations in 23 countries; Exploratory Set 1); and (v) an Alliance (N0874) phase I/II trial of vorinostat plus chemoradiation (N = 53; Exploratory Set 2). Postsurgical, residual enhancing disease was quantified using T1 subtraction maps. Multivariate Cox regression models were used to determine influence of clinical variables, O6-methylguanine-DNA methyltransferase (MGMT) status, and residual tumor volume on OS.\n\n\nResults\nA log-linear relationship was observed between postoperative, residual enhancing tumor volume and OS in newly diagnosed GBM treated with standard chemoradiation. Postoperative tumor volume is a prognostic factor for OS (P < 0.01), regardless of therapy, age, and MGMT promoter methylation status.\n\n\nConclusion\nPostsurgical, residual contrast-enhancing disease significantly negatively influences survival in patients with newly diagnosed GBM treated with chemoradiation with or without concomitant experimental therapy."
},
{
"pmid": "18847337",
"title": "Glioma surgery using a multimodal navigation system with integrated metabolic images.",
"abstract": "OBJECT\nA multimodal neuronavigation system using metabolic images with PET and anatomical images from MR images is described here for glioma surgery. The efficacy of the multimodal neuronavigation system was evaluated by comparing the results with that of the conventional navigation system, which routinely uses anatomical images from MR and CT imaging as guides.\n\n\nMETHODS\nThirty-three patients with cerebral glioma underwent 36 operations with the aid of either a multimodal or conventional navigation system. All of the patients were preliminarily examined using PET with l-methyl-[11C] methionine (MET) for surgical planning. Seventeen of the operations were performed with the multimodal navigation system by integrating the MET-PET images with anatomical MR images. The other 19 operations were performed using a conventional navigation system based solely on MR imaging.\n\n\nRESULTS\nThe multimodal navigation system proved to be more useful than the conventional navigation system in determining the area to be resected by providing a clearer tumor boundary, especially in cases of recurrent tumor that had lost a normal gyral pattern. The multimodal navigation system was therefore more effective than the conventional navigation system in decreasing the mass of the tumor remnant in the resectable portion. A multivariate regression analysis revealed that the multimodal navigation system-guided surgery benefited patient survival significantly more than the conventional navigation-guided surgery (p = 0.016, odds ratio 0.52 [95% confidence interval 0.29-0.88]).\n\n\nCONCLUSIONS\nThe authors' preliminary intrainstitutional comparison between the 2 navigation systems suggested the possible premise of multimodal navigation. The multimodal navigation system using MET-PET fusion imaging is an interesting technique that may prove to be valuable in the future."
},
{
"pmid": "22258713",
"title": "Quantitative volumetric analysis of gliomas with sequential MRI and ¹¹C-methionine PET assessment: patterns of integration in therapy planning.",
"abstract": "PURPOSE\nThe aim of the study was to evaluate the volumetric integration patterns of standard MRI and (11)C-methionine positron emission tomography (PET) images in the surgery planning of gliomas and their relationship to the histological grade.\n\n\nMETHODS\nWe studied 23 patients with suspected or previously treated glioma who underwent preoperative (11)C-methionine PET because MRI was imprecise in defining the surgical target contour. Images were transferred to the treatment planning system, coregistered and fused (BrainLAB). Tumour delineation was performed by (11)C-methionine PET thresholding (vPET) and manual segmentation over MRI (vMRI). A 3-D volumetric study was conducted to evaluate the contribution of each modality to tumour target volume. All cases were surgically treated and histological classification was performed according to WHO grades. Additionally, several biopsy samples were taken according to the results derived either from PET or from MRI and analysed separately.\n\n\nRESULTS\nFifteen patients had high-grade tumours [ten glioblastoma multiforme (GBM) and five anaplastic), whereas eight patients had low-grade tumours. Biopsies from areas with high (11)C-methionine uptake without correspondence in MRI showed tumour proliferation, including infiltrative zones, distinguishing them from dysplasia and radionecrosis. Two main PET/MRI integration patterns emerged after analysis of volumetric data: pattern vMRI-in-vPET (11/23) and pattern vPET-in-vMRI (9/23). Besides, a possible third pattern with differences in both directions (vMRI-diff-vPET) could also be observed (3/23). There was a statistically significant association between the tumour classification and integration patterns described above (p < 0.001, κ = 0.72). GBM was associated with pattern vMRI-in-vPET (9/10), low-grade with pattern vPET-in-vMRI (7/8) and anaplastic with pattern vMRI-diff-vPET (3/5).\n\n\nCONCLUSION\nThe metabolically active tumour volume observed in (11)C-methionine PET differs from the volume of MRI by showing areas of infiltrative tumour and distinguishing from non-tumour lesions. Differences in (11)C-methionine PET/MRI integration patterns can be assigned to tumour grades according to the WHO classification. This finding may improve tumour delineation and therapy planning for gliomas."
}
] |
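The last entry above classifies PET/MRI segmentations into volumetric integration patterns (vMRI-in-vPET, vPET-in-vMRI, or differences in both directions). A minimal numpy sketch of that containment check on two boolean masks is given below; the tolerance threshold and the synthetic masks are assumptions of this sketch.

```python
# Minimal sketch: classifying the volumetric integration pattern of two binary
# segmentation masks (e.g., MRI- and PET-derived tumour volumes), loosely following
# the vMRI-in-vPET / vPET-in-vMRI idea described above. Thresholds are assumptions.
import numpy as np

def integration_pattern(mask_mri, mask_pet, tol=0.95):
    """Return a coarse label for how two boolean 3D masks overlap."""
    v_mri, v_pet = mask_mri.sum(), mask_pet.sum()
    overlap = np.logical_and(mask_mri, mask_pet).sum()
    if v_mri and overlap / v_mri >= tol:
        return "vMRI-in-vPET"
    if v_pet and overlap / v_pet >= tol:
        return "vPET-in-vMRI"
    return "vMRI-diff-vPET"

if __name__ == "__main__":
    mri = np.zeros((32, 32, 32), dtype=bool)
    pet = np.zeros_like(mri)
    mri[10:20, 10:20, 10:20] = True          # MRI-derived volume
    pet[8:24, 8:24, 8:24] = True             # PET-derived volume fully containing it
    print(integration_pattern(mri, pet))     # -> vMRI-in-vPET
```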
Diagnostics | null | PMC8871077 | 10.3390/diagnostics12020501 | Sequential Models for Endoluminal Image Classification | Wireless Capsule Endoscopy (WCE) is a procedure to examine the human digestive system for potential mucosal polyps, tumours, or bleedings using an encapsulated camera. This work focuses on polyp detection within WCE videos through Machine Learning. When using Machine Learning in the medical field, scarce and unbalanced datasets often make it hard to receive a satisfying performance. We claim that using Sequential Models in order to take the temporal nature of the data into account improves the performance of previous approaches. Thus, we present a bidirectional Long Short-Term Memory Network (BLSTM), a sequential network that is particularly designed for temporal data. We find the BLSTM Network outperforms non-sequential architectures and other previous models, receiving a final Area under the Curve of 93.83%. Experiments show that our method of extracting spatial and temporal features yields better performance and could be a possible method to decrease the time needed by physicians to analyse the video material. | 1.2. Related Work. Several approaches have been proposed to detect intestinal abnormalities in WCE images and videos within the past decade. Among others, they targeted the recognition of bleeding, polyps, tumours, and motility disorders. Before 2015, polyp detection algorithms centred on conventional, typically handcrafted Machine Learning methods. They usually targeted one of three main feature areas to detect abnormalities: colour, shape, or texture. Li and Meng [8] focused on the latter and used wavelet transformations as well as uniform local binary patterns with Support Vector Machines as the final classifier. Yu et al. [9] were among the first to investigate Deep Learning techniques. They proposed an architecture named HCNN-NELM, which uses a Convolutional Neural Network as a feature extractor and a cascaded Extreme Learning Machine (ELM) classifier. ELMs are known to yield superior performance compared to SVMs and the fully connected layer of a CNN [9]. Yuan and Meng [10] used a novel method named stacked sparse autoencoder with image manifold constraint (SSAEIM for short). This led to two subsequent publications featuring different approaches. The first system, called RIIS-DenseNet, consisted of a DenseNet using two loss functions and could outperform the previous results [11]. The idea was based on the argument that high intra-class variability and object rotation significantly hinder the performance of prior approaches. In a subsequent publication, they developed a slightly different system. This one not only aimed at overcoming high intra-class variability and low inter-class variance but also data imbalance [12]. In order to do so, the authors proposed a so-called DenseNet-UDCS, which uses a DenseNet (like in the previous publications) together with an “unbalanced discriminant (UD)” and a “category sensitive (CS)” loss. This helped to calculate discriminative and appropriate features. Furthermore, Guo and Yuan [13] introduced a system named Triple ANet using an Adaptive Dense Block (ADB) and an Abnormal-aware Attention Module (AAM). This helped to capture correlations and highlight informative areas in the images [2]. Additionally, they introduced a loss named Angular Contrastive Loss (AC Loss) to help deal with high intra-class variabilities and low inter-class variances. Laiz et al.
[2] argued that the diversity of polyp appearance, together with the highly imbalanced and scarce data, makes this research area especially challenging. They aimed to improve the feature extraction in the case of small datasets using a Triplet loss function. A Triplet loss represents images from the same category by similar embedding vectors, whereas images from different categories are represented by dissimilar vectors [2]. Finally, to the best of our knowledge, Mohammed et al. [14] proposed the only currently existing work using RNNs for colon disease detection. They argued that frame-level labels are rarely available in the clinical context and therefore proposed a network named PS-DeVCEM that learns multi-label classification on the frame level. It uses a CNN and a residual Long Short-Term Memory Network [15] (LSTM for short) that extract the spatial and temporal features, respectively. The additionally implemented attention mechanism and self-supervision methods allow the system to minimise within-video similarities between positive and negative feature frames. | [
"32133645",
"33130417",
"32966967",
"32124490",
"28160514",
"9377276",
"16112549"
] | [
{
"pmid": "32133645",
"title": "Colorectal cancer statistics, 2020.",
"abstract": "Colorectal cancer (CRC) is the second most common cause of cancer death in the United States. Every 3 years, the American Cancer Society provides an update of CRC occurrence based on incidence data (available through 2016) from population-based cancer registries and mortality data (through 2017) from the National Center for Health Statistics. In 2020, approximately 147,950 individuals will be diagnosed with CRC and 53,200 will die from the disease, including 17,930 cases and 3,640 deaths in individuals aged younger than 50 years. The incidence rate during 2012 through 2016 ranged from 30 (per 100,000 persons) in Asian/Pacific Islanders to 45.7 in blacks and 89 in Alaska Natives. Rapid declines in incidence among screening-aged individuals during the 2000s continued during 2011 through 2016 in those aged 65 years and older (by 3.3% annually) but reversed in those aged 50 to 64 years, among whom rates increased by 1% annually. Among individuals aged younger than 50 years, the incidence rate increased by approximately 2% annually for tumors in the proximal and distal colon, as well as the rectum, driven by trends in non-Hispanic whites. CRC death rates during 2008 through 2017 declined by 3% annually in individuals aged 65 years and older and by 0.6% annually in individuals aged 50 to 64 years while increasing by 1.3% annually in those aged younger than 50 years. Mortality declines among individuals aged 50 years and older were steepest among blacks, who also had the only decreasing trend among those aged younger than 50 years, and excluded American Indians/Alaska Natives, among whom rates remained stable. Progress against CRC can be accelerated by increasing access to guideline-recommended screening and high-quality treatment, particularly among Alaska Natives, and elucidating causes for rising incidence in young and middle-aged adults."
},
{
"pmid": "33130417",
"title": "WCE polyp detection with triplet based embeddings.",
"abstract": "Wireless capsule endoscopy is a medical procedure used to visualize the entire gastrointestinal tract and to diagnose intestinal conditions, such as polyps or bleeding. Current analyses are performed by manually inspecting nearly each one of the frames of the video, a tedious and error-prone task. Automatic image analysis methods can be used to reduce the time needed for physicians to evaluate a capsule endoscopy video. However these methods are still in a research phase. In this paper we focus on computer-aided polyp detection in capsule endoscopy images. This is a challenging problem because of the diversity of polyp appearance, the imbalanced dataset structure and the scarcity of data. We have developed a new polyp computer-aided decision system that combines a deep convolutional neural network and metric learning. The key point of the method is the use of the Triplet Loss function with the aim of improving feature extraction from the images when having small dataset. The Triplet Loss function allows to train robust detectors by forcing images from the same category to be represented by similar embedding vectors while ensuring that images from different categories are represented by dissimilar vectors. Empirical results show a meaningful increase of AUC values compared to state-of-the-art methods. A good performance is not the only requirement when considering the adoption of this technology to clinical practice. Trust and explainability of decisions are as important as performance. With this purpose, we also provide a method to generate visual explanations of the outcome of our polyp detector. These explanations can be used to build a physician's trust in the system and also to convey information about the inner working of the method to the designer for debugging purposes."
},
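The triplet-loss idea summarized in the entry above (similar embeddings for same-class images, dissimilar embeddings otherwise) can be written down compactly. The sketch below uses PyTorch's built-in triplet margin loss; the tiny embedding network, the margin value, and the random batch are assumptions of this sketch, not the paper's actual backbone or data.

```python
# Minimal sketch: triplet loss on embedding vectors, as described in the abstract above.
# The stand-in embedding network and the margin value are assumptions of this sketch.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))  # stand-in backbone
criterion = nn.TripletMarginLoss(margin=1.0, p=2)

# Hypothetical mini-batch of anchor / positive (same class) / negative (other class) images.
anchor   = torch.randn(8, 3, 64, 64)
positive = torch.randn(8, 3, 64, 64)
negative = torch.randn(8, 3, 64, 64)

loss = criterion(embed(anchor), embed(positive), embed(negative))
loss.backward()  # gradients flow into the embedding network
print(float(loss))
```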
{
"pmid": "32966967",
"title": "A survey on contemporary computer-aided tumor, polyp, and ulcer detection methods in wireless capsule endoscopy imaging.",
"abstract": "Wireless capsule endoscopy (WCE) is a process in which a patient swallows a camera-embedded pill-shaped device that passes through the gastrointestinal (GI) tract, captures and transmits images to an external receiver. WCE devices are considered as a replacement of conventional endoscopy methods which are usually painful and distressful for the patients. WCE devices produce over 60,000 images typically during their course of operation inside the GI tract. These images need to be examined by expert physicians who attempt to identify frames that contain inflammation/disease. It can be hectic for a physician to go through such a large number of frames, hence computer-aided detection methods are considered an efficient alternative. Various anomalies can take place in the GI tract of a human being but the most important and common ones and the aim of this survey are ulcers, polyps, and tumors. In this paper, we have presented a survey of contemporary computer-aided detection methods that take WCE images as input and classify those images in a diseased/abnormal or disease-free/normal image. We have considered methods that detect tumors, polyps and ulcers, as these three diseases lie in the same category. Furthermore, general abnormalities and bleeding inside the GI tract may be the symptoms of these diseases; so an attempt is also made to enlighten the research work done for abnormalities and bleeding detection inside WCE images. Several studies have been included with in-depth detail of their methodologies, findings, and conclusions. Also, we have attempted to classify these methods based on their technical aspects. A formal discussion and comparison of recent review articles are also provided to have a benchmark for the presented survey mentioning their limitations. This paper also includes a proposed classification approach where a cascade approach of neural networks is presented for the classification of tumor, polyp, and ulcer jointly along with data set specifications and results."
},
{
"pmid": "28160514",
"title": "Deep learning for polyp recognition in wireless capsule endoscopy images.",
"abstract": "PURPOSE\nWireless capsule endoscopy (WCE) enables physicians to examine the digestive tract without any surgical operations, at the cost of a large volume of images to be analyzed. In the computer-aided diagnosis of WCE images, the main challenge arises from the difficulty of robust characterization of images. This study aims to provide discriminative description of WCE images and assist physicians to recognize polyp images automatically.\n\n\nMETHODS\nWe propose a novel deep feature learning method, named stacked sparse autoencoder with image manifold constraint (SSAEIM), to recognize polyps in the WCE images. Our SSAEIM differs from the traditional sparse autoencoder (SAE) by introducing an image manifold constraint, which is constructed by a nearest neighbor graph and represents intrinsic structures of images. The image manifold constraint enforces that images within the same category share similar learned features and images in different categories should be kept far away. Thus, the learned features preserve large intervariances and small intravariances among images.\n\n\nRESULTS\nThe average overall recognition accuracy (ORA) of our method for WCE images is 98.00%. The accuracies for polyps, bubbles, turbid images, and clear images are 98.00%, 99.50%, 99.00%, and 95.50%, respectively. Moreover, the comparison results show that our SSAEIM outperforms existing polyp recognition methods with relative higher ORA.\n\n\nCONCLUSION\nThe comprehensive results have demonstrated that the proposed SSAEIM can provide descriptive characterization for WCE images and recognize polyps in a WCE video accurately. This method could be further utilized in the clinical trials to help physicians from the tedious image reading work."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "16112549",
"title": "Framewise phoneme classification with bidirectional LSTM and other neural network architectures.",
"abstract": "In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it."
}
] |
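The WCE record above classifies frame sequences with a bidirectional LSTM. A hedged PyTorch sketch of a BLSTM classifier over per-frame feature vectors is shown below; the feature size, hidden size, and temporal average pooling are assumptions of this sketch rather than the paper's exact architecture.

```python
# Minimal sketch: a bidirectional LSTM classifier over a sequence of per-frame
# feature vectors, in the spirit of the BLSTM polyp classifier described in the
# WCE record above. Feature size, hidden size, and pooling are assumptions.
import torch
import torch.nn as nn

class BLSTMClassifier(nn.Module):
    def __init__(self, feat_dim=256, hidden=128, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)  # 2x because of both directions

    def forward(self, x):                 # x: (batch, seq_len, feat_dim)
        out, _ = self.lstm(x)             # (batch, seq_len, 2*hidden)
        pooled = out.mean(dim=1)          # average over the temporal dimension
        return self.head(pooled)          # per-sequence class logits

if __name__ == "__main__":
    model = BLSTMClassifier()
    frames = torch.randn(4, 20, 256)      # 4 sequences of 20 CNN frame embeddings
    logits = model(frames)
    print(logits.shape)                   # torch.Size([4, 2])
```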
Diagnostics | null | PMC8871295 | 10.3390/diagnostics12020461 | A Novel Computer-Aided Diagnostic System for Early Detection of Diabetic Retinopathy Using 3D-OCT Higher-Order Spatial Appearance Model | Early diagnosis of diabetic retinopathy (DR) is of critical importance to suppress severe damage to the retina and/or vision loss. In this study, an optical coherence tomography (OCT)-based computer-aided diagnosis (CAD) method is proposed to detect DR early using structural 3D retinal scans. This system uses prior shape knowledge to automatically segment all retinal layers of the 3D-OCT scans using an adaptive, appearance-based method. After the segmentation step, novel texture features are extracted from the segmented layers of the OCT B-scans volume for DR diagnosis. For every layer, Markov–Gibbs random field (MGRF) model is used to extract the 2nd-order reflectivity. In order to represent the extracted image-derived features, we employ cumulative distribution function (CDF) descriptors. For layer-wise classification in 3D volume, using the extracted Gibbs energy feature, an artificial neural network (ANN) is fed the extracted feature for every layer. Finally, the classification outputs for all twelve layers are fused using a majority voting schema for global subject diagnosis. A cohort of 188 3D-OCT subjects are used for system evaluation using different k-fold validation techniques and different validation metrics. Accuracy of 90.56%, 93.11%, and 96.88% are achieved using 4-, 5-, and 10-fold cross-validation, respectively. Additional comparison with deep learning networks, which represent the state-of-the-art, documented the promise of our system’s ability to diagnose the DR early. | Related Work. There has been some work on CAD systems for detecting DR using fundus photographs and Fluorescein Angiography. For example, Foeady et al. [3] examined the ability of a support vector machine (SVM) classifier to accurately grade DR in fundus photographs. A morphological operation and a median filter were applied to improve the image in their system. Their next step was to construct a gray-level co-occurrence matrix to extract statistical features, e.g., energy, correlation, homogeneity, and contrast. An SVM was then used to classify these features, and they reported an accuracy of 82.35%. Another system, by the authors of [4], is used to diagnose DR: using fuzzy image preprocessing combined with Machine Learning (ML) algorithms, a detection system for DR based on color fundus images is presented. In addition, there has been considerable success in exploiting OCT for detecting retinal diseases like DR, AMD, and DME. For example, Sandhu et al. [5] implemented a CAD system utilizing two modalities, i.e., OCT and OCTA, to detect and grade DR. They also added demographic data for DR patients and fused it with the features extracted from OCT and OCTA images. Bernardes et al. [6] developed a CAD system that uses OCT images to grade DR. Their system used OCT histogram information as the features extracted from the OCT images. Then, they fed these features to an SVM to classify DR. An experimental validation based on leave-one-subject-out results showed that 66.7% of the subjects were correctly classified. However, this result was considerably less accurate than those of other experiments. A different system is described in [7], whose authors developed a CAD system to diagnose glaucoma using OCT images.
They used a convolutional neural network (CNN) to extract the features and a softmax layer to classify the OCT images. Then, they evaluated their system using the area under the ROC curve. The system had an AUC of 94%. There are other works that use OCT to diagnose DR in [8,9,10,11]. OCTA and OCT, which are noninvasive techniques, have been used in other investigations to identify retinal disorders because they provide cross-sectional and volumetric views of retinas and blood vessels, respectively. Alam et al. [12] developed an algorithm for detecting and grading DR using an SVM-based classification system on OCTA scans. OCTA scans were analyzed to determine six features, including blood vessel caliber, blood vessel curvature, foveal avascular zone size, vessel perimeter index, blood vessel density, and irregularity of the foveal avascular zone contour. A fusion of all features was presented as well as the results of every feature individually. Blood vessel density was the most accurate single feature, but fusion of the features gave the best results. As reported, the fused features were more diagnostic for normal versus DR and for normal versus the different grades of DR, with accuracies of 94.41% and 92.96%, respectively. A CAD system was developed in [13] as an extension of the CAD system from the authors' prior study [14]; they established a methodology for diagnosing DR from the 3D-OCT volume using the local binary pattern (LBP) and the histogram of oriented gradients (HOG). They used principal component analysis (PCA) to reduce the dimensionality of the features. Each feature is separately fed into different classifiers. The best classifier for the histogram of LBP with PCA is an SVM with a linear kernel, which achieves a sensitivity and specificity of 87.5% and 87.5%, respectively. Due to the lack of layer segmentation and its limited performance, this work has some shortcomings. Ibrahim and colleagues [15] have presented a CAD system which utilizes a pretrained deep learning method based on the VGG16 convolutional neural network to diagnose DM, CNV, and drusenoid disorders in a 3D-OCT volume by adding the features extracted from the deep neural network to hand-crafted features extracted from the ROI. Ghazal and colleagues [16] explained how to apply CNN CAD to analyze OCT B-scans to detect DR. First, an average B-scan is made up of five areas, including nasal, distal nasal, central, distal temporal, and temporal. Second, a total of seven distinct CNNs were trained, each based on a region, plus two transfer-learning models based on only the nasal and temporal regions. Last, the seven CNN results were combined, individually or together, to obtain the overall diagnosis; with two regions of analysis (nasal and temporal) and transfer learning, the performance of the established system has been reported to be 94%. CAD systems integrating the two modalities have also been utilized to diagnose DR grades in another study [5]. Clinical and demographic data are combined with the findings from the two modalities and input into a classification system, namely a random forest (RF) classifier. There have also been other works that used OCT with different outcomes [6,7,8,9,10,17,18,19,20,21,22,23,24,25,26,27,28,29,30]. The proposed CAD system integrates a segmentation method to segment the twelve retinal layers for each B-scan in a 3-D OCT volume. For this, the segmentation utilizes adaptive shape prior knowledge.
This is followed by extracting a novel texture feature (2nd-order reflectivity) that is derived from a Markov–Gibbs random field (MGRF) model, where the second-order structure of image gray levels is captured by treating each B-scan in the OCT volume as an instance of one MGRF. Finally, we construct a cumulative distribution function (CDF) of Gibbs energy values throughout each layer and use its nine deciles to create a vector descriptor for each layer. Layer-wise decile features are then concatenated, and an artificial neural network (ANN) is fed these features for training and testing. The proposed system is evaluated using different validation techniques, with the layer-wise decisions fused using a majority voting schema. The contribution of the work presented in this manuscript can be described as follows: Each layer of the OCT is analyzed separately and is classified for local and individualized analyses. In the following step, the decisions of the individual layers are combined into a global diagnosis. Our system incorporates a 3D-MGRF model in place of using 1st-order reflectivity. To improve the descriptive power of the extracted features, a statistical approach is used (i.e., CDF percentiles). The CDF percentile values are fed into an ANN to get the final diagnosis of the 3D OCT volume. | [
"31518657",
"27747815",
"31982407",
"31260494",
"31561100",
"32985536",
"28592309",
"27555965",
"34943550",
"30911364",
"31341808",
"32855839",
"32818081",
"33633139",
"33450073"
] | [
{
"pmid": "31518657",
"title": "Global and regional diabetes prevalence estimates for 2019 and projections for 2030 and 2045: Results from the International Diabetes Federation Diabetes Atlas, 9th edition.",
"abstract": "AIMS\nTo provide global estimates of diabetes prevalence for 2019 and projections for 2030 and 2045.\n\n\nMETHODS\nA total of 255 high-quality data sources, published between 1990 and 2018 and representing 138 countries were identified. For countries without high quality in-country data, estimates were extrapolated from similar countries matched by economy, ethnicity, geography and language. Logistic regression was used to generate smoothed age-specific diabetes prevalence estimates (including previously undiagnosed diabetes) in adults aged 20-79 years.\n\n\nRESULTS\nThe global diabetes prevalence in 2019 is estimated to be 9.3% (463 million people), rising to 10.2% (578 million) by 2030 and 10.9% (700 million) by 2045. The prevalence is higher in urban (10.8%) than rural (7.2%) areas, and in high-income (10.4%) than low-income countries (4.0%). One in two (50.1%) people living with diabetes do not know that they have diabetes. The global prevalence of impaired glucose tolerance is estimated to be 7.5% (374 million) in 2019 and projected to reach 8.0% (454 million) by 2030 and 8.6% (548 million) by 2045.\n\n\nCONCLUSIONS\nJust under half a billion people are living with diabetes worldwide and the number is projected to increase by 25% in 2030 and 51% in 2045."
},
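The DR record above summarizes each retinal layer's Gibbs energy values by the nine deciles of their empirical CDF and fuses the twelve per-layer classifier decisions by majority vote. The numpy sketch below illustrates only those two steps; the energy values and per-layer predictions are synthetic placeholders, not outputs of the actual MGRF model or ANN.

```python
# Minimal sketch: the nine-decile CDF descriptor and the layer-wise majority vote
# described in the DR record above. All values below are synthetic placeholders.
import numpy as np

def decile_descriptor(energy_values):
    """Nine deciles (10th..90th percentile) of a layer's Gibbs energy distribution."""
    return np.percentile(energy_values, np.arange(10, 100, 10))

def majority_vote(layer_predictions):
    """Fuse 0/1 decisions of the twelve layer classifiers into a subject-level label."""
    votes = np.asarray(layer_predictions)
    return int(votes.sum() > votes.size / 2)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Twelve layers, each summarized by a 9-value descriptor -> 108-D subject feature.
    descriptors = [decile_descriptor(rng.normal(size=500)) for _ in range(12)]
    subject_feature = np.concatenate(descriptors)
    print(subject_feature.shape)                                 # (108,)
    print(majority_vote([1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0]))   # -> 1
```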
{
"pmid": "27747815",
"title": "Automatic screening and classification of diabetic retinopathy and maculopathy using fuzzy image processing.",
"abstract": "Digital retinal imaging is a challenging screening method for which effective, robust and cost-effective approaches are still to be developed. Regular screening for diabetic retinopathy and diabetic maculopathy diseases is necessary in order to identify the group at risk of visual impairment. This paper presents a novel automatic detection of diabetic retinopathy and maculopathy in eye fundus images by employing fuzzy image processing techniques. The paper first introduces the existing systems for diabetic retinopathy screening, with an emphasis on the maculopathy detection methods. The proposed medical decision support system consists of four parts, namely: image acquisition, image preprocessing including four retinal structures localisation, feature extraction and the classification of diabetic retinopathy and maculopathy. A combination of fuzzy image processing techniques, the Circular Hough Transform and several feature extraction methods are implemented in the proposed system. The paper also presents a novel technique for the macula region localisation in order to detect the maculopathy. In addition to the proposed detection system, the paper highlights a novel online dataset and it presents the dataset collection, the expert diagnosis process and the advantages of our online database compared to other public eye fundus image databases for diabetic retinopathy purposes."
},
{
"pmid": "31982407",
"title": "Automated Diagnosis of Diabetic Retinopathy Using Clinical Biomarkers, Optical Coherence Tomography, and Optical Coherence Tomography Angiography.",
"abstract": "PURPOSE\nTo determine if combining clinical, demographic, and imaging data improves automated diagnosis of nonproliferative diabetic retinopathy (NPDR).\n\n\nDESIGN\nCross-sectional imaging and machine learning study.\n\n\nMETHODS\nThis was a retrospective study performed at a single academic medical center in the United States. Inclusion criteria were age >18 years and a diagnosis of diabetes mellitus (DM). Exclusion criteria were non-DR retinal disease and inability to image the macula. Optical coherence tomography (OCT) and OCT angiography (OCTA) were performed, and data on age, sex, hypertension, hyperlipidemia, and hemoglobin A1c were collected. Machine learning techniques were then applied. Multiple pathophysiologically important features were automatically extracted from each layer on OCT and each OCTA plexus and combined with clinical data in a random forest classifier to develop the system, whose results were compared to the clinical grading of NPDR, the gold standard.\n\n\nRESULTS\nA total of 111 patients with DM II were included in the study, 36 with DM without DR, 53 with mild NPDR, and 22 with moderate NPDR. When OCT images alone were analyzed by the system, accuracy of diagnosis was 76%, sensitivity 85%, specificity 87%, and area under the curve (AUC) was 0.78. When OCT and OCTA data together were analyzed, accuracy was 92%, sensitivity 95%, specificity 98%, and AUC 0.92. When all data modalities were combined, the system achieved an accuracy of 96%, sensitivity 100%, specificity 94%, and AUC 0.96.\n\n\nCONCLUSIONS\nCombining common clinical data points with OCT and OCTA data enhances the power of computer-aided diagnosis of NPDR."
},
{
"pmid": "31260494",
"title": "A feature agnostic approach for glaucoma detection in OCT volumes.",
"abstract": "Optical coherence tomography (OCT) based measurements of retinal layer thickness, such as the retinal nerve fibre layer (RNFL) and the ganglion cell with inner plexiform layer (GCIPL) are commonly employed for the diagnosis and monitoring of glaucoma. Previously, machine learning techniques have relied on segmentation-based imaging features such as the peripapillary RNFL thickness and the cup-to-disc ratio. Here, we propose a deep learning technique that classifies eyes as healthy or glaucomatous directly from raw, unsegmented OCT volumes of the optic nerve head (ONH) using a 3D Convolutional Neural Network (CNN). We compared the accuracy of this technique with various feature-based machine learning algorithms and demonstrated the superiority of the proposed deep learning based method. Logistic regression was found to be the best performing classical machine learning technique with an AUC of 0.89. In direct comparison, the deep learning approach achieved a substantially higher AUC of 0.94 with the additional advantage of providing insight into which regions of an OCT volume are important for glaucoma detection. Computing Class Activation Maps (CAM), we found that the CNN identified neuroretinal rim and optic disc cupping as well as the lamina cribrosa (LC) and its surrounding areas as the regions significantly associated with the glaucoma classification. These regions anatomically correspond to the well established and commonly used clinical markers for glaucoma diagnosis such as increased cup volume, cup diameter, and neuroretinal rim thinning at the superior and inferior segments."
},
{
"pmid": "31561100",
"title": "Deep learning based retinal OCT segmentation.",
"abstract": "We look at the recent application of deep learning (DL) methods in automated fine-grained segmentation of spectral domain optical coherence tomography (OCT) images of the retina. We describe a new method combining fully convolutional networks (FCN) with Gaussian Processes for post processing. We report performance comparisons between the proposed approach, human clinicians, and other machine learning (ML) such as graph based approaches. The approach is demonstrated on an OCT dataset consisting of mild non-proliferative diabetic retinopathy from the University of Miami. The method is shown to have performance on par with humans, also compares favorably with the other ML methods, and appears to have as small or smaller mean unsigned error (equal to 1.06), versus errors ranging from 1.17 to 1.81 for other methods, and compared with human error of 1.10."
},
{
"pmid": "32985536",
"title": "Density-based classification in diabetic retinopathy through thickness of retinal layers from optical coherence tomography.",
"abstract": "Diabetic retinopathy (DR) is a severe retinal disorder that can lead to vision loss, however, its underlying mechanism has not been fully understood. Previous studies have taken advantage of Optical Coherence Tomography (OCT) and shown that the thickness of individual retinal layers are affected in patients with DR. However, most studies analyzed the thickness by calculating summary statistics from retinal thickness maps of the macula region. This study aims to apply a density function-based statistical framework to the thickness data obtained through OCT, and to compare the predictive power of various retinal layers to assess the severity of DR. We used a prototype data set of 107 subjects which are comprised of 38 non-proliferative DR (NPDR), 28 without DR (NoDR), and 41 controls. Based on the thickness profiles, we constructed novel features which capture the variation in the distribution of the pixel-wise retinal layer thicknesses from OCT. We quantified the predictive power of each of the retinal layers to distinguish between all three pairwise comparisons of the severity in DR (NoDR vs NPDR, controls vs NPDR, and controls vs NoDR). When applied to this preliminary DR data set, our density-based method demonstrated better predictive results compared with simple summary statistics. Furthermore, our results indicate considerable differences in retinal layer structuring based on the severity of DR. We found that: (a) the outer plexiform layer is the most discriminative layer for classifying NoDR vs NPDR; (b) the outer plexiform, inner nuclear and ganglion cell layers are the strongest biomarkers for discriminating controls from NPDR; and (c) the inner nuclear layer distinguishes best between controls and NoDR."
},
{
"pmid": "28592309",
"title": "Machine learning techniques for diabetic macular edema (DME) classification on SD-OCT images.",
"abstract": "BACKGROUND\nSpectral domain optical coherence tomography (OCT) (SD-OCT) is most widely imaging equipment used in ophthalmology to detect diabetic macular edema (DME). Indeed, it offers an accurate visualization of the morphology of the retina as well as the retina layers.\n\n\nMETHODS\nThe dataset used in this study has been acquired by the Singapore Eye Research Institute (SERI), using CIRRUS TM (Carl Zeiss Meditec, Inc., Dublin, CA, USA) SD-OCT device. The dataset consists of 32 OCT volumes (16 DME and 16 normal cases). Each volume contains 128 B-scans with resolution of 1024 px × 512 px, resulting in more than 3800 images being processed. All SD-OCT volumes are read and assessed by trained graders and identified as normal or DME cases based on evaluation of retinal thickening, hard exudates, intraretinal cystoid space formation, and subretinal fluid. Within the DME sub-set, a large number of lesions has been selected to create a rather complete and diverse DME dataset. This paper presents an automatic classification framework for SD-OCT volumes in order to identify DME versus normal volumes. In this regard, a generic pipeline including pre-processing, feature detection, feature representation, and classification was investigated. More precisely, extraction of histogram of oriented gradients and local binary pattern (LBP) features within a multiresolution approach is used as well as principal component analysis (PCA) and bag of words (BoW) representations.\n\n\nRESULTS AND CONCLUSION\nBesides comparing individual and combined features, different representation approaches and different classifiers are evaluated. The best results are obtained for LBP[Formula: see text] vectors while represented and classified using PCA and a linear-support vector machine (SVM), leading to a sensitivity(SE) and specificity (SP) of 87.5 and 87.5%, respectively."
},
{
"pmid": "27555965",
"title": "Classification of SD-OCT Volumes Using Local Binary Patterns: Experimental Validation for DME Detection.",
"abstract": "This paper addresses the problem of automatic classification of Spectral Domain OCT (SD-OCT) data for automatic identification of patients with DME versus normal subjects. Optical Coherence Tomography (OCT) has been a valuable diagnostic tool for DME, which is among the most common causes of irreversible vision loss in individuals with diabetes. Here, a classification framework with five distinctive steps is proposed and we present an extensive study of each step. Our method considers combination of various preprocessing steps in conjunction with Local Binary Patterns (LBP) features and different mapping strategies. Using linear and nonlinear classifiers, we tested the developed framework on a balanced cohort of 32 patients. Experimental results show that the proposed method outperforms the previous studies by achieving a Sensitivity (SE) and a Specificity (SP) of 81.2% and 93.7%, respectively. Our study concludes that the 3D features and high-level representation of 2D features using patches achieve the best results. However, the effects of preprocessing are inconsistent with different classifiers and feature configurations."
},
{
"pmid": "34943550",
"title": "Role of Optical Coherence Tomography Imaging in Predicting Progression of Age-Related Macular Disease: A Survey.",
"abstract": "In developed countries, age-related macular degeneration (AMD), a retinal disease, is the main cause of vision loss in the elderly. Optical Coherence Tomography (OCT) is currently the gold standard for assessing individuals for initial AMD diagnosis. In this paper, we look at how OCT imaging can be used to diagnose AMD. Our main aim is to examine and compare automated computer-aided diagnostic (CAD) systems for diagnosing and grading of AMD. We provide a brief summary, outlining the main aspects of performance assessment and providing a basis for current research in AMD diagnosis. As a result, the only viable alternative is to prevent AMD and stop both this devastating eye condition and unwanted visual impairment. On the other hand, the grading of AMD is very important in order to detect early AMD and prevent patients from reaching advanced AMD disease. In light of this, we explore the remaining issues with automated systems for AMD detection based on OCT imaging, as well as potential directions for diagnosis and monitoring systems based on OCT imaging and telemedicine applications."
},
{
"pmid": "30911364",
"title": "Glaucoma Diagnosis with Machine Learning Based on Optical Coherence Tomography and Color Fundus Images.",
"abstract": "This study aimed to develop a machine learning-based algorithm for glaucoma diagnosis in patients with open-angle glaucoma, based on three-dimensional optical coherence tomography (OCT) data and color fundus images. In this study, 208 glaucomatous and 149 healthy eyes were enrolled, and color fundus images and volumetric OCT data from the optic disc and macular area of these eyes were captured with a spectral-domain OCT (3D OCT-2000, Topcon). Thickness and deviation maps were created with a segmentation algorithm. Transfer learning of convolutional neural network (CNN) was used with the following types of input images: (1) fundus image of optic disc in grayscale format, (2) disc retinal nerve fiber layer (RNFL) thickness map, (3) macular ganglion cell complex (GCC) thickness map, (4) disc RNFL deviation map, and (5) macular GCC deviation map. Data augmentation and dropout were performed to train the CNN. For combining the results from each CNN model, a random forest (RF) was trained to classify the disc fundus images of healthy and glaucomatous eyes using feature vector representation of each input image, removing the second fully connected layer. The area under receiver operating characteristic curve (AUC) of a 10-fold cross validation (CV) was used to evaluate the models. The 10-fold CV AUCs of the CNNs were 0.940 for color fundus images, 0.942 for RNFL thickness maps, 0.944 for macular GCC thickness maps, 0.949 for disc RNFL deviation maps, and 0.952 for macular GCC deviation maps. The RF combining the five separate CNN models improved the 10-fold CV AUC to 0.963. Therefore, the machine learning system described here can accurately differentiate between healthy and glaucomatous subjects based on their extracted images from OCT data and color fundus images. This system should help to improve the diagnostic accuracy in glaucoma."
},
{
"pmid": "31341808",
"title": "Artificial intelligence on diabetic retinopathy diagnosis: an automatic classification method based on grey level co-occurrence matrix and naive Bayesian model.",
"abstract": "AIM\nTo develop an automatic tool on screening diabetic retinopathy (DR) from diabetic patients.\n\n\nMETHODS\nWe extracted textures from eye fundus images of each diabetes subject using grey level co-occurrence matrix method and trained a Bayesian model based on these textures. The receiver operating characteristic (ROC) curve was used to estimate the sensitivity and specificity of the Bayesian model.\n\n\nRESULTS\nA total of 1000 eyes fundus images from diabetic patients in which 298 eyes were diagnosed as DR by two ophthalmologists. The Bayesian model was trained using four extracted textures including contrast, entropy, angular second moment and correlation using a training dataset. The Bayesian model achieved a sensitivity of 0.949 and a specificity of 0.928 in the validation dataset. The area under the ROC curve was 0.938, and the 10-fold cross validation method showed that the average accuracy rate is 93.5%.\n\n\nCONCLUSION\nTextures extracted by grey level co-occurrence can be useful information for DR diagnosis, and a trained Bayesian model based on these textures can be an effective tool for DR screening among diabetic patients."
},
{
"pmid": "32855839",
"title": "Transfer Learning for Automated OCTA Detection of Diabetic Retinopathy.",
"abstract": "Purpose\nTo test the feasibility of using deep learning for optical coherence tomography angiography (OCTA) detection of diabetic retinopathy.\n\n\nMethods\nA deep-learning convolutional neural network (CNN) architecture, VGG16, was employed for this study. A transfer learning process was implemented to retrain the CNN for robust OCTA classification. One dataset, consisting of images of 32 healthy eyes, 75 eyes with diabetic retinopathy (DR), and 24 eyes with diabetes but no DR (NoDR), was used for training and cross-validation. A second dataset consisting of 20 NoDR and 26 DR eyes was used for external validation. To demonstrate the feasibility of using artificial intelligence (AI) screening of DR in clinical environments, the CNN was incorporated into a graphical user interface (GUI) platform.\n\n\nResults\nWith the last nine layers retrained, the CNN architecture achieved the best performance for automated OCTA classification. The cross-validation accuracy of the retrained classifier for differentiating among healthy, NoDR, and DR eyes was 87.27%, with 83.76% sensitivity and 90.82% specificity. The AUC metrics for binary classification of healthy, NoDR, and DR eyes were 0.97, 0.98, and 0.97, respectively. The GUI platform enabled easy validation of the method for AI screening of DR in a clinical environment.\n\n\nConclusions\nWith a transfer learning process for retraining, a CNN can be used for robust OCTA classification of healthy, NoDR, and DR eyes. The AI-based OCTA classification platform may provide a practical solution to reducing the burden of experienced ophthalmologists with regard to mass screening of DR patients.\n\n\nTranslational Relevance\nDeep-learning-based OCTA classification can alleviate the need for manual graders and improve DR screening efficiency."
},
{
"pmid": "32818081",
"title": "Ensemble Deep Learning for Diabetic Retinopathy Detection Using Optical Coherence Tomography Angiography.",
"abstract": "Purpose\nTo evaluate the role of ensemble learning techniques with deep learning in classifying diabetic retinopathy (DR) in optical coherence tomography angiography (OCTA) images and their corresponding co-registered structural images.\n\n\nMethods\nA total of 463 volumes from 380 eyes were acquired using the 3 × 3-mm OCTA protocol on the Zeiss Plex Elite system. Enface images of the superficial and deep capillary plexus were exported from both the optical coherence tomography and OCTA data. Component neural networks were constructed using single data-types and fine-tuned using VGG19, ResNet50, and DenseNet architectures pretrained on ImageNet weights. These networks were then ensembled using majority soft voting and stacking techniques. Results were compared with a classifier using manually engineered features. Class activation maps (CAMs) were created using the original CAM algorithm and Grad-CAM.\n\n\nResults\nThe networks trained with the VGG19 architecture outperformed the networks trained on deeper architectures. Ensemble networks constructed using the four fine-tuned VGG19 architectures achieved accuracies of 0.92 and 0.90 for the majority soft voting and stacking methods respectively. Both ensemble methods outperformed the highest single data-type network and the network trained on hand-crafted features. Grad-CAM was shown to more accurately highlight areas of disease.\n\n\nConclusions\nEnsemble learning increases the predictive accuracy of CNNs for classifying referable DR on OCTA datasets.\n\n\nTranslational Relevance\nBecause the diagnostic accuracy of OCTA images is shown to be greater than the manually extracted features currently used in the literature, the proposed methods may be beneficial toward developing clinically valuable solutions for DR diagnoses."
},
{
"pmid": "33633139",
"title": "Precise higher-order reflectivity and morphology models for early diagnosis of diabetic retinopathy using OCT images.",
"abstract": "This study proposes a novel computer assisted diagnostic (CAD) system for early diagnosis of diabetic retinopathy (DR) using optical coherence tomography (OCT) B-scans. The CAD system is based on fusing novel OCT markers that describe both the morphology/anatomy and the reflectivity of retinal layers to improve DR diagnosis. This system separates retinal layers automatically using a segmentation approach based on an adaptive appearance and their prior shape information. High-order morphological and novel reflectivity markers are extracted from individual segmented layers. Namely, the morphological markers are layer thickness and tortuosity while the reflectivity markers are the 1st-order reflectivity of the layer in addition to local and global high-order reflectivity based on Markov-Gibbs random field (MGRF) and gray-level co-occurrence matrix (GLCM), respectively. The extracted image-derived markers are represented using cumulative distribution function (CDF) descriptors. The constructed CDFs are then described using their statistical measures, i.e., the 10th through 90th percentiles with a 10% increment. For individual layer classification, each extracted descriptor of a given layer is fed to a support vector machine (SVM) classifier with a linear kernel. The results of the four classifiers are then fused using a backpropagation neural network (BNN) to diagnose each retinal layer. For global subject diagnosis, classification outputs (probabilities) of the twelve layers are fused using another BNN to make the final diagnosis of the B-scan. This system is validated and tested on 130 patients, with two scans for both eyes (i.e. 260 OCT images), with a balanced number of normal and DR subjects using different validation metrics: 2-folds, 4-folds, 10-folds, and leave-one-subject-out (LOSO) cross-validation approaches. The performance of the proposed system was evaluated using sensitivity, specificity, F1-score, and accuracy metrics. The system's performance after the fusion of these different markers showed better performance compared with individual markers and other machine learning fusion methods. Namely, it achieved [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], respectively, using the LOSO cross-validation technique. The reported results, based on the integration of morphology and reflectivity markers and by using state-of-the-art machine learning classifications, demonstrate the ability of the proposed system to diagnose the DR early."
},
{
"pmid": "33450073",
"title": "A novel 3D segmentation approach for extracting retinal layers from optical coherence tomography images.",
"abstract": "PURPOSE\nAccurate segmentation of retinal layers of the eye in 3D Optical Coherence Tomography (OCT) data provides relevant information for clinical diagnosis. This manuscript describes a 3D segmentation approach that uses an adaptive patient-specific retinal atlas, as well as an appearance model for 3D OCT data.\n\n\nMETHODS\nTo reconstruct the atlas of 3D retinal scan, the central area of the macula (macula mid-area) where the fovea could be clearly identified, was segmented initially. Markov Gibbs Random Field (MGRF) including intensity, spatial information, and shape of 12 retinal layers were used to segment the selected area of retinal fovea. A set of coregistered OCT scans that were gathered from 200 different individuals were used to build a 2D shape prior. This shape prior was adapted subsequently to the first order appearance and second order spatial interaction MGRF model. After segmenting the center of the macula \"foveal area\", the labels and appearances of the layers that were segmented were utilized to segment the adjacent slices. The final step was repeated recursively until a 3D OCT scan of the patient was segmented.\n\n\nRESULTS\nThis approach was tested in 50 patients with normal and with ocular pathological conditions. The segmentation was compared to a manually segmented ground truth. The results were verified by clinical retinal experts. Dice Similarity Coefficient (DSC), 95% bidirectional modified Hausdorff Distance (HD), Unsigned Mean Surface Position Error (MSPE), and Average Volume Difference (AVD) metrics were used to quantify the performance of the proposed approach. The proposed approach was proved to be more accurate than the current state-of-the-art 3D OCT approaches.\n\n\nCONCLUSIONS\nThe proposed approach has the advantage of segmenting all the 12 retinal layers rapidly and more accurately than current state-of-the-art 3D OCT approaches."
}
] |
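Several of the reference abstracts above (for example, the SD-OCT classification studies that pair local binary pattern (LBP) features with a linear SVM) describe a recurring pipeline: compute a texture descriptor over each OCT B-scan and feed it to a classical classifier. The sketch below illustrates that general pattern only; the synthetic "images", the LBP parameters, and the train/test split are placeholder assumptions, not the configurations used in any of the cited studies.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def lbp_histogram(image, points=8, radius=1):
    """Normalized histogram of uniform LBP codes for one grayscale image."""
    codes = local_binary_pattern(image, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one catch-all bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Synthetic stand-ins for two texture classes (not real OCT B-scans):
# a "smooth" class built from upsampled low-resolution noise and a "noisy" class.
rng = np.random.default_rng(0)
smooth = rng.integers(0, 256, size=(100, 16, 16), dtype=np.uint8)
smooth = smooth.repeat(4, axis=1).repeat(4, axis=2)
noisy = rng.integers(0, 256, size=(100, 64, 64), dtype=np.uint8)

X = np.array([lbp_histogram(img) for img in np.concatenate([smooth, noisy])])
y = np.array([0] * len(smooth) + [1] * len(noisy))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="linear").fit(X_train, y_train)
print(f"held-out accuracy on the synthetic data: {clf.score(X_test, y_test):.2f}")
```

In the studies summarized above, this skeleton is elaborated with multiresolution LBP variants, PCA or bag-of-words representations, and different classifiers, while the deep-learning entries replace the handcrafted descriptor with features learned by a CNN.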
Diagnostics | null | PMC8871307 | 10.3390/diagnostics12020273 | Design of a Diagnostic System for Patient Recovery Based on Deep Learning Image Processing: For the Prevention of Bedsores and Leg Rehabilitation | Worldwide COVID-19 infections have caused various problems throughout different countries. In the case of Korea, problems related to the demand for medical care concerning wards and doctors are serious; these problems had already been slowly worsening in Korea before the COVID-19 pandemic. In this paper, we propose a direction for developing a system by combining artificial intelligence technology with limited areas of the rehabilitation medical field that do not require high expertise and that should be improved in Korea, namely the prevention of bedsores and leg rehabilitation methods. Regarding the introduction of artificial intelligence technology, medical and related laws and regulations were quite limiting, so the actual needs of domestic rehabilitation doctors and their advice on the hospital environment were obtained. Satisfaction with the test content was high, the degree of provision of important medical data was 95%, and the angular error was within 5 degrees, which is suitable for confirming recovery. | 2. Related Works In conducting the study, it was necessary to settle on a design method, and the way the object identification capability of deep learning could be applied was considered. Joint tracking and object detection methods were combined to check whether a motion was in progress, and angular calculations were performed on the coordinates obtained through joint tracking. 2.1. Human Pose Estimation Various computer vision technologies exist, and the one used in this study is human pose estimation. It is also called joint tracking and refers to the technique of recognizing a person's posture by estimating the locations of the body joints in a photo or video. However, identifying joints from images alone is difficult, and the tracking result differs depending on the person performing the motion and the camera shooting direction; it is a challenging technical field because the variables are so diverse. Figure 2 shows various joint tracking methods. From the left, it shows the method of attaching a sensor to the body, the method using a 3D camera, and the result of using a general camera [27,28,29]. Table 2 shows the practicality of each joint tracking method. Although the sensor-attached measurement method is sophisticated, it is expensive in the initial stage, and having to wear the equipment for each measurement is an inconvenience. The camera-based method requires no equipment to be worn, but 3D cameras have the disadvantage of being more expensive than 2D cameras, and joint tracking performance is still limited compared with the sensor-attachment method. Furthermore, in the case of 2D cameras, additional technology or hardware is required because the data are insufficient to recover three-dimensional coordinates. Even when the joints are tracked, only their coordinates in a virtual space are produced, and data such as joint angles require additional computation. This study was conducted using deep learning. Pose estimation is largely divided into two-dimensional and three-dimensional measurement, that is, tracking data on a plane versus including depth data, for example by using additional hardware. There are top-down and bottom-up methods for tracking.
The top-down approach first detects a person in the image and then estimates that person's posture inside the resulting bounding box. If a person is not detected, the posture cannot be measured, and the amount of computation grows as the number of people increases. The bottom-up method first recognizes individual joint parts and then connects them to estimate posture, which can cause matching problems when joints are close together; however, because it requires no separate human-detection step, it is suitable for real-time processing. Figure 3 shows the accuracy of current pose estimation implementations. Accuracy is reported using the "Percentage of Correct Keypoints" (PCK) indicator on the MPII Human Pose Dataset, in which an estimated joint is counted as a true positive when its distance from the ground-truth coordinate is below a given threshold; it plays the same role as the general AP indicator used with the COCO dataset. OpenPose offers high accuracy and adopts a bottom-up method that is effective for real-time use. This level of accuracy was confirmed to be sufficient for validating the content once a joint angle measurement function was added (a minimal angle-computation sketch is provided after this entry's reference list). Attempts to incorporate human pose estimation into the medical field continue [31,32,33]. Because the technology mainly captures movement, research has concentrated on tracking joints that do not work properly and on checking movements such as joint range of motion. Studies in slightly different directions have also been carried out, including methods for identifying problems in patients' daily movements or for supporting their mental treatment [34]. 2.2. The Seriousness of Bedsores and the Need for Rehabilitation Bedsores are caused by the weight of the body blocking the blood supply when a person sits or lies in one position for too long because they cannot move well. In ordinary patients, they occur when sitting in a bed or wheelchair for a long time. If the problem is detected in advance, before a pressure ulcer forms, it can be relieved by moving the body little by little; however, some patients find it difficult to move even slightly. In such cases the condition worsens progressively, with pus forming in the area and the tissue eventually breaking down. Bedsores are generally not caused by a specific disease but arise from everyday circumstances, often from patient-management issues in hospitals, and they are common regardless of gender or age. Bedsores can be prevented with simple treatment and/or prescriptions, yet they remain a significant problem for a medical community in which the number of management personnel is gradually decreasing. In particular, the problem is becoming more serious as the population of elderly people, who are vulnerable to bedsores, increases. Recently, to address this problem in Korea, the world's first monitoring system for preventing bedsores using wireless, battery-free soft pressure sensors was developed [35]. However, most patients are still identified and managed directly by staff, and diagnosis is performed using a protractor, especially for identifying lesions related to joint measurement. In addition, COVID-19 has increased the number of patients requiring respiratory support, and because the devices that secure the airway also put pressure on the oral cavity, bedsores occur there as well.
Since bedsores are caused by pressure, they occur not only when the body leans against something but also in various other ways depending on the patient's condition [36]. Most affected patients have difficulty controlling their bodies, typically because of musculoskeletal disorders or surgery in the related areas, but weakening muscle strength is not always due to surgery or illness. For the human musculoskeletal system to function healthily, the legs should be used continuously for basic movement; however, people sit more often and for longer than before, so leg muscle strength is becoming weaker. The legs are essential for movement and for a healthy, full life, and because they are used so frequently they are also exposed to problems. In addition, some researchers have studied these problems to prevent sarcopenia or hypertrophic cardiomyopathy in people with limited mobility, examining the correlations between long-term hospitalization in nursing homes and the drugs, geriatric syndromes, and behavior present in home care [37,38,39]. Another study confirmed the importance of leg muscle strength and provided rehabilitation programs that patients with weak leg muscles can perform at home using an online system. Methods based on robotics and IoT technologies have also been studied, and various attempts are being made to handle those parts of the rehabilitation process that do not have to be performed by medical personnel or in hospitals [40,41,42,43]. Patients rely on rehabilitation treatment to return to their daily lives, and rehabilitation through exercise is an essential part of medical practice because it provides solutions that complement medication. | [
"34360465",
"19035067",
"25991879",
"28873501",
"33921433",
"33568110",
"28898126",
"31331883",
"34501952",
"34429436",
"32952306",
"29411310",
"29549539",
"29340963"
] | [
{
"pmid": "34360465",
"title": "The Burden of Burnout among Healthcare Professionals of Intensive Care Units and Emergency Departments during the COVID-19 Pandemic: A Systematic Review.",
"abstract": "The primary aim was to evaluate the burnout prevalence among healthcare workers (HCWs) in intensive care units (ICUs) and emergency departments (EDs) during the COVID-19 pandemic. The secondary aim was to identify factors associated with burnout in this population. A systematic review was conducted following PRISMA guidelines by searching PubMed, Embase, PsychINFO, and Scopus from 1 January to 24 November 2020. Studies with information about burnout prevalence/level during the pandemic regarding ICU/ED HCWs were eligible. A total of 927 records were identified. The selection resulted in 11 studies. Most studies were conducted in April/May 2020. Samples ranged from 15 to 12,596 participants. The prevalence of overall burnout ranged from 49.3% to 58%. Nurses seemed to be at higher risk. Both socio-demographic and work-related features were associated with burnout. Many pandemic-related variables were associated with burnout, e.g., shortage in resources, worry regarding COVID-19, and stigma. This review highlighted a substantial burnout prevalence among ICU/ED HCWs. However, this population has presented a high burnout prevalence for a long time, and there is not sufficient evidence to understand if such prevalence is currently increased. It also outlined modifiable factors and the need to improve emergency preparedness both from an individual and structural level."
},
{
"pmid": "19035067",
"title": "Pressure ulcers: prevention, evaluation, and management.",
"abstract": "A pressure ulcer is a localized injury to the skin or underlying tissue, usually over a bony prominence, as a result of unrelieved pressure. Predisposing factors are classified as intrinsic (e.g., limited mobility, poor nutrition, comorbidities, aging skin) or extrinsic (e.g., pressure, friction, shear, moisture). Prevention includes identifying at-risk persons and implementing specific prevention measures, such as following a patient repositioning schedule; keeping the head of the bed at the lowest safe elevation to prevent shear; using pressure-reducing surfaces; and assessing nutrition and providing supplementation, if needed. When an ulcer occurs, documentation of each ulcer (i.e., size, location, eschar and granulation tissue, exudate, odor, sinus tracts, undermining, and infection) and appropriate staging (I through IV) are essential to the wound assessment. Treatment involves management of local and distant infections, removal of necrotic tissue, maintenance of a moist environment for wound healing, and possibly surgery. Debridement is indicated when necrotic tissue is present. Urgent sharp debridement should be performed if advancing cellulitis or sepsis occurs. Mechanical, enzymatic, and autolytic debridement methods are nonurgent treatments. Wound cleansing, preferably with normal saline and appropriate dressings, is a mainstay of treatment for clean ulcers and after debridement. Bacterial load can be managed with cleansing. Topical antibiotics should be considered if there is no improvement in healing after 14 days. Systemic antibiotics are used in patients with advancing cellulitis, osteomyelitis, or systemic infection."
},
{
"pmid": "25991879",
"title": "Pressure ulcers: Current understanding and newer modalities of treatment.",
"abstract": "This article reviews the mechanism, symptoms, causes, severity, diagnosis, prevention and present recommendations for surgical as well as non-surgical management of pressure ulcers. Particular focus has been placed on the current understandings and the newer modalities for the treatment of pressure ulcers. The paper also covers the role of nutrition and pressure-release devices such as cushions and mattresses as a part of the treatment algorithm for preventing and quick healing process of these wounds. Pressure ulcers develop primarily from pressure and shear; are progressive in nature and most frequently found in bedridden, chair bound or immobile people. They often develop in people who have been hospitalised for a long time generally for a different problem and increase the overall time as well as cost of hospitalisation that have detrimental effects on patient's quality of life. Loss of sensation compounds the problem manifold, and failure of reactive hyperaemia cycle of the pressure prone area remains the most important aetiopathology. Pressure ulcers are largely preventable in nature, and their management depends on their severity. The available literature about severity of pressure ulcers, their classification and medical care protocols have been described in this paper. The present treatment options include various approaches of cleaning the wound, debridement, optimised dressings, role of antibiotics and reconstructive surgery. The newer treatment options such as negative pressure wound therapy, hyperbaric oxygen therapy, cell therapy have been discussed, and the advantages and disadvantages of current and newer methods have also been described."
},
{
"pmid": "28873501",
"title": "ERAS-Value based surgery.",
"abstract": "This paper reviews implementation of ERAS and its financial implications. Literature on clinical outcomes and financial implications were reviewed. Reports from many different surgery types shows that implementation of ERAS reduces complications and shortens hospital stay. These improvements have major impacts on reducing the cost of care even when costs for implementation, and investment in time for personnel and training is accounted for. The conclusion is that ERAS is an excellent example of value based surgery."
},
{
"pmid": "33921433",
"title": "Enhanced Recovery after Surgery: History, Key Advancements and Developments in Transplant Surgery.",
"abstract": "Enhanced recovery after surgery (ERAS) aims to improve patient outcomes by controlling specific aspects of perioperative care. The concept was introduced in 1997 by Henrik Kehlet, who suggested that while minor changes in perioperative practise have no significant impact alone, incorporating multiple changes could drastically improve outcomes. Since 1997, significant advancements have been made through the foundation of the ERAS Society, responsible for creating consensus guidelines on the implementation of enhanced recovery pathways. ERAS reduces length of stay by an average of 2.35 days and healthcare costs by $639.06 per patient, as identified in a 2020 meta-analysis of ERAS across multiple surgical subspecialties. Carbohydrate loading, bowel preparation and patient education in the pre-operative phase, goal-directed fluid therapy in the intra-operative phase, and early mobilisation and enteral nutrition in the post-operative phase are some of the interventions that are commonly implemented in ERAS protocols. While many specialties have been quick to incorporate ERAS, uptake has been slow in the transplantation field, leading to a scarcity of literature. Recent studies reported a 47% reduction in length of hospital stay (LOS) in liver transplantation patients treated with ERAS, while progress in kidney transplantation focuses on pain management and its incorporation into enhanced recovery protocols."
},
{
"pmid": "33568110",
"title": "Successful recovery following musculoskeletal trauma: protocol for a qualitative study of patients' and physiotherapists' perceptions.",
"abstract": "BACKGROUND\nAnnually in the UK, 40,000-90,000 people are involved in a traumatic incident. Severity of injury and how well people recover from their injuries varies, with physiotherapy playing a key role in the rehabilitation process. Recovery is evaluated using multiple outcome measures for perceived levels of pain severity and quality of life. It is unclear however, what constitutes a successful recovery from injury throughout the course of recovery from the patient perspective, and whether this aligns with physiotherapists' perspectives.\n\n\nMETHODS\nA qualitative study using two approaches: Interpretive Phenomenological Analysis (IPA) using semi-structured interviews and thematic analysis following the Kreuger framework for focus groups. A purposive sample of 20 patients who have experienced musculoskeletal trauma within the past 4 weeks and 12 physiotherapists who manage this patient population will be recruited from a single trauma centre in the UK. Semi-structured interviews with patients at 4 weeks, 6 and 12 months following injury, and 2 focus groups with physiotherapists will be undertaken at one time point. Views and perceptions on the definition of recovery and what constitutes a successful recovery will be explored using both methods, with a focus on the lived experience and patient journey following musculoskeletal trauma, and how this changes through the process of recovery. Data from both the semi-structured interviews and focus groups will be analysed separately and then integrated and synthesised into key themes ensuring similarities and differences are identified. Strategies to ensure trustworthiness e.g., reflexivity will be employed.\n\n\nDISCUSSION\nRecovery following musculoskeletal trauma is complex and understanding of the concept of successful recovery and how this changes over time following an injury is largely unknown. It is imperative to understand the patient perspective and whether these perceptions align with current views of physiotherapists. A greater understanding of recovery following musculoskeletal trauma has potential to change clinical care, optimise patient centred care and improve efficiency and clinical decision making during rehabilitation. This in turn can contribute to improved clinical effectiveness, patient outcome and patient satisfaction with potential service and economic cost savings. This study has ethical approval (IRAS 287781/REC 20/PR/0712)."
},
{
"pmid": "28898126",
"title": "Computerized Bone Age Estimation Using Deep Learning Based Program: Evaluation of the Accuracy and Efficiency.",
"abstract": "OBJECTIVE\nThe purpose of this study is to evaluate the accuracy and efficiency of a new automatic software system for bone age assessment and to validate its feasibility in clinical practice.\n\n\nMATERIALS AND METHODS\nA Greulich-Pyle method-based deep-learning technique was used to develop the automatic software system for bone age determination. Using this software, bone age was estimated from left-hand radiographs of 200 patients (3-17 years old) using first-rank bone age (software only), computer-assisted bone age (two radiologists with software assistance), and Greulich-Pyle atlas-assisted bone age (two radiologists with Greulich-Pyle atlas assistance only). The reference bone age was determined by the consensus of two experienced radiologists.\n\n\nRESULTS\nFirst-rank bone ages determined by the automatic software system showed a 69.5% concordance rate and significant correlations with the reference bone age (r = 0.992; p < 0.001). Concordance rates increased with the use of the automatic software system for both reviewer 1 (63.0% for Greulich-Pyle atlas-assisted bone age vs 72.5% for computer-assisted bone age) and reviewer 2 (49.5% for Greulich-Pyle atlas-assisted bone age vs 57.5% for computer-assisted bone age). Reading times were reduced by 18.0% and 40.0% for reviewers 1 and 2, respectively.\n\n\nCONCLUSION\nAutomatic software system showed reliably accurate bone age estimations and appeared to enhance efficiency by reducing reading times without compromising the diagnostic accuracy."
},
{
"pmid": "31331883",
"title": "OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields.",
"abstract": "Realtime multi-person 2D pose estimation is a key component in enabling machines to have an understanding of people in images and videos. In this work, we present a realtime approach to detect the 2D pose of multiple people in an image. The proposed method uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. This bottom-up system achieves high accuracy and realtime performance, regardless of the number of people in the image. In previous work, PAFs and body part location estimation were refined simultaneously across training stages. We demonstrate that a PAF-only refinement rather than both PAF and body part location refinement results in a substantial increase in both runtime performance and accuracy. We also present the first combined body and foot keypoint detector, based on an internal annotated foot dataset that we have publicly released. We show that the combined detector not only reduces the inference time compared to running them sequentially, but also maintains the accuracy of each component individually. This work has culminated in the release of OpenPose, the first open-source realtime system for multi-person 2D pose detection, including body, foot, hand, and facial keypoints."
},
{
"pmid": "34501952",
"title": "Evaluating Therapeutic Effects of ADHD Medication Objectively by Movement Quantification with a Video-Based Skeleton Analysis.",
"abstract": "Attention-deficit/hyperactivity disorder (ADHD) is the most common neuropsychiatric disorder in children. Several scales are available to evaluate ADHD therapeutic effects, including the Swanson, Nolan, and Pelham (SNAP) questionnaire, the Vanderbilt ADHD Diagnostic Rating Scale, and the visual analog scale. However, these scales are subjective. In the present study, we proposed an objective and automatic approach for evaluating the therapeutic effects of medication in patients with (ADHD). The approach involved using movement quantification of patients' skeletons detected automatically with OpenPose in outpatient videos. Eleven skeleton parameter series were calculated from the detected skeleton sequence, and the corresponding 33 features were extracted using autocorrelation and variance analysis. This study enrolled 25 patients with ADHD. The outpatient videos were recorded before and after medication treatment. Statistical analysis indicated that four features corresponding to the first autocorrelation coefficients of the original series of four skeleton parameters and 11 features each corresponding to the first autocorrelation coefficients of the differenced series and the averaged variances of the original series of 11 skeleton parameters significantly decreased after the use of methylphenidate, an ADHD medication. The results revealed that the proposed approach can support physicians as an objective and automatic tool for evaluating the therapeutic effects of medication on patients with ADHD."
},
{
"pmid": "34429436",
"title": "Battery-free, wireless soft sensors for continuous multi-site measurements of pressure and temperature from patients at risk for pressure injuries.",
"abstract": "Capabilities for continuous monitoring of pressures and temperatures at critical skin interfaces can help to guide care strategies that minimize the potential for pressure injuries in hospitalized patients or in individuals confined to the bed. This paper introduces a soft, skin-mountable class of sensor system for this purpose. The design includes a pressure-responsive element based on membrane deflection and a battery-free, wireless mode of operation capable of multi-site measurements at strategic locations across the body. Such devices yield continuous, simultaneous readings of pressure and temperature in a sequential readout scheme from a pair of primary antennas mounted under the bedding and connected to a wireless reader and a multiplexer located at the bedside. Experimental evaluation of the sensor and the complete system includes benchtop measurements and numerical simulations of the key features. Clinical trials involving two hemiplegic patients and a tetraplegic patient demonstrate the feasibility, functionality and long-term stability of this technology in operating hospital settings."
},
{
"pmid": "32952306",
"title": "Perioral pressure ulcers in patients with COVID-19 requiring invasive mechanical ventilation.",
"abstract": "BACKGROUND\nFacial pressure ulcers are a rare yet significant complication. National Institute for Health and Care Excellence (NICE) guidelines recommend that patients should be risk-assessed for pressure ulcers and measures instated to prevent such complication. In this study, we report case series of perioral pressure ulcers developed following the use of two devices to secure endotracheal tubes in COVID-19 positive patients managed in the intensive care setting.\n\n\nMETHODS\nA retrospective analysis was conducted on sixteen patients identified to have perioral pressure ulcers by using the institutional risk management system. Data parameters included patient demographics (age, gender, comorbidities, smoking history and body mass index (BMI)). Data collection included the indication of admission to ITU, duration of intubation, types of medical devices utilised to secure the endotracheal tube, requirement of vasopressor agents and renal replacement therapy, presence of other associated ulcers, duration of proning and mortality.\n\n\nRESULTS\nSixteen patients developed different patterns of perioral pressure ulcers related to the use of two medical devices (Insight, AnchorFast). The mean age was 58.6 years. The average length of intubation was 18.8 days. Fourteen patients required proning, with an average duration of 5.2 days.\n\n\nCONCLUSIONS\nThe two devices utilised to secure endotracheal tubes are associated with unique patterns of facial pressure ulcers. Measures should be taken to assess the skin regularly and avoid utilising devices that are associated with a high risk of facial pressure ulcers. Awareness and training should be provided to prevent such significant complication.Level of evidence: Level IV, risk/prognostic study."
},
{
"pmid": "29411310",
"title": "Polypharmacy in Home Care in Europe: Cross-Sectional Data from the IBenC Study.",
"abstract": "BACKGROUND\nHome care (HC) patients are characterized by a high level of complexity, which is reflected by the prevalence of multimorbidity and the correlated high drug consumption. This study assesses prevalence and factors associated with polypharmacy in a sample of HC patients in Europe.\n\n\nMETHODS\nWe conducted a cross-sectional analysis on 1873 HC patients from six European countries participating in the Identifying best practices for care-dependent elderly by Benchmarking Costs and outcomes of community care (IBenC) project. Data were collected using the interResident Assessment Instrument (interRAI) instrument for HC. Polypharmacy status was categorized into three groups: non-polypharmacy (0-4 drugs), polypharmacy (5-9 drugs), and excessive polypharmacy (≥ 10 drugs). Multinomial logistic regressions were used to identify variables associated with polypharmacy and excessive polypharmacy.\n\n\nRESULTS\nPolypharmacy was observed in 730 (39.0%) HC patients and excessive polypharmacy in 433 (23.1%). As compared with non-polypharmacy, excessive polypharmacy was directly associated with chronic disease but also with female sex (odds ratio [OR] 1.58; 95% confidence interval [CI] 1.17-2.13), pain (OR 1.51; 95% CI 1.15-1.98), dyspnea (OR 1.37; 95% CI 1.01-1.89), and falls (OR 1.55; 95% CI 1.01-2.40). An inverse association with excessive polypharmacy was shown for age (OR 0.69; 95% CI 0.56-0.83).\n\n\nCONCLUSIONS\nPolypharmacy and excessive polypharmacy are common among HC patients in Europe. Factors associated with polypharmacy status include not only co-morbidity but also specific symptoms and age."
},
{
"pmid": "29549539",
"title": "A review of telomere length in sarcopenia and frailty.",
"abstract": "Sarcopenia and frailty are associated with several important health-related adverse events, including disability, loss of independence, institutionalization and mortality. Sarcopenia can be considered a biological substrate of frailty, and the prevalence of both these conditions progressively increases with age. Telomeres are nucleoprotein structures located at the end of linear chromosomes and implicated in cellular ageing, shorten with age, and are associated with various age-related diseases. In addition, telomere length (TL) is widely considered a molecular/cellular hallmark of the ageing process. This narrative review summarizes the knowledge about telomeres and analyzes for the first time a possible association of TL with sarcopenia and frailty. The overview provided by the present review suggests that leukocyte TL as single measurement, calculated by quantitative real-time polymerase chain reaction (qRT-PCR), cannot be considered a meaningful biological marker for complex, multidimensional age-related conditions, such as sarcopenia and frailty. Panels of biomarkers, including TL, may provide more accurate assessment and prediction of outcomes in these geriatric syndromes in elderly people."
},
{
"pmid": "29340963",
"title": "Interactions between drugs and geriatric syndromes in nursing home and home care: results from Shelter and IBenC projects.",
"abstract": "AIM\nDrugs may interact with geriatric syndromes by playing a role in the continuation, recurrence or worsening of these conditions. Aim of this study is to assess the prevalence of interactions between drugs and three common geriatric syndromes (delirium, falls and urinary incontinence) among older adults in nursing home and home care in Europe.\n\n\nMETHODS\nWe performed a cross-sectional multicenter study among 4023 nursing home residents participating in the Services and Health for Elderly in Long-TERm care (Shelter) project and 1469 home care patients participating in the Identifying best practices for care-dependent elderly by Benchmarking Costs and outcomes of community care (IBenC) project. Exposure to interactions between drugs and geriatric syndromes was assessed by 2015 Beers criteria.\n\n\nRESULTS\n790/4023 (19.6%) residents in the Shelter Project and 179/1469 (12.2%) home care patients in the IBenC Project presented with one or more drug interactions with geriatric syndromes. In the Shelter project, 288/373 (77.2%) residents experiencing a fall, 429/659 (65.1%) presenting with delirium and 180/2765 (6.5%) with urinary incontinence were on one or more interacting drugs. In the IBenC project, 78/172 (45.3%) participants experiencing a fall, 80/182 (44.0%) presenting with delirium and 36/504 (7.1%) with urinary incontinence were on one or more interacting drugs.\n\n\nCONCLUSION\nDrug-geriatric syndromes interactions are common in long-term care patients. Future studies and interventions aimed at improving pharmacological prescription in the long-term care setting should assess not only drug-drug and drug-disease interactions, but also interactions involving geriatric syndromes."
}
] |
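The related-work text of the preceding entry (PMC8871307) notes that a pose estimator such as OpenPose returns only joint coordinates, and that joint angles for recovery checks must be computed separately from those coordinates. The following minimal Python sketch shows one standard way to do this; the keypoint values and the hip/knee/ankle example are hypothetical illustrations, not data or code from the system described in that entry.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at keypoint b, formed by the segments b->a and b->c.

    a, b and c are 2D keypoints (e.g., hip, knee and ankle pixel coordinates
    returned by a pose estimator such as OpenPose).
    """
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    v1, v2 = a - b, c - b
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Hypothetical keypoints for a single video frame (pixel coordinates).
hip, knee, ankle = (320, 240), (330, 340), (325, 440)
print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} degrees")
```

Computed per frame, such angles could then be compared against a clinician-specified target range, which is one plausible way to operationalize the 5-degree angular-error criterion mentioned in the entry's abstract.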
Diagnostics | null | PMC8871377 | 10.3390/diagnostics12020532 | Automatic Detection of Age-Related Macular Degeneration Based on Deep Learning and Local Outlier Factor Algorithm | Age-related macular degeneration (AMD) is a retinal disorder affecting the elderly, and society's aging population means that the disease is becoming increasingly prevalent. The vision in patients with early AMD is usually unaffected or nearly normal but central vision may be weakened or even lost if timely treatment is not performed. Therefore, early diagnosis is particularly important to prevent the further exacerbation of AMD. This paper proposed a novel automatic detection method of AMD from optical coherence tomography (OCT) images based on deep learning and a local outlier factor (LOF) algorithm. A ResNet-50 model with L2-constrained softmax loss was retrained to extract features from OCT images and the LOF algorithm was used as the classifier. The proposed method was trained on the UCSD dataset and tested on both the UCSD dataset and Duke dataset, with an accuracy of 99.87% and 97.56%, respectively. Even though the model was only trained on the UCSD dataset, it obtained good detection accuracy when tested on another dataset. Comparison with other methods also indicates the efficiency of the proposed method in detecting AMD. | 2. Related Work Layer segmentation is crucial in many automatic analysis algorithms based on retinal OCT images. The position and thickness of each retinal layer are obtained according to the result of the layer segmentation algorithm, then by analyzing the similarities and differences between the layer index of the tested image and the reference image, a variety of issues, including lesion detection and positioning, can be addressed. Farsiu et al. [23] introduced a semi-automatic segmentation of RPE, RPE drusen complex (RPEDC) and total retina (TR) boundaries. Then, volumes of TR, RPEDC and abnormal RPEDC of each subject were measured and compared with the normal thickness generated by control subjects to detect AMD. The area under the curve (AUC) of the receiver operating characteristic (ROC) for this classifier was 0.9900. Naz et al. [24] proposed an algorithm to detect AMD-affected OCT scans by calculating the difference between the RPE layer and a second-order polynomial curve. The method was made time efficient by using an intensity-based threshold method for the RPE segmentation. A dataset with 25 AMD and 25 healthy images was used, and the study obtained an accurate detection of AMD with 96.00% accuracy. Arabi et al. [25] used the binary threshold method to extract the RPE layer, sampled the extracted layers and counted the number of white pixels in each sample. The mean value of the numbers of pixels was calculated and classified. They tested the approach on 16 images and obtained an accuracy of 75.00%. Thomas et al. [26] proposed an algorithm based on RPE layer detection and baseline estimation using statistical methods and randomization for the detection of AMD from retinal OCT images. The method was tested on a public dataset including 2130 images and achieved an overall accuracy of 96.66%. Sharif et al. [27] presented a method based on feature extraction and the support vector machine (SVM). First, the RPE layer was extracted by utilizing the graph theory dynamic programming technique, then a unique feature set consisting of features extracted from the difference signal of RPE and the inner segment outer segment layer of RPE was obtained.
Finally, the SVM classifier was used to detect AMD-affected images from 950 OCT scans, and an accuracy of 95.00% was obtained. Although the above layer-segmentation-based methods obtained promising results, they depend on accurate layer segmentation and are therefore not well suited to large-scale AMD detection. The convolutional neural network (CNN), which emerged at the end of the 20th century, has significantly improved the ability to classify images [28]. Lee et al. [29] classified 52,690 normal and 48,312 AMD OCT images utilizing a modified version of the VGG-16 CNN model and obtained an overall accuracy of 93.40%. Serener et al. [30] compared two pre-trained CNNs, namely AlexNet and ResNet-18, for automatically classifying OCT images of dry and wet AMD; in both cases, the ResNet-18 model outperformed the AlexNet model, and the AUC of the ResNet model for each AMD stage was 0.9400 and 0.9300, respectively. Thomas et al. [31,32] conducted a number of studies on AMD detection using OCT images. In [31], a multiscale and multipath CNN with six convolutional layers was proposed and achieved an overall accuracy of 98.79% with a random forest (RF) classifier. Later, in [32], they introduced another multiscale CNN with seven convolutional layers to classify AMD and normal OCT images; the multiscale convolution layer captures a large number of local structures using filters of various sizes. The proposed CNN achieved an accuracy of 99.73% on the UCSD dataset. Yoo et al. [33] utilized VGG-19 pre-trained on ImageNet as a feature extractor, and a multiclass RF classifier was used to detect AMD images. The overall accuracy using OCT alone was 82.60% on a small dataset containing both OCT and matched fundus images. Kadry et al. [34] extracted handcrafted features, such as the local binary pattern (LBP), the pyramid histogram of oriented gradients (PHOG), and the discrete wavelet transform (DWT), from the test images and concatenated them with the deep features of VGG-16. The proposed technique achieved an accuracy of up to 97.00% for OCT images with different binary classifiers. In this study, we present a novel method for the detection of AMD based on OCT images and show it to be more effective than existing methods. The rest of the paper is structured as follows. The proposed methodology is given in Section 3, then the datasets used for the experiment and the parameters of the model are given in Section 4. The experimental results and discussion are shown in Section 5. The conclusion is given in Section 6. | [
"30496104",
"32100327",
"34548176",
"33087239",
"16286610",
"26978865",
"30303083",
"1957169",
"19829430",
"34940719",
"34677283",
"7862410",
"31342201",
"32030669",
"34597989",
"32225791",
"34943550",
"16716639",
"17158086",
"21790112",
"23993787",
"33190943",
"34364184",
"30349958",
"31011155",
"33780867",
"30439600",
"29474911",
"25360373",
"29864167",
"28018716",
"28114453"
] | [
{
"pmid": "30496104",
"title": "Global, regional, and national incidence, prevalence, and years lived with disability for 354 diseases and injuries for 195 countries and territories, 1990-2017: a systematic analysis for the Global Burden of Disease Study 2017.",
"abstract": "BACKGROUND\nThe Global Burden of Diseases, Injuries, and Risk Factors Study 2017 (GBD 2017) includes a comprehensive assessment of incidence, prevalence, and years lived with disability (YLDs) for 354 causes in 195 countries and territories from 1990 to 2017. Previous GBD studies have shown how the decline of mortality rates from 1990 to 2016 has led to an increase in life expectancy, an ageing global population, and an expansion of the non-fatal burden of disease and injury. These studies have also shown how a substantial portion of the world's population experiences non-fatal health loss with considerable heterogeneity among different causes, locations, ages, and sexes. Ongoing objectives of the GBD study include increasing the level of estimation detail, improving analytical strategies, and increasing the amount of high-quality data.\n\n\nMETHODS\nWe estimated incidence and prevalence for 354 diseases and injuries and 3484 sequelae. We used an updated and extensive body of literature studies, survey data, surveillance data, inpatient admission records, outpatient visit records, and health insurance claims, and additionally used results from cause of death models to inform estimates using a total of 68 781 data sources. Newly available clinical data from India, Iran, Japan, Jordan, Nepal, China, Brazil, Norway, and Italy were incorporated, as well as updated claims data from the USA and new claims data from Taiwan (province of China) and Singapore. We used DisMod-MR 2.1, a Bayesian meta-regression tool, as the main method of estimation, ensuring consistency between rates of incidence, prevalence, remission, and cause of death for each condition. YLDs were estimated as the product of a prevalence estimate and a disability weight for health states of each mutually exclusive sequela, adjusted for comorbidity. We updated the Socio-demographic Index (SDI), a summary development indicator of income per capita, years of schooling, and total fertility rate. Additionally, we calculated differences between male and female YLDs to identify divergent trends across sexes. GBD 2017 complies with the Guidelines for Accurate and Transparent Health Estimates Reporting.\n\n\nFINDINGS\nGlobally, for females, the causes with the greatest age-standardised prevalence were oral disorders, headache disorders, and haemoglobinopathies and haemolytic anaemias in both 1990 and 2017. For males, the causes with the greatest age-standardised prevalence were oral disorders, headache disorders, and tuberculosis including latent tuberculosis infection in both 1990 and 2017. In terms of YLDs, low back pain, headache disorders, and dietary iron deficiency were the leading Level 3 causes of YLD counts in 1990, whereas low back pain, headache disorders, and depressive disorders were the leading causes in 2017 for both sexes combined. All-cause age-standardised YLD rates decreased by 3·9% (95% uncertainty interval [UI] 3·1-4·6) from 1990 to 2017; however, the all-age YLD rate increased by 7·2% (6·0-8·4) while the total sum of global YLDs increased from 562 million (421-723) to 853 million (642-1100). The increases for males and females were similar, with increases in all-age YLD rates of 7·9% (6·6-9·2) for males and 6·5% (5·4-7·7) for females. We found significant differences between males and females in terms of age-standardised prevalence estimates for multiple causes. 
The causes with the greatest relative differences between sexes in 2017 included substance use disorders (3018 cases [95% UI 2782-3252] per 100 000 in males vs 1400 [1279-1524] per 100 000 in females), transport injuries (3322 [3082-3583] vs 2336 [2154-2535]), and self-harm and interpersonal violence (3265 [2943-3630] vs 5643 [5057-6302]).\n\n\nINTERPRETATION\nGlobal all-cause age-standardised YLD rates have improved only slightly over a period spanning nearly three decades. However, the magnitude of the non-fatal disease burden has expanded globally, with increasing numbers of people who have a wide spectrum of conditions. A subset of conditions has remained globally pervasive since 1990, whereas other conditions have displayed more dynamic trends, with different ages, sexes, and geographies across the globe experiencing varying burdens and trends of health loss. This study emphasises how global improvements in premature mortality for select conditions have led to older populations with complex and potentially expensive diseases, yet also highlights global achievements in certain domains of disease and injury.\n\n\nFUNDING\nBill & Melinda Gates Foundation."
},
{
"pmid": "32100327",
"title": "Risk factors for progression of age-related macular degeneration.",
"abstract": "PURPOSE\nAge-related macular degeneration (AMD) is a degenerative disease of the macula, often leading to progressive vision loss. The rate of disease progression can vary among individuals and has been associated with multiple risk factors. In this review, we provide an overview of the current literature investigating phenotypic, demographic, environmental, genetic, and molecular risk factors, and propose the most consistently identified risk factors for disease progression in AMD based on these studies. Finally, we describe the potential use of these risk factors for personalised healthcare.\n\n\nRECENT FINDINGS\nWhile phenotypic risk factors such as drusen and pigment abnormalities become more important to predict disease progression during the course of the disease, demographic, environmental, genetic and molecular risk factors are more valuable at earlier disease stages. Demographic and environmental risk factors such as age and smoking are consistently reported to be related to disease progression, while other factors such as sex, body mass index (BMI) and education are less often associated. Of all known AMD variants, variants that are most consistently reported with disease progression are rs10922109 and rs570618 in CFH, rs116503776 in C2/CFB/SKIV2L, rs3750846 in ARMS2/HTRA1 and rs2230199 in C3. However, it seems likely that other AMD variants also contribute to disease progression but to a lesser extent. Rare variants have probably a large effect on disease progression in highly affected families. Furthermore, current prediction models do not include molecular risk factors, while these factors can be measured accurately in the blood. Possible promising molecular risk factors are High-Density Lipoprotein Cholesterol (HDL-C), Docosahexaenoic acid (DHA), eicosapentaenoic acid (EPA), zeaxanthin and lutein.\n\n\nSUMMARY\nPhenotypic, demographic, environmental, genetic and molecular risk factors can be combined in prediction models to predict disease progression, but the selection of the proper risk factors for personalised risk prediction will differ among individuals and is dependent on their current disease stage. Future prediction models should include a wider set of genetic variants to determine the genetic risk more accurately, and rare variants should be taken into account in highly affected families. In addition, adding molecular factors in prediction models may lead to preventive strategies and personalised advice."
},
{
"pmid": "34548176",
"title": "Global Burden of Dry Age-Related Macular Degeneration: A Targeted Literature Review.",
"abstract": "PURPOSE\nAge-related macular degeneration (AMD) is a leading cause of blindness, particularly in higher-income countries. Although dry AMD accounts for 85% to 90% of AMD cases, a comprehensive understanding of the global dry AMD burden is needed.\n\n\nMETHODS\nA targeted literature review was conducted in PubMed, MEDLINE, Embase, and the Cochrane Database of Systematic Reviews (1995-2019) to identify data on the epidemiology, management, and humanistic and economic burden of dry AMD in adults. A landscape analysis of patient-reported outcome (PRO) instruments in AMD was also conducted via searches in PubMed (1995-2019), ClinicalTrials.gov, PROQOLID, PROLABELS, and health technology assessment reports (2008-2018).\n\n\nFINDINGS\nThirty-seven of 4205 identified publications were included in the review. Dry AMD prevalence was 0.44% globally, varied across ethnic groups, and increased with age. Patients with dry AMD had higher risks of all-cause mortality (hazard ratio [HR] = 1.46; 95% CI, 0.99-2.16) and tobacco-related (HR = 2.86; 95% CI, 1.15-7.09) or cancer deaths (HR = 3.37; 95% CI, 1.56-7.29; P = 0.002) than those without dry AMD. Smoking, increasing age or cholesterol levels, and obesity are key risk factors for developing dry AMD. No treatment guidelines were identified for dry AMD specifically; management focuses on risk factor reduction and use of dietary supplements. In the United States and Italy, direct medical costs and health care resource utilization were lower in patients with dry versus wet AMD. Patients with dry AMD, particularly advanced disease, experienced significant visual function impairment. Dry AMD symptoms included reduced central vision, decreased ability to see at night, increased visual blurriness, distortion of straight lines and text, and faded color vision. Most PRO instruments used in AMD evaluations covered few, if any, of the identified symptoms reported by patients with dry AMD. Although the Quality of Life and Vision Function Questionnaire, 25-item National Eye Institute Vision Function Questionnaire, Low Vision Quality of Life, Impact of Vision Impairment-Very Low Vision, and Functional Reading Independence Index had strong content validity and psychometric properties in patients with dry AMD, they retained limited coverage of salient concepts.\n\n\nIMPLICATIONS\nDespite dry AMD accounting for most AMD cases, there are substantial gaps in the published literature, particularly the humanistic and economic burden of disease and the lack of differentiation among dry, wet, or unspecified dry AMD. The significant burden of illness alludes to a high unmet need for tolerable and effective treatment options, as well as PRO instruments with more coverage of dry AMD symptoms and salient concepts."
},
{
"pmid": "33087239",
"title": "The Diagnosis and Treatment of Age-Related Macular Degeneration.",
"abstract": "BACKGROUND\nAge-related macular degeneration (AMD) is thought to cause approximately 9% of all cases of blindness worldwide. In Germany, half of all cases of blindness and high-grade visual impairment are due to AMD. In this review, the main risk factors, clinical manifestations, and treatments of this disease are presented.\n\n\nMETHODS\nThis review is based on pertinent publications retrieved by a selective search in PubMed for original articles and reviews, as well as on current position statements by the relevant specialty societies.\n\n\nRESULTS\nAMD is subdivided into early, intermediate, and late stages. The early stage is often asymptomatic; patients in the other two stages often have distorted vision or central visual field defects. The main risk factors are age, genetic predisposition, and nicotine consumption. The number of persons with early AMD in Germany rose from 5.7 million in 2002 to ca. 7 million in 2017. Late AMD is subdivided into the dry late form of the disease, for which there is no treatment at present, and the exudative late form, which can be treated with the intravitreal injection of VEGF inhibitors.\n\n\nCONCLUSION\nMore research is needed on the dry late form of AMD in particular, which is currently untreatable. The treatment of the exudative late form with VEGF inhibitors is labor-intensive and requires a close collaboration of the patient, the ophthalmologist, and the primary care physician."
},
{
"pmid": "16286610",
"title": "The Age-Related Eye Disease Study severity scale for age-related macular degeneration: AREDS Report No. 17.",
"abstract": "OBJECTIVE\nTo develop a fundus photographic severity scale for age-related macular degeneration (AMD).\n\n\nMETHODS\nIn the Age-Related Eye Disease Study, stereoscopic color fundus photographs were taken at baseline, at the 2-year follow-up visit, and annually thereafter. Photographs were graded for drusen characteristics (size, type, area), pigmentary abnormalities (increased pigment, depigmentation, geographic atrophy), and presence of abnormalities characteristic of neovascular AMD (retinal pigment epithelial detachment, serous or hemorrhagic sensory retinal detachment, subretinal or sub-retinal pigment epithelial hemorrhage, subretinal fibrous tissue). Advanced AMD was defined as presence of 1 or more neovascular AMD abnormalities, photocoagulation for AMD, or geographic atrophy involving the center of the macula. We explored associations among right eyes of 3212 participants between severity of drusen characteristics and pigmentary abnormalities at baseline and development of advanced AMD within 5 years of follow-up.\n\n\nRESULTS\nA 9-step severity scale that combines a 6-step drusen area scale with a 5-step pigmentary abnormality scale was developed, on which the 5-year risk of advanced AMD increased progressively from less than 1% in step 1 to about 50% in step 9. Among the 334 eyes that had at least a 3-step progression on the scale between the baseline and 5-year visits, almost half showed stepwise progression through intervening severity levels at intervening visits. Replicate gradings showed agreement within 1 step on the scale in 87% of eyes.\n\n\nCONCLUSIONS\nThe scale provides convenient risk categories and has acceptable reproducibility. Progression along it may prove to be useful as a surrogate for progression to advanced AMD."
},
{
"pmid": "26978865",
"title": "AGE-RELATED MACULAR DEGENERATION.",
"abstract": "OBJECTIVES\nThe objective of our study was to review the current knowledge on Age- Related Macular Degeneration, including pathogenesis, ocular manifestations, diagnosis and ancillary testing.\n\n\nSYSTEMATIC REVIEW METHODOLOGY\nRelevant publications on Age-Related Macular Degeneration that were published until 2014.\n\n\nCONCLUSIONS\nAge-related macular degeneration (AMD) is a common macular disease affecting elderly people in the Western world. It is characterized by the appearance of drusen in the macula, accompanied by choroidal neovascularization (CNV) or geographic atrophy."
},
{
"pmid": "30303083",
"title": "Age-related macular degeneration.",
"abstract": "Age-related macular degeneration is a leading cause of visual impairment and severe vision loss. Clinically, it is classified as early-stage (medium-sized drusen and retinal pigmentary changes) to late-stage (neovascular and atrophic). Age-related macular degeneration is a multifactorial disorder, with dysregulation in the complement, lipid, angiogenic, inflammatory, and extracellular matrix pathways implicated in its pathogenesis. More than 50 genetic susceptibility loci have been identified, of which the most important are in the CFH and ARMS2 genes. The major non-genetic risk factors are smoking and low dietary intake of antioxidants (zinc and carotenoids). Progression from early-stage to late-stage disease can be slowed with high-dose zinc and antioxidant vitamin supplements. Intravitreal anti-vascular endothelial growth factor therapy (eg, ranibizumab, aflibercept, or bevacizumab) is highly effective at treating neovascular age-related macular degeneration, and has markedly decreased the prevalence of visual impairment in populations worldwide. Currently, no proven therapies for atrophic disease are available, but several agents are being investigated in clinical trials. Future progress is likely to be from improved efforts in prevention and risk-factor modification, personalised medicine targeting specific pathways, newer anti-vascular endothelial growth factor agents or other agents, and regenerative therapies."
},
{
"pmid": "1957169",
"title": "Optical coherence tomography.",
"abstract": "A technique called optical coherence tomography (OCT) has been developed for noninvasive cross-sectional imaging in biological systems. OCT uses low-coherence interferometry to produce a two-dimensional image of optical scattering from internal tissue microstructures in a way that is analogous to ultrasonic pulse-echo imaging. OCT has longitudinal and lateral spatial resolutions of a few micrometers and can detect reflected signals as small as approximately 10(-10) of the incident optical power. Tomographic imaging is demonstrated in vitro in the peripapillary area of the retina and in the coronary artery, two clinically relevant examples that are representative of transparent and turbid media, respectively."
},
{
"pmid": "19829430",
"title": "In vivo retinal imaging by optical coherence tomography.",
"abstract": "We describe what are to our knowledge the first in vivo measurements of human retinal structure with optical coherence tomography. These images represent the highest depth resolution in vivo retinal images to date. The tomographic system, image-processing techniques, and examples of high-resolution tomographs and their clinical relevance are discussed."
},
{
"pmid": "34940719",
"title": "Roadmap on Digital Holography-Based Quantitative Phase Imaging.",
"abstract": "Quantitative Phase Imaging (QPI) provides unique means for the imaging of biological or technical microstructures, merging beneficial features identified with microscopy, interferometry, holography, and numerical computations. This roadmap article reviews several digital holography-based QPI approaches developed by prominent research groups. It also briefly discusses the present and future perspectives of 2D and 3D QPI research based on digital holographic microscopy, holographic tomography, and their applications."
},
{
"pmid": "34677283",
"title": "Roadmap on Recent Progress in FINCH Technology.",
"abstract": "Fresnel incoherent correlation holography (FINCH) was a milestone in incoherent holography. In this roadmap, two pathways, namely the development of FINCH and applications of FINCH explored by many prominent research groups, are discussed. The current state-of-the-art FINCH technology, challenges, and future perspectives of FINCH technology as recognized by a diverse group of researchers contributing to different facets of research in FINCH have been presented."
},
{
"pmid": "7862410",
"title": "Imaging of macular diseases with optical coherence tomography.",
"abstract": "BACKGROUND/PURPOSE\nTo assess the potential of a new diagnostic technique called optical coherence tomography for imaging macular disease. Optical coherence tomography is a novel noninvasive, noncontact imaging modality which produces high depth resolution (10 microns) cross-sectional tomographs of ocular tissue. It is analogous to ultrasound, except that optical rather than acoustic reflectivity is measured.\n\n\nMETHODS\nOptical coherence tomography images of the macula were obtained in 51 eyes of 44 patients with selected macular diseases. Imaging is performed in a manner compatible with slit-lamp indirect biomicroscopy so that high-resolution optical tomography may be accomplished simultaneously with normal ophthalmic examination. The time-of-flight delay of light backscattered from different layers in the retina is determined using low-coherence interferometry. Cross-sectional tomographs of the retina profiling optical reflectivity versus distance into the tissue are obtained in 2.5 seconds and with a longitudinal resolution of 10 microns.\n\n\nRESULTS\nCorrelation of fundus examination and fluorescein angiography with optical coherence tomography tomographs was demonstrated in 12 eyes with the following pathologies: full- and partial-thickness macular hole, epiretinal membrane, macular edema, intraretinal exudate, idiopathic central serous chorioretinopathy, and detachments of the pigment epithelium and neurosensory retina.\n\n\nCONCLUSION\nOptical coherence tomography is potentially a powerful tool for detecting and monitoring a variety of macular diseases, including macular edema, macular holes, and detachments of the neurosensory retina and pigment epithelium."
},
{
"pmid": "31342201",
"title": "Swept-source optical coherence tomographic observation on prevalence and variations of cemento-enamel junction morphology.",
"abstract": "To investigate the prevalence of different patterns of cemento-enamel junction (CEJ) morphology under swept-source optical coherence tomography (SS-OCT). One hundred extracted human teeth were used consisting of incisors, premolars, and molars. Each sample was observed for every 500 μm circumferentially along CEJ and OCT images of the pattern were noted. Microscopic observations were done for the representative sample using confocal laser scanning microscope (CLSM) and transmission electron microscope (TEM). The OCT images exhibited four CEJ patterns: edge-to-edge (type I), exposed dentin (type II), cementum overlapping enamel (type III), and enamel overlapping cementum (type IV). The prevalence of CEJ patterns was further statistically considered for mesial, distal, buccal, and lingual surfaces. The real-time imaging by SS-OCT instantly determined CEJ morphology. CLSM and TEM observation revealed morphological features along CEJ, which corresponded to OCT images of CEJ anatomy. OCT results showed 56.8% of type I pattern predominantly found on proximal surfaces, followed by 36.5% of type II pattern on buccal and lingual surface, 6.4% of type III pattern, and 0.3% of type IV pattern. There was a significant difference in prevalence of CEJ patterns among different types of teeth, but there was no statistically significant difference among the four surfaces in each type of teeth. OCT is a non-invasive diagnostic tool to examine the CEJ patterns along the entire circumference. OCT observation revealed even minor dentin exposure that would need clinical and home procedures to prevent any symptoms."
},
{
"pmid": "32030669",
"title": "Techniques and Applications in Skin OCT Analysis.",
"abstract": "The skin is the largest organ of our body. Skin disease abnormalities which occur within the skin layers are difficult to examine visually and often require biopsies to make a confirmation on a suspected condition. Such invasive methods are not well-accepted by children and women due to the possibility of scarring. Optical coherence tomography (OCT) is a non-invasive technique enabling in vivo examination of sub-surface skin tissue without the need for excision of tissue. However, one of the challenges in OCT imaging is the interpretation and analysis of OCT images. In this review, we discuss the various methodologies in skin layer segmentation and how it could potentially improve the management of skin diseases. We also present a review of works which use advanced machine learning techniques to achieve layers segmentation and detection of skin diseases. Lastly, current challenges in analysis and applications are also discussed."
},
{
"pmid": "34597989",
"title": "In-situ 3D fouling visualization of membrane distillation treating industrial textile wastewater by optical coherence tomography imaging.",
"abstract": "Membrane fouling, which is caused by the deposition of particles on the membrane surface or pores, reduces system performance in membrane distillation (MD) applications, resulting in increased operational costs, poor recovery, and system failure. Optical Coherence Tomography enables in-situ foulant monitoring in both 2D and 3D, however, the 2D images can only determine fouling layer thickness in severe fouling. Therefore, in this study, an advanced 3D imaging analysis technique using intensity range filters was proposed to quantify fouling layer formation during MD through the use of a single 3D image. This approach not only reduces computational power requirements, but also successfully separated the fouling layer from the membrane at the microscale. Thus, the thickness, fouling index, and fouling layer coverage can be evaluated in real time. To test this approach, Polyvinylidene fluoride (C-PVDF) and polytetrafluoroethylene (C-PTFE) membranes were used to treat a feed consisting of industrial textile wastewater. Thin and disperse foulants was observed on the C-PTFE, with a 22 µm thick fouling layer which could not be observed using 2D images after 24 h. Moreover, the C-PTFE demonstrated better antifouling ability than the C-PVDF as demonstrated by its lower fouling index, which was also supported by surface energy characterization. This work demonstrates the significant potential of 3D imagery in the long-term monitoring of membrane fouling process to improve membrane antifouling performance in MD applications, which can lead to lowered operational costs and improved system stability."
},
{
"pmid": "32225791",
"title": "Optical coherence tomography imaging of plant root growth in soil.",
"abstract": "Complex interactions between roots and soil provide the nutrients and physical support required for robust plant growth. Yet, visualizing the root-soil interface is challenged by soil's opaque scattering characteristics. Herein, we describe methods for using optical coherence tomography (OCT) to provide non-destructive 3D and cross-sectional root imaging not available with traditional bright-field microscopy. OCT is regularly used for bioimaging, especially in ophthalmology, where it can detect retinal abnormalities. Prior use of OCT in plant biology has focused on surface defects of above-ground tissues, predominantly in food crops. Our results show OCT is also viable for detailed, in situ study of living plant roots. Using OCT for direct observations of root growth in soil can help elucidate key interactions between root morphology and various components of the soil environment including soil structure, microbial communities, and nutrient patches. Better understanding of these interactions can guide efforts to improve plant nutrient acquisition from soil to increase agricultural efficiency as well as better understand drivers of plant growth in natural systems."
},
{
"pmid": "34943550",
"title": "Role of Optical Coherence Tomography Imaging in Predicting Progression of Age-Related Macular Disease: A Survey.",
"abstract": "In developed countries, age-related macular degeneration (AMD), a retinal disease, is the main cause of vision loss in the elderly. Optical Coherence Tomography (OCT) is currently the gold standard for assessing individuals for initial AMD diagnosis. In this paper, we look at how OCT imaging can be used to diagnose AMD. Our main aim is to examine and compare automated computer-aided diagnostic (CAD) systems for diagnosing and grading of AMD. We provide a brief summary, outlining the main aspects of performance assessment and providing a basis for current research in AMD diagnosis. As a result, the only viable alternative is to prevent AMD and stop both this devastating eye condition and unwanted visual impairment. On the other hand, the grading of AMD is very important in order to detect early AMD and prevent patients from reaching advanced AMD disease. In light of this, we explore the remaining issues with automated systems for AMD detection based on OCT imaging, as well as potential directions for diagnosis and monitoring systems based on OCT imaging and telemedicine applications."
},
{
"pmid": "16716639",
"title": "Retinal assessment using optical coherence tomography.",
"abstract": "Over the 15 years since the original description, optical coherence tomography (OCT) has become one of the key diagnostic technologies in the ophthalmic subspecialty areas of retinal diseases and glaucoma. The reason for the widespread adoption of this technology originates from at least two properties of the OCT results: on the one hand, the results are accessible to the non-specialist where microscopic retinal abnormalities are grossly and easily noticeable; on the other hand, results are reproducible and exceedingly quantitative in the hands of the specialist. However, as in any other imaging technique in ophthalmology, some artifacts are expected to occur. Understanding of the basic principles of image acquisition and data processing as well as recognition of OCT limitations are crucial issues to using this equipment with cleverness. Herein, we took a brief look in the past of OCT and have explained the key basic physical principles of this imaging technology. In addition, each of the several steps encompassing a third generation OCT evaluation of retinal tissues has been addressed in details. A comprehensive explanation about next generation OCT systems has also been provided and, to conclude, we have commented on the future directions of this exceptional technique."
},
{
"pmid": "17158086",
"title": "Recent developments in optical coherence tomography for imaging the retina.",
"abstract": "Optical coherence tomography (OCT) was introduced in ophthalmology a decade ago. Within a few years in vivo imaging of the healthy retina and optic nerve head and of retinal diseases was a fact. In particular the ease with which these images can be acquired considerably changed the diagnostic strategy used by ophthalmologists. The OCT technique currently available in clinical practice is referred to as time-domain OCT, because the depth information of the retina is acquired as a sequence of samples, over time. This can be done either in longitudinal cross-sections perpendicular to, or in the coronal plane parallel to the retinal surface. Only recently, major advances have been made as to image resolution with the introduction of ultrahigh resolution OCT and in imaging speed, signal-to-noise ratio and sensitivity with the introduction of spectral-domain OCT. Functional OCT is the next frontier in OCT imaging. For example, polarization-sensitive OCT uses the birefringent characteristics of the retinal nerve fibre layer to better assess its thickness. Blood flow information from retinal vessels as well as the oxygenation state of retinal tissue can be extracted from the OCT signal. Very promising are the developments in contrast-enhanced molecular optical imaging, for example with the use of scattering tuneable nanoparticles targeted at specific tissue or cell structures. This review will provide an overview of these most recent developments in the field of OCT imaging focussing on applications for the retina."
},
{
"pmid": "21790112",
"title": "The role of spectral-domain OCT in the diagnosis and management of neovascular age-related macular degeneration.",
"abstract": "Spectral-domain optical coherence tomography (SD-OCT) has emerged as the ancillary examination of choice to assist the diagnosis and management of neovascular age-related macular degeneration (AMD). SD-OCT provides more detailed images of intraretinal, subretinal, and subretinal pigment epithelium fluid when compared to time-domain technology, leading to higher and earlier detection rates of neovascular AMD activity. Improvements in image analysis and acquisition speed make it important for decision-making in the diagnosis and treatment of this disease. However, this new technology needs to be validated for its role in the improvement of visual outcomes in the context of anti-angiogenic therapy."
},
{
"pmid": "23993787",
"title": "Quantitative classification of eyes with and without intermediate age-related macular degeneration using optical coherence tomography.",
"abstract": "OBJECTIVE\nTo define quantitative indicators for the presence of intermediate age-related macular degeneration (AMD) via spectral-domain optical coherence tomography (SD-OCT) imaging of older adults.\n\n\nDESIGN\nEvaluation of diagnostic test and technology.\n\n\nPARTICIPANTS AND CONTROLS\nOne eye from 115 elderly subjects without AMD and 269 subjects with intermediate AMD from the Age-Related Eye Disease Study 2 (AREDS2) Ancillary SD-OCT Study.\n\n\nMETHODS\nWe semiautomatically delineated the retinal pigment epithelium (RPE) and RPE drusen complex (RPEDC, the axial distance from the apex of the drusen and RPE layer to Bruch's membrane) and total retina (TR, the axial distance between the inner limiting and Bruch's membranes) boundaries. We registered and averaged the thickness maps from control subjects to generate a map of \"normal\" non-AMD thickness. We considered RPEDC thicknesses larger or smaller than 3 standard deviations from the mean as abnormal, indicating drusen or geographic atrophy (GA), respectively. We measured TR volumes, RPEDC volumes, and abnormal RPEDC thickening and thinning volumes for each subject. By using different combinations of these 4 disease indicators, we designed 5 automated classifiers for the presence of AMD on the basis of the generalized linear model regression framework. We trained and evaluated the performance of these classifiers using the leave-one-out method.\n\n\nMAIN OUTCOME MEASURES\nThe range and topographic distribution of the RPEDC and TR thicknesses in a 5-mm diameter cylinder centered at the fovea.\n\n\nRESULTS\nThe most efficient method for separating AMD and control eyes required all 4 disease indicators. The area under the curve (AUC) of the receiver operating characteristic (ROC) for this classifier was >0.99. Overall neurosensory retinal thickening in eyes with AMD versus control eyes in our study contrasts with previous smaller studies.\n\n\nCONCLUSIONS\nWe identified and validated efficient biometrics to distinguish AMD from normal eyes by analyzing the topographic distribution of normal and abnormal RPEDC thicknesses across a large atlas of eyes. We created an online atlas to share the 38 400 SD-OCT images in this study, their corresponding segmentations, and quantitative measurements."
},
{
"pmid": "33190943",
"title": "RPE layer detection and baseline estimation using statistical methods and randomization for classification of AMD from retinal OCT.",
"abstract": "BACKGROUND AND OBJECTIVE\nAge-related macular degeneration (AMD) is a condition of the eye that affects the aged people. Optical coherence tomography (OCT) is a diagnostic tool capable of analyzing and identifying the disease affected retinal layers with high resolution. The objective of this work is to extract the retinal pigment epithelium (RPE) layer and the baseline (natural eye curvature, particular to every patient) from retinal spectral-domain OCT (SD-OCT) images. It uses them to find the height of drusen (abnormalities) in the RPE layer and classify it as AMD or normal.\n\n\nMETHODS\nIn the proposed work, the contrast enhancement based adaptive denoising technique is used for speckle elimination. Pixel grouping and iterative elimination based on the knowledge of typical layer intensities and positions are used to obtain the RPE layer. Using this estimate, randomization techniques are employed, followed by polynomial fitting and drusen removal to arrive at a baseline estimate. The classification is based on the drusen height obtained by taking the difference between the RPE and baseline levels. We have used a patient, wise classification approach where a patient is classified diseased if more than a threshold number of patient images have drusen of more than a certain height. Since all slices of an affected patient will not show drusen, we are justified in adopting this technique.\n\n\nRESULTS\nThe proposed method is tested on a public data set of 2130 images/slices, which belonged to 30 patient volumes (15 AMD and 15 Normal) and achieved an overall accuracy of 96.66%, with no false positives. In comparison with existing works, the proposed method achieved higher overall accuracy and a better baseline estimate.\n\n\nCONCLUSIONS\nThe proposed work focuses on AMD/normal classification using a statistical approach. It does not require any training. The proposed method modifies the motion restoration paradigm to obtain an application-specific denoising algorithm. The existing RPE detection algorithm is modified significantly to make it robust and applicable even to images where the RPE is not very evident/there is a significant amount of perforations (drusen). The baseline estimation algorithm employs a powerful combination of randomization, iterative polynomial fitting, and pixel elimination in contrast to mere fitting techniques. The main highlight of this work is, it achieved an exact estimation of the baseline in the retinal image compared to the existing methods."
},
{
"pmid": "34364184",
"title": "A novel multiscale and multipath convolutional neural network based age-related macular degeneration detection using OCT images.",
"abstract": "BACKGROUND AND OBJECTIVE\nOne of the significant retinal diseases that affected older people is called Age-related Macular Degeneration (AMD). The first stage creates a blur effect on vision and later leads to central vision loss. Most people overlooked the primary stage blurring and converted it into an advanced stage. There is no proper treatment to cure the disease. So the early detection of AMD is essential to prevent its extension into the advanced stage. This paper proposes a novel deep Convolutional Neural Network (CNN) architecture to automate AMD diagnosis early from Optical Coherence Tomographic (OCT) images.\n\n\nMETHODS\nThe proposed architecture is a multiscale and multipath CNN with six convolutional layers. The multiscale convolution layer permits the network to produce many local structures with various filter dimensions. The multipath feature extraction permits CNN to merge more features regarding the sparse local and fine global structures. The performance of the proposed architecture is evaluated through ten-fold cross-validation methods using different classifiers like support vector machine, multi-layer perceptron, and random forest.\n\n\nRESULTS\nThe proposed CNN with the random forest classifier gives the best classification accuracy results. The proposed method is tested on data set 1, data set 2, data set 3, data set 4, and achieved an accuracy of 0.9666, 0.9897, 0.9974, and 0.9978 respectively, with random forest classifier. Also, we tested the combination of first three data sets and achieved an accuracy of 0.9902.\n\n\nCONCLUSIONS\nAn efficient algorithm for detecting AMD from OCT images is proposed based on a multiscale and multipath CNN architecture. Comparison with other approaches produced results that exhibit the efficiency of the proposed algorithm in the detection of AMD. The proposed architecture can be applied in rapid screening of the eye for the early detection of AMD. Due to less complexity and fewer learnable parameters."
},
{
"pmid": "30349958",
"title": "The possibility of the combination of OCT and fundus images for improving the diagnostic accuracy of deep learning for age-related macular degeneration: a preliminary experiment.",
"abstract": "Recently, researchers have built new deep learning (DL) models using a single image modality to diagnose age-related macular degeneration (AMD). Retinal fundus and optical coherence tomography (OCT) images in clinical settings are the most important modalities investigating AMD. Whether concomitant use of fundus and OCT data in DL technique is beneficial has not been so clearly identified. This experimental analysis used OCT and fundus image data of postmortems from the Project Macula. The DL based on OCT, fundus, and combination of OCT and fundus were invented to diagnose AMD. These models consisted of pre-trained VGG-19 and transfer learning using random forest. Following the data augmentation and training process, the DL using OCT alone showed diagnostic efficiency with area under the curve (AUC) of 0.906 (95% confidence interval, 0.891-0.921) and 82.6% (81.0-84.3%) accuracy rate. The DL using fundus alone exhibited AUC of 0.914 (0.900-0.928) and 83.5% (81.8-85.0%) accuracy rate. Combined usage of the fundus with OCT increased the diagnostic power with AUC of 0.969 (0.956-0.979) and 90.5% (89.2-91.8%) accuracy rate. The Delong test showed that the DL using both OCT and fundus data outperformed the DL using OCT alone (P value < 0.001) and fundus image alone (P value < 0.001). This multimodal random forest model showed even better performance than a restricted Boltzmann machine (P value = 0.002) and deep belief network algorithms (P value = 0.042). According to Duncan's multiple range test, the multimodal methods significantly improved the performance obtained by the single-modal methods. In this preliminary study, a multimodal DL algorithm based on the combination of OCT and fundus image raised the diagnostic accuracy compared to this data alone. Future diagnostic DL needs to adopt the multimodal process to combine various types of imaging for a more precise AMD diagnosis. Graphical abstract The basic architectural structure of the tested multimodal deep learning model based on pre-trained deep convolutional neural network and random forest using the combination of OCT and fundus image."
},
{
"pmid": "31011155",
"title": "Comparison of Deep Learning Approaches for Multi-Label Chest X-Ray Classification.",
"abstract": "The increased availability of labeled X-ray image archives (e.g. ChestX-ray14 dataset) has triggered a growing interest in deep learning techniques. To provide better insight into the different approaches, and their applications to chest X-ray classification, we investigate a powerful network architecture in detail: the ResNet-50. Building on prior work in this domain, we consider transfer learning with and without fine-tuning as well as the training of a dedicated X-ray network from scratch. To leverage the high spatial resolution of X-ray data, we also include an extended ResNet-50 architecture, and a network integrating non-image data (patient age, gender and acquisition type) in the classification process. In a concluding experiment, we also investigate multiple ResNet depths (i.e. ResNet-38 and ResNet-101). In a systematic evaluation, using 5-fold re-sampling and a multi-label loss function, we compare the performance of the different approaches for pathology classification by ROC statistics and analyze differences between the classifiers using rank correlation. Overall, we observe a considerable spread in the achieved performance and conclude that the X-ray-specific ResNet-38, integrating non-image data yields the best overall results. Furthermore, class activation maps are used to understand the classification process, and a detailed analysis of the impact of non-image features is provided."
},
{
"pmid": "33780867",
"title": "Deep learning for diagnosis of COVID-19 using 3D CT scans.",
"abstract": "A new pneumonia-type coronavirus, COVID-19, recently emerged in Wuhan, China. COVID-19 has subsequently infected many people and caused many deaths worldwide. Isolating infected people is one of the methods of preventing the spread of this virus. CT scans provide detailed imaging of the lungs and assist radiologists in diagnosing COVID-19 in hospitals. However, a person's CT scan contains hundreds of slides, and the diagnosis of COVID-19 using such scans can lead to delays in hospitals. Artificial intelligence techniques could assist radiologists with rapidly and accurately detecting COVID-19 infection from these scans. This paper proposes an artificial intelligence (AI) approach to classify COVID-19 and normal CT volumes. The proposed AI method uses the ResNet-50 deep learning model to predict COVID-19 on each CT image of a 3D CT scan. Then, this AI method fuses image-level predictions to diagnose COVID-19 on a 3D CT volume. We show that the proposed deep learning model provides 96% AUC value for detecting COVID-19 on CT scans."
},
{
"pmid": "30439600",
"title": "Exudate detection in fundus images using deeply-learnable features.",
"abstract": "Presence of exudates on a retina is an early sign of diabetic retinopathy, and automatic detection of these can improve the diagnosis of the disease. Convolutional Neural Networks (CNNs) have been used for automatic exudate detection, but with poor performance. This study has investigated different deep learning techniques to maximize the sensitivity and specificity. We have compared multiple deep learning methods, and both supervised and unsupervised classifiers for improving the performance of automatic exudate detection, i.e., CNNs, pre-trained Residual Networks (ResNet-50) and Discriminative Restricted Boltzmann Machines. The experiments were conducted on two publicly available databases: (i) DIARETDB1 and (ii) e-Ophtha. The results show that ResNet-50 with Support Vector Machines outperformed other networks with an accuracy and sensitivity of 98% and 0.99, respectively. This shows that ResNet-50 can be used for the analysis of the fundus images to detect exudates."
},
{
"pmid": "29474911",
"title": "Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning.",
"abstract": "The implementation of clinical-decision support algorithms for medical imaging faces challenges with reliability and interpretability. Here, we establish a diagnostic tool based on a deep-learning framework for the screening of patients with common treatable blinding retinal diseases. Our framework utilizes transfer learning, which trains a neural network with a fraction of the data of conventional approaches. Applying this approach to a dataset of optical coherence tomography images, we demonstrate performance comparable to that of human experts in classifying age-related macular degeneration and diabetic macular edema. We also provide a more transparent and interpretable diagnosis by highlighting the regions recognized by the neural network. We further demonstrate the general applicability of our AI system for diagnosis of pediatric pneumonia using chest X-ray images. This tool may ultimately aid in expediting the diagnosis and referral of these treatable conditions, thereby facilitating earlier treatment, resulting in improved clinical outcomes. VIDEO ABSTRACT."
},
{
"pmid": "25360373",
"title": "Fully automated detection of diabetic macular edema and dry age-related macular degeneration from optical coherence tomography images.",
"abstract": "We present a novel fully automated algorithm for the detection of retinal diseases via optical coherence tomography (OCT) imaging. Our algorithm utilizes multiscale histograms of oriented gradient descriptors as feature vectors of a support vector machine based classifier. The spectral domain OCT data sets used for cross-validation consisted of volumetric scans acquired from 45 subjects: 15 normal subjects, 15 patients with dry age-related macular degeneration (AMD), and 15 patients with diabetic macular edema (DME). Our classifier correctly identified 100% of cases with AMD, 100% cases with DME, and 86.67% cases of normal subjects. This algorithm is a potentially impactful tool for the remote diagnosis of ophthalmic diseases."
},
{
"pmid": "29864167",
"title": "Classification of healthy and diseased retina using SD-OCT imaging and Random Forest algorithm.",
"abstract": "In this paper, we propose a novel classification model for automatically identifying individuals with age-related macular degeneration (AMD) or Diabetic Macular Edema (DME) using retinal features from Spectral Domain Optical Coherence Tomography (SD-OCT) images. Our classification method uses retinal features such as the thickness of the retina and the thickness of the individual retinal layers, and the volume of the pathologies such as drusen and hyper-reflective intra-retinal spots. We extract automatically, ten clinically important retinal features by segmenting individual SD-OCT images for classification purposes. The effectiveness of the extracted features is evaluated using several classification methods such as Random Forrest on 251 (59 normal, 177 AMD and 15 DME) subjects. We have performed 15-fold cross-validation tests for three phenotypes; DME, AMD and normal cases using these data sets and achieved accuracy of more than 95% on each data set with the classification method using Random Forrest. When we trained the system as a two-class problem of normal and eye with pathology, using the Random Forrest classifier, we obtained an accuracy of more than 96%. The area under the receiver operating characteristic curve (AUC) finds a value of 0.99 for each dataset. We have also shown the performance of four state-of-the-methods for classification the eye participants and found that our proposed method showed the best accuracy."
},
{
"pmid": "28018716",
"title": "Machine learning based detection of age-related macular degeneration (AMD) and diabetic macular edema (DME) from optical coherence tomography (OCT) images.",
"abstract": "Non-lethal macular diseases greatly impact patients' life quality, and will cause vision loss at the late stages. Visual inspection of the optical coherence tomography (OCT) images by the experienced clinicians is the main diagnosis technique. We proposed a computer-aided diagnosis (CAD) model to discriminate age-related macular degeneration (AMD), diabetic macular edema (DME) and healthy macula. The linear configuration pattern (LCP) based features of the OCT images were screened by the Correlation-based Feature Subset (CFS) selection algorithm. And the best model based on the sequential minimal optimization (SMO) algorithm achieved 99.3% in the overall accuracy for the three classes of samples."
},
{
"pmid": "28114453",
"title": "Fully automated macular pathology detection in retina optical coherence tomography images using sparse coding and dictionary learning.",
"abstract": "We propose a framework for automated detection of dry age-related macular degeneration (AMD) and diabetic macular edema (DME) from retina optical coherence tomography (OCT) images, based on sparse coding and dictionary learning. The study aims to improve the classification performance of state-of-the-art methods. First, our method presents a general approach to automatically align and crop retina regions; then it obtains global representations of images by using sparse coding and a spatial pyramid; finally, a multiclass linear support vector machine classifier is employed for classification. We apply two datasets for validating our algorithm: Duke spectral domain OCT (SD-OCT) dataset, consisting of volumetric scans acquired from 45 subjects—15 normal subjects, 15 AMD patients, and 15 DME patients; and clinical SD-OCT dataset, consisting of 678 OCT retina scans acquired from clinics in Beijing—168, 297, and 213 OCT images for AMD, DME, and normal retinas, respectively. For the former dataset, our classifier correctly identifies 100%, 100%, and 93.33% of the volumes with DME, AMD, and normal subjects, respectively, and thus performs much better than the conventional method; for the latter dataset, our classifier leads to a correct classification rate of 99.67%, 99.67%, and 100.00% for DME, AMD, and normal images, respectively."
}
] |
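Several of the reference entries above (e.g., pmids 25360373, 29864167, and 28114453) describe feature-based OCT classifiers that pair hand-crafted descriptors with a conventional classifier and k-fold or leave-one-out validation. A minimal, hypothetical Python sketch of such a pipeline, assuming scikit-image and scikit-learn are available, might look as follows; the images, class labels, and HOG/SVM settings are synthetic stand-ins, not code or data from any cited study.

import numpy as np
from skimage.feature import hog
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for labelled OCT B-scans (hypothetical class names).
rng = np.random.default_rng(0)
classes = ["normal", "dry AMD", "DME"]
n_per_class = 30
images = rng.random((n_per_class * len(classes), 128, 128))
labels = np.repeat(np.arange(len(classes)), n_per_class)

def hog_descriptor(img):
    # Single-scale HOG descriptor of one B-scan; the cited studies use multiscale variants.
    return hog(img, orientations=8, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

X = np.array([hog_descriptor(img) for img in images])

# Linear-kernel SVM on standardized HOG features, scored with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

Swapping the SVC for sklearn.ensemble.RandomForestClassifier would mirror the Random Forest variant summarized above; in real use, the synthetic arrays would be replaced by segmented, labelled B-scans and accuracy would be reported per class.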
Frontiers in Psychology | 35222216 | PMC8873145 | 10.3389/fpsyg.2022.839440 | New Breakthroughs and Innovation Modes in English Education in Post-pandemic Era | The outbreak of COVID-19 has brought drastic changes to English teaching as it has shifted from the offline mode before the pandemic to the online mode during the pandemic. However, in the post-pandemic era, there are still many problems in the effective implementation of the process of English teaching, leading to the inability of achieving better results in the quality and efficiency of English teaching and effective cultivation of students’ practical application ability. In recent years, English speaking has attracted the attention of experts and scholars. Therefore, this study constructs an interactive English-speaking practice scene based on a virtual character. A dual-modality emotion recognition method is proposed that mainly recognizes and analyzes facial expressions and physiological signals of students and the virtual character in each scene. Thereafter, the system adjusts the difficulty of the conversation according to the current state of students, toward making the conversation more conducive to the students’ understanding and gradually improving their English-speaking ability. The simulation compares nine facial expressions based on the eNTERFACE05 and CAS-PEAL datasets, which shows that the emotion recognition method proposed in this manuscript can effectively recognize students’ emotions in interactive English-speaking practice and reduce the recognition time to a great extent. The recognition accuracy of the nine facial expressions was close to 90% for the dual-modality emotion recognition method in the eNTERFACE05 dataset, and the recognition accuracy of the dual-modality emotion recognition method was significantly improved with an average improvement of approximately 5%. | Related WorkThe continuous innovation and development in information technology have also impacted English education. The traditional uniform English education mode cannot consider the shortcomings of individual differences among students. The mode of English education needs to break through and innovate, and use information technology to stimulate students’ interest in learning so that English education can enter a new stage. In a recent study, the authors analyzed the higher education informatization-based English innovation teaching mode (Zhu, 2018). In order to quantitatively evaluate and analyze the effect of college English teaching innovation reform, a curriculum thinking-based evaluation model of college English teaching innovation reform was proposed to establish a large data analysis model of the effect of college English teaching innovation reform (Zhang and Zhan, 2020). In an English teaching course, the application of the whole-brain theory in English teaching was explored through a questionnaire survey of teachers and students in English classes (Mao and Zhang, 2018). In English translation, the author discussed the theoretical and methodological significance of translation methods in the study of multilingual user language innovation and the study of world English (Wei, 2020). With the development of information technology, the authors explored the impact of information technology resources on the innovation performance of higher education (Maulani et al., 2021). 
Considering 5G, artificial intelligence, and education together, the author introduced 5G technology into English-speaking teaching, explored a new English-speaking teaching model through case design, summarized its advantages, and presented solutions to its shortcomings (Sun, 2021). To explore the characteristics of English teaching, the author established an innovative English teaching management model and investigated English teaching from the perspective of innovative management (Sun, 2017). Combining game theory and English education, the author established a game model and an evolutionary game model to analyze the necessity of developing cross-cultural awareness in Chinese English education and to identify the best strategy choices (Li, 2016). In English teaching reform, the current situation of English teaching was discussed, and the development and reform of English teaching paradigms were analyzed to promote the professional development of English teaching (Zhao, 2017). Combining English theory and practice, the optimal application and innovation model of network resources in a college English listening class was analyzed (Yan et al., 2017).
In some human–computer interaction scenes, such as classroom listening, fatigue-driving detection, and other settings where only facial expressions can be obtained, it is necessary to rely on facial features for emotion recognition. For group-level emotion recognition, a novel hierarchical method was proposed (Fujii et al., 2021). For emotion recognition from facial images, a convolutional neural network (CNN)-based deep learning technique was proposed to classify emotions (Khattak et al., 2021). Using a hybrid classifier, a novel text-independent and speaker-independent emotion recognition system was proposed (Shahin et al., 2019). Using deep neural networks, a single deep CNN-based model was proposed to recognize facial expressions (Jain et al., 2019). For automatic emotion recognition, the authors used transfer learning to generate subject-specific models that extract emotional content from facial images along the valence/arousal dimensions (Rescigno et al., 2020). For dual-channel expression recognition, a feature fusion algorithm grounded in machine learning theory and philosophical thought was proposed (Song, 2021). In a facial emotion recognition system, the authors used feature extraction based on the scale-invariant feature transform (SIFT) to extract features from face points (Sreedharan et al., 2018). With advances in the Internet of Medical Things (IoMT), the authors introduced an IoMT-based face emotion detection and recognition system (Rathour et al., 2021). For a human–robot interaction system, the authors proposed a multimodal emotion recognition method to build a system with a low sense of disharmony (Tan et al., 2021). At present, facial expression recognition is seldom applied to English teaching; hence, this study carries out research on it. | [
"32321874",
"33613403",
"29874156",
"34407522",
"34646223",
"33263204"
] | [
{
"pmid": "32321874",
"title": "The Coronavirus Disease 2019 (COVID-19) Pandemic.",
"abstract": "The present study provides an overview of the coronavirus disease 2019 (COVID-19) outbreak which has rapidly extended globally within a short period. COVID-19 is a highly infectious respiratory disease caused by a new coronavirus known as SARS-CoV-2 (severe acute respiratory syndrome-coronavirus-2). SARS-CoV-2 is different from usual coronaviruses responsible for mild sickness such as common cold among human beings. It is crucial to understand the impact and outcome of this pandemic. We therefore overview the changes in the curves of COVID-19 confirmed cases and fatality rate in China and outside of China from 31st of December 2019 to 25th of March 2020. We also aimed to assess the temporal developments and death rate of COVID-19 in China and worldwide. More than 414,179 confirmed cases of COVID-19 have been reported in 197 countries, including 81,848 cases in China and 332,331 outside of China. Furthermore, 18,440 infected patients died from COVID-19 infection; 3,287 cases were from China and 15,153 fatalities were reported worldwide. Among the worldwide infected cases, 113,802 patients have been recovered and discharged from different hospitals. Effective prevention and control measures should be taken to control the disease. The presented Chinese model (protocol) of disease prevention and control could be utilized in order to curb the pandemic situation."
},
{
"pmid": "33613403",
"title": "Teacher Written Feedback on English as a Foreign Language Learners' Writing: Examining Native and Nonnative English-Speaking Teachers' Practices in Feedback Provision.",
"abstract": "While previous studies have examined front-line teachers' written feedback practices in second language (L2) writing classrooms, such studies tend to not take teachers' language and sociocultural backgrounds into consideration, which may mediate their performance in written feedback provision. Therefore, much remains to be known about how L2 writing teachers with different first languages (L1) enact written feedback. To fill this gap, we designed an exploratory study to examine native English-speaking (NES) and non-native English-speaking (NNES) (i.e., Chinese L1) teachers' written feedback practices in the Chinese tertiary context. Our study collected 80 English as a foreign language (EFL) students' writing samples with teacher written feedback and analyzed them from three aspects: Feedback scope, feedback focus, and feedback strategy. The findings of our study revealed that the two groups of teachers shared similar practices regarding feedback scope and feedback strategies. Both NES and NNES EFL teachers used a comprehensive approach to feedback provision, although NNES teachers provided significantly more feedback points than their NES peers and they delivered their feedback directly and indirectly. However, their practices differed greatly with regard to feedback focus. Specifically, when responding to EFL students' writing, NES teachers showed more concern with global issues (i.e., content and organization), whereas NNES teachers paid more attention to linguistic errors. With a surge in the recruitment of expatriate NES and local NNES English teachers in China and other EFL countries, our study is expected to make a contribution to a better understanding of the two groups of EFL teachers' pedagogical practices in written feedback provision and generate important implications for their feedback provision."
},
{
"pmid": "29874156",
"title": "Two Ways to Facial Expression Recognition? Motor and Visual Information Have Different Effects on Facial Expression Recognition.",
"abstract": "Motor-based theories of facial expression recognition propose that the visual perception of facial expression is aided by sensorimotor processes that are also used for the production of the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report evidence that challenges this view. Specifically, the repeated execution of facial expressions has the opposite effect on the recognition of a subsequent facial expression than the repeated viewing of facial expressions. Moreover, the findings of the motor condition, but not of the visual condition, were correlated with a nonsensory condition in which participants imagined an emotional situation. These results can be well accounted for by the idea that facial expression recognition is not always mediated by motor processes but can also be recognized on visual information alone."
},
{
"pmid": "34407522",
"title": "Enhancing transfer performance across datasets for brain-computer interfaces using a combination of alignment strategies and adaptive batch normalization.",
"abstract": "Objective. Recently, transfer learning (TL) and deep learning (DL) have been introduced to solve intra- and inter-subject variability problems in brain-computer interfaces (BCIs). However, current TL and DL algorithms are usually validated within a single dataset, assuming that data of the test subjects are acquired under the same condition as that of training (source) subjects. This assumption is generally violated in practice because of different acquisition systems and experimental settings across studies and datasets. Thus, the generalization ability of these algorithms needs further validations in a cross-dataset scenario, which is closer to the actual situation. This study compared the transfer performance of pre-trained deep-learning models with different preprocessing strategies in a cross-dataset scenario.Approach. This study used four publicly available motor imagery datasets, each was successively selected as a source dataset, and the others were used as target datasets. EEGNet and ShallowConvNet with four preprocessing strategies, namely channel normalization, trial normalization, Euclidean alignment, and Riemannian alignment, were trained with the source dataset. The transfer performance of pre-trained models was validated on the target datasets. This study also used adaptive batch normalization (AdaBN) for reducing interval covariate shift across datasets. This study compared the transfer performance of using the four preprocessing strategies and that of a baseline approach based on manifold embedded knowledge transfer (MEKT). This study also explored the possibility and performance of fusing MEKT and EEGNet.Main results. The results show that DL models with alignment strategies had significantly better transfer performance than the other two preprocessing strategies. As an unsupervised domain adaptation method, AdaBN could also significantly improve the transfer performance of DL models. The transfer performance of DL models that combined AdaBN and alignment strategies significantly outperformed MEKT. Moreover, the generalizability of EEGNet models that combined AdaBN and alignment strategies could be further improved via the domain adaptation step in MEKT, achieving the best generalization ability among multiple datasets (BNCI2014001: 0.788, PhysionetMI: 0.679, Weibo2014: 0.753, Cho2017: 0.650).Significance. The combination of alignment strategies and AdaBN could easily improve the generalizability of DL models without fine-tuning. This study may provide new insights into the design of transfer neural networks for BCIs by separating source and target batch normalization layers in the domain adaptation process."
},
{
"pmid": "34646223",
"title": "Facial Expression Emotion Recognition Model Integrating Philosophy and Machine Learning Theory.",
"abstract": "Facial expression emotion recognition is an intuitive reflection of a person's mental state, which contains rich emotional information, and is one of the most important forms of interpersonal communication. It can be used in various fields, including psychology. As a celebrity in ancient China, Zeng Guofan's wisdom involves facial emotion recognition techniques. His book Bing Jian summarizes eight methods on how to identify people, especially how to choose the right one, which means \"look at the eyes and nose for evil and righteousness, the lips for truth and falsehood; the temperament for success and fame, the spirit for wealth and fortune; the fingers and claws for ideas, the hamstrings for setback; if you want to know his consecution, you can focus on what he has said.\" It is said that a person's personality, mind, goodness, and badness can be showed by his face. However, due to the complexity and variability of human facial expression emotion features, traditional facial expression emotion recognition technology has the disadvantages of insufficient feature extraction and susceptibility to external environmental influences. Therefore, this article proposes a novel feature fusion dual-channel expression recognition algorithm based on machine learning theory and philosophical thinking. Specifically, the feature extracted using convolutional neural network (CNN) ignores the problem of subtle changes in facial expressions. The first path of the proposed algorithm takes the Gabor feature of the ROI area as input. In order to make full use of the detailed features of the active facial expression emotion area, first segment the active facial expression emotion area from the original face image, and use the Gabor transform to extract the emotion features of the area. Focus on the detailed description of the local area. The second path proposes an efficient channel attention network based on depth separable convolution to improve linear bottleneck structure, reduce network complexity, and prevent overfitting by designing an efficient attention module that combines the depth of the feature map with spatial information. It focuses more on extracting important features, improves emotion recognition accuracy, and outperforms the competition on the FER2013 dataset."
},
{
"pmid": "33263204",
"title": "An Ultrahigh-Field-Tailored T1 -T2 Dual-Mode MRI Contrast Agent for High-Performance Vascular Imaging.",
"abstract": "The assessment of vascular anatomy and functions using magnetic resonance imaging (MRI) is critical for medical diagnosis, whereas the commonly used low-field MRI system (≤3 T) suffers from low spatial resolution. Ultrahigh field (UHF) MRI (≥7 T), with significantly improved resolution and signal-to-noise ratio, shows great potential to provide high-resolution vasculature images. However, practical applications of UHF MRI technology for vascular imaging are currently limited by the low sensitivity and accuracy of single-mode (T1 or T2 ) contrast agents. Herein, a UHF-tailored T1 -T2 dual-mode iron oxide nanoparticle-based contrast agent (UDIOC) with extremely small core size and ultracompact hydrophilic surface modification, exhibiting dually enhanced T1 -T2 contrast effect under the 7 T magnetic field, is reported. The UDIOC enables clear visualization of microvasculature as small as ≈140 µm in diameter under UHF MRI, extending the detection limit of the 7 T MR angiography. Moreover, by virtue of high-resolution UHF MRI and a simple double-checking process, UDIOC-based dual-mode dynamic contrast-enhanced MRI is successfully applied to detect tumor vascular permeability with extremely high sensitivity and accuracy, providing a novel paradigm for the precise medical diagnosis of vascular-related diseases."
}
] |
Frontiers in Bioengineering and Biotechnology | null | PMC8873531 | 10.3389/fbioe.2021.817723 | Self-Tuning Control of Manipulator Positioning Based on Fuzzy PID and PSO Algorithm | When the manipulator performs fixed-point tasks, it is adversely affected by external disturbances, parameter variations, and random noise. Therefore, it is essential to improve the robustness and accuracy of the controller. In this article, a self-tuning particle swarm optimization (PSO) fuzzy PID positioning controller is designed based on fuzzy PID control. The quantization and scaling factors in the fuzzy PID algorithm are optimized by PSO in order to achieve high robustness and high accuracy of the manipulator. First, a mathematical model of the manipulator is developed and the positioning controller is designed; a PD control strategy with gravity compensation is used for the positioning control system. Then, the PID controller parameters are dynamically fine-tuned by fuzzy controller 1, while a closed control loop adjusts the magnitudes of the quantization and proportionality factors online. Correction values are output by the modified fuzzy controller 2, so that an online self-tuning strategy for the quantization and proportion factors finds the optimal parameters for the controller. Finally, the control performance of the improved controller is verified in a simulation environment. The results show that the transient response speed, tracking accuracy, and following characteristics of the system are significantly improved. | Related Work: Now, more and more intelligent algorithms are applied to manipulator control systems (Zhang et al., 2017; Lu, 2018; Jiang et al., 2019b; Huang et al., 2019; Nguyen et al., 2019; Huang et al., 2020; Ozyer, 2020; Sun and Liu, 2020; Cheng et al., 2021; Liu et al., 2021c; Yu et al., 2021). Because of various interfering factors, such as the environment, the robot cannot be positioned accurately and the positioning error gradually increases (Sun et al., 2020b; Ozyer, 2020; Liu X. et al., 2021). To address the uncertainty in manipulator motion control, the manipulator can be driven to the desired position by means of a computed-torque method (He et al., 2019; My and Bein, 2020), which improves the robustness of the system to a certain extent. RBF neural networks can compensate for external environmental disturbances; a combined PD + RBF control algorithm improves the disturbance rejection and robustness of the power positioning system (Wen et al., 2016; Huang et al., 2021). An adaptive fuzzy SMC algorithm has been proposed for the positioning control problem of articulated robots, with good steady-state convergence and some robustness (Zirkohi and Fateh, 2017). To counter the interference of internal and external factors with the performance of the robotic arm, a joint-trajectory sliding mode robust control algorithm has been proposed; it can effectively avoid system chattering, although internal model errors and external disturbances remain (Weng et al., 2021; Zhao et al., 2021). While such algorithms can improve some aspects, they also have limitations. A passivity-based control method for single-link flexible robotic arms has been proposed, in which precise positioning of the link end is achieved by combining precise joint positioning and link damping, but the stability is less than ideal (Jiang et al., 2019c; Jiang et al., 2021b).
When the modeling is uncertain and the external disturbances are large and complex, chattering and vibration occur. By improving the disturbance compensation, feeding back the external disturbances, and using neural networks to approximate the errors, the chattering is effectively suppressed and the response speed and tracking accuracy are improved; however, this is only suitable for situations where the system modeling error and the external disturbances fluctuate greatly (Feliu et al., 2014; Sun et al., 2021). A motion control algorithm with a non-singular sliding mode saturation function has been proposed by combining a sliding mode variable structure with a BP neural network algorithm (Choubey and Chri, 2021; Yang et al., 2021); it provides accurate and stable control of the motion state of the robotic arm. A sliding mode controller with variable gain was also designed (Li et al., 2019a), resulting in high robustness and motion control accuracy, but it is limited to the joint space. A complete set of gravity compensation algorithms has been proposed in which, according to the joint torque measurements, the parameters are adjusted in real time to meet the dynamic requirements of each stage of the dynamic positioning process (Wang, 2020). Combining PD control with preset performance control, a simple PD control structure and a preset performance function based on a logarithmic error transformation have been used to design the robotic arm motion controller; this control algorithm improves the dynamic response performance to a certain extent. To speed up convergence, a PD-type iterative learning control law was devised (Zhao et al., 2018): the gain matrix is modified in real time to shorten the correction interval and overcome slow convergence under system disturbances, but it is less stable. In summary, there are numerous ways to improve control strategies at this stage, and positioning control strategies are essential. However, some control strategies still have problems that need to be addressed, further research is needed on generality and robustness, and the response time and accuracy of the controller also need to be improved. This article finds the optimal parameters by using a modified fuzzy controller to adjust the quantization and proportion factors online, and the improved scheme is simulated and compared with the conventional scheme (an illustrative controller-tuning sketch follows this record's reference list). | [
"28801078",
"33562366"
] | [
{
"pmid": "28801078",
"title": "On position/force tracking control problem of cooperative robot manipulators using adaptive fuzzy backstepping approach.",
"abstract": "In this paper, the position and force tracking control problem of cooperative robot manipulator system handling a common rigid object with unknown dynamical models and unknown external disturbances is investigated. The universal approximation properties of fuzzy logic systems are employed to estimate the unknown system dynamics. On the other hand, by defining new state variables based on the integral and differential of position and orientation errors of the grasped object, the error system of coordinated robot manipulators is constructed. Subsequently by defining the appropriate change of coordinates and using the backstepping design strategy, an adaptive fuzzy backstepping position tracking control scheme is proposed for multi-robot manipulator systems. By utilizing the properties of internal forces, extra terms are also added to the control signals to consider the force tracking problem. Moreover, it is shown that the proposed adaptive fuzzy backstepping position/force control approach ensures all the signals of the closed loop system uniformly ultimately bounded and tracking errors of both positions and forces can converge to small desired values by proper selection of the design parameters. Finally, the theoretic achievements are tested on the two three-link planar robot manipulators cooperatively handling a common object to illustrate the effectiveness of the proposed approach."
},
{
"pmid": "33562366",
"title": "Combining Public Opinion Dissemination with Polarization Process Considering Individual Heterogeneity.",
"abstract": "The wide dissemination of false information and the frequent occurrence of extreme speeches on online social platforms have become increasingly prominent, which impact on the harmony and stability of society. In order to solve the problems in the dissemination and polarization of public opinion over online social platforms, it is necessary to conduct in-depth research on the formation mechanism of the dissemination and polarization of public opinion. This article appends individual communicating willingness and forgetting effects to the Susceptible-Exposed-Infected-Recovered (SEIR) model to describe individual state transitions; secondly, it introduces three heterogeneous factors describing the characteristics of individual differences in the Jager-Amblard (J-A) model, namely: Individual conformity, individual conservative degree, and inter-individual relationship strength in order to reflect the different roles of individual heterogeneity in the opinions interaction; thirdly, it integrates the improved SEIR model and J-A model to construct the SEIR-JA model to study the formation mechanism of public opinion dissemination and polarization. Transmission parameters and polarization parameters are simulated and analyzed. Finally, a public opinion event from the pricing of China's self-developed COVID-19 vaccine are used, and related Weibo comment data about this event are also collected so as to verify the rationality and effectiveness of the proposed model."
}
] |
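The record above describes a controller in which PSO tunes the quantization and scaling factors of a fuzzy PID law built on PD control with gravity compensation. As a rough, hedged illustration of that idea only (not code from the paper: the single-link plant, the plain PID law standing in for the fuzzy controller, the ITAE cost, and every numeric setting below are assumptions made for this sketch), the following Python snippet tunes PID-style gains with global-best PSO on a simulated 1-DOF arm that includes a gravity-compensation term:

```python
# Illustrative sketch only: PSO tuning of PID-style gains for a toy 1-DOF arm
# with gravity compensation. Plant parameters, bounds, and the ITAE cost are
# hypothetical and are NOT taken from the paper summarized above.
import numpy as np

def simulate(gains, dt=0.001, T=3.0, q_ref=1.0):
    """Integrate J*q'' = tau - b*q' - m*g*l*sin(q) under PID control with a
    gravity-compensation term; return the ITAE tracking cost."""
    kp, ki, kd = gains
    J, b, m, g, l = 0.05, 0.1, 0.5, 9.81, 0.2      # toy plant parameters
    q = dq = integ = 0.0
    cost, t = 0.0, 0.0
    for _ in range(int(T / dt)):
        e = q_ref - q
        integ += e * dt
        tau = kp * e + ki * integ - kd * dq + m * g * l * np.sin(q)  # PID + gravity term
        ddq = (tau - b * dq - m * g * l * np.sin(q)) / J
        dq += ddq * dt
        q += dq * dt
        t += dt
        cost += t * abs(e) * dt                     # ITAE criterion
    return cost

def pso(n_particles=20, iters=40, bounds=(0.0, 20.0), seed=0):
    """Plain global-best PSO over (kp, ki, kd)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, 3))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_cost = np.array([simulate(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([simulate(p) for p in x])
        improved = cost < pbest_cost
        pbest[improved] = x[improved]
        pbest_cost[improved] = cost[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

if __name__ == "__main__":
    gains, cost = pso()
    print("tuned (kp, ki, kd):", np.round(gains, 2), "ITAE:", round(float(cost), 4))
```

In the published method the swarm tunes the fuzzy controller's quantization and proportion factors rather than raw PID gains; the raw-gain surrogate is used here only to keep the example short and runnable.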
Frontiers in Bioengineering and Biotechnology | null | PMC8873790 | 10.3389/fbioe.2021.706229 | Deep Feature Mining via the Attention-Based Bidirectional Long Short Term Memory Graph Convolutional Neural Network for Human Motor Imagery Recognition | Recognition accuracy and response time are both critically essential for building a practical electroencephalography (EEG)-based brain–computer interface (BCI). However, recent approaches have compromised either the classification accuracy or the response time. This paper presents a novel deep learning approach designed toward remarkably accurate and responsive motor imagery (MI) recognition based on scalp EEG. Bidirectional long short-term memory (BiLSTM) with the attention mechanism is employed, and the graph convolutional neural network (GCN) promotes the decoding performance by cooperating with the topological structure of features, which is estimated from the overall data. In particular, this method is trained and tested on short EEG recordings of only 0.4 s in length, and the results show effective and efficient prediction under individual and groupwise training, with 98.81% and 94.64% accuracy, respectively, outperforming the state-of-the-art studies. The introduced deep feature mining approach can precisely recognize human motion intentions from raw, almost-instantaneous EEG signals, which paves the way toward translating EEG-based MI recognition into practical BCI systems.
| 1.1 Related Work: Lately, deep learning (DL) has attracted increasing attention in many disciplines because of its promising performance in classification tasks (LeCun et al., 2015). A growing number of works have shown that DL will play a pivotal role in the precise decoding of brain activities (Schwemmer et al., 2018). In particular, recent works have been carried out on EEG motion intention detection. A primary current focus is to implement DL-based approaches to decode EEG MI tasks, which has attained promising results (Lotte et al., 2018). Due to the high temporal resolution of EEG signals, methods related to the recurrent neural network (RNN) (Rumelhart et al., 1986), which can analyze time-series data, have been extensively applied to filter and classify EEG sequences, i.e., time points (Güler et al., 2005; Wang P et al., 2018; Luo et al., 2018; Zhang T et al., 2018; Zhang X et al., 2018). In Zhang T et al. (2018), a novel RNN framework with spatial and temporal filtering was put forward to classify EEG signals for emotion recognition and achieved 95.4% accuracy for three classes with a 9-s segment as a sample. Yang et al. also proposed an emotion recognition method using long short-term memory (LSTM) (Yang J et al., 2020). Wang et al. and Luo et al. applied LSTM (Hochreiter and Schmidhuber, 1997) to time slices of the signals and achieved 77.30% and 82.75% accuracy, respectively (Wang P et al., 2018; Luo et al., 2018). Zhang X et al. (2018) presented an attention-based RNN for EEG-based person identification, which attained 99.89% accuracy for eight participants at the subject level with 4-s signals as a sample. LSTM has also been employed with recorded EEG signals in medical applications such as seizure detection (Hu et al., 2020). However, in these studies, signals spanning the whole experimental duration were treated as samples, which results in slow prediction responses. Apart from RNN, the convolutional neural network (CNN) (Fukushima, 1980; LeCun et al., 1998) has also been applied to decode EEG signals (Dose et al., 2018; Hou et al., 2020). Hou et al. proposed ESI and CNN and achieved competitive results, i.e., 94.50% and 96.00% accuracy at the group and subject levels, respectively, for four-class classification. Moreover, by combining CNN with graph theory, the graph convolutional neural network (GCN) (Bruna et al., 2014; Henaff et al., 2015; Duvenaud et al., 2015; Niepert et al., 2016; Defferrard et al., 2016) approach was recently presented, taking into consideration the functional topological relationship of EEG electrodes (Wang XH et al., 2018; Song et al., 2018; Zhang T et al., 2019; Wang et al., 2019). In Wang XH et al. (2018) and Zhang T et al. (2019), a GCN with a broad learning approach was proposed and attained 93.66% and 94.24% accuracy, respectively, for EEG emotion recognition. Song et al. and Wang et al. introduced dynamical GCN (90.40% accuracy) and phase-locking value-based GCN (84.35% accuracy) to recognize different emotions (Song et al., 2018; Wang et al., 2019). Highly accurate prediction has been accomplished via GCN models, but few researchers have investigated the approach in the area of EEG MI decoding (an illustrative model sketch follows this record's reference list). | [
"27074513",
"18835541",
"10851218",
"10879535",
"9377276",
"31585454",
"32771673",
"26017442",
"29488902",
"30268089",
"30440769",
"30190543",
"30250141",
"31919460",
"30334800",
"30892250",
"31341093",
"29994572"
] | [
{
"pmid": "27074513",
"title": "Restoring cortical control of functional movement in a human with quadriplegia.",
"abstract": "Millions of people worldwide suffer from diseases that lead to paralysis through disruption of signal pathways between the brain and the muscles. Neuroprosthetic devices are designed to restore lost function and could be used to form an electronic 'neural bypass' to circumvent disconnected pathways in the nervous system. It has previously been shown that intracortically recorded signals can be decoded to extract information related to motion, allowing non-human primates and paralysed humans to control computers and robotic arms through imagined movements. In non-human primates, these types of signal have also been used to drive activation of chemically paralysed arm muscles. Here we show that intracortically recorded signals can be linked in real-time to muscle activation to restore movement in a paralysed human. We used a chronically implanted intracortical microelectrode array to record multiunit activity from the motor cortex in a study participant with quadriplegia from cervical spinal cord injury. We applied machine-learning algorithms to decode the neuronal activity and control activation of the participant's forearm muscles through a custom-built high-resolution neuromuscular electrical stimulation system. The system provided isolated finger movements and the participant achieved continuous cortical control of six different wrist and hand motions. Furthermore, he was able to use the system to complete functional tasks relevant to daily living. Clinical assessment showed that, when using the system, his motor impairment improved from the fifth to the sixth cervical (C5-C6) to the seventh cervical to first thoracic (C7-T1) level unilaterally, conferring on him the critical abilities to grasp, manipulate, and release objects. This is the first demonstration to our knowledge of successful control of muscle activation using intracortically recorded signals in a paralysed human. These results have significant implications in advancing neuroprosthetic technology for people worldwide living with the effects of paralysis."
},
{
"pmid": "18835541",
"title": "Brain-computer interfaces in neurological rehabilitation.",
"abstract": "Recent advances in analysis of brain signals, training patients to control these signals, and improved computing capabilities have enabled people with severe motor disabilities to use their brain signals for communication and control of objects in their environment, thereby bypassing their impaired neuromuscular system. Non-invasive, electroencephalogram (EEG)-based brain-computer interface (BCI) technologies can be used to control a computer cursor or a limb orthosis, for word processing and accessing the internet, and for other functions such as environmental control or entertainment. By re-establishing some independence, BCI technologies can substantially improve the lives of people with devastating neurological disorders such as advanced amyotrophic lateral sclerosis. BCI technology might also restore more effective motor control to people after stroke or other traumatic brain disorders by helping to guide activity-dependent brain plasticity by use of EEG brain signals to indicate to the patient the current state of brain activity and to enable the user to subsequently lower abnormal activity. Alternatively, by use of brain signals to supplement impaired muscle control, BCIs might increase the efficacy of a rehabilitation protocol and thus improve muscle control for the patient."
},
{
"pmid": "10851218",
"title": "PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals.",
"abstract": "The newly inaugurated Research Resource for Complex Physiologic Signals, which was created under the auspices of the National Center for Research Resources of the National Institutes of Health, is intended to stimulate current research and new investigations in the study of cardiovascular and other complex biomedical signals. The resource has 3 interdependent components. PhysioBank is a large and growing archive of well-characterized digital recordings of physiological signals and related data for use by the biomedical research community. It currently includes databases of multiparameter cardiopulmonary, neural, and other biomedical signals from healthy subjects and from patients with a variety of conditions with major public health implications, including life-threatening arrhythmias, congestive heart failure, sleep apnea, neurological disorders, and aging. PhysioToolkit is a library of open-source software for physiological signal processing and analysis, the detection of physiologically significant events using both classic techniques and novel methods based on statistical physics and nonlinear dynamics, the interactive display and characterization of signals, the creation of new databases, the simulation of physiological and other signals, the quantitative evaluation and comparison of analysis methods, and the analysis of nonstationary processes. PhysioNet is an on-line forum for the dissemination and exchange of recorded biomedical signals and open-source software for analyzing them. It provides facilities for the cooperative analysis of data and the evaluation of proposed new algorithms. In addition to providing free electronic access to PhysioBank data and PhysioToolkit software via the World Wide Web (http://www.physionet. org), PhysioNet offers services and training via on-line tutorials to assist users with varying levels of expertise."
},
{
"pmid": "10879535",
"title": "Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit.",
"abstract": "Digital circuits such as the flip-flop use feedback to achieve multistability and nonlinearity to restore signals to logical levels, for example 0 and 1. Analogue feedback circuits are generally designed to operate linearly, so that signals are over a range, and the response is unique. By contrast, the response of cortical circuits to sensory stimulation can be both multistable and graded. We propose that the neocortex combines digital selection of an active set of neurons with analogue response by dynamically varying the positive feedback inherent in its recurrent connections. Strong positive feedback causes differential instabilities that drive the selection of a set of active neurons under the constraints embedded in the synaptic weights. Once selected, the active neurons generate weaker, stable feedback that provides analogue amplification of the input. Here we present our model of cortical processing as an electronic circuit that emulates this hybrid operation, and so is able to perform computations that are similar to stimulus selection, gain modulation and spatiotemporal pattern generation in the neocortex."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "31585454",
"title": "A novel approach of decoding EEG four-class motor imagery tasks via scout ESI and CNN.",
"abstract": "OBJECTIVE\nTo develop and implement a novel approach which combines the technique of scout EEG source imaging (ESI) with convolutional neural network (CNN) for the classification of motor imagery (MI) tasks.\n\n\nAPPROACH\nThe technique of ESI uses a boundary element method (BEM) and weighted minimum norm estimation (WMNE) to solve the EEG forward and inverse problems, respectively. Ten scouts are then created within the motor cortex to select the region of interest (ROI). We extract features from the time series of scouts using a Morlet wavelet approach. Lastly, CNN is employed for classifying MI tasks.\n\n\nMAIN RESULTS\nThe overall mean accuracy on the Physionet database reaches 94.5% and the individual accuracy of each task reaches 95.3%, 93.3%, 93.6%, 96% for the left fist, right fist, both fists and both feet, correspondingly, validated using ten-fold cross validation. We report an increase of up to 14.4% for overall classification compared with the competitive results from the state-of-the-art MI classification methods. Then, we add four new subjects to verify the validity of the method and the overall mean accuracy is 92.5%. Furthermore, the global classifier was adapted to single subjects improving the overall mean accuracy to 94.54%.\n\n\nSIGNIFICANCE\nThe combination of scout ESI and CNN enhances BCI performance of decoding EEG four-class MI tasks."
},
{
"pmid": "32771673",
"title": "Scalp EEG classification using deep Bi-LSTM network for seizure detection.",
"abstract": "Automatic seizure detection technology not only reduces workloads of neurologists for epilepsy diagnosis but also is of great significance for treatments of epileptic patients. A novel seizure detection method based on the deep bidirectional long short-term memory (Bi-LSTM) network is proposed in this paper. To preserve the non-stationary nature of EEG signals while decreasing the computational burden, the local mean decomposition (LMD) and statistical feature extraction procedures are introduced. The deep architecture is then designed by combining two independent LSTM networks with the opposite propagation directions: one transmits information from the front to the back, and another from the back to the front. Thus the deep model can take advantage of the information both before and after the currently analyzing moment to jointly determine the output state. A mean sensitivity of 93.61% and a mean specificity of 91.85% were achieved on a long-term scalp EEG database. The comparisons with other published methods based on either traditional machine learning models or convolutional neural networks demonstrated the improved performance for seizure detection."
},
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "29488902",
"title": "A review of classification algorithms for EEG-based brain-computer interfaces: a 10 year update.",
"abstract": "OBJECTIVE\nMost current electroencephalography (EEG)-based brain-computer interfaces (BCIs) are based on machine learning algorithms. There is a large diversity of classifier types that are used in this field, as described in our 2007 review paper. Now, approximately ten years after this review publication, many new algorithms have been developed and tested to classify EEG signals in BCIs. The time is therefore ripe for an updated review of EEG classification algorithms for BCIs.\n\n\nAPPROACH\nWe surveyed the BCI and machine learning literature from 2007 to 2017 to identify the new classification approaches that have been investigated to design BCIs. We synthesize these studies in order to present such algorithms, to report how they were used for BCIs, what were the outcomes, and to identify their pros and cons.\n\n\nMAIN RESULTS\nWe found that the recently designed classification algorithms for EEG-based BCIs can be divided into four main categories: adaptive classifiers, matrix and tensor classifiers, transfer learning and deep learning, plus a few other miscellaneous classifiers. Among these, adaptive classifiers were demonstrated to be generally superior to static ones, even with unsupervised adaptation. Transfer learning can also prove useful although the benefits of transfer learning remain unpredictable. Riemannian geometry-based methods have reached state-of-the-art performances on multiple BCI problems and deserve to be explored more thoroughly, along with tensor-based methods. Shrinkage linear discriminant analysis and random forests also appear particularly useful for small training samples settings. On the other hand, deep learning methods have not yet shown convincing improvement over state-of-the-art BCI methods.\n\n\nSIGNIFICANCE\nThis paper provides a comprehensive overview of the modern classification algorithms used in EEG-based BCIs, presents the principles of these methods and guidelines on when and how to use them. It also identifies a number of challenges to further advance EEG classification in BCI."
},
{
"pmid": "30268089",
"title": "Exploring spatial-frequency-sequential relationships for motor imagery classification with recurrent neural network.",
"abstract": "BACKGROUND\nConventional methods of motor imagery brain computer interfaces (MI-BCIs) suffer from the limited number of samples and simplified features, so as to produce poor performances with spatial-frequency features and shallow classifiers.\n\n\nMETHODS\nAlternatively, this paper applies a deep recurrent neural network (RNN) with a sliding window cropping strategy (SWCS) to signal classification of MI-BCIs. The spatial-frequency features are first extracted by the filter bank common spatial pattern (FB-CSP) algorithm, and such features are cropped by the SWCS into time slices. By extracting spatial-frequency-sequential relationships, the cropped time slices are then fed into RNN for classification. In order to overcome the memory distractions, the commonly used gated recurrent unit (GRU) and long-short term memory (LSTM) unit are applied to the RNN architecture, and experimental results are used to determine which unit is more suitable for processing EEG signals.\n\n\nRESULTS\nExperimental results on common BCI benchmark datasets show that the spatial-frequency-sequential relationships outperform all other competing spatial-frequency methods. In particular, the proposed GRU-RNN architecture achieves the lowest misclassification rates on all BCI benchmark datasets.\n\n\nCONCLUSION\nBy introducing spatial-frequency-sequential relationships with cropping time slice samples, the proposed method gives a novel way to construct and model high accuracy and robustness MI-BCIs based on limited trials of EEG signals."
},
{
"pmid": "30440769",
"title": "Improving EEG-Based Motor Imagery Classification via Spatial and Temporal Recurrent Neural Networks.",
"abstract": "Motor imagery (MI) based Brain-Computer Interface (BCI) is an important active BCI paradigm for recognizing movement intention of severely disabled persons. There are extensive studies about MI-based intention recognition, most of which heavily rely on staged handcrafted EEG feature extraction and classifier design. For end-to-end deep learning methods, researchers encode spatial information with convolution neural networks (CNNs) from raw EEG data. Compared with CNNs, recurrent neural networks (RNNs) allow for long-range lateral interactions between features. In this paper, we proposed a pure RNNs-based parallel method for encoding spatial and temporal sequential raw data with bidirectional Long Short- Term Memory (bi-LSTM) and standard LSTM, respectively. Firstly, we rearranged the index of EEG electrodes considering their spatial location relationship. Secondly, we applied sliding window method over raw EEG data to obtain more samples and split them into training and testing sets according to their original trial index. Thirdly, we utilized the samples and their transposed matrix as input to the proposed pure RNNs- based parallel method, which encodes spatial and temporal information simultaneously. Finally, the proposed method was evaluated in the public MI-based eegmmidb dataset and compared with the other three methods (CSP+LDA, FBCSP+LDA, and CNN-RNN method). The experiment results demonstrated the superior performance of our proposed pure RNNs-based parallel method. In the multi-class trial-wise movement intention classification scenario, our approach obtained an average accuracy of 68.20% and significantly outperformed other three methods with an 8.25% improvement of relative accuracy on average, which proves the feasibility of our approach for the real-world BCI system."
},
{
"pmid": "30190543",
"title": "EEG patterns of self-paced movement imaginations towards externally-cued and internally-selected targets.",
"abstract": "In this study, we investigate the neurophysiological signature of the interacting processes which lead to a single reach-and-grasp movement imagination (MI). While performing this task, the human healthy participants could either define their movement targets according to an external cue, or through an internal selection process. After defining their target, they could start the MI whenever they wanted. We recorded high density electroencephalographic (EEG) activity and investigated two neural correlates: the event-related potentials (ERPs) associated with the target selection, which reflect the perceptual and cognitive processes prior to the MI, and the movement-related cortical potentials (MRCPs), associated with the planning of the self-paced MI. We found differences in frontal and parietal areas between the late ERP components related to the internally-driven selection and the externally-cued process. Furthermore, we could reliably estimate the MI onset of the self-paced task. Next, we extracted MRCP features around the MI onset to train classifiers of movement vs. rest directly on self-paced MI data. We attained performance significantly higher than chance level for both time-locked and asynchronous classification. These findings contribute to the development of more intuitive brain-computer interfaces in which movement targets are defined internally and the movements are self-paced."
},
{
"pmid": "30250141",
"title": "Meeting brain-computer interface user performance expectations using a deep neural network decoding framework.",
"abstract": "Brain-computer interface (BCI) neurotechnology has the potential to reduce disability associated with paralysis by translating neural activity into control of assistive devices1-9. Surveys of potential end-users have identified key BCI system features10-14, including high accuracy, minimal daily setup, rapid response times, and multifunctionality. These performance characteristics are primarily influenced by the BCI's neural decoding algorithm1,15, which is trained to associate neural activation patterns with intended user actions. Here, we introduce a new deep neural network16 decoding framework for BCI systems enabling discrete movements that addresses these four key performance characteristics. Using intracortical data from a participant with tetraplegia, we provide offline results demonstrating that our decoder is highly accurate, sustains this performance beyond a year without explicit daily retraining by combining it with an unsupervised updating procedure3,17-20, responds faster than competing methods8, and can increase functionality with minimal retraining by using a technique known as transfer learning21. We then show that our participant can use the decoder in real-time to reanimate his paralyzed forearm with functional electrical stimulation (FES), enabling accurate manipulation of three objects from the grasp and release test (GRT)22. These results demonstrate that deep neural network decoders can advance the clinical translation of BCI technology."
},
{
"pmid": "31919460",
"title": "Group task-related component analysis (gTRCA): a multivariate method for inter-trial reproducibility and inter-subject similarity maximization for EEG data analysis.",
"abstract": "EEG is known to contain considerable inter-trial and inter-subject variability, which poses a challenge in any group-level EEG analyses. A true experimental effect must be reproducible even with variabilities in trials, sessions, and subjects. Extracting components that are reproducible across trials and subjects benefits both understanding common mechanisms in neural processing of cognitive functions and building robust brain-computer interfaces. This study extends our previous method (task-related component analysis, TRCA) by maximizing not only trial-by-trial reproducibility within single subjects but also similarity across a group of subjects, hence referred to as group TRCA (gTRCA). The problem of maximizing reproducibility of time series across trials and subjects is formulated as a generalized eigenvalue problem. We applied gTRCA to EEG data recorded from 35 subjects during a steady-state visual-evoked potential (SSVEP) experiment. The results revealed: (1) The group-representative data computed by gTRCA showed higher and consistent spectral peaks than other conventional methods; (2) Scalp maps obtained by gTRCA showed estimated source locations consistently within the occipital lobe; And (3) the high-dimensional features extracted by gTRCA are consistently mapped to a low-dimensional space. We conclude that gTRCA offers a framework for group-level EEG data analysis and brain-computer interfaces alternative in complement to grand averaging."
},
{
"pmid": "30334800",
"title": "LSTM-Based EEG Classification in Motor Imagery Tasks.",
"abstract": "Classification of motor imagery electroencephalograph signals is a fundamental problem in brain-computer interface (BCI) systems. We propose in this paper a classification framework based on long short-term memory (LSTM) networks. To achieve robust classification, a one dimension-aggregate approximation (1d-AX) is employed to extract effective signal representation for LSTM networks. Inspired by classical common spatial pattern, channel weighting technique is further deployed to enhance the effectiveness of the proposed classification framework. Public BCI competition data are used for the evaluation of the proposed feature extraction and classification network, whose performance is also compared with that of the state-of-the-arts approaches based on other deep networks."
},
{
"pmid": "30892250",
"title": "Scalable Digital Neuromorphic Architecture for Large-Scale Biophysically Meaningful Neural Network With Multi-Compartment Neurons.",
"abstract": "Multicompartment emulation is an essential step to enhance the biological realism of neuromorphic systems and to further understand the computational power of neurons. In this paper, we present a hardware efficient, scalable, and real-time computing strategy for the implementation of large-scale biologically meaningful neural networks with one million multi-compartment neurons (CMNs). The hardware platform uses four Altera Stratix III field-programmable gate arrays, and both the cellular and the network levels are considered, which provides an efficient implementation of a large-scale spiking neural network with biophysically plausible dynamics. At the cellular level, a cost-efficient multi-CMN model is presented, which can reproduce the detailed neuronal dynamics with representative neuronal morphology. A set of efficient neuromorphic techniques for single-CMN implementation are presented with all the hardware cost of memory and multiplier resources removed and with hardware performance of computational speed enhanced by 56.59% in comparison with the classical digital implementation method. At the network level, a scalable network-on-chip (NoC) architecture is proposed with a novel routing algorithm to enhance the NoC performance including throughput and computational latency, leading to higher computational efficiency and capability in comparison with state-of-the-art projects. The experimental results demonstrate that the proposed work can provide an efficient model and architecture for large-scale biologically meaningful networks, while the hardware synthesis results demonstrate low area utilization and high computational speed that supports the scalability of the approach."
},
{
"pmid": "31341093",
"title": "A novel hybrid deep learning scheme for four-class motor imagery classification.",
"abstract": "OBJECTIVE\nLearning the structures and unknown correlations of a motor imagery electroencephalogram (MI-EEG) signal is important for its classification. It is also a major challenge to obtain good classification accuracy from the increased number of classes and increased variability from different people. In this study, a four-class MI task is investigated.\n\n\nAPPROACH\nAn end-to-end novel hybrid deep learning scheme is developed to decode the MI task from EEG data. The proposed algorithm consists of two parts: a. A one-versus-rest filter bank common spatial pattern is adopted to preprocess and pre-extract the features of the four-class MI signal. b. A hybrid deep network based on the convolutional neural network and long-term short-term memory network is proposed to extract and learn the spatial and temporal features of the MI signal simultaneously.\n\n\nMAIN RESULTS\nThe main contribution of this paper is to propose a hybrid deep network framework to improve the classification accuracy of the four-class MI-EEG signal. The hybrid deep network is a subject-independent shared neural network, which means it can be trained by using the training data from all subjects to form one model.\n\n\nSIGNIFICANCE\nThe classification performance obtained by the proposed algorithm on brain-computer interface (BCI) competition IV dataset 2a in terms of accuracy is 83% and Cohen's kappa value is 0.80. Finally, the shared hybrid deep network is evaluated by every subject respectively, and the experimental results illustrate that the shared neural network has satisfactory accuracy. Thus, the proposed algorithm could be of great interest for real-life BCIs."
},
{
"pmid": "29994572",
"title": "Spatial-Temporal Recurrent Neural Network for Emotion Recognition.",
"abstract": "In this paper, we propose a novel deep learning framework, called spatial-temporal recurrent neural network (STRNN), to integrate the feature learning from both spatial and temporal information of signal sources into a unified spatial-temporal dependency model. In STRNN, to capture those spatially co-occurrent variations of human emotions, a multidirectional recurrent neural network (RNN) layer is employed to capture long-range contextual cues by traversing the spatial regions of each temporal slice along different directions. Then a bi-directional temporal RNN layer is further used to learn the discriminative features characterizing the temporal dependencies of the sequences, where sequences are produced from the spatial RNN layer. To further select those salient regions with more discriminative ability for emotion recognition, we impose sparse projection onto those hidden states of spatial and temporal domains to improve the model discriminant ability. Consequently, the proposed two-layer RNN model provides an effective way to make use of both spatial and temporal dependencies of the input signals for emotion recognition. Experimental results on the public emotion datasets of electroencephalogram and facial expression demonstrate the proposed STRNN method is more competitive over those state-of-the-art methods."
}
] |
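The record above combines an attention-based BiLSTM over time with a graph convolution over EEG channels. The PyTorch sketch below is a minimal, assumption-laden illustration of that combination and does not reproduce the published architecture: the layer sizes, the random symmetric channel graph, the four-class output, and the way the temporal and spatial branches are fused are all invented for this example.

```python
# Illustrative sketch only: attention-pooled BiLSTM plus one graph-convolution
# layer over EEG channels. Sizes, graph, and fusion scheme are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMAttnGCN(nn.Module):
    def __init__(self, n_channels=64, n_timesteps=64, hidden=32, n_classes=4):
        super().__init__()
        # BiLSTM reads each trial as a sequence of time steps (features = channels).
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)           # additive attention over time
        # Graph-convolution weight: maps each channel's time series to `hidden` features.
        self.gcn_weight = nn.Linear(n_timesteps, hidden)
        self.classifier = nn.Linear(2 * hidden + n_channels * hidden, n_classes)

    def forward(self, x, adj):
        # x: (batch, time, channels); adj: (channels, channels) normalized adjacency.
        h, _ = self.lstm(x)                            # (batch, time, 2*hidden)
        alpha = torch.softmax(self.attn(h), dim=1)     # attention weights over time
        temporal = (alpha * h).sum(dim=1)              # attention-pooled temporal summary
        node_feats = x.transpose(1, 2)                 # (batch, channels, time)
        spatial = F.relu(self.gcn_weight(torch.matmul(adj, node_feats)))  # A_hat X W
        spatial = spatial.flatten(start_dim=1)
        return self.classifier(torch.cat([temporal, spatial], dim=1))

def normalized_adjacency(a):
    """D^-1/2 (A + I) D^-1/2, the usual GCN normalization."""
    a_hat = a + torch.eye(a.size(0))
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

if __name__ == "__main__":
    a = (torch.rand(64, 64) > 0.7).float()
    a = torch.maximum(a, a.t())                        # toy symmetric channel graph
    adj = normalized_adjacency(a)
    model = BiLSTMAttnGCN()
    x = torch.randn(8, 64, 64)                         # 8 trials, 64 time steps, 64 channels
    print(model(x, adj).shape)                         # torch.Size([8, 4])
```

In the approach described above, the channel adjacency is estimated from the topological structure of features in the data; the random graph here only stands in so that the example runs end to end.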
Micromachines | null | PMC8874697 | 10.3390/mi13020146 | Soft Ionic Pressure Sensor with Aloe Vera Gel for Low-Pressure Applications | Ionic pressure sensors are made of ionic compounds suspended in a suitable solvent mixture. When external pressure is exerted on them, it is reflected as a change in electrical parameters due to physical deformation and a redistribution of ions within the sensing medium. Variations in the composition and material of the sensing medium result in different pressure sensors with varying operating ranges and sensitivity. This work presents the design and fabrication procedure of a novel soft pressure sensor for a very low-pressure range (<20 mmHg) using Aloe vera gel and Glycerin as the solvent for the ionic sensing medium. We also provide a comparative study on the performance of sensor prototypes with varying solvent concentrations and geometric parameters based on a series of characterization experiments. Maximum sensitivity (7.498 × 10⁻⁴ Ω/mmHg) was observed when using 40% glycerin in the sensing medium, filled in a toroidal geometry with outer and inner channel diameters of 8 mm and 7 mm, respectively. The proposed sensor is entirely soft and can be designed to conform to any desired geometry. | 2. Related Work: In general, pressure sensors are transducers that convert a mechanical input (pressure) into an electrical output (capacitance, resistance, etc.). Thus, apart from being classified as rigid and soft sensors, they can also be classified based on their working principle as capacitive, resistive, piezoelectric, optical, ionic fluid-based, etc. This section uses a few representatives of the corresponding pressure sensor types and discusses their fundamental working principles. Capacitive sensors work by changing their capacitance in response to changes in pressure. The capacitance change occurs when a dielectric layer sandwiched between two conductive layers is compressed by the pressure. Microstructures can increase the sensitivity of the sensors, as shown in the carbon-nanotube-coated elastomer fibre-based capacitive pressure sensors in [7]. Furthermore, the size and frequency of specific microstructures can be tailored for custom pressure-sensing requirements. An example can be seen in [14], where micro-structured pyramids were cast out of polydimethylsiloxane (PDMS) and deposited with Au to form conductive layers of a capacitive pressure sensor. Both of these sensors are highly sensitive in low-pressure ranges of under 20 kPa. However, capacitive sensors are more susceptible to external noise and require dedicated capacitance readout electronics. As such, capacitive sensors are preferred in applications that require high sensitivity and can afford the power and cost of operating the accompanying electronics. In resistive pressure sensors, the external pressure changes cause the active conducting material to change its resistance in response. Thus, they behave like a variable resistor and can be easily measured with simple voltage divider or bridge configurations. An ultrathin, flexible, sterilizable, and nonferromagnetic low-pressure-range sensor was presented in [15]. The sensor consists of carbon nanotube (CNT)-filled Polyvinylidene fluoride (PVDF) as the main sensing material, with sputtered copper electrodes and a Kapton polyimide substrate. Another work presented a piezoresistive pressure sensor using Liquid Crystal Polymer (LCP), a thermoplastic polymer that contains links of rigid and flexible monomers [16].
It uses a Wheatstone bridge in a half-bridge configuration to measure resistance changes when external pressure is applied. Other materials used to fabricate piezoresistive pressure sensors include poly(styrene-ethylene-butylene-styrene) (SEBS) [17], polyacrylamide hydrogel [18], PDMS, Ecoflex, etc., as substrate or base materials, and graphene nanofibers [19], porous carbon flowers [17], AgNW, zinc powder, etc., as conductive and sensing materials. Of these, AgNW-based sensors are known to have good sensitivity in the lower pressure ranges (0.8 to 2.1 kPa) [20], (<6 kPa) [21]. Hence, they are used to compare the proposed aloe vera-based sensors' sensitivity and linearity within (<2.6 kPa) and beyond their intended range. Resistive pressure sensors can also be made using an ionic-liquid-based sensing medium, where the resistance change is due to the redistribution of the ions in a liquid medium. There are several examples of such sensors in the literature, with the main difference being the choice of conductive fluid. Most of them use NaCl solution, with additives such as glycerol [22,23] or ethylene glycol [24] to increase the viscosity and performance of the ionic fluid. Some ionic sensors can also be characterized so that they can be read out through multiple output parameters; for example, an ionic skin that can measure compressive strain from different output signals, such as open-circuit voltage and short-circuit current, in addition to the conventional resistance and capacitance values, is reported in [25]. Piezoelectric pressure sensors are active sensors: they differ from resistive and capacitive sensors because they generate electrical current or voltage in response to external stimuli. Specific piezoelectric materials such as Lead Zirconium Titanate (PZT) form the sensing medium in such sensors [26]. They can generate open-circuit voltages on the order of tens of mV in response to pressure changes. Recent studies have also presented novel concepts for the development of soft pressure sensors, e.g., using perovskite quantum dots in fibre-optic Fabry-Perot sensors [27], using hydrophobic microfluidic channels in combination with ultrasonic imaging systems [28], etc. We propose a novel, chiefly ionic sensing medium consisting of Aloe vera gel, NaCl, and Glycerin. The sensor proposed in this work can be categorized as an ionic-gel-based resistive pressure sensor. It can be easily fabricated using a 3D-printed mold without complex steps such as photolithography or other MEMS procedures. Furthermore, it does not require specialized electrical readout circuitry for its operation. | [
"31942905",
"12185258",
"1117323",
"18039242",
"21167797",
"32633932",
"31458567",
"31912722",
"26809055",
"27625915",
"27996234",
"31015594",
"32780694",
"34755087"
] | [
{
"pmid": "31942905",
"title": "A review of electronic skin: soft electronics and sensors for human health.",
"abstract": "This article reviews several categories of electronic skins (e-skins) for monitoring signals involved in human health. It covers advanced candidate materials, compositions, structures, and integrate strategies of e-skin, focusing on stretchable and wearable electronics. In addition, this article further discusses the potential applications and expected development of e-skins. It is possible to provide a new generation of sensors which are able to introduce artificial intelligence to the clinic and daily healthcare."
},
{
"pmid": "18039242",
"title": "Measurement of intracranial pressure in children: a critical review of current methods.",
"abstract": "Assessment of intracranial pressure (ICP) is essential in the management of acute intracranial catastrophe to limit or actively reduce ICP. This article provides background information and reviews the current literature on methods of measuring ICP in children. Indications for ICP measurement are described for children with traumatic brain injury, shunt insertion or malfunction, arachnoid cyst, craniosynostosis, and prematurity. Various methods of ICP monitoring are detailed: non-invasive, indirect (lumbar puncture, visual-evoked potentials, fontanelle compression, and optic nerve sheath), and direct assessment (ventricular cannulation, and epidural, subdural, and intraparenchymal devices). Normal levels of ICP will depend on the age and position of the child during monitoring. This article provides clinical and research-based evidence in this area where there is currently limited guidance."
},
{
"pmid": "21167797",
"title": "Anisotropy and nonlinear properties of electrochemical circuits in leaves of Aloe vera L.",
"abstract": "Plant tissue has biologically closed electrical circuits and electric fields that regulate its physiology. The biologically closed electrochemical circuits in the leaves of Aloe vera were analyzed using the charge stimulation method with Ag/AgCl electrodes inserted along a leaf at 1-2 cm distance. The electrostimulation was provided with different timing and different voltages. Strong electrical anisotropy of the leaves was found. In the direction across the leaf the electrical circuits remained passive and linear, while along the leaf the response remained linear only at small voltages not exceeding 1 V. At higher potentials the circuits became strongly non-linear pointing to the opening of voltage gated ion channels in the plant tissues. Changing the polarity of electrodes located along conductive bundles led to a strong rectification effect and to different kinetics of capacitor discharge. Equivalent electrical circuit models of the leaf were proposed to explain the experimental data."
},
{
"pmid": "32633932",
"title": "Temperature Sensor with a Water-Dissolvable Ionic Gel for Ionic Skin.",
"abstract": "In the era of a trillion sensors, a tremendous number of sensors will be consumed to collect information for big data analysis. Once they are installed in a harsh environment or implanted in a human/animal body, we cannot easily retrieve the sensors; the sensors for these applications are left unattended but expected to decay after use. In this paper, a disposable temperature sensor that disappears with contact with water is reported. The gel electrolyte based on an ionic liquid and a water-soluble polymer, so-called ionic gel, exhibits a Young's modulus of 96 kPa, which is compatible with human muscle, skin, and organs, and can be a wearable device or in soft robotics. A study on electrical characteristics of the sensor with various temperatures reveals that the ionic conductivity and capacitance increased by 12 times and 4.8 times, respectively, when the temperature varies from 30 to 80 °C. The temperature sensor exhibits a short response time of 1.4 s, allowing real-time monitoring of temperature change. Furthermore, sensors in an array format can obtain the spatial distribution of temperature. The developed sensor was found to fully dissolve in water in 16 h. The water-dissolvability enables practical applications including healthcare, artificial intelligence, and environmental sensing."
},
{
"pmid": "31458567",
"title": "Flexible Highly Sensitive Pressure Sensor Based on Ionic Liquid Gel Film.",
"abstract": "Flexible, semitransparent ionic liquid gel (ionogels) film was first fabricated by in situ polymerization. The optimized ionogels exhibited excellent mechanical properties, high conductivity, and force sensing characteristics. The multifunctional sensor based on the ionogel film was constructed and provided the high sensitivity of 15.4 kPa-1 and wide detection range sensing from 5 Pa to 5 kPa. Moreover, the aforementioned sensor demonstrated excellent mechanical stability against repeated external deformations (for 3000 cycles under 90° bending). Importantly, the sensor showed advantages in detection of environmental changes to the external stimulus of subtle signals, including a rubber blower blowing the sensor, gently touching, torsion, and bending."
},
{
"pmid": "31912722",
"title": "Carbon Nanotubes/Hydrophobically Associated Hydrogels as Ultrastretchable, Highly Sensitive, Stable Strain, and Pressure Sensors.",
"abstract": "Conductive hydrogels have become one of the most promising materials for skin-like sensors because of their excellent biocompatibility and mechanical flexibility. However, the limited stretchability, low toughness, and fatigue resistance lead to a narrow sensing region and insufficient durability of the hydrogel-based sensors. In this work, an extremely stretchable, highly tough, and anti-fatigue conductive nanocomposite hydrogel is prepared by integrating hydrophobic carbon nanotubes (CNTs) into hydrophobically associated polyacrylamide (HAPAAm) hydrogel. In this conductive hydrogel, amphiphilic sodium dodecyl sulfate was used to ensure uniform dispersion of CNTs in the hydrogel network, and hydrophobic interactions between the hydrogel matrix and the CNT surface formed, greatly improving the mechanical properties of the hydrogel. The obtained CNTs/HAPAAm hydrogel showed excellent stretchability (ca. 3000%), toughness (3.42 MJ m-3), and great anti-fatigue property. Moreover, it exhibits both high tensile strain sensitivity in the wide strain ranges (gauge factor = 4.32, up to 1000%) and high linear sensitivity (0.127 kPa-1) in a large-pressure region within 0-50 kPa. The CNTs/HAPAAm hydrogel-based sensors can sensitively and stably detect full-range human activities (e.g., elbow rotation, finger bending, swallowing motion, and pronouncing) and handwriting, demonstrating the CNTs/HAPAAm hydrogel's potential as the wearable strain and pressure sensors for flexible devices."
},
{
"pmid": "26809055",
"title": "A transparent bending-insensitive pressure sensor.",
"abstract": "Measuring small normal pressures is essential to accurately evaluate external stimuli in curvilinear and dynamic surfaces such as natural tissues. Usually, sensitive and spatially accurate pressure sensors are achieved through conformal contact with the surface; however, this also makes them sensitive to mechanical deformation (bending). Indeed, when a soft object is pressed by another soft object, the normal pressure cannot be measured independently from the mechanical stress. Here, we show a pressure sensor that measures only the normal pressure, even under extreme bending conditions. To reduce the bending sensitivity, we use composite nanofibres of carbon nanotubes and graphene. Our simulations show that these fibres change their relative alignment to accommodate bending deformation, thus reducing the strain in individual fibres. Pressure sensitivity is maintained down to a bending radius of 80 μm. To test the suitability of our sensor for soft robotics and medical applications, we fabricated an integrated sensor matrix that is only 2 μm thick. We show real-time (response time of ∼20 ms), large-area, normal pressure monitoring under different, complex bending conditions."
},
{
"pmid": "27625915",
"title": "Soft and Stretchable Sensor Using Biocompatible Electrodes and Liquid for Medical Applications.",
"abstract": "This article introduces a soft and stretchable sensor composed of silicone rubber integrating a conductive liquid-filled channel with a biocompatible sodium chloride (NaCl) solution and novel stretchable gold sputtered electrodes to facilitate the biocompatibility of the sensor. By stretching the sensor, the cross section of the channel deforms, thus leading to a change in electrical resistance. The functionalities of the sensor have been validated experimentally: changes in electrical resistance are measured as a function of the applied strain. The experimentally measured values match theoretical predictions, showing relatively low hysteresis. A preliminary assessment on the proposed sensor prototype shows good results with a maximum tested strain of 64%. The design optimization of the saline solution, the electrodes, and the algebraic approximations derived for integrating the sensors in a flexible manipulator for surgery has been discussed. The contribution of this article is the introduction of the biocompatible and stretchable gold sputtered electrodes integrated with the NaCl-filled channel rubber as a fully biocompatible solution for measuring deformations in soft and stretchable medical instruments."
},
{
"pmid": "27996234",
"title": "Highly Stretchable, Hysteresis-Free Ionic Liquid-Based Strain Sensor for Precise Human Motion Monitoring.",
"abstract": "A highly stretchable, low-cost strain sensor was successfully prepared using an extremely cost-effective ionic liquid of ethylene glycol/sodium chloride. The hysteresis performance of the ionic-liquid-based sensor was able to be improved by introducing a wavy-shaped fluidic channel diminishing the hysteresis by the viscoelastic relaxation of elastomers. From the simulations on visco-hyperelastic behavior of the elastomeric channel, we demonstrated that the wavy structure can offer lower energy dissipation compared to a flat structure under a given deformation. The resistance response of the ionic-liquid-based wavy (ILBW) sensor was fairly deterministic with no hysteresis, and it was well-matched to the theoretically estimated curves. The ILBW sensors exhibited a low degree of hysteresis (0.15% at 250%), low overshoot (1.7% at 150% strain), and outstanding durability (3000 cycles at 300% strain). The ILBW sensor has excellent potential for use in precise and quantitative strain detections in various areas, such as human motion monitoring, healthcare, virtual reality, and smart clothes."
},
{
"pmid": "31015594",
"title": "Flexible piezoelectric devices for gastrointestinal motility sensing.",
"abstract": "Improvements in ingestible electronics with the capacity to sense physiological and pathophysiological states have transformed the standard of care for patients. Yet, despite advances in device development, significant risks associated with solid, non-flexible gastrointestinal transiting systems remain. Here, we report the design and use of an ingestible, flexible piezoelectric device that senses mechanical deformation within the gastric cavity. We demonstrate the capabilities of the sensor in both in vitro and ex vivo simulated gastric models, quantify its key behaviours in the gastrointestinal tract using computational modelling and validate its functionality in awake and ambulating swine. Our proof-of-concept device may lead to the development of ingestible piezoelectric devices that might safely sense mechanical variations and harvest mechanical energy inside the gastrointestinal tract for the diagnosis and treatment of motility disorders, as well as for monitoring ingestion in bariatric applications."
},
{
"pmid": "32780694",
"title": "A Wireless Implantable Passive Intra-Abdominal Pressure Sensing Scheme via Ultrasonic Imaging of a Microfluidic Device.",
"abstract": "In this article, we demonstrate a wireless and passive physiological pressure sensing scheme that utilizes ultrasound imaging of an implantable microfluidic based pressure sensitive transducer. The transducer consists of a sub-mm scale pressure sensitive membrane that covers a reservoir filled with water and is connected to a hydrophobic micro-channel. Applied pressure onto the transducer deflects the membrane and pushes the water from the reservoir into the channel; the water's travelling distance in the channel is a function of the applied pressure, which is quantitatively measured by using a 40 MHz ultrasound imaging system. The sensor presents a linear sensitivity of 42 kPa/mm and a spatial resolution of 1.2 kPa/30 μm in the physiological range of abdominal compartment syndrome. Reliability assessments of the transducer confirm its ability to remain functional after more than 600 cycles of pressure up to 55 kPa over the course of 2 days. Ex vivo experimental results verify the practical capability of the technology to effectively measure pressures under a 15 mm thick porcine skin. It is anticipated that this technology can be applied to a broad range of implantable pressure measurement, by simply tuning the thickness of the thin polydimethylsiloxane membrane and the geometry of the reservoir."
},
{
"pmid": "34755087",
"title": "Skin-like hydrogel devices for wearable sensing, soft robotics and beyond.",
"abstract": "Skin-like electronics are developing rapidly to realize a variety of applications such as wearable sensing and soft robotics. Hydrogels, as soft biomaterials, have been studied intensively for skin-like electronic utilities due to their unique features such as softness, wetness, biocompatibility and ionic sensing capability. These features could potentially blur the gap between soft biological systems and hard artificial machines. However, the development of skin-like hydrogel devices is still in its infancy and faces challenges including limited functionality, low ambient stability, poor surface adhesion, and relatively high power consumption (as ionic sensors). This review aims to summarize current development of skin-inspired hydrogel devices to address these challenges. We first conduct an overview of hydrogels and existing strategies to increase their toughness and conductivity. Next, we describe current approaches to leverage hydrogel devices with advanced merits including anti-dehydration, anti-freezing, and adhesion. Thereafter, we highlight state-of-the-art skin-like hydrogel devices for applications including wearable electronics, soft robotics, and energy harvesting. Finally, we conclude and outline the future trends."
}
] |
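The readout principle mentioned in the related-work field above for resistive and ionic pressure sensors (a simple voltage divider followed by a linear resistance-to-pressure mapping) can be illustrated with a short sketch. This is not code from the paper or from any cited work: only the sensitivity constant is taken from the abstract (the value reported for the 40% glycerin prototype); the supply voltage, reference resistor, and baseline resistance are hypothetical placeholders chosen solely to make the example runnable, and the response is assumed linear over the <20 mmHg range.

```python
# Illustrative sketch only (not code from the paper or any cited work).
# It shows how a resistance change in an ionic resistive pressure sensor,
# read out with a plain voltage divider, could be converted to pressure
# through a linear sensitivity. Only SENSITIVITY comes from the abstract
# above; every other constant is a hypothetical placeholder.

V_SUPPLY = 3.3           # V, assumed excitation voltage
R_REF = 10_000.0         # ohm, assumed fixed divider resistor
R_BASELINE = 10_000.0    # ohm, assumed sensor resistance at zero gauge pressure
SENSITIVITY = 7.498e-4   # ohm per mmHg, reported for the 40% glycerin prototype


def sensor_resistance(v_out: float) -> float:
    """Invert the divider relation V_out = V_SUPPLY * R_s / (R_REF + R_s)."""
    if not 0.0 < v_out < V_SUPPLY:
        raise ValueError("divider output must lie strictly between 0 V and V_SUPPLY")
    return R_REF * v_out / (V_SUPPLY - v_out)


def pressure_from_delta(delta_r: float) -> float:
    """Map a resistance change (ohm) to pressure (mmHg), assuming linearity."""
    return delta_r / SENSITIVITY


if __name__ == "__main__":
    # Resistance changes corresponding to roughly 0, 5, and 20 mmHg
    # at the stated sensitivity.
    for delta_r in (0.0, 0.0037, 0.0150):
        print(f"dR = {delta_r:.4f} ohm  ->  ~{pressure_from_delta(delta_r):.1f} mmHg")
```

Note that at the stated sensitivity the full 20 mmHg range corresponds to a resistance change of only about 0.015 Ω, so a practical readout would need a high-resolution bridge or ADC front end; the placeholder constants above would have to be replaced with values measured for a specific prototype.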
Journal of Personalized Medicine | null | PMC8876295 | 10.3390/jpm12020310 | COVID-19 Detection in CT/X-ray Imagery Using Vision Transformers | The steady spread of the 2019 Coronavirus disease has brought about human and economic losses, imposing a new lifestyle across the world. On this point, medical imaging tests such as computed tomography (CT) and X-ray have demonstrated a sound screening potential. Deep learning methodologies have shown superior image analysis capabilities compared with prior handcrafted counterparts. In this paper, we propose a novel deep learning framework for Coronavirus detection using CT and X-ray images. In particular, a Vision Transformer architecture is adopted as a backbone in the proposed network, in which a Siamese encoder is utilized. The latter is composed of two branches: one for processing the original image and another for processing an augmented view of the original image. The input images are divided into patches and fed through the encoder. The proposed framework is evaluated on public CT and X-ray datasets. The proposed system confirms its superiority over state-of-the-art methods on CT and X-ray data in terms of accuracy, precision, recall, specificity, and F1 score. Furthermore, the proposed system also exhibits good robustness when only a small portion of the training data is used. | 2. Related Work
The processing of COVID-19 images aims to determine the existence of features potentially associated with infection, namely unilateral or bilateral ground-glass opacities, distributed peripherally, mostly in round and oval shapes [35,36,37]. A comprehensive review of machine learning techniques used for COVID-19 detection and classification based on CXR or CT images was provided in [38].
Some contributions follow a traditional scheme by combining such features with a classifier to infer the presence of infection. For instance, Mahdy et al. [39] used multi-level thresholding to segment X-ray images. The segments were then classified using a Support Vector Machine (SVM) classifier. Barstugan [40] first proceeded with SVM-based classification without any feature selection and then with features selected via five feature selection methods. The best score was observed using a grey-level size zone matrix feature selector along with SVM classification.
Thus far, the literature has accumulated various deep learning methods for COVID-19 detection in X-ray and CT images. For X-ray images, Marques et al. presented an EfficientNet pipeline to classify chest X-ray images into the classes COVID-19, normal, or pneumonia following 10-fold cross-validation [41]. Zabirul Islam et al. combined a convolutional neural network (CNN) and a long short-term memory network for COVID-19 detection in X-ray images [42]. In [43], the authors proposed a multiscale attention-guided deep network with soft distance regularization to detect COVID-19 in X-ray images. The proposed network generated a prediction vector and attention from multiscale feature maps. Furthermore, to render the model more robust and to enlarge the training data, attention-guided augmentations along with a soft distance regularization were adopted. In [44], wavelet decomposition was incorporated into a convolutional neural network to enable multiresolution analysis. The authors in [45] proposed detecting COVID-19 in X-ray data by implementing several uncertainty estimation methods such as Softmax scores, Monte-Carlo dropout, and deterministic uncertainty quantification. An ensemble of deep learning models was presented in [46], where weighted averaging was applied according to the sensitivity of each model towards each class. Heidari et al. fine-tuned a pre-trained VGG16 model to classify X-ray images into three classes [47]. Abbas et al. applied transfer learning from object recognition (i.e., the ImageNet dataset) to X-ray images. The transfer was carried out in three steps, namely (i) the decomposition phase, which consists of applying class decomposition to AlexNet-extracted deep local features; (ii) the transfer phase, where the network weights were fine-tuned for X-ray images; and (iii) the compose phase, which assembles the subclasses of each class [48]. The dependence of these methods on CXR reduces their sensitivity for early detection, because CXR sensitivity increases with the progression of the disease [18,49,50].
Regarding CT images, Amyar et al. [51] constructed a deep network that consisted of a 10-convolutional-layer encoder stage, a 9-convolutional-layer decoder part for reconstruction, and a 9-convolutional-layer decoder part for segmentation. Xu et al. implemented a VNet and an inception residual network for feature extraction and a region proposal network for region-of-interest segmentation [52]. Sun et al. presented a two-stage feature selection method, namely, a deep forest to learn the high-level features and an adaptive feature selection to find the discriminative features. The selected features were then fed to a four-criteria classifier [53]. Ko et al. also used transfer learning to compare four pre-trained deep convolutional networks and obtained their best result using ResNet-50 [54], while Wu et al. transferred the knowledge of a Res2Net and appended an enhanced feature model to detect COVID-19 cases in a two-class CT dataset [55]. In [56], a CT image synthesis approach based on a conditional generative adversarial network was proposed to deal with data shortage. Horry et al. proposed a noise-reduction pre-processing step to prepare a hybrid dataset of X-ray, CT, and US images, which were then fed into a VGG19 network [57]. Although processing CT datasets yields better results when diagnosing COVID-19 [18,58], there will always be restrictions aimed at reducing patients' exposure to radiation, which limit the availability of CT datasets large enough to optimize diagnostic models on their own [59,60]. | [
"32389405",
"32297805",
"32105304",
"32196430",
"28114041",
"29994501",
"30137018",
"32437939",
"32519256",
"33024935",
"32335002",
"33574070",
"33158840",
"33033574",
"16937184",
"12585705",
"26574297",
"21767831",
"31144149",
"30976831",
"29993996",
"31110349",
"28117445",
"32105562",
"33560192",
"32411942",
"33519327",
"32835084",
"33560995",
"34812397",
"33778628",
"34786568",
"33065387",
"32837749",
"32845849",
"32568730",
"33600316",
"33275588",
"32175437",
"33177550",
"27683318",
"32077789",
"32953971"
] | [
{
"pmid": "32389405",
"title": "Understanding Antibody Testing for COVID-19.",
"abstract": "The orthopedic community has seen the COVID-19 pandemic decimate elective surgical volumes in most geographies. Patients and essential workers, such as health care providers, remain rightfully concerned about how to appropriately begin to return to work and community activity in a safe and responsible manner. Many believe that testing for the presence of antibodies on a widespread scale could help drive evidence-based decision-making, both on an individual and societal scale. Much information, and an equal amount of misinformation, has been produced on antibody testing. Education about the role and science of such testing is critically important for programs to be effectively understood and managed."
},
{
"pmid": "32196430",
"title": "Laboratory diagnosis of emerging human coronavirus infections - the state of the art.",
"abstract": "The three unprecedented outbreaks of emerging human coronavirus (HCoV) infections at the beginning of the twenty-first century have highlighted the necessity for readily available, accurate and fast diagnostic testing methods. The laboratory diagnostic methods for human coronavirus infections have evolved substantially, with the development of novel assays as well as the availability of updated tests for emerging ones. Newer laboratory methods are fast, highly sensitive and specific, and are gradually replacing the conventional gold standards. This presentation reviews the current laboratory methods available for testing coronaviruses by focusing on the coronavirus disease 2019 (COVID-19) outbreak going on in Wuhan. Viral pneumonias typically do not result in the production of purulent sputum. Thus, a nasopharyngeal swab is usually the collection method used to obtain a specimen for testing. Nasopharyngeal specimens may miss some infections; a deeper specimen may need to be obtained by bronchoscopy. Alternatively, repeated testing can be used because over time, the likelihood of the SARS-CoV-2 being present in the nasopharynx increases. Several integrated, random-access, point-of-care molecular devices are currently under development for fast and accurate diagnosis of SARS-CoV-2 infections. These assays are simple, fast and safe and can be used in the local hospitals and clinics bearing the burden of identifying and treating patients."
},
{
"pmid": "28114041",
"title": "An Ensemble of Fine-Tuned Convolutional Neural Networks for Medical Image Classification.",
"abstract": "The availability of medical imaging data from clinical archives, research literature, and clinical manuals, coupled with recent advances in computer vision offer the opportunity for image-based diagnosis, teaching, and biomedical research. However, the content and semantics of an image can vary depending on its modality and as such the identification of image modality is an important preliminary step. The key challenge for automatically classifying the modality of a medical image is due to the visual characteristics of different modalities: some are visually distinct while others may have only subtle differences. This challenge is compounded by variations in the appearance of images based on the diseases depicted and a lack of sufficient training data for some modalities. In this paper, we introduce a new method for classifying medical images that uses an ensemble of different convolutional neural network (CNN) architectures. CNNs are a state-of-the-art image classification technique that learns the optimal image features for a given classification task. We hypothesise that different CNN architectures learn different levels of semantic image representation and thus an ensemble of CNNs will enable higher quality features to be extracted. Our method develops a new feature extractor by fine-tuning CNNs that have been initialized on a large dataset of natural images. The fine-tuning process leverages the generic image features from natural images that are fundamental for all images and optimizes them for the variety of medical imaging modalities. These features are used to train numerous multiclass classifiers whose posterior probabilities are fused to predict the modalities of unseen images. Our experiments on the ImageCLEF 2016 medical image public dataset (30 modalities; 6776 training images, and 4166 test images) show that our ensemble of fine-tuned CNNs achieves a higher accuracy than established CNNs. Our ensemble also achieves a higher accuracy than methods in the literature evaluated on the same benchmark dataset and is only overtaken by those methods that source additional training data."
},
{
"pmid": "29994501",
"title": "Segmentation of Retinal Cysts From Optical Coherence Tomography Volumes Via Selective Enhancement.",
"abstract": "Automated and accurate segmentation of cystoid structures in optical coherence tomography (OCT) is of interest in the early detection of retinal diseases. It is, however, a challenging task. We propose a novel method for localizing cysts in 3-D OCT volumes. The proposed work is biologically inspired and based on selective enhancement of the cysts, by inducing motion to a given OCT slice. A convolutional neural network is designed to learn a mapping function that combines the result of multiple such motions to produce a probability map for cyst locations in a given slice. The final segmentation of cysts is obtained via simple clustering of the detected cyst locations. The proposed method is evaluated on two public datasets and one private dataset. The public datasets include the one released for the OPTIMA cyst segmentation challenge (OCSC) in MICCAI 2015 and the DME dataset. After training on the OCSC train set, the method achieves a mean dice coefficient (DC) of 0.71 on the OCSC test set. The robustness of the algorithm was examined by cross validation on the DME and AEI (private) datasets and a mean DC values obtained were 0.69 and 0.79, respectively. Overall, the proposed system has the highest performance on all the benchmarks. These results underscore the strengths of the proposed method in handling variations in both data acquisition protocols and scanners."
},
{
"pmid": "30137018",
"title": "Multimodal Assessment of Parkinson's Disease: A Deep Learning Approach.",
"abstract": "Parkinson's disease is a neurodegenerative disorder characterized by a variety of motor symptoms. Particularly, difficulties to start/stop movements have been observed in patients. From a technical/diagnostic point of view, these movement changes can be assessed by modeling the transitions between voiced and unvoiced segments in speech, the movement when the patient starts or stops a new stroke in handwriting, or the movement when the patient starts or stops the walking process. This study proposes a methodology to model such difficulties to start or to stop movements considering information from speech, handwriting, and gait. We used those transitions to train convolutional neural networks to classify patients and healthy subjects. The neurological state of the patients was also evaluated according to different stages of the disease (initial, intermediate, and advanced). In addition, we evaluated the robustness of the proposed approach when considering speech signals in three different languages: Spanish, German, and Czech. According to the results, the fusion of information from the three modalities is highly accurate to classify patients and healthy subjects, and it shows to be suitable to assess the neurological state of the patients in several stages of the disease. We also aimed to interpret the feature maps obtained from the deep learning architectures with respect to the presence or absence of the disease and the neurological state of the patients. As far as we know, this is one of the first works that considers multimodal information to assess Parkinson's disease following a deep learning approach."
},
{
"pmid": "32437939",
"title": "Chest X-ray severity index as a predictor of in-hospital mortality in coronavirus disease 2019: A study of 302 patients from Italy.",
"abstract": "OBJECTIVES\nThis study aimed to assess the usefulness of a new chest X-ray scoring system - the Brixia score - to predict the risk of in-hospital mortality in hospitalized patients with coronavirus disease 2019 (COVID-19).\n\n\nMETHODS\nBetween March 4, 2020 and March 24, 2020, all CXR reports including the Brixia score were retrieved. We enrolled only hospitalized Caucasian patients with COVID-19 for whom the final outcome was available. For each patient, age, sex, underlying comorbidities, immunosuppressive therapies, and the CXR report containing the highest score were considered for analysis. These independent variables were analyzed using a multivariable logistic regression model to extract the predictive factors for in-hospital mortality.\n\n\nRESULTS\n302 Caucasian patients who were hospitalized for COVID-19 were enrolled. In the multivariable logistic regression model, only Brixia score, patient age, and conditions that induced immunosuppression were the significant predictive factors for in-hospital mortality. According to receiver operating characteristic curve analyses, the optimal cutoff values for Brixia score and patient age were 8 points and 71 years, respectively. Three different models that included the Brixia score showed excellent predictive power.\n\n\nCONCLUSIONS\nPatients with a high Brixia score and at least one other predictive factor had the highest risk of in-hospital death."
},
{
"pmid": "32519256",
"title": "Chest X-ray in new Coronavirus Disease 2019 (COVID-19) infection: findings and correlation with clinical outcome.",
"abstract": "AIM\nThe purpose of this study is to describe the main chest radiological features (CXR) of COVID-19 and correlate them with clinical outcome.\n\n\nMATERIALS AND METHODS\nThis is a retrospective study involving patients with clinical-epidemiological suspect of COVID-19 infection, who performed CXRs at the emergency department (ED) of our University Hospital from March 1 to March 31, 2020. All patients performed RT-PCR nasopharyngeal and throat swab, CXR at the ED and clinical-epidemiological data. RT-PCR results were considered the reference standard. The final outcome was expressed as discharged or hospitalized patients into a medicine department or intensive care unit (ICU).\n\n\nRESULTS\nPatients that had a RT-PCR positive for COVID-19 infection were 234 in total: 153 males (65.4%) and 81 females (34.6%), with a mean age of 66.04 years (range 18-97 years). Thirteen CXRs were negative for radiological thoracic involvement (5.6%). The following alterations were more commonly observed: 135 patients with lung consolidations (57.7%), 147 (62.8%) with GGO, 55 (23.5%) with nodules and 156 (66.6%) with reticular-nodular opacities. Patients with consolidations and GGO coexistent in the same radiography were 35.5% of total. Peripheral (57.7%) and lower zone distribution (58.5%) were the most common predominance. Moreover, bilateral involvement (69.2%) was most frequent than unilateral one. Baseline CXR sensitivity in our experience is about 67.1%. The most affected patients were especially males in the age group 60-79 years old (45.95%, of which 71.57% males). RALE score was slightly higher in male than in female patients. ANOVA with Games-Howell post hoc showed significant differences of RALE scores for group 1 vs 3 (p < 0.001) and 2 vs 3 (p = 0.001). Inter-reader agreement in assigning RALE score was very good (ICC: 0.92-with 95% confidence interval 0.88-0.95).\n\n\nCONCLUSION\nIn COVID-19, CXR shows patchy or diffuse reticular-nodular opacities and consolidation, with basal, peripheral and bilateral predominance. In our experience, baseline CXR had a sensitivity of 68.1%. The RALE score can be used in the emergency setting as a quantitative method of the extent of SARS-CoV-2 pneumonia, correlating with an increased risk of ICU admission."
},
{
"pmid": "33024935",
"title": "Lung Ultrasound Findings in Patients with COVID-19.",
"abstract": "The current SARS-CoV-2 outbreak leads to a growing need of point-of-care thoracic imaging that is compatible with isolation settings and infection prevention precautions. We retrospectively reviewed 17 COVID-19 patients who received point-of-care lung ultrasound imaging in our isolation unit. Lung ultrasound was able to detect interstitial lung disease effectively; severe cases showed bilaterally distributed B-Lines with or without consolidations; one case showed bilateral pleural plaques. Corresponding to CT scans, interstitial involvement is accurately depicted as B-Lines on lung ultrasound. Lung ultrasound might be suitable for detecting interstitial involvement in a bedside setting under high security isolation precautions."
},
{
"pmid": "33574070",
"title": "Chest radiography or computed tomography for COVID-19 pneumonia? Comparative study in a simulated triage setting.",
"abstract": "INTRODUCTION\nFor the management of patients referred to respiratory triage during the early stages of the severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2) pandemic, either chest radiography or computed tomography (CT) were used as first-line diagnostic tools. The aim of this study was to compare the impact on the triage, diagnosis and prognosis of patients with suspected COVID-19 when clinical decisions are derived from reconstructed chest radiography or from CT.\n\n\nMETHODS\nWe reconstructed chest radiographs from high-resolution CT (HRCT) scans. Five clinical observers independently reviewed clinical charts of 300 subjects with suspected COVID-19 pneumonia, integrated with either a reconstructed chest radiography or HRCT report in two consecutive blinded and randomised sessions: clinical decisions were recorded for each session. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and prognostic value were compared between reconstructed chest radiography and HRCT. The best radiological integration was also examined to develop an optimised respiratory triage algorithm.\n\n\nRESULTS\nInterobserver agreement was fair (Kendall's W=0.365, p<0.001) by the reconstructed chest radiography-based protocol and good (Kendall's W=0.654, p<0.001) by the CT-based protocol. NPV assisted by reconstructed chest radiography (31.4%) was lower than that of HRCT (77.9%). In case of indeterminate or typical radiological appearance for COVID-19 pneumonia, extent of disease on reconstructed chest radiography or HRCT were the only two imaging variables that were similarly linked to mortality by adjusted multivariable models CONCLUSIONS: The present findings suggest that clinical triage is safely assisted by chest radiography. An integrated algorithm using first-line chest radiography and contingent use of HRCT can help optimise management and prognostication of COVID-19."
},
{
"pmid": "33158840",
"title": "Diagnostic accuracy of X-ray versus CT in COVID-19: a propensity-matched database study.",
"abstract": "OBJECTIVES\nTo identify the diagnostic accuracy of common imaging modalities, chest X-ray (CXR) and CT, for diagnosis of COVID-19 in the general emergency population in the UK and to find the association between imaging features and outcomes in these patients.\n\n\nDESIGN\nRetrospective analysis of electronic patient records.\n\n\nSETTING\nTertiary academic health science centre and designated centre for high consequence infectious diseases in London, UK.\n\n\nPARTICIPANTS\n1198 patients who attended the emergency department with paired reverse transcriptase PCR (RT-PCR) swabs for SARS-CoV-2 and CXR between 16 March and 16 April 2020.\n\n\nMAIN OUTCOME MEASURES\nSensitivity and specificity of CXR and CT for diagnosis of COVID-19 using the British Society of Thoracic Imaging reporting templates. Reference standard was any RT-PCR positive naso-oropharyngeal swab within 30 days of attendance. ORs of CXR in association with vital signs, laboratory values and 30-day outcomes were calculated.\n\n\nRESULTS\nSensitivity and specificity of CXR for COVID-19 diagnosis were 0.56 (95% CI 0.51 to 0.60) and 0.60 (95% CI 0.54 to 0.65), respectively. For CT scans, these were 0.85 (95% CI 0.79 to 0.90) and 0.50 (95% CI 0.41 to 0.60), respectively. This gave a statistically significant mean increase in sensitivity with CT of 29% (95% CI 19% to 38%, p<0.0001) compared with CXR. Specificity was not significantly different between the two modalities.CXR findings were not statistically significantly or clinically meaningfully associated with vital signs, laboratory parameters or 30-day outcomes.\n\n\nCONCLUSIONS\nCT has substantially improved diagnostic performance over CXR in COVID-19. CT should be strongly considered in the initial assessment for suspected COVID-19. This gives potential for increased sensitivity and considerably faster turnaround time, where capacity allows and balanced against excess radiation exposure risk."
},
{
"pmid": "33033574",
"title": "Review of X-ray and computed tomography scan findings with a promising role of point of care ultrasound in COVID-19 pandemic.",
"abstract": "As healthcare professionals continue to combat the coronavirus disease 2019 (COVID-19) infection worldwide, there is an increasing interest in the role of imaging and the relevance of various modalities. Since imaging not only helps assess the disease at the time of diagnosis but also aids evaluation of response to management, it is critical to examine the role of different modalities currently in use, such as baseline X-rays and computed tomography scans carefully. In this article, we will draw attention to the critical findings for the radiologist. Further, we will look at point of care ultrasound, an increasingly a popular tool in diagnostic medicine, as a component of COVID-19 management."
},
{
"pmid": "16937184",
"title": "Automated image processing method for the diagnosis and classification of malaria on thin blood smears.",
"abstract": "Malaria is a serious global health problem, and rapid, accurate diagnosis is required to control the disease. An image processing algorithm to automate the diagnosis of malaria on thin blood smears is developed. The image classification system is designed to positively identify malaria parasites present in thin blood smears, and differentiate the species of malaria. Images are acquired using a charge-coupled device camera connected to a light microscope. Morphological and novel threshold selection techniques are used to identify erythrocytes (red blood cells) and possible parasites present on microscopic slides. Image features based on colour, texture and the geometry of the cells and parasites are generated, as well as features that make use of a priori knowledge of the classification problem and mimic features used by human technicians. A two-stage tree classifier using backpropogation feedforward neural networks distinguishes between true and false positives, and then diagnoses the species (Plasmodium falciparum, P. vivax, P. ovale or P. malariae) of the infection. Malaria samples obtained from the Department of Clinical Microbiology and Infectious Diseases at the University of the Witwatersrand Medical School are used for training and testing of the system. Infected erythrocytes are positively identified with a sensitivity of 85% and a positive predictive value (PPV) of 81%, which makes the method highly sensitive at diagnosing a complete sample provided many views are analysed. Species were correctly determined for 11 out of 15 samples."
},
{
"pmid": "12585705",
"title": "A contribution of image processing to the diagnosis of diabetic retinopathy--detection of exudates in color fundus images of the human retina.",
"abstract": "In the framework of computer assisted diagnosis of diabetic retinopathy, a new algorithm for detection of exudates is presented and discussed. The presence of exudates within the macular region is a main hallmark of diabetic macular edema and allows its detection with a high sensitivity. Hence, detection of exudates is an important diagnostic task, in which computer assistance may play a major role. Exudates are found using their high grey level variation, and their contours are determined by means of morphological reconstruction techniques. The detection of the optic disc is indispensable for this approach. We detect the optic disc by means of morphological filtering techniques and the watershed transformation. The algorithm has been tested on a small image data base and compared with the performance of a human grader. As a result, we obtain a mean sensitivity of 92.8% and a mean predictive value of 92.4%. Robustness with respect to changes of the parameters of the algorithm has been evaluated."
},
{
"pmid": "26574297",
"title": "Image processing based automatic diagnosis of glaucoma using wavelet features of segmented optic disc from fundus image.",
"abstract": "Glaucoma is a disease of the retina which is one of the most common causes of permanent blindness worldwide. This paper presents an automatic image processing based method for glaucoma diagnosis from the digital fundus image. In this paper wavelet feature extraction has been followed by optimized genetic feature selection combined with several learning algorithms and various parameter settings. Unlike the existing research works where the features are considered from the complete fundus or a sub image of the fundus, this work is based on feature extraction from the segmented and blood vessel removed optic disc to improve the accuracy of identification. The experimental results presented in this paper indicate that the wavelet features of the segmented optic disc image are clinically more significant in comparison to features of the whole or sub fundus image in the detection of glaucoma from fundus image. Accuracy of glaucoma identification achieved in this work is 94.7% and a comparison with existing methods of glaucoma detection from fundus image indicates that the proposed approach has improved accuracy of classification."
},
{
"pmid": "21767831",
"title": "Melanomas non-invasive diagnosis application based on the ABCD rule and pattern recognition image processing algorithms.",
"abstract": "In this paper an automated dermatological tool for the parameterization of melanomas is presented. The system is based on the standard ABCD Rule and dermatological Pattern Recognition protocols. On the one hand, a complete stack of algorithms for the asymmetry, border, color, and diameter parameterization were developed. On the other hand, three automatic algorithms for digital image processing have been developed in order to detect the appropriate patterns. These allow one to calculate certain quantitative features based on the aspect and inner patterns of the melanoma using simple-operation algorithms, in order to minimize response time. The database used consists of 160 500 x 500-pixel RGB images (20 images per pattern) cataloged by dermatologists, and the results have turned out to be successful according to assessment by medical experts. While the ABCD algorithms are mathematically reliable, the proposed algorithms for pattern recognition produced a remarkable rate of globular, reticular, and blue veil Pattern recognition, with an average above 85% of accuracy. It thus proves to be a reliable system when performing a diagnosis."
},
{
"pmid": "31144149",
"title": "Deep Learning Techniques for Medical Image Segmentation: Achievements and Challenges.",
"abstract": "Deep learning-based image segmentation is by now firmly established as a robust tool in image segmentation. It has been widely used to separate homogeneous areas as the first and critical component of diagnosis and treatment pipeline. In this article, we present a critical appraisal of popular methods that have employed deep-learning techniques for medical image segmentation. Moreover, we summarize the most common challenges incurred and suggest possible solutions."
},
{
"pmid": "30976831",
"title": "Deep learning reconstruction improves image quality of abdominal ultra-high-resolution CT.",
"abstract": "OBJECTIVES\nDeep learning reconstruction (DLR) is a new reconstruction method; it introduces deep convolutional neural networks into the reconstruction flow. This study was conducted in order to examine the clinical applicability of abdominal ultra-high-resolution CT (U-HRCT) exams reconstructed with a new DLR in comparison to hybrid and model-based iterative reconstruction (hybrid-IR, MBIR).\n\n\nMETHODS\nOur retrospective study included 46 patients seen between December 2017 and April 2018. A radiologist recorded the standard deviation of attenuation in the paraspinal muscle as the image noise and calculated the contrast-to-noise ratio (CNR) for the aorta, portal vein, and liver. The overall image quality was assessed by two other radiologists and graded on a 5-point confidence scale ranging from 1 (unacceptable) to 5 (excellent). The difference between CT images subjected to hybrid-IR, MBIR, and DLR was compared.\n\n\nRESULTS\nThe image noise was significantly lower and the CNR was significantly higher on DLR than hybrid-IR and MBIR images (p < 0.01). DLR images received the highest and MBIR images the lowest scores for overall image quality.\n\n\nCONCLUSIONS\nDLR improved the quality of abdominal U-HRCT images.\n\n\nKEY POINTS\n• The potential degradation due to increased noise may prevent implementation of ultra-high-resolution CT in the abdomen. • Image noise and overall image quality for hepatic ultra-high-resolution CT images improved with deep learning reconstruction as compared to hybrid- and model-based iterative reconstruction."
},
{
"pmid": "29993996",
"title": "Pulmonary Artery-Vein Classification in CT Images Using Deep Learning.",
"abstract": "Recent studies show that pulmonary vascular diseases may specifically affect arteries or veins through different physiologic mechanisms. To detect changes in the two vascular trees, physicians manually analyze the chest computed tomography (CT) image of the patients in search of abnormalities. This process is time consuming, difficult to standardize, and thus not feasible for large clinical studies or useful in real-world clinical decision making. Therefore, automatic separation of arteries and veins in CT images is becoming of great interest, as it may help physicians to accurately diagnose pathological conditions. In this paper, we present a novel, fully automatic approach to classify vessels from chest CT images into arteries and veins. The algorithm follows three main steps: first, a scale-space particles segmentation to isolate vessels; then a 3-D convolutional neural network (CNN) to obtain a first classification of vessels; finally, graph-cuts' optimization to refine the results. To justify the usage of the proposed CNN architecture, we compared different 2-D and 3-D CNNs that may use local information from bronchus- and vessel-enhanced images provided to the network with different strategies. We also compared the proposed CNN approach with a random forests (RFs) classifier. The methodology was trained and evaluated on the superior and inferior lobes of the right lung of 18 clinical cases with noncontrast chest CT scans, in comparison with manual classification. The proposed algorithm achieves an overall accuracy of 94%, which is higher than the accuracy obtained using other CNN architectures and RF. Our method was also validated with contrast-enhanced CT scans of patients with chronic thromboembolic pulmonary hypertension to demonstrate that our model generalizes well to contrast-enhanced modalities. The proposed method outperforms state-of-the-art methods, paving the way for future use of 3-D CNN for artery/vein classification in CT images."
},
{
"pmid": "31110349",
"title": "End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography.",
"abstract": "With an estimated 160,000 deaths in 2018, lung cancer is the most common cause of cancer death in the United States1. Lung cancer screening using low-dose computed tomography has been shown to reduce mortality by 20-43% and is now included in US screening guidelines1-6. Existing challenges include inter-grader variability and high false-positive and false-negative rates7-10. We propose a deep learning algorithm that uses a patient's current and prior computed tomography volumes to predict the risk of lung cancer. Our model achieves a state-of-the-art performance (94.4% area under the curve) on 6,716 National Lung Cancer Screening Trial cases, and performs similarly on an independent clinical validation set of 1,139 cases. We conducted two reader studies. When prior computed tomography imaging was not available, our model outperformed all six radiologists with absolute reductions of 11% in false positives and 5% in false negatives. Where prior computed tomography imaging was available, the model performance was on-par with the same radiologists. This creates an opportunity to optimize the screening process via computer assistance and automation. While the vast majority of patients remain unscreened, we show the potential for deep learning models to increase the accuracy, consistency and adoption of lung cancer screening worldwide."
},
{
"pmid": "28117445",
"title": "Dermatologist-level classification of skin cancer with deep neural networks.",
"abstract": "Skin cancer, the most common human malignancy, is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs) show potential for general and highly variable tasks across many fine-grained object categories. Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end from images directly, using only pixels and disease labels as inputs. We train a CNN using a dataset of 129,450 clinical images-two orders of magnitude larger than previous datasets-consisting of 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses; and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers, the second represents the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists. Outfitted with deep neural networks, mobile devices can potentially extend the reach of dermatologists outside of the clinic. It is projected that 6.3 billion smartphone subscriptions will exist by the year 2021 (ref. 13) and can therefore potentially provide low-cost universal access to vital diagnostic care."
},
{
"pmid": "33560192",
"title": "COVID-19 Imaging: What We Know Now and What Remains Unknown.",
"abstract": "Infection with SARS-CoV-2 ranges from an asymptomatic condition to a severe and sometimes fatal disease, with mortality most frequently being the result of acute lung injury. The role of imaging has evolved during the pandemic, with CT initially being an alternative and possibly superior testing method compared with reverse transcriptase-polymerase chain reaction (RT-PCR) testing and evolving to having a more limited role based on specific indications. Several classification and reporting schemes were developed for chest imaging early during the pandemic for patients suspected of having COVID-19 to aid in triage when the availability of RT-PCR testing was limited and its level of performance was unclear. Interobserver agreement for categories with findings typical of COVID-19 and those suggesting an alternative diagnosis is high across multiple studies. Furthermore, some studies looking at the extent of lung involvement on chest radiographs and CT images showed correlations with critical illness and a need for mechanical ventilation. In addition to pulmonary manifestations, cardiovascular complications such as thromboembolism and myocarditis have been ascribed to COVID-19, sometimes contributing to neurologic and abdominal manifestations. Finally, artificial intelligence has shown promise for use in determining both the diagnosis and prognosis of COVID-19 pneumonia with respect to both radiography and CT."
},
{
"pmid": "33519327",
"title": "Automated medical diagnosis of COVID-19 through EfficientNet convolutional neural network.",
"abstract": "COVID-19 infection was reported in December 2019 at Wuhan, China. This virus critically affects several countries such as the USA, Brazil, India and Italy. Numerous research units are working at their higher level of effort to develop novel methods to prevent and control this pandemic scenario. The main objective of this paper is to propose a medical decision support system using the implementation of a convolutional neural network (CNN). This CNN has been developed using EfficientNet architecture. To the best of the authors' knowledge, there is no similar study that proposes an automated method for COVID-19 diagnosis using EfficientNet. Therefore, the main contribution is to present the results of a CNN developed using EfficientNet and 10-fold stratified cross-validation. This paper presents two main experiments. First, the binary classification results using images from COVID-19 patients and normal patients are shown. Second, the multi-class results using images from COVID-19, pneumonia and normal patients are discussed. The results show average accuracy values for binary and multi-class of 99.62% and 96.70%, respectively. On the one hand, the proposed CNN model using EfficientNet presents an average recall value of 99.63% and 96.69% concerning binary and multi-class, respectively. On the other hand, 99.64% is the average precision value reported by binary classification, and 97.54% is presented in multi-class. Finally, the average F1-score for multi-class is 97.11%, and 99.62% is presented for binary classification. In conclusion, the proposed architecture can provide an automated medical diagnostics system to support healthcare specialists for enhanced decision making during this pandemic scenario."
},
{
"pmid": "32835084",
"title": "A combined deep CNN-LSTM network for the detection of novel coronavirus (COVID-19) using X-ray images.",
"abstract": "Nowadays, automatic disease detection has become a crucial issue in medical science due to rapid population growth. An automatic disease detection framework assists doctors in the diagnosis of disease and provides exact, consistent, and fast results and reduces the death rate. Coronavirus (COVID-19) has become one of the most severe and acute diseases in recent times and has spread globally. Therefore, an automated detection system, as the fastest diagnostic option, should be implemented to impede COVID-19 from spreading. This paper aims to introduce a deep learning technique based on the combination of a convolutional neural network (CNN) and long short-term memory (LSTM) to diagnose COVID-19 automatically from X-ray images. In this system, CNN is used for deep feature extraction and LSTM is used for detection using the extracted feature. A collection of 4575 X-ray images, including 1525 images of COVID-19, were used as a dataset in this system. The experimental results show that our proposed system achieved an accuracy of 99.4%, AUC of 99.9%, specificity of 99.2%, sensitivity of 99.3%, and F1-score of 98.9%. The system achieved desired results on the currently available dataset, which can be further improved when more COVID-19 images become available. The proposed system can help doctors to diagnose and treat COVID-19 patients easily."
},
{
"pmid": "33560995",
"title": "Multiscale Attention Guided Network for COVID-19 Diagnosis Using Chest X-Ray Images.",
"abstract": "Coronavirus disease 2019 (COVID-19) is one of the most destructive pandemic after millennium, forcing the world to tackle a health crisis. Automated lung infections classification using chest X-ray (CXR) images could strengthen diagnostic capability when handling COVID-19. However, classifying COVID-19 from pneumonia cases using CXR image is a difficult task because of shared spatial characteristics, high feature variation and contrast diversity between cases. Moreover, massive data collection is impractical for a newly emerged disease, which limited the performance of data thirsty deep learning models. To address these challenges, Multiscale Attention Guided deep network with Soft Distance regularization (MAG-SD) is proposed to automatically classify COVID-19 from pneumonia CXR images. In MAG-SD, MA-Net is used to produce prediction vector and attention from multiscale feature maps. To improve the robustness of trained model and relieve the shortage of training data, attention guided augmentations along with a soft distance regularization are posed, which aims at generating meaningful augmentations and reduce noise. Our multiscale attention model achieves better classification performance on our pneumonia CXR image dataset. Plentiful experiments are proposed for MAG-SD which demonstrates its unique advantage in pneumonia classification over cutting-edge models. The code is available at https://github.com/JasonLeeGHub/MAG-SD."
},
{
"pmid": "34812397",
"title": "Improving Uncertainty Estimation With Semi-Supervised Deep Learning for COVID-19 Detection Using Chest X-Ray Images.",
"abstract": "In this work we implement a COVID-19 infection detection system based on chest X-ray images with uncertainty estimation. Uncertainty estimation is vital for safe usage of computer aided diagnosis tools in medical applications. Model estimations with high uncertainty should be carefully analyzed by a trained radiologist. We aim to improve uncertainty estimations using unlabelled data through the MixMatch semi-supervised framework. We test popular uncertainty estimation approaches, comprising Softmax scores, Monte-Carlo dropout and deterministic uncertainty quantification. To compare the reliability of the uncertainty estimates, we propose the usage of the Jensen-Shannon distance between the uncertainty distributions of correct and incorrect estimations. This metric is statistically relevant, unlike most previously used metrics, which often ignore the distribution of the uncertainty estimations. Our test results show a significant improvement in uncertainty estimates when using unlabelled data. The best results are obtained with the use of the Monte Carlo dropout method."
},
{
"pmid": "33778628",
"title": "Determinants of Chest X-Ray Sensitivity for COVID- 19: A Multi-Institutional Study in the United States.",
"abstract": "PURPOSE\nTo evaluate the sensitivity, specificity, and severity of chest x-rays (CXR) and chest CTs over time in confirmed COVID-19+ and COVID-19- patients and to evaluate determinants of false negatives.\n\n\nMETHODS\nIn a retrospective multi-institutional study, 254 RT-PCR verified COVID-19+ patients with at least one CXR or chest CT were compared with 254 age- and gender-matched COVID-19- controls. CXR severity, sensitivity, and specificity were determined with respect to time after onset of symptoms; sensitivity and specificity for chest CTs without time stratification. Performance of serial CXRs against CTs was determined by comparing area under the receiver operating characteristic curves (AUC). A multivariable logistic regression analysis was performed to assess factors related to false negative CXR.\n\n\nRESULTS\nCOVID-19+ CXR severity and sensitivity increased with time (from sensitivity of 55% at ≤2 days to 79% at >11 days; p<0.001 for trends of both severity and sensitivity) whereas CXR specificity decreased over time (from 83% to 70%, p=0.02). Serial CXR demonstrated increase in AUC (first CXR AUC=0.79, second CXR=0.87, p=0.02), and second CXR approached the accuracy of CT (AUC=0.92, p=0.11). COVID-19 sensitivity of first CXR, second CXR, and CT was 73%, 83%, and 88%, whereas specificity was 80%, 73%, and 77%, respectively. Normal and mild severity CXR findings were the largest factor behind false-negative CXRs (40% normal and 87% combined normal/mild). Young age and African-American ethnicity increased false negative rates.\n\n\nCONCLUSION\nCXR sensitivity in COVID-19 detection increases with time, and serial CXRs of COVID-19+ patients has accuracy approaching that of chest CT."
},
{
"pmid": "34786568",
"title": "Comparing CT scan and chest X-ray imaging for COVID-19 diagnosis.",
"abstract": "People suspected of having COVID-19 need to know quickly if they are infected, so they can receive appropriate treatment, self-isolate, and inform those with whom they have been in close contact. Currently, the formal diagnosis of COVID-19 requires a laboratory test (RT-PCR) on samples taken from the nose and throat. The RT-PCR test requires specialized equipment and takes at least 24 h to produce a result. Chest imaging has demonstrated its valuable role in the development of this lung disease. Fast and accurate diagnosis of COVID-19 is possible with the chest X-ray (CXR) and computed tomography (CT) scan images. Our manuscript aims to compare the performances of chest imaging techniques in the diagnosis of COVID-19 infection using different convolutional neural networks (CNN). To do so, we have tested Resnet-18, InceptionV3, and MobileNetV2, for CT scan and CXR images. We found that the ResNet-18 has the best overall precision and sensitivity of 98.5% and 98.6%, respectively, the InceptionV3 model has achieved the best overall specificity of 97.4%, and the MobileNetV2 has obtained a perfect sensitivity for COVID-19 cases. All these performances have occurred with CT scan images."
},
{
"pmid": "33065387",
"title": "Multi-task deep learning based CT imaging analysis for COVID-19 pneumonia: Classification and segmentation.",
"abstract": "This paper presents an automatic classification segmentation tool for helping screening COVID-19 pneumonia using chest CT imaging. The segmented lesions can help to assess the severity of pneumonia and follow-up the patients. In this work, we propose a new multitask deep learning model to jointly identify COVID-19 patient and segment COVID-19 lesion from chest CT images. Three learning tasks: segmentation, classification and reconstruction are jointly performed with different datasets. Our motivation is on the one hand to leverage useful information contained in multiple related tasks to improve both segmentation and classification performances, and on the other hand to deal with the problems of small data because each task can have a relatively small dataset. Our architecture is composed of a common encoder for disentangled feature representation with three tasks, and two decoders and a multi-layer perceptron for reconstruction, segmentation and classification respectively. The proposed model is evaluated and compared with other image segmentation techniques using a dataset of 1369 patients including 449 patients with COVID-19, 425 normal ones, 98 with lung cancer and 397 of different kinds of pathology. The obtained results show very encouraging performance of our method with a dice coefficient higher than 0.88 for the segmentation and an area under the ROC curve higher than 97% for the classification."
},
{
"pmid": "32837749",
"title": "A Deep Learning System to Screen Novel Coronavirus Disease 2019 Pneumonia.",
"abstract": "The real-time reverse transcription-polymerase chain reaction (RT-PCR) detection of viral RNA from sputum or nasopharyngeal swab had a relatively low positive rate in the early stage of coronavirus disease 2019 (COVID-19). Meanwhile, the manifestations of COVID-19 as seen through computed tomography (CT) imaging show individual characteristics that differ from those of other types of viral pneumonia such as influenza-A viral pneumonia (IAVP). This study aimed to establish an early screening model to distinguish COVID-19 from IAVP and healthy cases through pulmonary CT images using deep learning techniques. A total of 618 CT samples were collected: 219 samples from 110 patients with COVID-19 (mean age 50 years; 63 (57.3%) male patients); 224 samples from 224 patients with IAVP (mean age 61 years; 156 (69.6%) male patients); and 175 samples from 175 healthy cases (mean age 39 years; 97 (55.4%) male patients). All CT samples were contributed from three COVID-19-designated hospitals in Zhejiang Province, China. First, the candidate infection regions were segmented out from the pulmonary CT image set using a 3D deep learning model. These separated images were then categorized into the COVID-19, IAVP, and irrelevant to infection (ITI) groups, together with the corresponding confidence scores, using a location-attention classification model. Finally, the infection type and overall confidence score for each CT case were calculated using the Noisy-OR Bayesian function. The experimental result of the benchmark dataset showed that the overall accuracy rate was 86.7% in terms of all the CT cases taken together. The deep learning models established in this study were effective for the early screening of COVID-19 patients and were demonstrated to be a promising supplementary diagnostic method for frontline clinical doctors."
},
{
"pmid": "32845849",
"title": "Adaptive Feature Selection Guided Deep Forest for COVID-19 Classification With Chest CT.",
"abstract": "Chest computed tomography (CT) becomes an effective tool to assist the diagnosis of coronavirus disease-19 (COVID-19). Due to the outbreak of COVID-19 worldwide, using the computed-aided diagnosis technique for COVID-19 classification based on CT images could largely alleviate the burden of clinicians. In this paper, we propose an Adaptive Feature Selection guided Deep Forest (AFS-DF) for COVID-19 classification based on chest CT images. Specifically, we first extract location-specific features from CT images. Then, in order to capture the high-level representation of these features with the relatively small-scale data, we leverage a deep forest model to learn high-level representation of the features. Moreover, we propose a feature selection method based on the trained deep forest model to reduce the redundancy of features, where the feature selection could be adaptively incorporated with the COVID-19 classification model. We evaluated our proposed AFS-DF on COVID-19 dataset with 1495 patients of COVID-19 and 1027 patients of community acquired pneumonia (CAP). The accuracy (ACC), sensitivity (SEN), specificity (SPE), AUC, precision and F1-score achieved by our method are 91.79%, 93.05%, 89.95%, 96.35%, 93.10% and 93.07%, respectively. Experimental results on the COVID-19 dataset suggest that the proposed AFS-DF achieves superior performance in COVID-19 vs. CAP classification, compared with 4 widely used machine learning methods."
},
{
"pmid": "32568730",
"title": "COVID-19 Pneumonia Diagnosis Using a Simple 2D Deep Learning Framework With a Single Chest CT Image: Model Development and Validation.",
"abstract": "BACKGROUND\nCoronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) is a relevant screening tool due to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely occupied fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians.\n\n\nOBJECTIVE\nWe aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia in CT images and differentiate it from non-COVID-19 pneumonia and nonpneumonia diseases.\n\n\nMETHODS\nA simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia based on a single chest CT image. FCONet was developed by transfer learning using one of four state-of-the-art pretrained deep learning models (VGG16, ResNet-50, Inception-v3, or Xception) as a backbone. For training and testing of FCONet, we collected 3993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and nonpneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into a training set and a testing set at a ratio of 8:2. For the testing data set, the diagnostic performance of the four pretrained FCONet models to diagnose COVID-19 pneumonia was compared. In addition, we tested the FCONet models on an external testing data set extracted from embedded low-quality chest CT images of COVID-19 pneumonia in recently published papers.\n\n\nRESULTS\nAmong the four pretrained models of FCONet, ResNet-50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100.00%, and accuracy 99.87%) and outperformed the other three pretrained models in the testing data set. In the additional external testing data set using low-quality CT images, the detection accuracy of the ResNet-50 model was the highest (96.97%), followed by Xception, Inception-v3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively).\n\n\nCONCLUSIONS\nFCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing data set, the FCONet model based on ResNet-50 appears to be the best model, as it outperformed other FCONet models based on VGG16, Xception, and Inception-v3."
},
{
"pmid": "33600316",
"title": "JCS: An Explainable COVID-19 Diagnosis System by Joint Classification and Segmentation.",
"abstract": "Recently, the coronavirus disease 2019 (COVID-19) has caused a pandemic disease in over 200 countries, influencing billions of humans. To control the infection, identifying and separating the infected people is the most crucial step. The main diagnostic tool is the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. Still, the sensitivity of the RT-PCR test is not high enough to effectively prevent the pandemic. The chest CT scan test provides a valuable complementary tool to the RT-PCR test, and it can identify the patients in the early-stage with high sensitivity. However, the chest CT scan test is usually time-consuming, requiring about 21.5 minutes per case. This paper develops a novel Joint Classification and Segmentation (JCS) system to perform real-time and explainable COVID- 19 chest CT diagnosis. To train our JCS system, we construct a large scale COVID- 19 Classification and Segmentation (COVID-CS) dataset, with 144,167 chest CT images of 400 COVID- 19 patients and 350 uninfected cases. 3,855 chest CT images of 200 patients are annotated with fine-grained pixel-level labels of opacifications, which are increased attenuation of the lung parenchyma. We also have annotated lesion counts, opacification areas, and locations and thus benefit various diagnosis aspects. Extensive experiments demonstrate that the proposed JCS diagnosis system is very efficient for COVID-19 classification and segmentation. It obtains an average sensitivity of 95.0% and a specificity of 93.0% on the classification test set, and 78.5% Dice score on the segmentation test set of our COVID-CS dataset. The COVID-CS dataset and code are available at https://github.com/yuhuan-wu/JCS."
},
{
"pmid": "33275588",
"title": "COVID-19 CT Image Synthesis With a Conditional Generative Adversarial Network.",
"abstract": "Coronavirus disease 2019 (COVID-19) is an ongoing global pandemic that has spread rapidly since December 2019. Real-time reverse transcription polymerase chain reaction (rRT-PCR) and chest computed tomography (CT) imaging both play an important role in COVID-19 diagnosis. Chest CT imaging offers the benefits of quick reporting, a low cost, and high sensitivity for the detection of pulmonary infection. Recently, deep-learning-based computer vision methods have demonstrated great promise for use in medical imaging applications, including X-rays, magnetic resonance imaging, and CT imaging. However, training a deep-learning model requires large volumes of data, and medical staff faces a high risk when collecting COVID-19 CT data due to the high infectivity of the disease. Another issue is the lack of experts available for data labeling. In order to meet the data requirements for COVID-19 CT imaging, we propose a CT image synthesis approach based on a conditional generative adversarial network that can effectively generate high-quality and realistic COVID-19 CT images for use in deep-learning-based medical imaging tasks. Experimental results show that the proposed method outperforms other state-of-the-art image synthesis methods with the generated COVID-19 CT images and indicates promising for various machine learning applications including semantic segmentation and classification."
},
{
"pmid": "33177550",
"title": "COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images.",
"abstract": "The Coronavirus Disease 2019 (COVID-19) pandemic continues to have a devastating effect on the health and well-being of the global population. A critical step in the fight against COVID-19 is effective screening of infected patients, with one of the key screening approaches being radiology examination using chest radiography. It was found in early studies that patients present abnormalities in chest radiography images that are characteristic of those infected with COVID-19. Motivated by this and inspired by the open source efforts of the research community, in this study we introduce COVID-Net, a deep convolutional neural network design tailored for the detection of COVID-19 cases from chest X-ray (CXR) images that is open source and available to the general public. To the best of the authors' knowledge, COVID-Net is one of the first open source network designs for COVID-19 detection from CXR images at the time of initial release. We also introduce COVIDx, an open access benchmark dataset that we generated comprising of 13,975 CXR images across 13,870 patient patient cases, with the largest number of publicly available COVID-19 positive cases to the best of the authors' knowledge. Furthermore, we investigate how COVID-Net makes predictions using an explainability method in an attempt to not only gain deeper insights into critical factors associated with COVID cases, which can aid clinicians in improved screening, but also audit COVID-Net in a responsible and transparent manner to validate that it is making decisions based on relevant information from the CXR images. By no means a production-ready solution, the hope is that the open access COVID-Net, along with the description on constructing the open source COVIDx dataset, will be leveraged and build upon by both researchers and citizen data scientists alike to accelerate the development of highly accurate yet practical deep learning solutions for detecting COVID-19 cases and accelerate treatment of those who need it the most."
},
{
"pmid": "27683318",
"title": "Measures of Diagnostic Accuracy: Basic Definitions.",
"abstract": "Diagnostic accuracy relates to the ability of a test to discriminate between the target condition and health. This discriminative potential can be quantified by the measures of diagnostic accuracy such as sensitivity and specificity, predictive values, likelihood ratios, the area under the ROC curve, Youden's index and diagnostic odds ratio. Different measures of diagnostic accuracy relate to the different aspects of diagnostic procedure: while some measures are used to assess the discriminative property of the test, others are used to assess its predictive ability. Measures of diagnostic accuracy are not fixed indicators of a test performance, some are very sensitive to the disease prevalence, while others to the spectrum and definition of the disease. Furthermore, measures of diagnostic accuracy are extremely sensitive to the design of the study. Studies not meeting strict methodological standards usually over- or under-estimate the indicators of test performance as well as they limit the applicability of the results of the study. STARD initiative was a very important step toward the improvement the quality of reporting of studies of diagnostic accuracy. STARD statement should be included into the Instructions to authors by scientific journals and authors should be encouraged to use the checklist whenever reporting their studies on diagnostic accuracy. Such efforts could make a substantial difference in the quality of reporting of studies of diagnostic accuracy and serve to provide the best possible evidence to the best for the patient care. This brief review outlines some basic definitions and characteristics of the measures of diagnostic accuracy."
},
{
"pmid": "32077789",
"title": "Chest CT Findings in Coronavirus Disease-19 (COVID-19): Relationship to Duration of Infection.",
"abstract": "In this retrospective study, chest CTs of 121 symptomatic patients infected with coronavirus disease-19 (COVID-19) from four centers in China from January 18, 2020 to February 2, 2020 were reviewed for common CT findings in relationship to the time between symptom onset and the initial CT scan (i.e. early, 0-2 days (36 patients), intermediate 3-5 days (33 patients), late 6-12 days (25 patients)). The hallmarks of COVID-19 infection on imaging were bilateral and peripheral ground-glass and consolidative pulmonary opacities. Notably, 20/36 (56%) of early patients had a normal CT. With a longer time after the onset of symptoms, CT findings were more frequent, including consolidation, bilateral and peripheral disease, greater total lung involvement, linear opacities, \"crazy-paving\" pattern and the \"reverse halo\" sign. Bilateral lung involvement was observed in 10/36 early patients (28%), 25/33 intermediate patients (76%), and 22/25 late patients (88%)."
},
{
"pmid": "32953971",
"title": "COVID-19 detection in CT images with deep learning: A voting-based scheme and cross-datasets analysis.",
"abstract": "Early detection and diagnosis are critical factors to control the COVID-19 spreading. A number of deep learning-based methodologies have been recently proposed for COVID-19 screening in CT scans as a tool to automate and help with the diagnosis. These approaches, however, suffer from at least one of the following problems: (i) they treat each CT scan slice independently and (ii) the methods are trained and tested with sets of images from the same dataset. Treating the slices independently means that the same patient may appear in the training and test sets at the same time which may produce misleading results. It also raises the question of whether the scans from the same patient should be evaluated as a group or not. Moreover, using a single dataset raises concerns about the generalization of the methods. Different datasets tend to present images of varying quality which may come from different types of CT machines reflecting the conditions of the countries and cities from where they come from. In order to address these two problems, in this work, we propose an Efficient Deep Learning Technique for the screening of COVID-19 with a voting-based approach. In this approach, the images from a given patient are classified as group in a voting system. The approach is tested in the two biggest datasets of COVID-19 CT analysis with a patient-based split. A cross dataset study is also presented to assess the robustness of the models in a more realistic scenario in which data comes from different distributions. The cross-dataset analysis has shown that the generalization power of deep learning models is far from acceptable for the task since accuracy drops from 87.68% to 56.16% on the best evaluation scenario. These results highlighted that the methods that aim at COVID-19 detection in CT-images have to improve significantly to be considered as a clinical option and larger and more diverse datasets are needed to evaluate the methods in a realistic scenario."
}
] |
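The "Measures of Diagnostic Accuracy: Basic Definitions" entry above (pmid 27683318) lists the summary statistics that the imaging studies in this reference list report: sensitivity, specificity, predictive values, likelihood ratios, Youden's index, and the diagnostic odds ratio. As a minimal, self-contained sketch (not code from any of the cited papers; the example counts are invented), these measures follow directly from the four cells of a 2x2 confusion matrix:

```python
def diagnostic_accuracy(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic-accuracy measures from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)                 # true positive rate (recall)
    specificity = tn / (tn + fp)                 # true negative rate
    ppv = tp / (tp + fp)                         # positive predictive value (precision)
    npv = tn / (tn + fn)                         # negative predictive value
    lr_pos = sensitivity / (1 - specificity)     # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity     # negative likelihood ratio
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "PPV": ppv,
        "NPV": npv,
        "LR+": lr_pos,
        "LR-": lr_neg,
        "Youden_J": sensitivity + specificity - 1,
        "diagnostic_odds_ratio": lr_pos / lr_neg,
    }

# Example with invented counts for a hypothetical COVID-19 classifier evaluation
print(diagnostic_accuracy(tp=95, fp=7, tn=93, fn=5))
```

Note that the predictive values computed this way inherit the prevalence implied by the example counts, which is one of the caveats the abstract above raises about comparing such measures across studies.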
Micromachines | null | PMC8876488 | 10.3390/mi13020332 | Spectrum Analysis Enabled Periodic Feature Reconstruction Based Automatic Defect Detection System for Electroluminescence Images of Photovoltaic Modules | Electroluminescence (EL) imaging is a widely adopted method in quality assurance of the photovoltaic (PV) manufacturing industry. With the growing demand for high-quality PV products, automatic inspection methods based on machine vision have become an emerging area of concern to replace manual inspectors. Therefore, this paper presents an automatic defect-inspection method for multi-cell monocrystalline PV modules with EL images. A processing routine is designed to extract the defect features of the PV module, eliminating the influence of the intrinsic structural features. Spectrum domain analysis is applied to effectively reconstruct an improved PV layout from a defective one by spectrum filtering in a certain direction. The reconstructed image is used to segment the PV module into cells and slices. Based on the segmentation, defect detection is carried out on individual cells or slices to detect cracks, breaks, and speckles. Robust performance has been achieved from experiments on many samples with varying illumination conditions and defect shapes/sizes, which shows the proposed method can efficiently distinguish intrinsic structural features from the defect features, enabling precise and speedy defect detection on multi-cell PV modules. | 1.2. Related WorkA great deal of research has been dedicated to automated defect detection, localization, and/or classification of PV modules [15]. Schuss et al. [16] used thermographic techniques to characterize defect regions. Wang et al. [17] used an image processing pipeline and adaptive thresholding using both adaptive color threshold and window size. This method works on single rectangular PV cells of varying sizes. Deitsch et al. [18] managed to segment a large PV module into multiple PV cells. Curve estimation with subpixel precision is utilized on local ridge information to extract dividing grids. This method could break a large-scale detection problem into smaller ones within a standardized PV cell. Akram et al. [19] proposed a light CNN architecture trained on the labeled dataset from Deitsch's work. This is a supervised learning method, which is highly dependent on an adequate amount of labeled data. Moreover, the trained model cannot trivially transfer to another product that has different shapes and structures. Dhimish et al. [20] used Fourier transformation and band filtering to remove noise, but the method could not distinguish intrinsic layout features from defect features. Our work is most similar to the research of Tsai et al. [21]. They used Fourier transformation to remove the influence of inhomogeneous texture in the background. This method has an assumption of the defect shape, and hence is able to detect some specified defects. Tsai's work aimed to remove the defects to produce a reference image, while in this paper, the aim is to remove all features other than the defects. In addition, Tsai's method was designed on a polycrystalline cell, which contains an inhomogeneous background texture, while in our case, the monocrystalline cell is nearly homogeneous within one silicon slice. | [
"33234276",
"26190957"
] | [
{
"pmid": "33234276",
"title": "Environmental impacts of solar photovoltaic systems: A critical review of recent progress and future outlook.",
"abstract": "Photovoltaic (PV) systems are regarded as clean and sustainable sources of energy. Although the operation of PV systems exhibits minimal pollution during their lifetime, the probable environmental impacts of such systems from manufacturing until disposal cannot be ignored. The production of hazardous contaminates, water resources pollution, and emissions of air pollutants during the manufacturing process as well as the impact of PV installations on land use are important environmental factors to consider. The present study aims at developing a comprehensive analysis of all possible environmental challenges as well as presenting novel design proposals to mitigate and solve the aforementioned environmental problems. The emissions of greenhouse gas (GHG) from various PV systems were also explored and compared with fossil fuel energy resources. The results revealed that the negative environmental impacts of PV systems could be substantially mitigated using optimized design, development of novel materials, minimize the use of hazardous materials, recycling whenever possible, and careful site selection. Such mitigation actions will reduce the emissions of GHG to the environment, decrease the accumulation of solid wastes, and preserve valuable water resources. The carbon footprint emission from PV systems was found to be in the range of 14-73 g CO2-eq/kWh, which is 10 to 53 orders of magnitude lower than emission reported from the burning of oil (742 g CO2-eq/kWh from oil). It was concluded that the carbon footprint of the PV system could be decreased further by one order of magnitude using novel manufacturing materials. Recycling solar cell materials can also contribute up to a 42% reduction in GHG emissions. The present study offers a valuable management strategy that can be used to improve the sustainability of PV manufacturing processes, improve its economic value, and mitigate its negative impacts on the environment."
},
{
"pmid": "26190957",
"title": "A High Power-Density, Mediator-Free, Microfluidic Biophotovoltaic Device for Cyanobacterial Cells.",
"abstract": "Biophotovoltaics has emerged as a promising technology for generating renewable energy because it relies on living organisms as inexpensive, self-repairing, and readily available catalysts to produce electricity from an abundant resource: sunlight. The efficiency of biophotovoltaic cells, however, has remained significantly lower than that achievable through synthetic materials. Here, a platform is devised to harness the large power densities afforded by miniaturized geometries. To this effect, a soft-lithography approach is developed for the fabrication of microfluidic biophotovoltaic devices that do not require membranes or mediators. Synechocystis sp. PCC 6803 cells are injected and allowed to settle on the anode, permitting the physical proximity between cells and electrode required for mediator-free operation. Power densities of above 100 mW m-2 are demonstrated for a chlorophyll concentration of 100 μM under white light, which is a high value for biophotovoltaic devices without extrinsic supply of additional energy."
}
] |
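The Micromachines record above removes the intrinsic periodic layout of a monocrystalline PV module by filtering the EL image in the spectrum (Fourier) domain along one direction, so that only defect features remain. The following NumPy sketch only illustrates that general idea on a synthetic stripe image; it is not the authors' pipeline, and the image size, stripe period, and mask width are arbitrary choices:

```python
import numpy as np

# Synthetic EL-like image: a periodic vertical stripe layout plus one crack-like defect
h, w = 256, 256
stripes = 0.6 + 0.3 * np.sin(2 * np.pi * np.arange(w) / 32)
image = np.tile(stripes, (h, 1))
image[100:105, 40:200] -= 0.5                        # dark horizontal "crack"

# The stripe pattern varies only along x, so its energy sits on the ky = 0 row of the spectrum
spectrum = np.fft.fftshift(np.fft.fft2(image))
mask = np.zeros(spectrum.shape)
mask[h // 2 - 2 : h // 2 + 3, :] = 1.0               # keep a narrow band around ky = 0

# Inverse transform of the kept band reconstructs the periodic layout without most of the defect
layout = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
residual = image - layout                            # residual is dominated by the defect

print("max |residual| on the defect rows :", round(float(np.abs(residual[100:105]).max()), 3))
print("max |residual| away from the defect:", round(float(np.abs(residual[:100]).max()), 3))
```

Reconstructing a defect-free layout from the kept frequency band parallels the reference-image idea attributed to Tsai et al. in the record above, while subtracting that layout so that only defect features survive is the stated aim of the record's own method.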
Polymers | null | PMC8876967 | 10.3390/polym14040762 | Variable Offset Computation Space for Automatic Cooling Dimensioning | The injection mold is one of the most important elements for the part precision of this important mass production process. The thermal mold design is realized by cooling channels around the cavity and poses as a decisive factor for the part quality. Thus, the objective but specific design of the cooling channel layout is crucial for a reproducible part with high-dimensional accuracy in production. Consequently, knowledge-based and automated methods are used to create the optimal heat management in the mold. One of these methods is the inverse thermal mold design, which uses a specific calculation space. The geometric boundary conditions of the optimization algorithm influence the resulting thermal balance within the mold. As the calculation area in the form of an offset around the molded part is one of these boundary conditions, its influence on the optimization result is determined. The thermal optimizations show a dependency on different offset shapes due to the offset thickness and coalescence of concave geometries. An algorithm is developed to generate an offset for this thermal mold design methodology considering the identified influences. Hence, a reproducible and adaptive offset is generated automatically for a complex geometry, and the quality function result improves by 43% in this example. | 2. Related Work2.1. Methodology of an Inverse Thermal Mold DesignThis work uses an optimization algorithm to calculate a thermal balance within the mold, which provokes a cooling of the molded part with minimized warpage. Within the optimization of the inverse mold design, the thermal boundary conditions in the mold are calculated in such a way that a thermally optimal molded part is produced at the end of the cooling phase with regard to the material models and modeling used. This methodology is based on an approach by Agazzi et al. [7], was extended by Nikoleizig [18], and is further researched at IKV [17,19,20]. In this approach, a thermal optimization of the injection mold is carried out on the basis of a quality functional, which objectively quantifies the molded part quality after cooling in the injection mold. Agazzi et al. [7] use a purely thermal approach to evaluate the molded part quality, which is supplemented by further material-specific and numeric aspects. Extensive research in different fields of this methodology has been performed to design reproducible and objective cooling channels to compensate the need for lower warpage and higher precision.Before the optimization step, a contour is created in the form of an offset of the part. This geometry maps the steel tool around the molded part. This has the advantage of a smaller calculation space than compared to the whole mold, which is multiple times bigger. Additionally, the contour’s surface is the location of the temperature values, which are varied by the optimizer. These temperatures are the optimization parameters that result in the optimized thermal balance.The used routine for the thermal cooling channel design used in this research is based on the further development of Hohlweck [16]. Figure 1 shows that the generation of the offset is the first step within this method. Therefore, its influence affects the entire following procedure.Once a suitable offset is created, an injection molding simulation is carried out using commercial models and software. 
This simulation depicts solely a temperature distribution after multiple cycles without any cooling channels within the mold. This allows a location-dependent temperature distribution to be quickly calculated for the molded part and the offset as a starting condition for the following optimization. Additionally, a time-dependent pressure distribution can be imported from the injection molding simulation to consider the pressure dependencies of the thermal properties and pvT (pressure, specific volume, temperature) behavior. These starting and boundary conditions are implemented in Comsol Multiphysics (Comsol AB, Stockholm, Sweden) for the optimization step. They are particularly important because the approach of Agazzi et al. [7] is executed with gradient-based algorithms. These algorithms strongly depend on initial values and the later introduced quality function [21]. Starting from these calculated initial conditions, iterative convergence to the nearest optimum is performed. The optimization deals with the boundary value problem of inverse thermal heat transfer [22]. The boundary values have to be calculated for the desired or targeted temperature distribution. Therefore, the temperature on the offset's contour and the resulting heat conduction is calculated so that the temperature of the molded part during cooling is optimized. A quality function is necessary to evaluate the thermal condition in the molded part with the aim of minimizing distortion. The used quality function considers the influence of the temperature distribution as in [7] and the cooling rate to evaluate the morphology of the cooled polymer. In general, injection molded parts experience different cooling rates over the wall thickness. Close to the cavity wall, a high cooling rate can be observed due to the contact of the hot melt with the cold mold. In the middle of the part, the cooling rate is significantly lower due to the isolation of the plastic [2]. As this physical effect cannot be changed, it is important to generate homogeneous properties inside constant layers around the midplane of the part geometry. If the properties are not symmetric around the midplane, a lever effect is generated, and warpage can be expected. These aspects lead to the newly formulated quality function [17]: $Q(T_{KK0}) = \sum_{i=1}^{m} \left( \frac{T_{ejec} - T_P(\vec{x}_i, t_c; T_{KK0})}{T_{ejec}} \right)^2 \cdot \frac{A_{elem,i}}{A_{tot,1}} \cdot w_i + \sum_{j=1}^{k} \left( \left( \frac{\overline{\dot{T}(T)} - \dot{T}(\vec{x}_j, T_t)}{\overline{\dot{T}}} \right)^2 + \frac{\bar{t}_t - t_t(\vec{x}_j, T_t)}{\bar{t}_t} \right)^2 \cdot \frac{A_{elem,j}}{A_{tot,2}} \cdot w_j$ (1). The first term of the equation is similar to the quality function from Nikoleizig [18]. It evaluates the temperature difference in the part $T_P$ to the demolding temperature $T_{ejec}$ at the end of cooling $t_c$. In addition, the term is normalized based on the demolding temperature $T_{ejec}$. Furthermore, the mesh influence is considered by the variable $A_{elem,i}$. This fraction ensures that every temperature node is only considered by the share of its respective element on the whole evaluation surface or volume $A_{tot,1}$. The second row of the equation evaluates the morphology of the molded part. The first term adds up the differences of the local cooling rates $\dot{T}$ compared to an averaged cooling rate $\overline{\dot{T}}$. The second term evaluates the solidification time $t_t$. Additionally, each term is weighted with the weight factors $w_i$ and $w_j$. These two terms become very small in the case of a homogeneous cooling rate at a similar solidification time. Both terms are evaluated on the $A_{tot,2}$ (Figure 2, left). $A_{tot,1}$ corresponds to the part surface. $A_{tot,2}$ is an offset surface inside the part.
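Equation (1) above combines an element-weighted, normalized deviation from the demolding temperature with a second sum that penalizes inhomogeneous cooling rates and solidification times. Purely as a hedged illustration of how such a functional could be evaluated numerically — the array names, units, and synthetic node values below are my assumptions, not code from the cited works:

```python
import numpy as np

def quality_functional(T_part, T_ejec, A_elem_1, A_tot_1, w1,
                       cool_rate, cool_rate_mean, t_solid, t_solid_mean,
                       A_elem_2, A_tot_2, w2):
    """Schematic evaluation of the quality function Q from Equation (1)."""
    # First sum: normalized deviation from the demolding temperature on evaluation surface 1
    term1 = np.sum(((T_ejec - T_part) / T_ejec) ** 2 * (A_elem_1 / A_tot_1) * w1)
    # Second sum: homogeneity of cooling rate and solidification time on evaluation surface 2
    inner = ((cool_rate_mean - cool_rate) / cool_rate_mean) ** 2 \
            + (t_solid_mean - t_solid) / t_solid_mean
    term2 = np.sum(inner ** 2 * (A_elem_2 / A_tot_2) * w2)
    return term1 + term2

# Tiny synthetic example with 4 evaluation nodes per surface
T_part = np.array([355.0, 360.0, 358.0, 352.0])      # node temperatures at end of cooling, K
A_elem = np.array([1.0, 1.2, 0.8, 1.0])              # element areas assigned to the nodes
cool_rate = np.array([20.0, 22.0, 18.0, 21.0])       # local cooling rates, K/s
t_solid = np.array([11.0, 10.5, 12.0, 11.2])         # local solidification times, s
print(quality_functional(T_part, T_ejec=353.0, A_elem_1=A_elem, A_tot_1=A_elem.sum(), w1=1.0,
                         cool_rate=cool_rate, cool_rate_mean=cool_rate.mean(),
                         t_solid=t_solid, t_solid_mean=t_solid.mean(),
                         A_elem_2=A_elem, A_tot_2=A_elem.sum(), w2=1.0))
```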
The inner offset is necessary, because at the interface of mold and melt, different cooling rates appear in steel and plastic, which lead to numerical instabilities in a gradient-based optimization. Based on the evaluation on these two surfaces, the optimization algorithm subtracts a temperature distribution on the mold surface such that the shown function becomes minimal.An isosurface can be extracted from the optimal thermal heat balance in the following. This isosurface represents the position with the given heat flux that can be provided by cooling channels at this position. Along these isosurfaces, a developed algorithm approximates the segments of cooling channels and creates connections between them [15,16]. By taking into account obstacles such as ejectors, cavity, and parting plane and smoothing for a flow-optimized course, the calculated cooling channel system can be used practically.2.2. OffsetsThere is extensive literature on how to compute offsets from triangle meshes with constant offset diameters [23,24,25,26,27,28,29]. The ideal mesh offset is defined as the surface that has a constant shortest distance to the input surface at every point. This is equivalent to the Minkowski sum of the input manifold and a sphere with the chosen constant offset radius [23]. Given two input objects P and Q, their Minkowski sum is given by M(P,Q)={p+q|p∈P,q∈Q}. For the offset, imagine that the sphere’s center is moved to every surface point of the input mesh.The approaches to compute constant radius offsets can be separated into three groups, including some hybrid approaches: Mesh Boolean-based, surface-based, and volumetric. Mesh Boolean-based approaches compute the Minkowski sum of the input object and a sphere. This can be solved by separating the sum into easier to compute primitives [28,29]: a sphere for each vertex, a cylinder for each edge, and a prism for each triangle. The spheres and the cylinders each have the constant offset as radius, while the prisms have the height of the offset and the triangle at the base. Unfortunately, these approaches inherit all the issues of Mesh Booleans in terms of precision, performance, and stability, while quickly creating highly complex problems at the same time [23].Surface-based approaches usually try to directly apply offsets to input primitives [24,26,30]. This means that either all vertices or all faces are moved in the normal direction by the constant offset. For example, Kim et al. [24] add multiple vertices for each vertex to create a smooth offset. They do not resolve self-intersection in concave regions, as that is handled later by their NC tool-path generation application. On the other hand, Jung et al. [27] aim to efficiently eliminate all self-intersection from a raw offset mesh.Finally, volumetric approaches evaluate the output surface in a grid and use isosurface extraction to extract the final offset mesh. The simplest candidate of volumetric approaches evaluates the signed distance to the input mesh in a regular grid and then uses an isosurface extraction algorithm such as marching cubes [31] to extract the boundary mesh at the desired offset radius. The difficulty lies in circumventing the O(n3) complexity of the regular grid as well as avoiding aliasing effects caused by the grid. 
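The volumetric constant-offset approach described above evaluates the distance to the input surface on a regular grid and extracts the offset as an isosurface, e.g. with marching cubes. A minimal sketch of that idea, assuming SciPy and scikit-image are available and using a random point sampling of a unit sphere as stand-in input geometry:

```python
import numpy as np
from scipy.spatial import cKDTree
from skimage.measure import marching_cubes

# Stand-in "input surface": random point samples on a unit sphere
rng = np.random.default_rng(0)
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# Regular grid over a bounding box large enough to contain the offset
offset = 0.3
n = 64
lin = np.linspace(-1.0 - 2 * offset, 1.0 + 2 * offset, n)
gx, gy, gz = np.meshgrid(lin, lin, lin, indexing="ij")
grid = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)

# Unsigned distance from every grid node to the sampled surface
dist, _ = cKDTree(pts).query(grid)
dist = dist.reshape(n, n, n)

# The constant-radius offset surface is the isosurface dist == offset
verts, faces, _, _ = marching_cubes(dist, level=offset)
print(f"offset mesh: {len(verts)} vertices, {len(faces)} triangles")
```

The cubic grid cost and the aliasing issues mentioned above are exactly what the adaptive-octree variant cited as [25] is described as addressing.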
More sophisticated volumetric approaches such as [25] use an adaptive octree and add additional per octree cell data such as local normal estimates that allow the creation of higher-quality offset meshes.Variable OffsetsWhile most previous work was done on algorithms to extract offset surfaces with a constant distance to the input mesh, there is little work on variable offset surfaces. The definition for a variable offset varies vastly: Musialski et al. [32] aim to change a shape by generating offset vectors that, when applied to the input positions, assure that the shape fulfills specific properties, such as e.g., a desired barycenter to assure the manufactured part swims with a fixed upright orientation while minimizing the geometric deviation from the original input part. To avoid self-intersections, they prohibit offset vectors from crossing estimates of the medial axis. Ross et al. [33] propose a variable face-offset based method: Each face is shifted by its own offset distance in normal direction. Their algorithm produces one, not necessarily triangular, output face per input face. Woerl et al. [34] offer a solution to render variable offsets where a radius is assigned to each vertex. Every point on each triangle is also assigned an offset radius by linear interpolation of the triangles’ three vertex radii. Similarly to some constant offset approaches, this allows them to separate the offset into three primitives: a sphere for each vertex, a cone frustrum for each edge, and an offset triangle that tangentially touches the three offset spheres. They apply ray tracing to evaluate the offsets.The approach of this work uses a similar offset definition as Woerl et al. [34]: the desired offset is defined for discrete values on the input surface. Instead of a radius for each input vertex, the input surface is rasterized, and a radius is defined for each rasterized sample point. Then, each sample point and its radius define an offset sphere. Then, the envelope surrounding the input is efficiently evaluated as union of all spheres using a regular volumetric voxel grid. Similarly to Musialski et al. [32], these offsets are prevented to grow across the medial axis, although for different reasons that are explained in Section 5.2. The offset radii are generated by first evaluating the local thickness for each rasterized sample and then mapping the thickness to a radius. The optimal mapping is determined experimentally. | [] | [] |
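The "Variable Offsets" paragraph above assigns an individual radius to each rasterized surface sample and evaluates the envelope of the resulting spheres on a regular voxel grid. A brute-force sketch of that union-of-spheres test follows; the sample curve, per-sample radii, and grid resolution are invented for illustration, and the paper's thickness-based radius mapping and medial-axis clamping are not reproduced:

```python
import numpy as np

# Stand-in rasterized surface samples: a unit circle in the z = 0 plane with per-sample radii
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
samples = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)   # (m, 3)
radii = 0.15 + 0.10 * (0.5 + 0.5 * np.cos(t))                          # variable offset radius per sample

# Regular voxel grid covering the samples plus the largest radius
n = 48
lin = np.linspace(-1.4, 1.4, n)
gx, gy, gz = np.meshgrid(lin, lin, lin, indexing="ij")
grid = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)                  # (n^3, 3)

# Union of spheres: a voxel lies inside the variable offset if min_i (||x - p_i|| - r_i) <= 0
min_signed = np.full(grid.shape[0], np.inf)
for p, r in zip(samples, radii):
    np.minimum(min_signed, np.linalg.norm(grid - p, axis=1) - r, out=min_signed)
inside = (min_signed <= 0.0).reshape(n, n, n)

print("voxels inside the variable offset:", int(inside.sum()))
```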
Micromachines | null | PMC8877205 | 10.3390/mi13020333 | A Portable Sign Language Collection and Translation Platform with Smart Watches Using a BLSTM-Based Multi-Feature Framework | Continuous sign language recognition (CSLR) using different types of sensors to precisely recognize sign language in real time is a very challenging but important research direction in sensor technology. Many previous methods are vision-based, with computationally intensive algorithms to process a large number of image/video frames possibly contaminated with noises, which can result in a large translation delay. On the other hand, gesture-based CSLR relying on hand movement data captured on wearable devices may require less computation resources and translation time. Thus, it is more efficient to provide instant translation during real-world communication. However, the insufficient amount of information provided by the wearable sensors often affect the overall performance of this system. To tackle this issue, we propose a bidirectional long short-term memory (BLSTM)-based multi-feature framework for conducting gesture-based CSLR precisely with two smart watches. In this framework, multiple sets of input features are extracted from the collected gesture data to provide a diverse spectrum of valuable information to the underlying BLSTM model for CSLR. To demonstrate the effectiveness of the proposed framework, we test it on an extremely challenging and radically new dataset of Hong Kong sign language (HKSL), in which hand movement data are collected from 6 individual signers for 50 different sentences. The experimental results reveal that the proposed framework attains a much lower word error rate compared with other existing machine learning or deep learning approaches for gesture-based CSLR. Based on this framework, we further propose a portable sign language collection and translation platform, which can simplify the procedure of collecting gesture-based sign language dataset and recognize sign language through smart watch data in real time, in order to break the communication barrier for the sign language users. | 2. Related WorkTo facilitate the interaction between the deaf and hearing people, significant research has been conducted on applying different types of sensor technologies in gesture-based SLR. The first work in this field dates back to 1983, in which Grimes [24] used an electronic glove for recognizing finger-spellings. Since then, research has been conducted on applying different approaches and different devices in gesture-based SLR. In 2017, Ekiz et al. [25] firstly attempted to capture the hand movements of signers with smart watches and used dynamic time warping (DTW) to compute the distances between the gestures and the templates in different dimensions for SLR.In 2018, Kishore et al. [26] proposed a two-phase matching algorithm for isolated SLR with gloves and cameras in which they extracted the motion joints from signers and used a kernel matching algorithm to find the most likely sign in their database according to these motion joints. In 2018 as well, Lee et al. [27] designed a new wearable hand device for isolated sign language recognition in which there are five flex-sensors, two pressure sensors, and a three-axis inertial motion sensor. However, rather than using a matching algorithm, Lee et al. adopted a support vector machine (SVM) for classifying different signs.In 2019, Deriche et al. 
[28] utilized leap motions for SLR, and they performed the classification through two approaches: a Bayesian approach with a Gaussian mixture model, and a linear discriminant analysis approach. Similarly, in 2019, Kumar et al. [29] applied leap motions in sign language recognition. To achieve a high recognition accuracy, they adopted a modified LSTM model with an extra RESET gate in their work. Later in the same year, Hou et al. [30] proposed a new SignSpeaker system, in which they extracted the frequency domain features from smart watch data and fed them into the next LSTM layer for SLR. Instead of using any smart watches, Yu et al. [31] attached three types of sensors, including surface electromyography, accelerometers, and gyroscopes, onto the signers to collect their data when performing isolated sign language. After that, they applied a deep belief net to conduct SLR.In 2020, Pan et al. [32] developed a wireless multi-channel capacitive sensor for recognizing numbers from 1 to 9. In their proposed system, code-modulated signals are directly processed without any demodulation. A faster response time was thus achieved. Similarly, using capacitive sensors, Wong et al. [33] also proposed a capacitance-based glove to measure capacitance values from the electrodes placed on finger phalanges for sign language recognition. Based on this device, they extracted 15 features from the capacitive signals and compared the performance of support vector machine (SVM) with k-nearest neighbor (KNN) in classifying different alphabets according to these features.In 2021, Ramalingame et al. [34] developed a wearable band integrated with nano-composite pressure sensors. The sensors in their work consisted of homogeneously dispersed carbon nano-tubes in a polydimethylsiloxane polymer matrix prepared by an optimized synthesis process for actively monitoring the contractions/relaxations of muscles in the arm. In 2021 as well, Zhao et al. [35] introduced a sign language gesture recognition system that can differentiate fine-grained finger movements using photoplethysmography (PPG) and motion sensors. An accuracy of 98% was attained by their system when differentiating nine finger-level gestures in American Sign Language. In addition, many sensors that are not commonly used in our daily lives have also been applied for SLR, such as RF sensors [36,37] and thermal sensors [38,39].However, most of the aforementioned research only extracted a limited number of features from the raw data, which are not enough to fully exploit the potential capabilities of recognition models, especially for deep learning models. Little research has been conducted on improving the accuracy of CSLR by extracting multiple features from raw data to provide a diverse range of information to the underlying BLSTM model. To fill this research gap, we propose a pioneering BLSTM-based multi-feature framework, which extracts three sets of features from three different domains as the input features for the next BLSTM layer.In addition, although many existing works have reached decent performance in terms of recognition accuracy, most of them either remain at a theoretical level, or only support recognition for digits and letters, which are still far away from real-world communication.
To address this issue, we further develop a portable sign language collection and translation platform using the proposed BLSTM-based multi-feature framework to translate continuous sign language in real time to facilitate communication between deaf people and others. | [
"18586624",
"28026761",
"18267698",
"29060883",
"26886976",
"9377276"
] | [
{
"pmid": "18586624",
"title": "A self-organizing approach to background subtraction for visual surveillance applications.",
"abstract": "Detection of moving objects in video streams is the first relevant step of information extraction in many computer vision applications. Aside from the intrinsic usefulness of being able to segment video streams into moving and background components, detecting moving objects provides a focus of attention for recognition, classification, and activity analysis, making these later steps more efficient. We propose an approach based on self organization through artificial neural networks, widely applied in human image processing systems and more generally in cognitive science. The proposed approach can handle scenes containing moving backgrounds, gradual illumination variations and camouflage, has no bootstrapping limitations, can include into the background model shadows cast by moving objects, and achieves robust detection for different types of videos taken with stationary cameras. We compare our method with other modeling techniques and report experimental results, both in terms of detection accuracy and in terms of processing speed, for color video sequences that represent typical situations critical for video surveillance systems."
},
{
"pmid": "28026761",
"title": "Detection of Stationary Foreground Objects Using Multiple Nonparametric Background-Foreground Models on a Finite State Machine.",
"abstract": "There is a huge proliferation of surveillance systems that require strategies for detecting different kinds of stationary foreground objects (e.g., unattended packages or illegally parked vehicles). As these strategies must be able to detect foreground objects remaining static in crowd scenarios, regardless of how long they have not been moving, several algorithms for detecting different kinds of such foreground objects have been developed over the last decades. This paper presents an efficient and high-quality strategy to detect stationary foreground objects, which is able to detect not only completely static objects but also partially static ones. Three parallel nonparametric detectors with different absorption rates are used to detect currently moving foreground objects, short-term stationary foreground objects, and long-term stationary foreground objects. The results of the detectors are fed into a novel finite state machine that classifies the pixels among background, moving foreground objects, stationary foreground objects, occluded stationary foreground objects, and uncovered background. Results show that the proposed detection strategy is not only able to achieve high quality in several challenging situations but it also improves upon previous strategies."
},
{
"pmid": "18267698",
"title": "Glove-Talk: a neural network interface between a data-glove and a speech synthesizer.",
"abstract": "To illustrate the potential of multilayer neural networks for adaptive interfaces, a VPL Data-Glove connected to a DECtalk speech synthesizer via five neural networks was used to implement a hand-gesture to speech system. Using minor variations of the standard backpropagation learning procedure, the complex mapping of hand movements to speech is learned using data obtained from a single ;speaker' in a simple training phase. With a 203 gesture-to-word vocabulary, the wrong word is produced less than 1% of the time, and no word is produced about 5% of the time. Adaptive control of the speaking rate and word stress is also available. The training times and final performance speed are improved by using small, separate networks for each naturally defined subtask. The system demonstrates that neural networks can be used to develop the complex mappings required in a high bandwidth interface that adapts to the individual user."
},
{
"pmid": "29060883",
"title": "A wearable hand gesture recognition device based on acoustic measurements at wrist.",
"abstract": "This paper investigates hand gesture recognition from acoustic measurements at wrist for the development of a low-cost wearable human-computer interaction (HCI) device. A prototype with 5 microphone sensors on human wrist is benchmarked in hand gesture recognition performance by identifying 36 gestures in American Sign Language (ASL). Three subjects were recruited to perform over 20 trials for each set of hand gestures, including 26 ASL alphabets and 10 ASL numbers. Ten features were extracted from the signal recorded by each sensor. Support Vector Machine (SVM), Decision Tree (DT), K-Nearest Neighbors (kNN), and Linear Discriminant Analysis (LDA) were compared in classification performance. Among which, LDA offered the highest average classification accuracy above 80%. Based on these preliminary results, our proposed technique has exhibited a promising means for developing a low-cost HCI."
},
{
"pmid": "26886976",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning.",
"abstract": "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
}
] |
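The LSTM reference abstract above describes gated recurrent units whose constant error flow lets gradients survive long time lags; the short PyTorch sketch below illustrates how such a recurrent model is typically wired as a sequence classifier, e.g., for the kind of sensor-sequence-to-gesture mapping discussed in the preceding reference abstracts. It is a minimal illustration only: the layer sizes, the class count (36, loosely echoing the ASL gesture set mentioned above), and all variable names are assumptions and are not taken from any of the cited works.

```python
# Minimal sketch (assumed sizes): an LSTM-based sequence classifier in PyTorch.
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, n_features: int = 10, n_hidden: int = 64, n_classes: int = 36):
        super().__init__()
        # The LSTM gate units control what is written to / read from the cell state,
        # which is what preserves error flow over long sequences.
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features); h_n holds the last hidden state of each sequence.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])          # (batch, n_classes) class scores

if __name__ == "__main__":
    model = SequenceClassifier()
    dummy = torch.randn(8, 50, 10)         # 8 sequences, 50 time steps, 10 features each
    print(model(dummy).shape)              # torch.Size([8, 36])
```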
Journal of Personalized Medicine | null | PMC8877642 | 10.3390/jpm12020190 | Verification of De-Identification Techniques for Personal Information Using Tree-Based Methods with Shapley Values | With the development of big data and cloud computing technologies, the importance of pseudonym information has grown. However, the tools for verifying whether the de-identification methodology is correctly applied to ensure data confidentiality and usability are insufficient. This paper proposes a verification of de-identification techniques for personal healthcare information by considering data confidentiality and usability. Data are generated and preprocessed by considering the actual statistical data, personal information datasets, and de-identification datasets based on medical data to represent the de-identification technique as a numeric dataset. Five tree-based regression models (i.e., decision tree, random forest, gradient boosting machine, extreme gradient boosting, and light gradient boosting machine) are constructed using the de-identification dataset to effectively discover nonlinear relationships between dependent and independent variables in numerical datasets. Then, the most effective model is selected from personal information data in which pseudonym processing is essential for data utilization. The Shapley additive explanation, an explainable artificial intelligence technique, is applied to the most effective model to establish pseudonym processing policies and machine learning to present a machine-learning process that selects an appropriate de-identification methodology. | 2. Related Work2.1. Data UsabilityBloland and MacNeil [35] provided a definitional framework to untangle many of the variables that have previously confounded conversations about the quality of vaccination data. The framework classifies immunization data into three categories: data quality, usability, and utilization. The framework also provides tangible recommendations for a specific set of indicators that could better identify the important qualities of immunization, such as trueness, concurrence, relevancy, completeness, timeliness, integrity, consistency, and utilization. Silsand et al. [36] conducted a formative review of an empirical project in North Norway using a qualitative trailing research approach paired with information infrastructure theory. Parts of the clinical information in the electronic health record (EHR) were formatted as openEHR archetypes in this project to enable automatic data to be reused from the EHR system in a national medical quality registry. They investigated the design problems that arise from organizing clinical information for various uses. As a result, they identified three critical concerns to fix: (1) the need for context when reusing variables, (2) how to verify reusing the correct data, and (3) the difficulties of granulating the variables. The most critical prerequisites for increasing data usability through clinical information structuring were governance and competency. Wait [37] assisted in developing attainable data quality objectives and insight into obtaining reliable results that adequately support the findings when reviewed by others.Adnan et al. [38] undertook a rigorous systematic review of the literature using the preferred reporting item for systematic reviews and meta-analyses (PRISMA) framework to construct a model to improve the usability of unstructured data and bridge the research gap. 
The most recent methodologies and solutions for text analytics were thoroughly studied, and concerns regarding the usability of unstructured text data, along with their implications for preparing data for analytics, were identified. The usability enhancement methodology incorporates the definition of usability dimensions for unstructured big data, the discovery of usability determinants, and the development of a relationship between usability dimensions and determinants to produce usability rules. Their proposed model contributes to the usability of unstructured data and simplifies data preparation operations with more valuable data, which ultimately improves the analytical process. They also identified usability difficulties of unstructured big data for the analytical process in order to bridge the identified gap [39]. A usability enhancement approach for unstructured big data was presented to improve the subjective and objective efficacy of unstructured big data for data preparation and manipulation operations. Furthermore, idea mapping, combined with usability principles, was a crucial component of the suggested model for improving the usability of unstructured big data. These principles bridged the usability gap between the availability of data and their usefulness for the intended purpose. The proposed study methodology could help improve the efficiency of unstructured big data analytics.2.2. Data ConfidentialityJavid et al. [40] underlined the difficulties of cyber security and data privacy, and of developing solutions to them, when adopting Industry 4.0 in the healthcare industry. For example, a reduction in the attack surface is required to seamlessly integrate complex computational algorithms, such as those used in cryptography. This issue can be addressed by employing Cloudlet technology, which uses virtual machines near the mobile device to assist with preliminary big data analysis for wireless body area networks. Furthermore, the authors suggested several possibilities offered by Cloudlet technology for future research, such as supported remote robotic surgery. Domingo-Ferrer et al. [41] used utility in the traditional sense of retaining the statistical properties of the original data. Specifically, they used the unified permutation-model perspective of anonymization to create constrained confidentiality metrics for microdata anonymization, based on the relative amounts of permutation undergone by the different attributes of a dataset. They presented experimental results demonstrating that their proposed metrics produce outcomes consistent with intuition for several anonymization approaches in the literature, including privacy models and statistical disclosure control methods based on noise and on generated data. Yuan et al. [42] proposed a comprehensive scheme that simultaneously achieves data privacy protection, data dynamics, and batch auditing in a public cloud storage environment. This scheme can safeguard data blocks during audits and effectively support data dynamics. Furthermore, third-party auditors can perform batch audits for many users. Finally, the security analysis demonstrated that the scheme is risk-free. Gai et al. [43] focused on privacy and proposed a novel data encryption strategy called the dynamic data encryption strategy (D2ES). They recommended selective data encryption and the use of privacy categorization methods under time restrictions. This approach was designed to maximize the extent of privacy protection by employing a selective encryption mechanism within the execution time constraints.
In their experiments, the performance of D2ES was examined, verifying the privacy enhancement. Bakir [44] developed a model that covers three key characteristics of information security for massive datasets: data confidentiality, integrity, and consistency. A more practical and adaptable structure was realized with a single labeling model for all database operations (reading, writing, updating, and deleting) on actual data, so that all operations were provided with the three key features. The outcomes of the proposed single-label model were compared with the application and experimental investigation that the author conducted, and the findings are encouraging for further research. However, no previous work on personal health information has assessed whether alias processing was successfully performed using a de-identification methodology that considers data confidentiality and usability for appropriate data utilization. In addition, the existing studies did not make it easy to determine which variables had an effect when verifying data de-identification. | [
"33611870",
"32218872",
"32284619",
"30616584",
"18652655",
"20678228",
"33430240",
"34834442",
"34357096",
"33079674",
"32607472",
"30947703",
"34822693"
] | [
{
"pmid": "32218872",
"title": "Communicable Disease Surveillance Ethics in the Age of Big Data and New Technology.",
"abstract": "Surveillance is essential for communicable disease prevention and control. Traditional notification of demographic and clinical information, about individuals with selected (notifiable) infectious diseases, allows appropriate public health action and is protected by public health and privacy legislation, but is slow and insensitive. Big data-based electronic surveillance, by commercial bodies and government agencies (for profit or population control), which draws on a plethora of internet- and mobile device-based sources, has been widely accepted, if not universally welcomed. Similar anonymous digital sources also contain syndromic information, which can be analysed, using customised algorithms, to rapidly predict infectious disease outbreaks, but the data are nonspecific and predictions sometimes misleading. However, public health authorities could use these online sources, in combination with de-identified personal health data, to provide more accurate and earlier warning of infectious disease events-including exotic or emerging infections-even before the cause is confirmed, and allow more timely public health intervention. Achieving optimal benefits would require access to selected data from personal electronic health and laboratory (including pathogen genomic) records and the potential to (confidentially) re-identify individuals found to be involved in outbreaks, to ensure appropriate care and infection control. Despite existing widespread digital surveillance and major potential community benefits of extending its use to communicable disease control, there is considerable public disquiet about allowing public health authorities access to personal health data. Informed public discussion, greater transparency and an ethical framework will be essential to build public trust in the use of new technology for communicable disease control."
},
{
"pmid": "30616584",
"title": "A clinical text classification paradigm using weak supervision and deep representation.",
"abstract": "BACKGROUND\nAutomatic clinical text classification is a natural language processing (NLP) technology that unlocks information embedded in clinical narratives. Machine learning approaches have been shown to be effective for clinical text classification tasks. However, a successful machine learning model usually requires extensive human efforts to create labeled training data and conduct feature engineering. In this study, we propose a clinical text classification paradigm using weak supervision and deep representation to reduce these human efforts.\n\n\nMETHODS\nWe develop a rule-based NLP algorithm to automatically generate labels for the training data, and then use the pre-trained word embeddings as deep representation features for training machine learning models. Since machine learning is trained on labels generated by the automatic NLP algorithm, this training process is called weak supervision. We evaluat the paradigm effectiveness on two institutional case studies at Mayo Clinic: smoking status classification and proximal femur (hip) fracture classification, and one case study using a public dataset: the i2b2 2006 smoking status classification shared task. We test four widely used machine learning models, namely, Support Vector Machine (SVM), Random Forest (RF), Multilayer Perceptron Neural Networks (MLPNN), and Convolutional Neural Networks (CNN), using this paradigm. Precision, recall, and F1 score are used as metrics to evaluate performance.\n\n\nRESULTS\nCNN achieves the best performance in both institutional tasks (F1 score: 0.92 for Mayo Clinic smoking status classification and 0.97 for fracture classification). We show that word embeddings significantly outperform tf-idf and topic modeling features in the paradigm, and that CNN captures additional patterns from the weak supervision compared to the rule-based NLP algorithms. We also observe two drawbacks of the proposed paradigm that CNN is more sensitive to the size of training data, and that the proposed paradigm might not be effective for complex multiclass classification tasks.\n\n\nCONCLUSION\nThe proposed clinical text classification paradigm could reduce human efforts of labeled training data creation and feature engineering for applying machine learning to clinical text classification by leveraging weak supervision and deep representation. The experimental experiments have validated the effectiveness of paradigm by two institutional and one shared clinical text classification tasks."
},
{
"pmid": "18652655",
"title": "Automated de-identification of free-text medical records.",
"abstract": "BACKGROUND\nText-based patient medical records are a vital resource in medical research. In order to preserve patient confidentiality, however, the U.S. Health Insurance Portability and Accountability Act (HIPAA) requires that protected health information (PHI) be removed from medical records before they can be disseminated. Manual de-identification of large medical record databases is prohibitively expensive, time-consuming and prone to error, necessitating automatic methods for large-scale, automated de-identification.\n\n\nMETHODS\nWe describe an automated Perl-based de-identification software package that is generally usable on most free-text medical records, e.g., nursing notes, discharge summaries, X-ray reports, etc. The software uses lexical look-up tables, regular expressions, and simple heuristics to locate both HIPAA PHI, and an extended PHI set that includes doctors' names and years of dates. To develop the de-identification approach, we assembled a gold standard corpus of re-identified nursing notes with real PHI replaced by realistic surrogate information. This corpus consists of 2,434 nursing notes containing 334,000 words and a total of 1,779 instances of PHI taken from 163 randomly selected patient records. This gold standard corpus was used to refine the algorithm and measure its sensitivity. To test the algorithm on data not used in its development, we constructed a second test corpus of 1,836 nursing notes containing 296,400 words. The algorithm's false negative rate was evaluated using this test corpus.\n\n\nRESULTS\nPerformance evaluation of the de-identification software on the development corpus yielded an overall recall of 0.967, precision value of 0.749, and fallout value of approximately 0.002. On the test corpus, a total of 90 instances of false negatives were found, or 27 per 100,000 word count, with an estimated recall of 0.943. Only one full date and one age over 89 were missed. No patient names were missed in either corpus.\n\n\nCONCLUSION\nWe have developed a pattern-matching de-identification system based on dictionary look-ups, regular expressions, and heuristics. Evaluation based on two different sets of nursing notes collected from a U.S. hospital suggests that, in terms of recall, the software out-performs a single human de-identifier (0.81) and performs at least as well as a consensus of two human de-identifiers (0.94). The system is currently tuned to de-identify PHI in nursing notes and discharge summaries but is sufficiently generalized and can be customized to handle text files of any format. Although the accuracy of the algorithm is high, it is probably insufficient to be used to publicly disseminate medical data. The open-source de-identification software and the gold standard re-identified corpus of medical records have therefore been made available to researchers via the PhysioNet website to encourage improvements in the algorithm."
},
{
"pmid": "20678228",
"title": "Automatic de-identification of textual documents in the electronic health record: a review of recent research.",
"abstract": "BACKGROUND\nIn the United States, the Health Insurance Portability and Accountability Act (HIPAA) protects the confidentiality of patient data and requires the informed consent of the patient and approval of the Internal Review Board to use data for research purposes, but these requirements can be waived if data is de-identified. For clinical data to be considered de-identified, the HIPAA \"Safe Harbor\" technique requires 18 data elements (called PHI: Protected Health Information) to be removed. The de-identification of narrative text documents is often realized manually, and requires significant resources. Well aware of these issues, several authors have investigated automated de-identification of narrative text documents from the electronic health record, and a review of recent research in this domain is presented here.\n\n\nMETHODS\nThis review focuses on recently published research (after 1995), and includes relevant publications from bibliographic queries in PubMed, conference proceedings, the ACM Digital Library, and interesting publications referenced in already included papers.\n\n\nRESULTS\nThe literature search returned more than 200 publications. The majority focused only on structured data de-identification instead of narrative text, on image de-identification, or described manual de-identification, and were therefore excluded. Finally, 18 publications describing automated text de-identification were selected for detailed analysis of the architecture and methods used, the types of PHI detected and removed, the external resources used, and the types of clinical documents targeted. All text de-identification systems aimed to identify and remove person names, and many included other types of PHI. Most systems used only one or two specific clinical document types, and were mostly based on two different groups of methodologies: pattern matching and machine learning. Many systems combined both approaches for different types of PHI, but the majority relied only on pattern matching, rules, and dictionaries.\n\n\nCONCLUSIONS\nIn general, methods based on dictionaries performed better with PHI that is rarely mentioned in clinical text, but are more difficult to generalize. Methods based on machine learning tend to perform better, especially with PHI that is not mentioned in the dictionaries used. Finally, the issues of anonymization, sufficient performance, and \"over-scrubbing\" are discussed in this publication."
},
{
"pmid": "33430240",
"title": "How Do Machines Learn? Artificial Intelligence as a New Era in Medicine.",
"abstract": "With an increased number of medical data generated every day, there is a strong need for reliable, automated evaluation tools. With high hopes and expectations, machine learning has the potential to revolutionize many fields of medicine, helping to make faster and more correct decisions and improving current standards of treatment. Today, machines can analyze, learn, communicate, and understand processed data and are used in health care increasingly. This review explains different models and the general process of machine learning and training the algorithms. Furthermore, it summarizes the most useful machine learning applications and tools in different branches of medicine and health care (radiology, pathology, pharmacology, infectious diseases, personalized decision making, and many others). The review also addresses the futuristic prospects and threats of applying artificial intelligence as an advanced, automated medicine tool."
},
{
"pmid": "34834442",
"title": "Artificial Intelligence and Its Application to Minimal Hepatic Encephalopathy Diagnosis.",
"abstract": "Hepatic encephalopathy (HE) is a brain dysfunction caused by liver insufficiency and/or portosystemic shunting. HE manifests as a spectrum of neurological or psychiatric abnormalities. Diagnosis of overt HE (OHE) is based on the typical clinical manifestation, but covert HE (CHE) has only very subtle clinical signs and minimal HE (MHE) is detected only by specialized time-consuming psychometric tests, for which there is still no universally accepted gold standard. Significant progress has been made in artificial intelligence and its application to medicine. In this review, we introduce how artificial intelligence has been used to diagnose minimal hepatic encephalopathy thus far, and we discuss its further potential in analyzing speech and handwriting data, which are probably the most accessible data for evaluating the cognitive state of the patient."
},
{
"pmid": "34357096",
"title": "Automatic Segmentation of Mandible from Conventional Methods to Deep Learning-A Review.",
"abstract": "Medical imaging techniques, such as (cone beam) computed tomography and magnetic resonance imaging, have proven to be a valuable component for oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step in order to build a personalized 3D digital mandible model for 3D printing and treatment planning of OMFS. Segmented mandible structures are used to effectively visualize the mandible volumes and to evaluate particular mandible properties quantitatively. However, mandible segmentation is always challenging for both clinicians and researchers, due to complex structures and higher attenuation materials, such as teeth (filling) or metal implants that easily lead to high noise and strong artifacts during scanning. Moreover, the size and shape of the mandible vary to a large extent between individuals. Therefore, mandible segmentation is a tedious and time-consuming task and requires adequate training to be performed properly. With the advancement of computer vision approaches, researchers have developed several algorithms to automatically segment the mandible during the last two decades. The objective of this review was to present the available fully (semi)automatic segmentation methods of the mandible published in different scientific articles. This review provides a vivid description of the scientific advancements to clinicians and researchers in this field to help develop novel automatic methods for clinical applications."
},
{
"pmid": "33079674",
"title": "A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI.",
"abstract": "Recently, artificial intelligence and machine learning in general have demonstrated remarkable performances in many tasks, from image processing to natural language processing, especially with the advent of deep learning (DL). Along with research progress, they have encroached upon many different fields and disciplines. Some of them require high level of accountability and thus transparency, for example, the medical sector. Explanations for machine decisions and predictions are thus needed to justify their reliability. This requires greater interpretability, which often means we need to understand the mechanism underlying the algorithms. Unfortunately, the blackbox nature of the DL is still unresolved, and many machine decisions are still poorly understood. We provide a review on interpretabilities suggested by different research works and categorize them. The different categories show different dimensions in interpretability research, from approaches that provide \"obviously\" interpretable information to the studies of complex patterns. By applying the same categorization to interpretability in medical research, it is hoped that: 1) clinicians and practitioners can subsequently approach these methods with caution; 2) insight into interpretability will be born with more considerations for medical practices; and 3) initiatives to push forward data-based, mathematically grounded, and technically grounded medical education are encouraged."
},
{
"pmid": "32607472",
"title": "From Local Explanations to Global Understanding with Explainable AI for Trees.",
"abstract": "Tree-based machine learning models such as random forests, decision trees, and gradient boosted trees are popular non-linear predictive models, yet comparatively little attention has been paid to explaining their predictions. Here, we improve the interpretability of tree-based models through three main contributions: 1) The first polynomial time algorithm to compute optimal explanations based on game theory. 2) A new type of explanation that directly measures local feature interaction effects. 3) A new set of tools for understanding global model structure based on combining many local explanations of each prediction. We apply these tools to three medical machine learning problems and show how combining many high-quality local explanations allows us to represent global structure while retaining local faithfulness to the original model. These tools enable us to i) identify high magnitude but low frequency non-linear mortality risk factors in the US population, ii) highlight distinct population sub-groups with shared risk characteristics, iii) identify non-linear interaction effects among risk factors for chronic kidney disease, and iv) monitor a machine learning model deployed in a hospital by identifying which features are degrading the model's performance over time. Given the popularity of tree-based machine learning models, these improvements to their interpretability have implications across a broad set of domains."
},
{
"pmid": "30947703",
"title": "Defining & assessing the quality, usability, and utilization of immunization data.",
"abstract": "BACKGROUND\nHigh quality data are needed for decision-making at all levels of the public health system, from guiding public health activities at the local level, to informing national policy development, to monitoring the impact of global initiatives. Although a number of approaches have been developed to evaluate the underlying quality of routinely collected vaccination administrative data, there remains a lack of consensus around how data quality is best defined or measured.\n\n\nDISCUSSION\nWe present a definitional framework that is intended to disentangle many of the elements that have confused discussions of vaccination data quality to date. The framework describes immunization data in terms of three key characteristics: data quality, data usability, and data utilization. The framework also offers concrete suggestions for a specific set of indicators that could be used to better understand immunization those key characteristics, including Trueness, Concurrence, Relevancy, Efficiency, Completeness, Timeliness, Integrity, Consistency, and Utilization.\n\n\nCONCLUSION\nBeing deliberate about the choice of indicators; being clear on their definitions, limitations, and methods of measurement; and describing how those indicators work together to give a more comprehensive and practical understanding of immunization data quality, usability, and use, should yield more informed, and therefore better, programmatic decision-making."
},
{
"pmid": "34822693",
"title": "The Importance of Data Reliability and Usability When Assessing Impacts of Marine Mineral Oil Spills.",
"abstract": "Spilled mineral oils in the marine environment pose a number of challenges to sampling and analysis. Mineral oils are complex assemblages of hydrocarbons and additives, the composition of which can vary considerably depending on the source oil and product specifications. Further, the marine microbial and chemical environment can be harsh and variable over short times and distances, producing a rigorous source of hydrocarbon degradation of a mineral oil assemblage. Researchers must ensure that any measurements used to determine the nature and extent of the oil release, the fate and transport of the mineral oil constituents, and any resultant toxicological effects are derived using representative data that adhere to the study's data quality objectives (DQOs). The purpose of this paper is to provide guidance for crafting obtainable DQOs and provide insights into producing reliable results that properly underpin researchers' findings when scrutinized by others."
}
] |
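The abstract of the preceding record describes fitting several tree-based regression models to a numeric de-identification dataset, selecting the most effective one, and then applying Shapley additive explanations (SHAP) to it. The sketch below is a minimal, hypothetical illustration of that workflow using scikit-learn and the shap package; the synthetic target, the feature names (k_anonymity, suppression_rate, generalization_level), and the restriction to two of the five model families are assumptions for illustration, not the authors' dataset or code.

```python
# Minimal sketch (assumed data and feature names): compare tree-based regressors,
# then explain the best one with SHAP, mirroring the workflow described above.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "k_anonymity": rng.integers(2, 50, 500),          # hypothetical de-identification features
    "suppression_rate": rng.random(500),
    "generalization_level": rng.integers(0, 5, 500),
})
# Synthetic stand-in for a de-identification quality score.
y = 0.5 * X["suppression_rate"] + 0.1 * X["generalization_level"] + rng.normal(0, 0.05, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "random_forest": RandomForestRegressor(random_state=0),
    "gbm": GradientBoostingRegressor(random_state=0),
}
scores = {name: r2_score(y_te, m.fit(X_tr, y_tr).predict(X_te)) for name, m in models.items()}
best = max(scores, key=scores.get)
print(scores, "-> best:", best)

# TreeExplainer yields per-feature Shapley values: each feature's contribution
# to an individual prediction of the selected model.
explainer = shap.TreeExplainer(models[best])
shap_values = explainer.shap_values(X_te)
print(shap_values.shape)   # (n_test_samples, n_features)
```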
Journal of Imaging | 35200726 | PMC8877769 | 10.3390/jimaging8020023 | Segmentation-Based vs. Regression-Based Biomarker Estimation: A Case Study of Fetus Head Circumference Assessment from Ultrasound Images | The fetus head circumference (HC) is a key biometric to monitor fetus growth during pregnancy, which is estimated from ultrasound (US) images. The standard approach to automatically measure the HC is to use a segmentation network to segment the skull, and then estimate the head contour length from the segmentation map via ellipse fitting, usually after post-processing. In this application, segmentation is just an intermediate step to the estimation of a parameter of interest. Another possibility is to estimate directly the HC with a regression network. Even if this type of segmentation-free approaches have been boosted with deep learning, it is not yet clear how well direct approach can compare to segmentation approaches, which are expected to be still more accurate. This observation motivates the present study, where we propose a fair, quantitative comparison of segmentation-based and segmentation-free (i.e., regression) approaches to estimate how far regression-based approaches stand from segmentation approaches. We experiment various convolutional neural networks (CNN) architectures and backbones for both segmentation and regression models and provide estimation results on the HC18 dataset, as well agreement analysis, to support our findings. We also investigate memory usage and computational efficiency to compare both types of approaches. The experimental results demonstrate that even if segmentation-based approaches deliver the most accurate results, regression CNN approaches are actually learning to find prominent features, leading to promising yet improvable HC estimation results. | 2. Related Works2.1. Fetus Head Circumference EstimationSeveral approaches have been proposed in the literature to measure the fetus head circumference in US images, based on image segmentation [11,12,13]. Some follow a two-step approach, namely fetus head localization and segmentation refinement [11]. For example, in [14], the first step consists of locating the fetus head with Haar-like features used to train a random forest classifier; and the second step consists of the measurement of the HC, via ellipse fitting and Hough transform. Other approaches build upon deep segmentation models also in a two-step process, contour prediction, and ellipse fitting [15]. In [16], the standard segmentation model U-Net [17] is trained using manually labeled images, and segmentation results are fitted to ellipses. In [18], authors build upon the same idea, using the multi-task learning paradigm to jointly segment the US image and estimate the ellipse parameters. In [19], the authors use first a region-proposal CNN for head localization, and a regression CNN trained on distance fields to segment the HC. Ref. [20] advances the work [19] since they propose a Mask-R2CNN neural network to perform HC distance-field regression for head delineation in an end-to-end way, which does not need prior HC localization or postprocessing for outlier removal. All these methods rely on a segmentation of the fetus head as a prerequisite to estimating the HC.2.2. Segmentation-Free Approaches for Biomarker EstimationWorks aimed at directly extracting biomarkers from medical images have gained traction these last years, especially thanks to advances in deep learning. 
The goal is to avoid intermediate steps, such as segmentation and other ad hoc post-processing steps, that may be computationally expensive (for both model training and image annotation) and prone to errors [5]. Direct parameter estimation with deep learning can be found in various medical applications; for example, in [5], the authors propose a learning-based approach to perform direct volume estimation of the cardiac left and right ventricles from magnetic resonance (MR) images, without segmentation. The approach consists of computing shape descriptors using a bag-of-words model and performing Bayesian estimation with regression forests. Ref. [6] utilizes regression forests to directly estimate the kidney volume in computed tomography images. Ref. [7] quantifies spine indices from MRI via a regression CNN with feature amplifiers. Ref. [8] proposes multi-task learning for the measurement of cardiac volumes from MRI. For vascular disease diagnosis, Ref. [9] quantifies six indices of coronary artery stenosis from X-ray images by using a multi-output regression CNN model with an attention mechanism. Preliminary results on the estimation of the head circumference in US images with regression CNN are presented in [10]. By taking advantage of the representation power of CNNs, one can now skip the feature design step and learn the features while at the same time estimating the value of interest, i.e., performing regression. Regression CNNs are also at the heart of other fields in computer vision, such as head-pose estimation [21], facial landmark detection [22], and human-body pose estimation [23]. | [
"22535628",
"21216179",
"32055950",
"31704452",
"32804646",
"28504954",
"15972198",
"15708464",
"30138319",
"31091515",
"33049451",
"34156608",
"33813286",
"30990175",
"26161953"
] | [
{
"pmid": "22535628",
"title": "Intra- and interobserver variability in fetal ultrasound measurements.",
"abstract": "OBJECTIVE\nTo assess intra- and interobserver variability of fetal biometry measurements throughout pregnancy.\n\n\nMETHODS\nA total of 175 scans (of 140 fetuses) were prospectively performed at 14-41 weeks of gestation ensuring an even distribution throughout gestation. From among three experienced sonographers, a pair of observers independently acquired a duplicate set of seven standard measurements for each fetus. Differences between and within observers were expressed in measurement units (mm), as a percentage of fetal dimensions and as gestational age-specific Z-scores. For all comparisons, Bland-Altman plots were used to quantify limits of agreement.\n\n\nRESULTS\nWhen using measurement units (mm) to express differences, both intra- and interobserver variability increased with gestational age. However, when measurement of variability took into account the increasing fetal size and was expressed as a percentage or Z-score, it remained constant throughout gestation. When expressed as a percentage or Z-score, the 95% limits of agreement for intraobserver difference for head circumference (HC) were ± 3.0% or 0.67; they were ± 5.3% or 0.90 and ± 6.6% or 0.94 for abdominal circumference (AC) and femur length (FL), respectively. The corresponding values for interobserver differences were ± 4.9% or 0.99 for HC, ± 8.8% or 1.35 for AC and ± 11.1% or 1.43 for FL.\n\n\nCONCLUSIONS\nAlthough intra- and interobserver variability increases with advancing gestation when expressed in millimeters, both are constant as a percentage of the fetal dimensions or when reported as a Z-score. Thus, measurement variability should be considered when interpreting fetal growth rates."
},
{
"pmid": "21216179",
"title": "A review of segmentation methods in short axis cardiac MR images.",
"abstract": "For the last 15 years, Magnetic Resonance Imaging (MRI) has become a reference examination for cardiac morphology, function and perfusion in humans. Yet, due to the characteristics of cardiac MR images and to the great variability of the images among patients, the problem of heart cavities segmentation in MRI is still open. This paper is a review of fully and semi-automated methods performing segmentation in short axis images using a cardiac cine MRI sequence. Medical background and specific segmentation difficulties associated to these images are presented. For this particularly complex segmentation task, prior knowledge is required. We thus propose an original categorization for cardiac segmentation methods, with a special emphasis on what level of external information is required (weak or strong) and how it is used to constrain segmentation. After reviewing method principles and analyzing segmentation results, we conclude with a discussion and future trends in this field regarding methodological and medical issues."
},
{
"pmid": "32055950",
"title": "Prognostic value of anthropometric measures extracted from whole-body CT using deep learning in patients with non-small-cell lung cancer.",
"abstract": "INTRODUCTION\nThe aim of the study was to extract anthropometric measures from CT by deep learning and to evaluate their prognostic value in patients with non-small-cell lung cancer (NSCLC).\n\n\nMETHODS\nA convolutional neural network was trained to perform automatic segmentation of subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and muscular body mass (MBM) from low-dose CT images in 189 patients with NSCLC who underwent pretherapy PET/CT. After a fivefold cross-validation in a subset of 35 patients, anthropometric measures extracted by deep learning were normalized to the body surface area (BSA) to control the various patient morphologies. VAT/SAT ratio and clinical parameters were included in a Cox proportional-hazards model for progression-free survival (PFS) and overall survival (OS).\n\n\nRESULTS\nInference time for a whole volume was about 3 s. Mean Dice similarity coefficients in the validation set were 0.95, 0.93, and 0.91 for SAT, VAT, and MBM, respectively. For PFS prediction, T-stage, N-stage, chemotherapy, radiation therapy, and VAT/SAT ratio were associated with disease progression on univariate analysis. On multivariate analysis, only N-stage (HR = 1.7 [1.2-2.4]; p = 0.006), radiation therapy (HR = 2.4 [1.0-5.4]; p = 0.04), and VAT/SAT ratio (HR = 10.0 [2.7-37.9]; p < 0.001) remained significant prognosticators. For OS, male gender, smoking status, N-stage, a lower SAT/BSA ratio, and a higher VAT/SAT ratio were associated with mortality on univariate analysis. On multivariate analysis, male gender (HR = 2.8 [1.2-6.7]; p = 0.02), N-stage (HR = 2.1 [1.5-2.9]; p < 0.001), and the VAT/SAT ratio (HR = 7.9 [1.7-37.1]; p < 0.001) remained significant prognosticators.\n\n\nCONCLUSION\nThe BSA-normalized VAT/SAT ratio is an independent predictor of both PFS and OS in NSCLC patients.\n\n\nKEY POINTS\n• Deep learning will make CT-derived anthropometric measures clinically usable as they are currently too time-consuming to calculate in routine practice. • Whole-body CT-derived anthropometrics in non-small-cell lung cancer are associated with progression-free survival and overall survival. • A priori medical knowledge can be implemented in the neural network loss function calculation."
},
{
"pmid": "31704452",
"title": "Commensal correlation network between segmentation and direct area estimation for bi-ventricle quantification.",
"abstract": "Accurate and automated cardiac bi-ventricle quantification based on cardiac magnetic resonance (CMR) image is a very crucial procedure for clinical cardiac disease diagnosis. Two traditional and commensal tasks, i.e., bi-ventricle segmentation and direct ventricle function index estimation, are always independently devoting to address ventricle quantification problem. However, because of inherent difficulties from the variable CMR imaging conditions, these two tasks are still open challenging. In this paper, we proposed a unified bi-ventricle quantification framework based on commensal correlation between the bi-ventricle segmentation and direct area estimation. Firstly, we proposed the area commensal correlation between the two traditional cardiac quantification tasks for the first time, and designed a novel deep commensal network (DCN) to join these two commensal tasks into a unified framework based on the proposed commensal correlation loss. Secondly, we proposed an differentiable area operator to model the proposed area commensal correlation and made the proposed model continuously differentiable. Thirdly, we proposed a high-efficiency and novel uncertainty estimation method through one-time inference based on cross-task output variability. And finally DCN achieved end-to-end optimization and fast convergence as well as uncertainty estimation with one-time inference. Experiments on the four open accessible short-axis CMR benchmark datasets (i.e., Sunnybrook, STACOM 2011, RVSC, and ACDC) showed that the proposed method achieves best bi-ventricle quantification accuracy and optimization performance. Hence, the proposed method has big potential to be extended to other medical image analysis tasks and has clinical application value."
},
{
"pmid": "32804646",
"title": "Direct Quantification of Coronary Artery Stenosis Through Hierarchical Attentive Multi-View Learning.",
"abstract": "Quantification of coronary artery stenosis on X-ray angiography (XRA) images is of great importance during the intraoperative treatment of coronary artery disease. It serves to quantify the coronary artery stenosis by estimating the clinical morphological indices, which are essential in clinical decision making. However, stenosis quantification is still a challenging task due to the overlapping, diversity and small-size region of the stenosis in the XRA images. While efforts have been devoted to stenosis quantification through low-level features, these methods have difficulty in learning the real mapping from these features to the stenosis indices. These methods are still cumbersome and unreliable for the intraoperative procedures due to their two-phase quantification, which depends on the results of segmentation or reconstruction of the coronary artery. In this work, we are proposing a hierarchical attentive multi-view learning model (HEAL) to achieve a direct quantification of coronary artery stenosis, without the intermediate segmentation or reconstruction. We have designed a multi-view learning model to learn more complementary information of the stenosis from different views. For this purpose, an intra-view hierarchical attentive block is proposed to learn the discriminative information of stenosis. Additionally, a stenosis representation learning module is developed to extract the multi-scale features from the keyframe perspective for considering the clinical workflow. Finally, the morphological indices are directly estimated based on the multi-view feature embedding. Extensive experiment studies on clinical multi-manufacturer dataset consisting of 228 subjects show the superiority of our HEAL against nine comparing methods, including direct quantification methods and multi-view learning methods. The experimental results demonstrate the better clinical agreement between the ground truth and the prediction, which endows our proposed method with a great potential for the efficient intraoperative treatment of coronary artery disease."
},
{
"pmid": "28504954",
"title": "Automatic Fetal Head Circumference Measurement in Ultrasound Using Random Forest and Fast Ellipse Fitting.",
"abstract": "Head circumference (HC) is one of the most important biometrics in assessing fetal growth during prenatal ultrasound examinations. However, the manual measurement of this biometric by doctors often requires substantial experience. We developed a learning-based framework that used prior knowledge and employed a fast ellipse fitting method (ElliFit) to measure HC automatically. We first integrated the prior knowledge about the gestational age and ultrasound scanning depth into a random forest classifier to localize the fetal head. We further used phase symmetry to detect the center line of the fetal skull and employed ElliFit to fit the HC ellipse for measurement. The experimental results from 145 HC images showed that our method had an average measurement error of 1.7 mm and outperformed traditional methods. The experimental results demonstrated that our method shows great promise for applications in clinical practice."
},
{
"pmid": "15972198",
"title": "Automated fetal head detection and measurement in ultrasound images by iterative randomized Hough transform.",
"abstract": "An image-processing and object-detection method was developed to automate the measurements of biparietal diameter (BPD) and head circumference (HC) in ultrasound fetal images. The heads in 214 of 217 images were detected by an iterative randomized Hough transform. A head was assumed to have an elliptical shape with parameters progressively estimated by the iterative randomized Hough transform. No user input or size range of the head was required. The detection and measurement took 1.6 s on a personal computer. The interrun variations of the algorithm were small at 0.84% for BPD and 2.08% for HC. The differences between the automatic measurements and sonographers' manual measurements were 0.12% for BPD and -0.52% for HC. The 95% limits of agreement were -3.34%, 3.58% for BPD and -5.50%, 4.45% for HC. The results demonstrated that the automatic measurements were consistent and accurate. This method provides a valuable tool for fetal examinations."
},
{
"pmid": "15708464",
"title": "Segmentation of fetal ultrasound images.",
"abstract": "This paper describes a new method for segmentation of fetal anatomic structures from echographic images. More specifically, we estimate and measure the contours of the femur and of cranial cross-sections of fetal bodies, which can thus be automatically measured. Contour estimation is formulated as a statistical estimation problem, where both the contour and the observation model parameters are unknown. The observation model (or likelihood function) relates, in probabilistic terms, the observed image with the underlying contour. This likelihood function is derived from a region-based statistical image model. The contour and the observation model parameters are estimated according to the maximum likelihood (ML) criterion, via deterministic iterative algorithms. Experiments reported in the paper, using synthetic and real images, testify for the adequacy and good performance of the proposed approach."
},
{
"pmid": "30138319",
"title": "Automated measurement of fetal head circumference using 2D ultrasound images.",
"abstract": "In this paper we present a computer aided detection (CAD) system for automated measurement of the fetal head circumference (HC) in 2D ultrasound images for all trimesters of the pregnancy. The HC can be used to estimate the gestational age and monitor growth of the fetus. Automated HC assessment could be valuable in developing countries, where there is a severe shortage of trained sonographers. The CAD system consists of two steps: First, Haar-like features were computed from the ultrasound images to train a random forest classifier to locate the fetal skull. Secondly, the HC was extracted using Hough transform, dynamic programming and an ellipse fit. The CAD system was trained on 999 images and validated on an independent test set of 335 images from all trimesters. The test set was manually annotated by an experienced sonographer and a medical researcher. The reference gestational age (GA) was estimated using the crown-rump length measurement (CRL). The mean difference between the reference GA and the GA estimated by the experienced sonographer was 0.8 ± 2.6, -0.0 ± 4.6 and 1.9 ± 11.0 days for the first, second and third trimester, respectively. The mean difference between the reference GA and the GA estimated by the medical researcher was 1.6 ± 2.7, 2.0 ± 4.8 and 3.9 ± 13.7 days. The mean difference between the reference GA and the GA estimated by the CAD system was 0.6 ± 4.3, 0.4 ± 4.7 and 2.5 ± 12.4 days. The results show that the CAD system performs comparable to an experienced sonographer. The presented system shows similar or superior results compared to systems published in literature. This is the first automated system for HC assessment evaluated on a large test set which contained data of all trimesters of the pregnancy."
},
{
"pmid": "31091515",
"title": "Automatic evaluation of fetal head biometry from ultrasound images using machine learning.",
"abstract": "OBJECTIVE\nUltrasound-based fetal biometric measurements, such as head circumference (HC) and biparietal diameter (BPD), are frequently used to evaluate gestational age and diagnose fetal central nervous system pathology. Because manual measurements are operator-dependent and time-consuming, much research is being actively conducted on automated methods. However, the existing automated methods are still not satisfactory in terms of accuracy and reliability, owing to difficulties dealing with various artefacts in ultrasound images.\n\n\nAPPROACH\nUsing the proposed method, a labeled dataset containing 102 ultrasound images was used for training, and validation was performed with 70 ultrasound images.\n\n\nMAIN RESULTS\nA success rate of 91.43% and 100% for HC and BPD estimations, respectively, and an accuracy of 87.14% for the plane acceptance check.\n\n\nSIGNIFICANCE\nThis paper focuses on fetal head biometry and proposes a deep-learning-based method for estimating HC and BPD with a high degree of accuracy and reliability."
},
{
"pmid": "33049451",
"title": "A regression framework to head-circumference delineation from US fetal images.",
"abstract": "BACKGROUND AND OBJECTIVES\nMeasuring head-circumference (HC) length from ultrasound (US) images is a crucial clinical task to assess fetus growth. To lower intra- and inter-operator variability in HC length measuring, several computer-assisted solutions have been proposed in the years. Recently, a large number of deep-learning approaches is addressing the problem of HC delineation through the segmentation of the whole fetal head via convolutional neural networks (CNNs). Since the task is a edge-delineation problem, we propose a different strategy based on regression CNNs.\n\n\nMETHODS\nThe proposed framework consists of a region-proposal CNN for head localization and centering, and a regression CNN for accurately delineate the HC. The first CNN is trained exploiting transfer learning, while we propose a training strategy for the regression CNN based on distance fields.\n\n\nRESULTS\nThe framework was tested on the HC18 Challenge dataset, which consists of 999 training and 335 testing images. A mean absolute difference of 1.90 ( ± 1.76) mm and a Dice similarity coefficient of 97.75 ( ± 1.32) % were achieved, overcoming approaches in the literature.\n\n\nCONCLUSIONS\nThe experimental results showed the effectiveness of the proposed framework, proving its potential in supporting clinicians during the clinical practice."
},
{
"pmid": "34156608",
"title": "Mask-R[Formula: see text]CNN: a distance-field regression version of Mask-RCNN for fetal-head delineation in ultrasound images.",
"abstract": "BACKGROUND AND OBJECTIVES\nFetal head-circumference (HC) measurement from ultrasound (US) images provides useful hints for assessing fetal growth. Such measurement is performed manually during the actual clinical practice, posing issues relevant to intra- and inter-clinician variability. This work presents a fully automatic, deep-learning-based approach to HC delineation, which we named Mask-R[Formula: see text]CNN. It advances our previous work in the field and performs HC distance-field regression in an end-to-end fashion, without requiring a priori HC localization nor any postprocessing for outlier removal.\n\n\nMETHODS\nMask-R[Formula: see text]CNN follows the Mask-RCNN architecture, with a backbone inspired by feature-pyramid networks, a region-proposal network and the ROI align. The Mask-RCNN segmentation head is here modified to regress the HC distance field.\n\n\nRESULTS\nMask-R[Formula: see text]CNN was tested on the HC18 Challenge dataset, which consists of 999 training and 335 testing images. With a comprehensive ablation study, we showed that Mask-R[Formula: see text]CNN achieved a mean absolute difference of 1.95 mm (standard deviation [Formula: see text] mm), outperforming other approaches in the literature.\n\n\nCONCLUSIONS\nWith this work, we proposed an end-to-end model for HC distance-field regression. With our experimental results, we showed that Mask-R[Formula: see text]CNN may be an effective support for clinicians for assessing fetal growth."
},
{
"pmid": "33813286",
"title": "Loss odyssey in medical image segmentation.",
"abstract": "The loss function is an important component in deep learning-based segmentation methods. Over the past five years, many loss functions have been proposed for various segmentation tasks. However, a systematic study of the utility of these loss functions is missing. In this paper, we present a comprehensive review of segmentation loss functions in an organized manner. We also conduct the first large-scale analysis of 20 general loss functions on four typical 3D segmentation tasks involving six public datasets from 10+ medical centers. The results show that none of the losses can consistently achieve the best performance on the four segmentation tasks, but compound loss functions (e.g. Dice with TopK loss, focal loss, Hausdorff distance loss, and boundary loss) are the most robust losses. Our code and segmentation results are publicly available and can serve as a loss function benchmark. We hope this work will also provide insights on new loss function development for the community."
},
{
"pmid": "30990175",
"title": "A Comprehensive Analysis of Deep Regression.",
"abstract": "Deep learning revolutionized data science, and recently its popularity has grown exponentially, as did the amount of papers employing deep networks. Vision tasks, such as human pose estimation, did not escape from this trend. There is a large number of deep models, where small changes in the network architecture, or in the data pre-processing, together with the stochastic nature of the optimization procedures, produce notably different results, making extremely difficult to sift methods that significantly outperform others. This situation motivates the current study, in which we perform a systematic evaluation and statistical analysis of vanilla deep regression, i.e., convolutional neural networks with a linear regression top layer. This is the first comprehensive analysis of deep regression techniques. We perform experiments on four vision problems, and report confidence intervals for the median performance as well as the statistical significance of the results, if any. Surprisingly, the variability due to different data pre-processing procedures generally eclipses the variability due to modifications in the network architecture. Our results reinforce the hypothesis according to which, in general, a general-purpose network (e.g., VGG-16 or ResNet-50) adequately tuned can yield results close to the state-of-the-art without having to resort to more complex and ad-hoc regression models."
},
{
"pmid": "26161953",
"title": "On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.",
"abstract": "Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are solving very successfully a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package."
}
] |
Journal of Imaging | 35200745 | PMC8877883 | 10.3390/jimaging8020043 | A Boosted Minimum Cross Entropy Thresholding for Medical Images Segmentation Based on Heterogeneous Mean Filters Approaches | Computer vision plays an important role in the accurate foreground detection of medical images. Diagnosing diseases in their early stages has effective life-saving potential, and this is every physician’s goal. There is a positive relationship between improving image segmentation methods and precise diagnosis in medical images. This relation provides a profound indication for feature extraction in a segmented image, such that an accurate separation occurs between the foreground and the background. There are many thresholding-based segmentation methods found under the pure image processing approach. Minimum cross entropy thresholding (MCET) is one of the frequently used mean-based thresholding methods for medical image segmentation. In this paper, the aim was to boost the efficiency of MCET, based on heterogeneous mean filter approaches. The proposed model estimates an optimized mean by excluding the negative influence of noise, local outliers, and gray intensity levels; thus, obtaining new mean values for the MCET’s objective function. The proposed model was examined compared to the original and related methods, using three types of medical image dataset. It was able to show accurate results based on the performance measures, using the benchmark of unsupervised and supervised evaluation. | 2. Related Work
Image segmentation is an important process for the CAD system to detect various types of objects in medical images. Boosting segmentation efficiency provides reliable data that reflect accurate object detection. Such improvements could have a profound effect on the CAD system and increase its reliability. Nonetheless, medical image segmentation presents significant variations that make it a difficult task. Several thresholding-based segmentation methods have been presented in the literature. Some methods were dedicated to enhancing the accuracy of skin lesion detection [2,14,15,16,17]. Minimum cross-entropy thresholding was proposed by Li et al. [14] to improve the segmentation process by obtaining the optimal threshold from the optimized entropy. This technique is considered one of the most well-known entropy-based thresholding methods. It tries to minimize the variance between the two class entropies, based on the mean value in each region, as shown in its objective function in Equation (1), which represents the main process for finding the optimal threshold t* after minimizing n(t).
(1) n(t) = -\sum_{i=1}^{t} i \times h(i) \times \log(\mu_1(t)) - \sum_{i=t+1}^{L} i \times h(i) \times \log(\mu_2(t))
where h(i) refers to the histogram of the grey level i in the range [1, L]. The regions of the image are considered as two Gaussian distributions, such that the values of μ1(t) and μ2(t) are estimated in Equations (2) and (3).
(2) \mu_1(t) = \frac{\sum_{i=1}^{t} i \times h(i)}{\sum_{i=1}^{t} h(i)}
(3) \mu_2(t) = \frac{\sum_{i=t+1}^{L} i \times h(i)}{\sum_{i=t+1}^{L} h(i)}
The mean estimator technique of this method is based on a Gaussian distribution. However, this type of distribution is generally effective for symmetric distributions, not for asymmetric ones, and asymmetric intensities can contain contradictory parts that affect the mean computation. Moreover, the mean computation in the Gaussian approach is a classical (arithmetic) mean; thus, in most cases the mean value can include disruptive parts, e.g., noise, local outliers, and gray areas, which affect the calculation in Equation (1) because of the direct relationship between the mean value and the objective function. This method opened new possibilities for improvement in the literature as a solution for difficult image segmentation cases. Chakraborty et al. [16] proposed a particle swarm optimization (PSO)-based minimum cross entropy, which was tested for image segmentation to show its convergence rate. Minimum cross entropy was also improved using a Gamma distribution to optimize the final threshold [17]. Moreover, it was developed into a hybrid cross entropy thresholding that uses Gaussian and Gamma distributions for skin lesion segmentation [2]. The method was further improved for selecting multi-level threshold values using an improved human mental search algorithm [18]. In addition, it was proposed for color image segmentation based on an exchange market algorithm [19]. Another improvement of this method was proposed based on homogeneous mean filter approaches [15]. This improvement was tested on multiple types of medical images to show the impact of the enhanced mean value on the optimal threshold and its positive effect on the segmentation results. However, some gaps remain in this approach, since the homogeneous forms use a single filter for the two mean values in two different regions. As stated in the literature, most of the proposed methods focused on the distribution issue without taking into account the impact of the mean value on the mean-based MCET, especially when noise, outliers, or gray levels become influential parts of the mean computation, since the original method is based on the classical mean, which includes all intensity levels. MCET was improved based on homogeneous mean filter approaches [15]; this improvement proposed excluding the parts associated with the stated issues from the mean computation. It relies on the role of mean filters in homogeneous form, without taking into account the combination and positions of noise in different regions. In this paper, the heterogeneous mean filter approach tries to handle the contradictory intensity levels inside the mean values of each region. This model is intended to provide a different, enhanced mean value for MCET in order to boost its efficiency. The main aim is to find noise-free mean values, as well as to evolve the method based on dedicated mean filters in heterogeneous form. The strength of the proposed methods compared to the related methods lies in the impact of the enhanced mean, as shown in Table 1. (A minimal code sketch of the MCET threshold search is provided after the reference list below.) | [
"32544197",
"33841095",
"31176438",
"21869254",
"21690639",
"26415155"
] | [
{
"pmid": "32544197",
"title": "Towards the automatic detection of skin lesion shape asymmetry, color variegation and diameter in dermoscopic images.",
"abstract": "Asymmetry, color variegation and diameter are considered strong indicators of malignant melanoma. The subjectivity inherent in the first two features and the fact that 10% of melanomas tend to be missed in the early diagnosis due to having a diameter less than 6mm, deem it necessary to develop an objective computer vision system to evaluate these criteria and aid in the early detection of melanoma which could eventually lead to a higher 5-year survival rate. This paper proposes an approach for evaluating the three criteria objectively, whereby we develop a measure to find asymmetry with the aid of a decision tree which we train on the extracted asymmetry measures and then use to predict the asymmetry of new skin lesion images. A range of colors that demonstrate the suspicious colors for the color variegation feature have been derived, and Feret's diameter has been utilized to find the diameter of the skin lesion. The decision tree is 80% accurate in determining the asymmetry of skin lesions, and the number of suspicious colors and diameter values are objectively identified."
},
{
"pmid": "33841095",
"title": "A Novel Brain MRI Image Segmentation Method Using an Improved Multi-View Fuzzy c-Means Clustering Algorithm.",
"abstract": "Background: The brain magnetic resonance imaging (MRI) image segmentation method mainly refers to the division of brain tissue, which can be divided into tissue parts such as white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The segmentation results can provide a basis for medical image registration, 3D reconstruction, and visualization. Generally, MRI images have defects such as partial volume effects, uneven grayscale, and noise. Therefore, in practical applications, the segmentation of brain MRI images has difficulty obtaining high accuracy. Materials and Methods: The fuzzy clustering algorithm establishes the expression of the uncertainty of the sample category and can describe the ambiguity brought by the partial volume effect to the brain MRI image, so it is very suitable for brain MRI image segmentation (B-MRI-IS). The classic fuzzy c-means (FCM) algorithm is extremely sensitive to noise and offset fields. If the algorithm is used directly to segment the brain MRI image, the ideal segmentation result cannot be obtained. Accordingly, considering the defects of MRI medical images, this study uses an improved multiview FCM clustering algorithm (IMV-FCM) to improve the algorithm's segmentation accuracy of brain images. IMV-FCM uses a view weight adaptive learning mechanism so that each view obtains the optimal weight according to its cluster contribution. The final division result is obtained through the view ensemble method. Under the view weight adaptive learning mechanism, the coordination between various views is more flexible, and each view can be adaptively learned to achieve better clustering effects. Results: The segmentation results of a large number of brain MRI images show that IMV-FCM has better segmentation performance and can accurately segment brain tissue. Compared with several related clustering algorithms, the IMV-FCM algorithm has better adaptability and better clustering performance."
},
{
"pmid": "31176438",
"title": "Assessment of Image Quality and Dosimetric Performance of CT Simulators.",
"abstract": "BACKGROUND\nCT simulator for radiation therapy aims to produce high-quality images for dose calculation and delineation of target and organs at risk in the process of treatment planning. Selection of CT imaging protocols that achieve a desired image quality while minimizing patient dose depends on technical CT parameters and their relationship with image quality and radiation dose. For similar imaging protocols using comparable technical CT parameters, there are also variations in image quality metrics between different CT simulator models. Understanding the relationship and variation is important for selecting appropriate imaging protocol and standardizing QC process. Here, we proposed an automated method to determine the relationship between image quality and radiation dose for various CT technical parameters.\n\n\nMATERIAL AND METHOD\nThe impact of scan parameters on various aspects of image quality and volumetric CT dose index for a Philips Brilliance Big Bore and a Toshiba Aquilion One CT scanners were determined by using commercial phantom and automated image quality analysis software and cylindrical radiation dose phantom.\n\n\nRESULTS AND DISCUSSION\nBoth scanners had very similar and satisfactory performance based on the diagnostic acceptance criteria recommended by ACR, International Atomic Energy Agency, and American Association of Physicists in Medicine. However, our results showed a compromise between different image quality components such as low-contrast and spatial resolution with the change of scanning parameters and revealed variations between the two scanners on their image quality performance. Measurement using a generic phantom and analysis by automated software was unbiased and efficient.\n\n\nCONCLUSION\nThis method provides information that can be used as a baseline for CT scanner image quality and dosimetric QC for different CT scanner models in a given institution or across sites."
},
{
"pmid": "21869254",
"title": "Dynamic measurement of computer generated image segmentations.",
"abstract": "This paper introduces a general purpose performance measurement scheme for image segmentation algorithms. Performance parameters that function in real-time distinguish this method from previous approaches that depended on an a priori knowledge of the correct segmentation. A low level, context independent definition of segmentation is used to obtain a set of optimization criteria for evaluating performance. Uniformity within each region and contrast between adjacent regions serve as parameters for region analysis. Contrast across lines and connectivity between them represent measures for line analysis. Texture is depicted by the introduction of focus of attention areas as groups of regions and lines. The performance parameters are then measured separately for each area. The usefulness of this approach lies in the ability to adjust the strategy of a system according to the varying characteristics of different areas. This feedback path provides the means for more efficient and error-free processing. Results from areas with dissimilar properties show a diversity in the measurements that is utilized for dynamic strategy setting."
},
{
"pmid": "21690639",
"title": "Image segmentation by probabilistic bottom-up aggregation and cue integration.",
"abstract": "We present a bottom-up aggregation approach to image segmentation. Beginning with an image, we execute a sequence of steps in which pixels are gradually merged to produce larger and larger regions. In each step, we consider pairs of adjacent regions and provide a probability measure to assess whether or not they should be included in the same segment. Our probabilistic formulation takes into account intensity and texture distributions in a local area around each region. It further incorporates priors based on the geometry of the regions. Finally, posteriors based on intensity and texture cues are combined using “a mixture of experts” formulation. This probabilistic approach is integrated into a graph coarsening scheme, providing a complete hierarchical segmentation of the image. The algorithm complexity is linear in the number of the image pixels and it requires almost no user-tuned parameters. In addition, we provide a novel evaluation scheme for image segmentation algorithms, attempting to avoid human semantic considerations that are out of scope for segmentation algorithms. Using this novel evaluation scheme, we test our method and provide a comparison to several existing segmentation algorithms."
},
{
"pmid": "26415155",
"title": "Supervised Evaluation of Image Segmentation and Object Proposal Techniques.",
"abstract": "This paper tackles the supervised evaluation of image segmentation and object proposal algorithms. It surveys, structures, and deduplicates the measures used to compare both segmentation results and object proposals with a ground truth database; and proposes a new measure: the precision-recall for objects and parts. To compare the quality of these measures, eight state-of-the-art object proposal techniques are analyzed and two quantitative meta-measures involving nine state of the art segmentation methods are presented. The meta-measures consist in assuming some plausible hypotheses about the results and assessing how well each measure reflects these hypotheses. As a conclusion of the performed experiments, this paper proposes the tandem of precision-recall curves for boundaries and for objects-and-parts as the tool of choice for the supervised evaluation of image segmentation. We make the datasets and code of all the measures publicly available."
}
] |
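The MCET objective in Equations (1)–(3) above can be made concrete with a short worked example. The following is a minimal NumPy sketch of the exhaustive threshold search that minimizes n(t); it is an illustration under stated assumptions (an 8-bit image whose intensities are shifted to the range 1..256 so that both class means stay strictly positive), not the authors' implementation, and the function name `mcet_threshold` and the synthetic test image are my own.

```python
import numpy as np

def mcet_threshold(hist):
    # hist[k] is the count of (shifted) gray level k + 1, for k = 0 .. L - 1
    L = len(hist)
    i = np.arange(1, L + 1, dtype=float)          # gray levels 1 .. L
    best_t, best_obj = None, np.inf
    for t in range(1, L):                         # candidate thresholds
        lo, hi = slice(0, t), slice(t, L)         # levels <= t and levels > t
        n_lo, n_hi = hist[lo].sum(), hist[hi].sum()
        if n_lo == 0 or n_hi == 0:                # skip empty classes
            continue
        mu1 = (i[lo] * hist[lo]).sum() / n_lo     # Eq. (2)
        mu2 = (i[hi] * hist[hi]).sum() / n_hi     # Eq. (3)
        obj = (-(i[lo] * hist[lo]).sum() * np.log(mu1)
               - (i[hi] * hist[hi]).sum() * np.log(mu2))   # Eq. (1)
        if obj < best_obj:
            best_t, best_obj = t, obj
    return best_t

# Synthetic bimodal "image" standing in for a real medical image.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 2000), rng.normal(180, 15, 2000)])
img = np.clip(img, 0, 255).astype(np.int64)
hist, _ = np.histogram(img + 1, bins=256, range=(1, 257))
print("MCET threshold (shifted gray level):", mcet_threshold(hist))
```

The mean-filter variants discussed in the related-work text keep this same search but substitute filtered, noise-resistant estimates for μ1(t) and μ2(t) before evaluating Equation (1).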
Journal of Imaging | 35200735 | PMC8878166 | 10.3390/jimaging8020033 | Head-Mounted Display-Based Augmented Reality for Image-Guided Media Delivery to the Heart: A Preliminary Investigation of Perceptual Accuracy | By aligning virtual augmentations with real objects, optical see-through head-mounted display (OST-HMD)-based augmented reality (AR) can enhance user-task performance. Our goal was to compare the perceptual accuracy of several visualization paradigms involving an adjacent monitor, or the Microsoft HoloLens 2 OST-HMD, in a targeted task, as well as to assess the feasibility of displaying imaging-derived virtual models aligned with the injured porcine heart. With 10 participants, we performed a user study to quantify and compare the accuracy, speed, and subjective workload of each paradigm in the completion of a point-and-trace task that simulated surgical targeting. To demonstrate the clinical potential of our system, we assessed its use for the visualization of magnetic resonance imaging (MRI)-based anatomical models, aligned with the surgically exposed heart in a motion-arrested open-chest porcine model. Using the HoloLens 2 with alignment of the ground truth target and our display calibration method, users were able to achieve submillimeter accuracy (0.98 mm) and required 1.42 min for calibration in the point-and-trace task. In the porcine study, we observed good spatial agreement between the MRI-models and target surgical site. The use of an OST-HMD led to improved perceptual accuracy and task-completion times in a simulated targeting task. | 1.2. Related Work
In the context of surgical navigation, see-through HMDs have been explored in neurosurgery [14], orthopedic surgery [15], minimally invasive surgeries (laparoscopic or endoscopic) [16], general surgery [13], and plastic surgery [17]; however, HMD-led AR for targeted cardiac procedures, particularly therapeutic delivery, remains unexplored. The perceived location of augmented content and the impact of different visualization paradigms on user task performance remain an active and ongoing area of research in the AR space. Using a marker-based alignment approach, the perceptual limitations of OST-HMDs, due to the contribution of the vergence-accommodation conflict, have been investigated for guidance during manual tasks with superimposed virtual content, using the HoloLens 1 [18] and Magic Leap One [19]. In a non-marker-based alignment strategy, other groups have focused on investigating how different visualization techniques or a multi-view AR experience can influence a user's ability to perform perception-based manual alignment of virtual-to-real content for breast reconstruction surgery [20] or robot-assisted minimally invasive surgery [21]. Though significant effort has gone into the design of see-through HMD-based surgical navigation platforms, applications have remained primarily constrained to research lab environments and have experienced little clinical uptake; to date, there are no widely used commercial see-through HMD surgical navigation systems [22]. The historically poor clinical uptake of these technologies can be attributed to a lack of comfort and HMD performance [16], poor rendering resolution [23], limitations in perception due to the vergence-accommodation conflict [24], reliance upon a user to manually control the appearance and presentation of virtually augmented entities [25], and poor virtual model alignment with the scene due to failed per-user display calibration [18]. (A minimal sketch of the least-squares rigid registration underlying such virtual-to-real alignment is provided after the reference list below.) | [
"21856481",
"32175315",
"25795463",
"25093889",
"24776797",
"24862441",
"23165643",
"20415592",
"20357147",
"31514682",
"27154018",
"32061248",
"28160692",
"28573091",
"31059421",
"31025950",
"26336129",
"28961115",
"26357098",
"21869429",
"22275200",
"9874293",
"26719357",
"31646408",
"17670397",
"29603366"
] | [
{
"pmid": "21856481",
"title": "The effects of the cardiac myosin activator, omecamtiv mecarbil, on cardiac function in systolic heart failure: a double-blind, placebo-controlled, crossover, dose-ranging phase 2 trial.",
"abstract": "BACKGROUND\nMany patients with heart failure remain symptomatic and have a poor prognosis despite existing treatments. Decreases in myocardial contractility and shortening of ventricular systole are characteristic of systolic heart failure and might be improved by a new therapeutic class, cardiac myosin activators. We report the first study of the cardiac myosin activator, omecamtiv mecarbil, in patients with systolic heart failure.\n\n\nMETHODS\nWe undertook a double-blind, placebo-controlled, crossover, dose-ranging, phase 2 trial investigating the effects of omecamtiv mecarbil (formerly CK-1827452), given intravenously for 2, 24, or 72 h to patients with stable heart failure and left ventricular systolic dysfunction receiving guideline-indicated treatment. Clinical assessment (including vital signs, echocardiograms, and electrocardiographs) and testing of plasma drug concentrations took place during and after completion of each infusion. The primary aim was to assess safety and tolerability of omecamtiv mecarbil. This study is registered at ClinicalTrials.gov, NCT00624442.\n\n\nFINDINGS\n45 patients received 151 infusions of active drug or placebo. Placebo-corrected, concentration-dependent increases in left ventricular ejection time (up to an 80 ms increase from baseline) and stroke volume (up to 9·7 mL) were recorded, associated with a small reduction in heart rate (up to 2·7 beats per min; p<0·0001 for all three measures). Higher plasma concentrations were also associated with reductions in end-systolic (decrease of 15 mL at >500 ng/mL, p=0·0026) and end-diastolic volumes (16 mL, p=0·0096) that might have been more pronounced with increased duration of infusion. Cardiac ischaemia emerged at high plasma concentrations (two patients, plasma concentrations roughly 1750 ng/mL and 1350 ng/mL). For patients tolerant of all study drug infusions, no consistent pattern of adverse events with either dose or duration emerged.\n\n\nINTERPRETATION\nOmecamtiv mecarbil improved cardiac function in patients with heart failure caused by left ventricular dysfunction and could be the first in class of a new therapeutic agent.\n\n\nFUNDING\nCytokinetics Inc."
},
{
"pmid": "32175315",
"title": "Inducing Endogenous Cardiac Regeneration: Can Biomaterials Connect the Dots?",
"abstract": "Heart failure (HF) after myocardial infarction (MI) due to blockage of coronary arteries is a major public health issue. MI results in massive loss of cardiac muscle due to ischemia. Unfortunately, the adult mammalian myocardium presents a low regenerative potential, leading to two main responses to injury: fibrotic scar formation and hypertrophic remodeling. To date, complete heart transplantation remains the only clinical option to restore heart function. In the last two decades, tissue engineering has emerged as a promising approach to promote cardiac regeneration. Tissue engineering aims to target processes associated with MI, including cardiomyogenesis, modulation of extracellular matrix (ECM) remodeling, and fibrosis. Tissue engineering dogmas suggest the utilization and combination of two key components: bioactive molecules and biomaterials. This chapter will present current therapeutic applications of biomaterials in cardiac regeneration and the challenges still faced ahead. The following biomaterial-based approaches will be discussed: Nano-carriers for cardiac regeneration-inducing biomolecules; corresponding matrices for their controlled release; injectable hydrogels for cell delivery and cardiac patches. The concept of combining cardiac patches with controlled release matrices will be introduced, presenting a promising strategy to promote endogenous cardiac regeneration."
},
{
"pmid": "25795463",
"title": "The winding road to regenerating the human heart.",
"abstract": "UNLABELLED\nRegenerating the human heart is a challenge that has engaged researchers and clinicians around the globe for nearly a century. From the repair of the first septal defect in 1953, followed by the first successful heart transplant in 1967, and later to the first infusion of bone marrow-derived cells to the human myocardium in 2002, significant progress has been made in heart repair. However, chronic heart failure remains a leading pathological burden worldwide. Why has regenerating the human heart been such a challenge, and how close are we to achieving clinically relevant regeneration? Exciting progress has been made to establish cell transplantation techniques in recent years, and new preclinical studies in large animal models have shed light on the promises and challenges that lie ahead. In this review, we will discuss the history of cell therapy approaches and provide an overview of clinical trials using cell transplantation for heart regeneration. Focusing on the delivery of human stem cell-derived cardiomyocytes, current experimental strategies in the field will be discussed as well as their clinical translation potential. Although the human heart has not been regenerated yet, decades of experimental progress have guided us onto a promising path.\n\n\nSUMMARY\nPrevious work in clinical cell therapy for heart repair using bone marrow mononuclear cells, mesenchymal stem cells, and cardiac-derived cells have overall demonstrated safety and modest efficacy. Recent advancements using human stem cell-derived cardiomyocytes have established them as a next generation cell type for moving forward, however certain challenges must be overcome for this technique to be successful in the clinics."
},
{
"pmid": "25093889",
"title": "Clinical imaging in regenerative medicine.",
"abstract": "In regenerative medicine, clinical imaging is indispensable for characterizing damaged tissue and for measuring the safety and efficacy of therapy. However, the ability to track the fate and function of transplanted cells with current technologies is limited. Exogenous contrast labels such as nanoparticles give a strong signal in the short term but are unreliable long term. Genetically encoded labels are good both short- and long-term in animals, but in the human setting they raise regulatory issues related to the safety of genomic integration and potential immunogenicity of reporter proteins. Imaging studies in brain, heart and islets share a common set of challenges, including developing novel labeling approaches to improve detection thresholds and early delineation of toxicity and function. Key areas for future research include addressing safety concerns associated with genetic labels and developing methods to follow cell survival, differentiation and integration with host tissue. Imaging may bridge the gap between cell therapies and health outcomes by elucidating mechanisms of action through longitudinal monitoring."
},
{
"pmid": "24776797",
"title": "Human embryonic-stem-cell-derived cardiomyocytes regenerate non-human primate hearts.",
"abstract": "Pluripotent stem cells provide a potential solution to current epidemic rates of heart failure by providing human cardiomyocytes to support heart regeneration. Studies of human embryonic-stem-cell-derived cardiomyocytes (hESC-CMs) in small-animal models have shown favourable effects of this treatment. However, it remains unknown whether clinical-scale hESC-CM transplantation is feasible, safe or can provide sufficient myocardial regeneration. Here we show that hESC-CMs can be produced at a clinical scale (more than one billion cells per batch) and cryopreserved with good viability. Using a non-human primate model of myocardial ischaemia followed by reperfusion, we show that cryopreservation and intra-myocardial delivery of one billion hESC-CMs generates extensive remuscularization of the infarcted heart. The hESC-CMs showed progressive but incomplete maturation over a 3-month period. Grafts were perfused by host vasculature, and electromechanical junctions between graft and host myocytes were present within 2 weeks of engraftment. Importantly, grafts showed regular calcium transients that were synchronized to the host electrocardiogram, indicating electromechanical coupling. In contrast to small-animal models, non-fatal ventricular arrhythmias were observed in hESC-CM-engrafted primates. Thus, hESC-CMs can remuscularize substantial amounts of the infarcted monkey heart. Comparable remuscularization of a human heart should be possible, but potential arrhythmic complications need to be overcome."
},
{
"pmid": "24862441",
"title": "Comparison of biomaterial delivery vehicles for improving acute retention of stem cells in the infarcted heart.",
"abstract": "Cell delivery to the infarcted heart has emerged as a promising therapy, but is limited by very low acute retention and engraftment of cells. The objective of this study was to compare a panel of biomaterials to evaluate if acute retention can be improved with a biomaterial carrier. Cells were quantified post-implantation in a rat myocardial infarct model in five groups (n = 7-8); saline injection (current clinical standard), two injectable hydrogels (alginate, chitosan/β-glycerophosphate (chitosan/ß-GP)) and two epicardial patches (alginate, collagen). Human mesenchymal stem cells (hMSCs) were delivered to the infarct border zone with each biomaterial. At 24 h, retained cells were quantified by fluorescence. All biomaterials produced superior fluorescence to saline control, with approximately 8- and 14-fold increases with alginate and chitosan/β-GP injectables, and 47 and 59-fold increases achieved with collagen and alginate patches, respectively. Immunohistochemical analysis qualitatively confirmed these findings. All four biomaterials retained 50-60% of cells that were present immediately following transplantation, compared to 10% for the saline control. In conclusion, all four biomaterials were demonstrated to more efficiently deliver and retain cells when compared to a saline control. Biomaterial-based delivery approaches show promise for future development of efficient in vivo delivery techniques."
},
{
"pmid": "23165643",
"title": "Quantitative magnetic resonance imaging can distinguish remodeling mechanisms after acute myocardial infarction based on the severity of ischemic insult.",
"abstract": "The type and extent of myocardial infarction encountered clinically is primarily determined by the severity of the initial ischemic insult. The purpose of the study was to differentiate longitudinal fluctuations in remodeling mechanisms in porcine myocardium following different ischemic insult durations. Animals (N = 8) were subjected to coronary balloon occlusion for either 90 or 45 min, followed by reperfusion. Imaging was performed on a 3 T MRI scanner between day-2 and week-6 postinfarction with edema quantified by T2, hemorrhage by T2*, vasodilatory function by blood-oxygenation-level-dependent T2 alterations and infarction/microvascular obstruction by contrast-enhanced imaging. The 90-min model produced large transmural infarcts with hemorrhage and microvascular obstruction, while the 45 min produced small nontransmural and nonhemorrhagic infarction. In the 90-min group, elevation of end-diastolic-volume, reduced cardiac function, persistence of edema, and prolonged vasodilatory dysfunction were all indicative of adverse remodeling; in contrast, the 45-min group showed no signs of adverse remodeling. The 45- and 90-min porcine models seem to be ideal for representing the low- and high-risk patient groups, respectively, commonly encountered in the clinic. Such in vivo characterization will be a key in predicting functional recovery and may potentially allow evaluation of novel therapies targeted to alleviate ischemic injury and prevent microvascular obstruction/hemorrhage."
},
{
"pmid": "20415592",
"title": "Image-guided interventions: technology review and clinical applications.",
"abstract": "Image-guided interventions are medical procedures that use computer-based systems to provide virtual image overlays to help the physician precisely visualize and target the surgical site. This field has been greatly expanded by the advances in medical imaging and computing power over the past 20 years. This review begins with a historical overview and then describes the component technologies of tracking, registration, visualization, and software. Clinical applications in neurosurgery, orthopedics, and the cardiac and thoracoabdominal areas are discussed, together with a description of an evolving technology named Natural Orifice Transluminal Endoscopic Surgery (NOTES). As the trend toward minimally invasive procedures continues, image-guided interventions will play an important role in enabling new procedures, while improving the accuracy and success of existing approaches. Despite this promise, the role of image-guided systems must be validated by clinical trials facilitated by partnerships between scientists and physicians if this field is to reach its full potential."
},
{
"pmid": "20357147",
"title": "Monitoring with head-mounted displays in general anesthesia: a clinical evaluation in the operating room.",
"abstract": "BACKGROUND\nPatient monitors in the operating room are often positioned where it is difficult for the anesthesiologist to see them when performing procedures. Head-mounted displays (HMDs) can help anesthesiologists by superimposing a display of the patient's vital signs over the anesthesiologist's field of view. Simulator studies indicate that by using an HMD, anesthesiologists can spend more time looking at the patient and less at the monitors. We performed a clinical evaluation testing whether this finding would apply in practice.\n\n\nMETHODS\nSix attending anesthesiologists provided anesthesia to patients undergoing rigid cystoscopy. Each anesthesiologist performed 6 cases alternating between standard monitoring using a Philips IntelliVue MP70 and standard monitoring plus a Microvision Nomad ND2000 HMD. The HMD interfaced wirelessly with the MP70 monitor and displayed waveform and numerical vital signs data. Video was recorded during all cases and analyzed to determine the percentage of time, frequency, and duration of looks at the anesthesia workstation and at the patient and surgical field during various anesthetic phases. Differences between the display conditions were tested for significance using repeated-measures analysis of variance.\n\n\nRESULTS\nVideo data were collected from 36 cases that ranged from 17 to 75 minutes in duration (median 31 minutes). When participants were using the HMD, compared with standard monitoring, they spent less time looking toward the anesthesia workstation (21.0% vs 25.3%, P = 0.003) and more time looking toward the patient and surgical field (55.9% vs 51.5%, P = 0.014). The HMD had no effect on either the frequency of looks or the average duration of looks toward the patient and surgical field or toward the anesthesia workstation.\n\n\nCONCLUSIONS\nAn HMD of patient vital signs reduces anesthesiologists' surveillance of the anesthesia workstation and allows them to spend more time monitoring their patient and surgical field during normal anesthesia. More research is needed to determine whether the behavioral changes can lead to improved anesthesiologist performance in the operating room."
},
{
"pmid": "31514682",
"title": "Head-Mounted Display Use in Surgery: A Systematic Review.",
"abstract": "Purpose. We analyzed the literature to determine (1) the surgically relevant applications for which head-mounted display (HMD) use is reported; (2) the types of HMD most commonly reported; and (3) the surgical specialties in which HMD use is reported. Methods. The PubMed, Embase, Cochrane Library, and Web of Science databases were searched through August 27, 2017, for publications describing HMD use during surgically relevant applications. We identified 120 relevant English-language, non-opinion publications for inclusion. HMD types were categorized as \"heads-up\" (nontransparent HMD display and direct visualization of the real environment), \"see-through\" (visualization of the HMD display overlaid on the real environment), or \"non-see-through\" (visualization of only the nontransparent HMD display). Results. HMDs were used for image guidance and augmented reality (70 publications), data display (63 publications), communication (34 publications), and education/training (18 publications). See-through HMDs were described in 55 publications, heads-up HMDs in 41 publications, and non-see-through HMDs in 27 publications. Google Glass, a see-through HMD, was the most frequently used model, reported in 32 publications. The specialties with the highest frequency of published HMD use were urology (20 publications), neurosurgery (17 publications), and unspecified surgical specialty (20 publications). Conclusion. Image guidance and augmented reality were the most commonly reported applications for which HMDs were used. See-through HMDs were the most commonly reported type used in surgically relevant applications. Urology and neurosurgery were the specialties with greatest published HMD use."
},
{
"pmid": "27154018",
"title": "Augmented reality in neurosurgery: a systematic review.",
"abstract": "Neuronavigation has become an essential neurosurgical tool in pursuing minimal invasiveness and maximal safety, even though it has several technical limitations. Augmented reality (AR) neuronavigation is a significant advance, providing a real-time updated 3D virtual model of anatomical details, overlaid on the real surgical field. Currently, only a few AR systems have been tested in a clinical setting. The aim is to review such devices. We performed a PubMed search of reports restricted to human studies of in vivo applications of AR in any neurosurgical procedure using the search terms \"Augmented reality\" and \"Neurosurgery.\" Eligibility assessment was performed independently by two reviewers in an unblinded standardized manner. The systems were qualitatively evaluated on the basis of the following: neurosurgical subspecialty of application, pathology of treated lesions and lesion locations, real data source, virtual data source, tracking modality, registration technique, visualization processing, display type, and perception location. Eighteen studies were included during the period 1996 to September 30, 2015. The AR systems were grouped by the real data source: microscope (8), hand- or head-held cameras (4), direct patient view (2), endoscope (1), and X-ray fluoroscopy (1) head-mounted display (1). A total of 195 lesions were treated: 75 (38.46 %) were neoplastic, 77 (39.48 %) neurovascular, and 1 (0.51 %) hydrocephalus, and 42 (21.53 %) were undetermined. Current literature confirms that AR is a reliable and versatile tool when performing minimally invasive approaches in a wide range of neurosurgical diseases, although prospective randomized studies are not yet available and technical improvements are needed."
},
{
"pmid": "32061248",
"title": "Applicability of augmented reality in orthopedic surgery - A systematic review.",
"abstract": "BACKGROUND\nComputer-assisted solutions are changing surgical practice continuously. One of the most disruptive technologies among the computer-integrated surgical techniques is Augmented Reality (AR). While Augmented Reality is increasingly used in several medical specialties, its potential benefit in orthopedic surgery is not yet clear. The purpose of this article is to provide a systematic review of the current state of knowledge and the applicability of AR in orthopedic surgery.\n\n\nMETHODS\nA systematic review of the current literature was performed to find the state of knowledge and applicability of AR in Orthopedic surgery. A systematic search of the following three databases was performed: \"PubMed\", \"Cochrane Library\" and \"Web of Science\". The systematic review followed the Preferred Reporting Items on Systematic Reviews and Meta-analysis (PRISMA) guidelines and it has been published and registered in the international prospective register of systematic reviews (PROSPERO).\n\n\nRESULTS\n31 studies and reports are included and classified into the following categories: Instrument / Implant Placement, Osteotomies, Tumor Surgery, Trauma, and Surgical Training and Education. Quality assessment could be performed in 18 studies. Among the clinical studies, there were six case series with an average score of 90% and one case report, which scored 81% according to the Joanna Briggs Institute Critical Appraisal Checklist (JBI CAC). The 11 cadaveric studies scored 81% according to the QUACS scale (Quality Appraisal for Cadaveric Studies).\n\n\nCONCLUSION\nThis manuscript provides 1) a summary of the current state of knowledge and research of Augmented Reality in orthopedic surgery presented in the literature, and 2) a discussion by the authors presenting the key remarks required for seamless integration of Augmented Reality in the future surgical practice.\n\n\nTRIAL REGISTRATION\nPROSPERO registration number: CRD42019128569."
},
{
"pmid": "28160692",
"title": "The status of augmented reality in laparoscopic surgery as of 2016.",
"abstract": "This article establishes a comprehensive review of all the different methods proposed by the literature concerning augmented reality in intra-abdominal minimally invasive surgery (also known as laparoscopic surgery). A solid background of surgical augmented reality is first provided in order to support the survey. Then, the various methods of laparoscopic augmented reality as well as their key tasks are categorized in order to better grasp the current landscape of the field. Finally, the various issues gathered from these reviewed approaches are organized in order to outline the remaining challenges of augmented reality in laparoscopic surgery."
},
{
"pmid": "28573091",
"title": "Virtual Reality and Augmented Reality in Plastic Surgery: A Review.",
"abstract": "Recently, virtual reality (VR) and augmented reality (AR) have received increasing attention, with the development of VR/AR devices such as head-mounted displays, haptic devices, and AR glasses. Medicine is considered to be one of the most effective applications of VR/AR. In this article, we describe a systematic literature review conducted to investigate the state-of-the-art VR/AR technology relevant to plastic surgery. The 35 studies that were ultimately selected were categorized into 3 representative topics: VR/AR-based preoperative planning, navigation, and training. In addition, future trends of VR/AR technology associated with plastic surgery and related fields are discussed."
},
{
"pmid": "31059421",
"title": "Perceptual Limits of Optical See-Through Visors for Augmented Reality Guidance of Manual Tasks.",
"abstract": "OBJECTIVE\nThe focal length of available optical see-through (OST) head-mounted displays (HMDs) is at least 2 m; therefore, during manual tasks, the user eye cannot keep in focus both the virtual and real content at the same time. Another perceptual limitation is related to the vergence-accommodation conflict, the latter being present in binocular vision only. This paper investigates the effect of incorrect focus cues on the user performance, visual comfort, and workload during the execution of augmented reality (AR)-guided manual task with one of the most advanced OST HMD, the Microsoft HoloLens.\n\n\nMETHODS\nAn experimental study was designed to investigate the performance of 20 subjects in a connect-the-dots task, with and without the use of AR. The following tests were planned: AR-guided monocular and binocular, and naked-eye monocular and binocular. Each trial was analyzed to evaluate the accuracy in connecting dots. NASA Task Load Index and Likert questionnaires were used to assess the workload and the visual comfort.\n\n\nRESULTS\nNo statistically significant differences were found in the workload, and in the perceived comfort between the AR-guided binocular and monocular test. User performances were significantly better during the naked eye tests. No statistically significant differences in performances were found in the monocular and binocular tests. The maximum error in AR tests was 5.9 mm.\n\n\nCONCLUSION\nEven if there is a growing interest in using commercial OST HMD, for guiding high-precision manual tasks, attention should be paid to the limitations of the available technology not designed for the peripersonal space."
},
{
"pmid": "31025950",
"title": "Augmented Reality in Medicine: Systematic and Bibliographic Review.",
"abstract": "BACKGROUND\nAugmented reality (AR) is a technology that integrates digital information into the user's real-world environment. It offers a new approach for treatments and education in medicine. AR aids in surgery planning and patient treatment and helps explain complex medical situations to patients and their relatives.\n\n\nOBJECTIVE\nThis systematic and bibliographic review offers an overview of the development of apps in AR with a medical use case from March 2012 to June 2017. This work can aid as a guide to the literature and categorizes the publications in the field of AR research.\n\n\nMETHODS\nFrom March 2012 to June 2017, a total of 1309 publications from PubMed and Scopus databases were manually analyzed and categorized based on a predefined taxonomy. Of the total, 340 duplicates were removed and 631 publications were excluded due to incorrect classification or unavailable technical data. The remaining 338 publications were original research studies on AR. An assessment of the maturity of the projects was conducted on these publications by using the technology readiness level. To provide a comprehensive process of inclusion and exclusion, the authors adopted the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement.\n\n\nRESULTS\nThe results showed an increasing trend in the number of publications on AR in medicine. There were no relevant clinical trials on the effect of AR in medicine. Domains that used display technologies seemed to be researched more than other medical fields. The technology readiness level showed that AR technology is following a rough bell curve from levels 4 to 7. Current AR technology is more often applied to treatment scenarios than training scenarios.\n\n\nCONCLUSIONS\nThis work discusses the applicability and future development of augmented- and mixed-reality technologies such as wearable computers and AR devices. It offers an overview of current technology and a base for researchers interested in developing AR apps in medicine. The field of AR is well researched, and there is a positive trend in its application, but its use is still in the early stages in the field of medicine and it is not widely adopted in clinical practice. Clinical studies proving the effectiveness of applied AR technologies are still lacking."
},
{
"pmid": "26336129",
"title": "Resolving the Vergence-Accommodation Conflict in Head-Mounted Displays.",
"abstract": "The vergence-accommodation conflict (VAC) remains a major problem in head-mounted displays for virtual and augmented reality (VR and AR). In this review, I discuss why this problem is pivotal for nearby tasks in VR and AR, present a comprehensive taxonomy of potential solutions, address advantages and shortfalls of each design, and cover various ways to better evaluate the solutions. The review describes how VAC is addressed in monocular, stereoscopic, and multiscopic HMDs, including retinal scanning and accommodation-free displays. Eye-tracking-based approaches that do not provide natural focal cues-gaze-guided blur and dynamic stereoscopy-are also covered. Promising future research directions in this area are identified."
},
{
"pmid": "28961115",
"title": "A Survey of Calibration Methods for Optical See-Through Head-Mounted Displays.",
"abstract": "Optical see-through head-mounted displays (OST HMDs) are a major output medium for Augmented Reality, which have seen significant growth in popularity and usage among the general public due to the growing release of consumer-oriented models, such as the Microsoft Hololens. Unlike Virtual Reality headsets, OST HMDs inherently support the addition of computer-generated graphics directly into the light path between a user's eyes and their view of the physical world. As with most Augmented and Virtual Reality systems, the physical position of an OST HMD is typically determined by an external or embedded 6-Degree-of-Freedom tracking system. However, in order to properly render virtual objects, which are perceived as spatially aligned with the physical environment, it is also necessary to accurately measure the position of the user's eyes within the tracking system's coordinate frame. For over 20 years, researchers have proposed various calibration methods to determine this needed eye position. However, to date, there has not been a comprehensive overview of these procedures and their requirements. Hence, this paper surveys the field of calibration methods for OST HMDs. Specifically, it provides insights into the fundamentals of calibration techniques, and presents an overview of both manual and automatic approaches, as well as evaluation methods and metrics. Finally, it also identifies opportunities for future research."
},
{
"pmid": "26357098",
"title": "Corneal-Imaging Calibration for Optical See-Through Head-Mounted Displays.",
"abstract": "In recent years optical see-through head-mounted displays (OST-HMDs) have moved from conceptual research to a market of mass-produced devices with new models and applications being released continuously. It remains challenging to deploy augmented reality (AR) applications that require consistent spatial visualization. Examples include maintenance, training and medical tasks, as the view of the attached scene camera is shifted from the user's view. A calibration step can compute the relationship between the HMD-screen and the user's eye to align the digital content. However, this alignment is only viable as long as the display does not move, an assumption that rarely holds for an extended period of time. As a consequence, continuous recalibration is necessary. Manual calibration methods are tedious and rarely support practical applications. Existing automated methods do not account for user-specific parameters and are error prone. We propose the combination of a pre-calibrated display with a per-frame estimation of the user's cornea position to estimate the individual eye center and continuously recalibrate the system. With this, we also obtain the gaze direction, which allows for instantaneous uncalibrated eye gaze tracking, without the need for additional hardware and complex illumination. Contrary to existing methods, we use simple image processing and do not rely on iris tracking, which is typically noisy and can be ambiguous. Evaluation with simulated and real data shows that our approach achieves a more accurate and stable eye pose estimation, which results in an improved and practical calibration with a largely improved distribution of projection error."
},
{
"pmid": "21869429",
"title": "Least-squares fitting of two 3-d point sets.",
"abstract": "Two point sets {pi} and {p'i}; i = 1, 2,..., N are related by p'i = Rpi + T + Ni, where R is a rotation matrix, T a translation vector, and Ni a noise vector. Given {pi} and {p'i}, we present an algorithm for finding the least-squares solution of R and T, which is based on the singular value decomposition (SVD) of a 3 × 3 matrix. This new algorithm is compared to two earlier algorithms with respect to computer time requirements."
},
{
"pmid": "22275200",
"title": "Virtual and augmented medical imaging environments: enabling technology for minimally invasive cardiac interventional guidance.",
"abstract": "Virtual and augmented reality environments have been adopted in medicine as a means to enhance the clinician's view of the anatomy and facilitate the performance of minimally invasive procedures. Their value is truly appreciated during interventions where the surgeon cannot directly visualize the targets to be treated, such as during cardiac procedures performed on the beating heart. These environments must accurately represent the real surgical field and require seamless integration of pre- and intra-operative imaging, surgical tracking, and visualization technology in a common framework centered around the patient. This review begins with an overview of minimally invasive cardiac interventions, describes the architecture of a typical surgical guidance platform including imaging, tracking, registration and visualization, highlights both clinical and engineering accuracy limitations in cardiac image guidance, and discusses the translation of the work from the laboratory into the operating room together with typically encountered challenges."
},
{
"pmid": "9874293",
"title": "Predicting error in rigid-body point-based registration.",
"abstract": "Guidance systems designed for neurosurgery, hip surgery, and spine surgery, and for approaches to other anatomy that is relatively rigid can use rigid-body transformations to accomplish image registration. These systems often rely on point-based registration to determine the transformation, and many such systems use attached fiducial markers to establish accurate fiducial points for the registration, the points being established by some fiducial localization process. Accuracy is important to these systems, as is knowledge of the level of that accuracy. An advantage of marker-based systems, particularly those in which the markers are bone-implanted, is that registration error depends only on the fiducial localization error (FLE) and is thus to a large extent independent of the particular object being registered. Thus, it should be possible to predict the clinical accuracy of marker-based systems on the basis of experimental measurements made with phantoms or previous patients. This paper presents two new expressions for estimating registration accuracy of such systems and points out a danger in using a traditional measure of registration accuracy. The new expressions represent fundamental theoretical results with regard to the relationship between localization error and registration error in rigid-body, point-based registration. Rigid-body, point-based registration is achieved by finding the rigid transformation that minimizes \"fiducial registration error\" (FRE), which is the root mean square distance between homologous fiducials after registration. Closed form solutions have been known since 1966. The expected value (FRE2) depends on the number N of fiducials and expected squared value of FLE, (FLE-2, but in 1979 it was shown that (FRE2) is approximately independent of the fiducial configuration C. The importance of this surprising result seems not yet to have been appreciated by the registration community: Poor registrations caused by poor fiducial configurations may appear to be good due to a small FRE value. A more critical and direct measure of registration error is the \"target registration error\" (TRE), which is the distance between homologous points other than the centroids of fiducials. Efforts to characterize its behavior have been made since 1989. Published numerical simulations have shown that (TRE2) is roughly proportional to (FLE2)/N and, unlike (FRE2), does depend in some way on C. Thus, FRE, which is often used as feedback to the surgeon using a point-based guidance system, is in fact an unreliable indicator of registration-accuracy. In this work we derive approximate expressions for (TRE2), and for the expected squared alignment error of an individual fiducial. We validate both approximations through numerical simulations. The former expression can be used to provide reliable feedback to the surgeon during surgery and to guide the placement of markers before surgery, or at least to warn the surgeon of potentially dangerous fiducial placements; the latter expression leads to a surprising conclusion: Expected registration accuracy (TRE) is worst near the fiducials that are most closely aligned! This revelation should be of particular concern to surgeons who may at present be relying on fiducial alignment as an indicator of the accuracy of their point-based guidance systems."
},
{
"pmid": "31646408",
"title": "Measuring geometric accuracy in magnetic resonance imaging with 3D-printed phantom and nonrigid image registration.",
"abstract": "OBJECTIVE\nWe aimed to develop a vendor-neutral and interaction-free quality assurance protocol for measuring geometric accuracy of head and brain magnetic resonance (MR) images. We investigated the usability of nonrigid image registration in the analysis and looked for the optimal registration parameters.\n\n\nMATERIALS AND METHODS\nWe constructed a 3D-printed phantom and imaged it with 12 MR scanners using clinical sequences. We registered a geometric-ground-truth computed tomography (CT) acquisition to the MR images using an open-source nonrigid-registration-toolbox with varying parameters. We applied the transforms to a set of control points in the CT image and compared their locations to the corresponding visually verified reference points in the MR images.\n\n\nRESULTS\nWith optimized registration parameters, the mean difference (and standard deviation) of control point locations when compared to the reference method was (0.17 ± 0.02) mm for the 12 studied scanners. The maximum displacements varied from 0.50 to 1.35 mm or 0.89 to 2.30 mm, with vendors' distortion correction on or off, respectively.\n\n\nDISCUSSION\nUsing nonrigid CT-MR registration can provide a robust and relatively test-object-agnostic method for estimating the intra- and inter-scanner variations of the geometric distortions."
},
{
"pmid": "17670397",
"title": "Do cardiac stabilizers really stabilize? Experimental quantitative analysis of mechanical stabilization.",
"abstract": "In order to assess the three-dimensional movement of the coronary arteries both during normal cardiac activity and after mechanical stabilization, a polypropylene black marker was placed in 10 pigs on the middle portion of the three main coronary branches. Marker motion was recorded for 10 s using two TV-digital cameras and was estimated with a precision of 50 microm. After stabilization with three different mechanical stabilizers (Medtronic, Genzyme, CTS-Guidant), a remnant coronary artery excursion of about 1.5-2.4 mm was found. There is a significant residual coronary artery motion after mechanical stabilization, which could affect the quality of anastomosis, especially in unfavourable situations."
},
{
"pmid": "29603366",
"title": "MR imaging of magnetic ink patterns via off-resonance sensitivity.",
"abstract": "PURPOSE\nPrinted magnetic ink creates predictable B0 field perturbations based on printed shape and magnetic susceptibility. This can be exploited for contrast in MR imaging techniques that are sensitized to off-resonance. The purpose of this work was to characterize the susceptibility variations of magnetic ink and demonstrate its application for creating MR-visible skin markings.\n\n\nMETHODS\nThe magnetic susceptibility of the ink was estimated by comparing acquired and simulated B0 field maps of a custom-built phantom. The phantom was also imaged using a 3D gradient echo sequence with a presaturation pulse tuned to different frequencies, which adjusts the range of suppressed frequencies. Healthy volunteers with a magnetic ink pattern pressed to the skin or magnetic ink temporary flexible adhesives applied to the skin were similarly imaged.\n\n\nRESULTS\nThe volume-average magnetic susceptibility of the ink was estimated to be 131 ± 3 parts per million across a 1-mm isotropic voxel (13,100 parts per million assuming a 10-μm thickness of printed ink). Adjusting the saturation frequency highlights different off-resonant regions created by the ink patterns; for example, if tuned to suppress fat, fat suppression will fail near the ink due to the off-resonance. This causes magnetic ink skin markings placed over a region with underlying subcutaneous fat to be visible on MR images.\n\n\nCONCLUSION\nPatterns printed with magnetic ink can be imaged and identified with MRI. Temporary flexible skin adhesives printed with magnetic ink have the potential to be used as skin markings that are visible both by eye and on MR images."
}
] |
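The fiducial-registration abstract cited above reports that ⟨TRE²⟩ scales roughly as ⟨FLE²⟩/N and, unlike ⟨FRE²⟩, depends on the fiducial configuration. For readers who want the quantitative form, the closed-form approximation widely attributed to this line of work is sketched below; the notation (d_k, f_k) follows the usual statement of the result and should be checked against the original paper before being relied on.

```latex
\langle \mathrm{TRE}^2(\mathbf{r}) \rangle \;\approx\;
\frac{\langle \mathrm{FLE}^2 \rangle}{N}
\left( 1 + \frac{1}{3} \sum_{k=1}^{3} \frac{d_k^2}{f_k^2} \right)
```

Here N is the number of fiducials, d_k is the distance of the target point r from the k-th principal axis of the fiducial configuration, and f_k is the RMS distance of the fiducials from that axis. This makes explicit why TRE grows for targets far from a tightly clustered or nearly collinear marker set even when FRE looks small.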
Materials | null | PMC8878319 | 10.3390/ma15041566 | Study of the Structure and Mechanical Properties after Electrical Discharge Machining with Composite Electrode Tools | Our study was devoted to increasing the efficiency of electrical discharge machining of high-quality parts with a composite electrode tool. We analyzed the chemical composition of the surface layer of the processed product, microhardness, the parameter of roughness of the treated surface, residual stresses, and mechanical properties under tension and durability with low-cycle fatigue of steel 15. Our objective was to study the effect of the process of copy-piercing electrical discharge machining on the performance of parts using composite electrode tools. The experiments were carried out on a copy-piercing electrical discharge machining machine Smart CNC using annular and rectangular electrodes; electrode tool materials included copper, graphite, and composite material of the copper–graphite system with a graphite content of 20%. The elemental composition of the surface layer of steel 15 after electrical discharge machining was determined. Measurements of microhardness (HV) and surface roughness were made. Residual stresses were determined using the method of X-ray diffractometry. Metallographic analysis was performed for the presence of microdefects. Tensile tests and low-cycle fatigue tests were carried out. The mechanical properties of steel 15 before and after electrical discharge machining under low-cycle fatigue were determined. We established that the use of a composite electrode tool for electrical discharge machining of steel 15 does not have negative consequences. | Related WorkCurrently, there is a lot of research in the field of EDM. The main directions of research surrounding the EDM process that were observed in our literary analysis are shown in Table 1.Based on our analysis of the literature, we concluded that the dynamics of research have changed (Figure 3). There has been an increase in the amount of research on EDM in general over the past 15 years. The largest number of studies is devoted to changes in surface morphology and topography during EDM, and the rate of their development is greatest. Over the past 5 years, there has been a sharp increase in the number of studies that focus on changing the chemical composition of the treated surface. There is much less research on the mechanical properties and the resulting white layer after EDM. Nevertheless, their number has also increased several times.Leading universities around the world are engaged in EDM research. The chemical composition and structure of the processed surface in comparison with traditional copper and graphite electrodes remains unexplored. | [
"30669518",
"30901872"
] | [
{
"pmid": "30669518",
"title": "Multi-Response Optimization of Electrical Discharge Machining Using the Desirability Function.",
"abstract": "Electrical discharge machining (EDM) is a modern technology that is widely used in the production of difficult to cut conductive materials. The basic problem of EDM is the stochastic nature of electrical discharges. The optimal selection of machining parameters to achieve micron surface roughness and the recast layer with the maximal possible value of the material removal rate (MRR) is quite challenging. In this paper, we performed an analytical and experimental investigation of the influence of the EDM parameters: Surface integrity and MRR. Response surface methodology (RSM) was used to build empirical models on the influence of the discharge current I, pulse time ton, and the time interval toff, on the surface roughness (Sa), the thickness of the white layer (WL), and the MRR, during the machining of tool steel 55NiCrMoV7. The surface and subsurface integrity were evaluated using an optical microscope and a scanning profilometer. Analysis of variance (ANOVA) was used to establish the statistical significance parameters. The calculated contribution indicated that the discharge current had the most influence (over the 50%) on the Sa, WL, and MRR, followed by the discharge time. The multi-response optimization was carried out using the desirability function for the three cases of EDM: Finishing, semi-finishing, and roughing. The confirmation test showed that maximal errors between the predicted and the obtained values did not exceed 6%."
},
{
"pmid": "30901872",
"title": "Investigation of the Influence of Reduced Graphene Oxide Flakes in the Dielectric on Surface Characteristics and Material Removal Rate in EDM.",
"abstract": "Electrical discharge machining (EDM) is an advanced technology used to manufacture difficult-to-cut conductive materials. However, the surface layer properties after EDM require additional finishing operations in many cases. Therefore, new methods implemented in EDM are being developed to improve surface characteristics and the material removal rate. This paper presents new research about improving the surface integrity of 55NiCrMoV7 tool steel by using reduced graphene oxide (RGO) flakes in the dielectric. The main goal of the research was to investigate the influence of RGO flakes in the dielectric on electrical discharge propagation and heat dissipation in the gap. The investigation of the influence of discharge current I and pulse time ton during EDM with RGO flakes in the dielectric was carried out using response surface methodology. Furthermore, the surface texture properties and metallographic structure after EDM with RGO in the dielectric and conventional EDM were investigated and described. The obtained results indicate that using RGO flakes in the dielectric leads to a decreased surface roughness and recast layer thickness with an increased material removal rate (MRR). The presence of RGO flakes in the dielectric reduced the breakdown voltage and allowed several discharges to occur during one pulse. The dispersion of the discharge caused a decrease in the energy delivered to the workpiece. In terms of the finishing EDM parameters, there was a 460% reduction in roughness Ra with a uniform distribution of the recast layer on the surface, and a slight increase in MRR (12%) was obtained."
}
] |
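The first EDM reference cited above (the multi-response optimization study) tunes machining parameters by combining surface roughness (Sa), white-layer thickness (WL), and material removal rate (MRR) into a single desirability score. As a rough illustration of how such a composite desirability is typically computed, the Python sketch below follows the standard Derringer–Suich construction; the target ranges, weights, and example responses are invented for the illustration and are not values from the cited study.

```python
import numpy as np

def d_minimize(y, target, upper, weight=1.0):
    """Desirability for a smaller-is-better response (e.g., roughness Sa, white layer WL)."""
    if y <= target:
        return 1.0
    if y >= upper:
        return 0.0
    return ((upper - y) / (upper - target)) ** weight

def d_maximize(y, lower, target, weight=1.0):
    """Desirability for a larger-is-better response (e.g., material removal rate MRR)."""
    if y >= target:
        return 1.0
    if y <= lower:
        return 0.0
    return ((y - lower) / (target - lower)) ** weight

def composite_desirability(sa, wl, mrr):
    # Bounds below are illustrative placeholders, not values from the cited study.
    d1 = d_minimize(sa, target=1.0, upper=10.0)    # surface roughness, um
    d2 = d_minimize(wl, target=5.0, upper=40.0)    # white layer thickness, um
    d3 = d_maximize(mrr, lower=1.0, target=30.0)   # material removal rate, mm^3/min
    return (d1 * d2 * d3) ** (1.0 / 3.0)           # geometric mean of individual desirabilities

# Score one candidate parameter setting from its predicted responses.
print(composite_desirability(sa=3.2, wl=12.0, mrr=18.0))
```

A parameter setting that maximizes the composite score over the modeled response surfaces is then taken as the multi-response optimum.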
Micromachines | null | PMC8878869 | 10.3390/mi13020312 | A W-Band Communication and Sensing Convergence System Enabled by Single OFDM Waveform | Convergence of communication and sensing is highly desirable for future wireless systems. This paper presents a converged millimeter-wave system using a single orthogonal frequency division multiplexing (OFDM) waveform and proposes a novel method, based on the zero-delay shift for the received echoes, to extend the sensing range beyond the cyclic prefix interval (CPI). Both simulation and proof-of-concept experiments evaluate the performance of the proposed system at 97 GHz. The experiment uses a W-band heterodyne structure to transmit/receive an OFDM waveform featuring 3.9 GHz bandwidth with quadrature amplitude modulation (16-QAM). The proposed approach successfully achieves a range resolution of 0.042 m and a speed resolution of 0.79 m/s with an extended range, which agree well with the simulation. Meanwhile, based on the same OFDM waveform, it also achieves a bit-error-rate (BER) 10−2, below the forward error-correction threshold. Our proposed system is expected to be a significant step forward for future wireless convergence applications. | 1.1. Related WorksThe OFDM waveform for sensing can be processed either by the conventional correlation-based approach [24,25], or by OFDM symbol-based processing [26]. Correlation-based sensing is usually performed by cross-correlation in the delay and Doppler domains between the transmitted and received pulses, and different schemes have been proposed to improve sensing performance. For example, a good approximation of the transmitted signal is generated at the receiver for removing clutter in the correlation-based target detection [15]. Work in [25] proposes to use the information of data symbols for ambiguity suppression, and circular correlation for range extension up to an OFDM symbol duration. Different correlation-based OFDM radar receiver schemes have been compared in [27], in terms of complexity, signal-to-interference-plus-noise-ratio, and robustness against ground clutter.Alternatively, similar to OFDM-based communication, OFDM-based sensing can also use IFFT/FFT operations to extract range and speed information. Based on this approach, a 77 GHz OFDM-based sensing system with a bandwidth of 200 MHz demonstrated a sensing resolution of 0.75 m with the maximum range of 150 m [28]. Another OFDM-based radar at 77 GHz used a stepped carrier approach to achieve a sensing resolution of 0.146 m with a bandwidth of 1.024 GHz, while the maximum range is 60 m [29]. Moreover, the authors implemented OFDM-based radar processing for automotive scenario by using a relatively longer interval of 128 ms to achieve speed resolution of 0.22 m/s, while the range resolution was 1.87 m for a bandwidth of 80 MHz at 5.2 GHz [30].These two sensing processing approaches were employed in the development of OFDM-based radars, while from the viewpoint of converging OFDM-based communication and sensing, OFDM symbol-based sensing processing is more attractive, provided that a sensing receiver is synchronized with the transmitter and the transmitted data are readily available for sensing processing. Some interesting research has been done on OFDM-based convergence in the microwave band. 
Using OFDM waveforms designed for 3GPP-LTE and 5G-NR at 2.4 GHz with a bandwidth of 98.28 MHz, OFDM-based sensing achieves a sensing resolution of 1.5 m and a maximum range of 350 m, and an algorithm for self-interference cancellation is applied in the full-duplex mode [31]. The authors in [32] provide measurement results for indoor mapping using a 28 GHz carrier frequency for 5G-NR with a bandwidth of 400 MHz and achieve a sensing resolution of 0.4 m. Another work [33] shows results from a mmWave demonstration testbed for joint sensing and communication; measurements were performed at 26 GHz with a bandwidth of 10 MHz to identify the angular locations of different targets using a beamforming technique. The work in [34] also presents a range resolution of 1.61 m and a maximum range of 206 m within a 93 MHz bandwidth in the 24 GHz band. In addition, the authors in [35] provide a parameter selection criterion for joint OFDM radar and communication systems by considering vehicular communication scenarios, including the CPI, subcarrier spacing, and coherence time of the channel. | [] | []
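The W-band record above contrasts correlation-based processing with OFDM symbol-based processing that uses IFFT/FFT operations to extract range and speed. One common symbol-domain formulation (an assumption for this sketch, not necessarily the exact processing used in the paper) divides the received subcarrier/symbol grid element-wise by the known transmitted symbols, then takes an IFFT across subcarriers for range and an FFT across symbols for Doppler. A minimal NumPy sketch with a synthetic single-target channel is shown below; the grid sizes and target bins are arbitrary.

```python
import numpy as np

# Illustrative dimensions: N subcarriers per OFDM symbol, M symbols per frame.
N, M = 256, 64
rng = np.random.default_rng(0)

# Transmitted QPSK-like unit-power symbols, known at the sensing receiver.
tx = (rng.choice([-1, 1], (N, M)) + 1j * rng.choice([-1, 1], (N, M))) / np.sqrt(2)

# Toy single-target echo: a delay adds a linear phase across subcarriers,
# a Doppler shift adds a linear phase across symbols (no noise model here).
delay_bin, doppler_bin = 20, 5
k = np.arange(N)[:, None]
m = np.arange(M)[None, :]
rx = tx * np.exp(-2j * np.pi * k * delay_bin / N) * np.exp(2j * np.pi * m * doppler_bin / M)

# Symbol-based processing: remove the data by element-wise division,
# then IFFT over subcarriers (range) and FFT over symbols (Doppler).
quotient = rx / tx
range_doppler = np.fft.fft(np.fft.ifft(quotient, axis=0), axis=1)

peak = np.unravel_index(np.argmax(np.abs(range_doppler)), range_doppler.shape)
print("estimated (range bin, Doppler bin):", peak)  # expect (20, 5)
```

Because the transmitted 16-QAM payload is already available at a synchronized receiver, this style of processing reuses the communication waveform for sensing without a separate radar pulse.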
Journal of Imaging | 35200733 | PMC8879196 | 10.3390/jimaging8020031 | Digitization of Handwritten Chess Scoresheets with a BiLSTM Network | During an Over-the-Board (OTB) chess event, all players are required to record their moves strictly by hand, and later the event organizers are required to digitize these sheets for official records. This is a very time-consuming process, and in this paper we present an alternate workflow of digitizing scoresheets using a BiLSTM network. Starting with a pretrained network for standard Latin handwriting recognition, we imposed chess-specific restrictions and trained with our Handwritten Chess Scoresheet (HCS) dataset. We developed two post-processing strategies utilizing the facts that we have two copies of each scoresheet (both players are required to write the entire game), and we can easily check if a move is valid. The autonomous post-processing requires no human interaction and achieves a Move Recognition Accuracy (MRA) around 95%. The semi-autonomous approach, which requires requesting user input on unsettling cases, increases the MRA to around 99% while interrupting only on 4% moves. This is a major extension of the very first handwritten chess move recognition work reported by us in September 2021, and we believe this has the potential to revolutionize the scoresheet digitization process for the thousands of chess events that happen every day. | 1.2. Related WorksThis paper is an extended version of our formal approach presented in [5], and we found no other academic journal or conference publications for handwritten chess scoresheet recognition at this time of writing. There are some preliminary level works in the form of undergraduate or graduate thesis reports [6,7], but they are not quite well-presented or finished enough to be comparable with our approach. In addition, there are some works on typographical chess move reading from books or magazines (e.g., [8]) which propose ideas for issues like layout fixing and semantic correction, but do not address the problems that can arise from a handwritten scoresheet. Services for digitizing chess scoresheets such as Reine Chess [9] currently exist, but they require games to be recorded on their proprietary scoresheets with very specific formats; they cannot be applied to existing scoresheet formats, and would require tournaments to alter their structure, causing a variety of problems. Figure 2 shows sample scoresheets from Reine Chess and from a typical chess event, which demonstrates the differences and limitations of such scannable solutions in a practical scenario. Scoresheet-specific solutions also offer no solution to retroactively digitize scoresheets and cannot be easily applied to other documents.Digitizing chess scoresheets is essentially an offline handwriting recognition problem, and there are many different ways this problem can be approached. One approach, known as character spotting, works by finding the locations and classes of each individual component from a word image. This is a powerful technique but is better suited for more complicated scripts [10]. Since the chess moves are recorded using a fraction of the Latin alphabet, a segmentation-free whole word recognition, using tools like Recurrent Neural Networks (RNN) or Hidden Markov Models (HMM), can be considered more suitable for this problem. Our choice for this experiment is a convolutional BiLSTM network. 
BiLSTM or Bi-directional Long Short Term Memory is a variant of a Recurrent Neural Network (RNN), which has been proven to be extremely powerful in offline recognition in recent years. For example, Bruel et al. achieved a 0.6% Character Error Rate (CER) using a BiLSTM network for printed Latin text [11]. Shkarupa et al. achieved 78% word-level accuracy in classifying unconstrained Latin handwriting on the KNMP Chronicon Boemorum and Stanford CCCC datasets [12]. Dutta et al. were able to achieve a 12.61% Word Error Rate (WER) when recognizing unconstrained handwriting on the IAM dataset [13]. Ingle et al. proposed a line recognition based on neural networks without recurrent connections and achieved a comparable training accuracy with LSTM-based models while allowing better parallelism in training and inference [14]. They also presented methods for building large datasets for handwriting text recognition models. In contrast, Chammas and Mokbel used an RNN to recognize historical text documents demonstrating how to work with smaller datasets [15]. Sudholt et al. presented a Convolutional Neural Network (CNN), called Pyramidal Histogram of Characters or PHOCNet, to deploy a query based word spotting algorithm from handwritten Latin documents [16]. Scheidl et al. demonstrated the benefit of training an RNN with the Connectionist Temporal Classification (CTC) loss function using the IAM and Bentham HTR datasets [17].Pattern recognition systems are never error-free and often researchers propose systems with human intervention for assistance with certain predictions [18,19]. We also used such a strategy that we call semi-autonomous post-processing, which looks for manual help for unresolved cases. Different segmented text recognition approaches for problems like bank check recognition, signature verification, tabular or form based text identification are also relevant to our chess move recognition problem. One notable demonstration for recognizing segmented handwritten text was presented by Su et al. using two RNN classifiers with Histogram of Oriented Gradient (HOG) and traditional geometric features to obtain 93.32% accuracy [20]. | [
"28055850"
] | [
{
"pmid": "28055850",
"title": "An End-to-End Trainable Neural Network for Image-Based Sequence Recognition and Its Application to Scene Text Recognition.",
"abstract": "Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it."
}
] |
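The chess-scoresheet record above, and the CRNN abstract cited with it, describe recognizing a cropped handwritten move with a convolutional feature extractor, a bidirectional LSTM over the width dimension, and CTC training. The PyTorch sketch below shows the general shape of such a model; the layer sizes, the roughly 40-symbol move vocabulary, and the input dimensions are assumptions for illustration and do not reproduce the authors' network.

```python
import torch
import torch.nn as nn

class MoveRecognizer(nn.Module):
    """Toy CNN + BiLSTM + CTC model for a single handwritten chess-move image."""
    def __init__(self, num_classes=41):  # ~40 move symbols + 1 CTC blank (assumed vocabulary)
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.LSTM(input_size=64 * 8, hidden_size=128,
                           num_layers=2, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 128, num_classes)

    def forward(self, x):                      # x: (batch, 1, 32, W) grayscale move crops
        f = self.cnn(x)                        # (batch, 64, 8, W/4)
        f = f.permute(0, 3, 1, 2).flatten(2)   # (batch, W/4, 64*8): a sequence over image width
        out, _ = self.rnn(f)
        return self.fc(out).log_softmax(-1)    # per-timestep class log-probabilities for CTC

model = MoveRecognizer()
images = torch.randn(4, 1, 32, 128)            # dummy batch of move images
log_probs = model(images).permute(1, 0, 2)     # CTCLoss expects (T, batch, classes)
targets = torch.randint(1, 41, (4, 6))         # dummy label sequences (e.g., tokens of "Nf3+")
ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), log_probs.size(0), dtype=torch.long),
           target_lengths=torch.full((4,), 6, dtype=torch.long))
loss.backward()
```

Post-processing such as move-validity checks or cross-checking the two scoresheet copies would sit on top of the decoded CTC output rather than inside the network.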
Micromachines | null | PMC8879659 | 10.3390/mi13020230 | RGB-D Visual SLAM Based on Yolov4-Tiny in Indoor Dynamic Environment | For a SLAM system operating in a dynamic indoor environment, its position estimation accuracy and visual odometer stability could be reduced because the system can be easily affected by moving obstacles. In this paper, a visual SLAM algorithm based on the Yolov4-Tiny network is proposed. Meanwhile, a dynamic feature point elimination strategy based on the traditional ORBSLAM is proposed. Besides this, to obtain semantic information, object detection is carried out when the feature points of the image are extracted. In addition, the epipolar geometry algorithm and the LK optical flow method are employed to detect dynamic objects. The dynamic feature points are removed in the tracking thread, and only the static feature points are used to estimate the position of the camera. The proposed method is evaluated on the TUM dataset. The experimental results show that, compared with ORB-SLAM2, our algorithm improves the camera position estimation accuracy by 93.35% in a highly dynamic environment. Additionally, the average time needed by our algorithm to process an image frame in the tracking thread is 21.49 ms, achieving real-time performance. | 2. Related Work2.1. Dynamic SLAM Based on Geometric MethodSun et al. [4] used the method of determining the difference between adjacent frames to detect moving targets, but this method has poor real-time performance. Wang et al. [5] proposed an indoor moving target detection scheme. Firstly, the matched outer points in adjacent frames are filtered through epipolar geometry, and then the clustering information of the depth map provided by the rgb-d camera is fused to identify independent moving targets in the scene. However, the accuracy of the algorithm depends on the pose transformation matrix between adjacent frames. In highly dynamic scenes, the error of the algorithm is large. Lin et al. [6] proposed a method to detect moving objects in a scene using depth information and visual ranging. By fusing the detected outer point information with the depth information of the visual sensor, the position of the moving target in the scene can be easily obtained. However, due to the uncertainty of depth information and the calculation error of the transformation matrix between adjacent frames, the accuracy of target detection and segmentation is low.The above methods are based on the same principle: the moving object part in the image is regarded as an outlier, which is excluded in the process of estimating attitude, meaning this only depends on the static part of the scene. As a result, the accuracy of current estimation methods depends on the proportion of static feature points in the scene. If there are too many dense dynamic objects in the scene, the reliability of pose estimation will be seriously affected, and the accuracy of map construction will be affected.2.2. SLAM Based on Deep Learning or Semantic InformationIn recent years, with the development of deep learning, deep learning technology is being combined with SLAM algorithms to deal with dynamic obstacles in an indoor dynamic environment. Chao Yu et al. [7] proposed DS-SLAM based on the ORB-SLAM2 framework, which uses the SegNet network to obtain semantic information in the scene with independent threads. Then, the inter-frame transformation matrix is estimated through the RANSAC algorithm, and the pole line geometry is adopted to judge feature point states. 
When the number of dynamic feature points on an object is greater than the threshold, the object is considered dynamic, and all of its feature points are filtered out. This algorithm performs well on the TUM dataset. However, since the fundamental matrix used in the epipolar constraint is calculated based on all feature points, the estimated fundamental matrix will suffer from serious deviations when there are too many abnormal feature points in the image. Similarly, Berta Bescos et al. [8] proposed a DynaSLAM algorithm based on ORB-SLAM2, which filters out dynamic feature points in scenarios by combining geometry and deep learning. The algorithm achieves excellent results on the TUM dataset, but Mask R-CNN cannot be used in real time, which affects the application of this algorithm in a real environment. DDL-SLAM [9] detects dynamic objects with semantic masks obtained by DUNet and multi-view geometry, and then reconstructs the background that is obscured by dynamic objects with the strategy of image inpainting. Given that the computation of the masks of dynamic objects is a process taking place at the pixel level, this method also cannot achieve real-time performance. Y. Fan et al. [10] proposed a semantic SLAM system using BlitzNet to obtain the masks and bounding boxes of dynamic objects in images. The images can be quickly divided into environment regions and dynamic regions, and the depth-stable matching points in the environment are used to construct epipolar constraints to locate the static matching points in the dynamic regions. However, the method still has two problems; one is the real-time problem, and the other is that the method cannot handle unknown objects. Han and Xi [11] proposed PSPNet-SLAM (Pyramid Scene Parsing Network–SLAM) to improve ORB-SLAM2, in which PSPNet and optical flow are used to detect dynamic characteristics. The features extracted from labeled dynamic objects and the features with large optical flow values are filtered out, and the rest are used for tracking. This method achieves high positioning accuracy. Zhang et al. [12] used Yolo running in an independent thread to acquire semantic information, assuming that features extracted from moving objects would be unstable and need to be filtered out. Li et al. [13] also used Yolo to detect dynamic features, and they proposed a novel sliding window compensation algorithm to reduce the detection errors of Yolo, thus providing a new means of detecting dynamic objects. | [] | []
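The SLAM record above rejects dynamic feature points by checking matched points against the epipolar constraint derived from a RANSAC-estimated fundamental matrix (combined in the paper with LK optical flow and Yolov4-Tiny detections). The OpenCV sketch below illustrates only that geometric check in isolation; the pixel threshold and the way matches are obtained are assumptions for the example, not the cited pipeline.

```python
import cv2
import numpy as np

def dynamic_point_mask(pts_prev, pts_curr, threshold=1.0):
    """Flag matched points whose distance to their epipolar line exceeds `threshold` pixels.

    pts_prev, pts_curr: (N, 2) float32 arrays of matched keypoint coordinates in the
    previous and current frame (assumed already matched, e.g., ORB descriptors + LK flow).
    """
    F, _ = cv2.findFundamentalMat(pts_prev, pts_curr, cv2.FM_RANSAC, 1.0, 0.99)
    if F is None:
        return np.zeros(len(pts_curr), dtype=bool)  # degenerate geometry: flag nothing
    # Epipolar lines in the current image corresponding to previous-frame points.
    lines = cv2.computeCorrespondEpilines(pts_prev.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    a, b, c = lines[:, 0], lines[:, 1], lines[:, 2]
    x, y = pts_curr[:, 0], pts_curr[:, 1]
    dist = np.abs(a * x + b * y + c) / np.sqrt(a ** 2 + b ** 2)  # point-to-line distance
    return dist > threshold  # True -> treat the point as dynamic and drop it from tracking

# Dummy usage with random matches (real use: ORB keypoints tracked across RGB-D frames).
prev = (np.random.rand(100, 2) * 640).astype(np.float32)
curr = prev + np.random.randn(100, 2).astype(np.float32)
print(dynamic_point_mask(prev, curr).sum(), "points flagged as dynamic")
```

Combining this check with semantic boxes from a detector lets a system discard only the points that are both inside a "movable object" box and inconsistent with the epipolar geometry.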
Micromachines | null | PMC8880441 | 10.3390/mi13020216 | Modeling of Soft Pneumatic Actuators with Different Orientation Angles Using Echo State Networks for Irregular Time Series Data | Modeling of soft robotics systems proves to be an extremely difficult task, due to the large deformation of the soft materials used to make such robots. Reliable and accurate models are necessary for the control task of these soft robots. In this paper, a data-driven approach using machine learning is presented to model the kinematics of Soft Pneumatic Actuators (SPAs). An Echo State Network (ESN) architecture is used to predict the SPA’s tip position in 3 axes. Initially, data from actual 3D printed SPAs is obtained to build a training dataset for the network. Irregular-intervals pressure inputs are used to drive the SPA in different actuation sequences. The network is then iteratively trained and optimized. The demonstrated method is shown to successfully model the complex non-linear behavior of the SPA, using only the control input without any feedback sensory data as additional input to the network. In addition, the ability of the network to estimate the kinematics of SPAs with different orientation angles θ is achieved. The ESN is compared to a Long Short-Term Memory (LSTM) network that is trained on the interpolated experimental data. Both networks are then tested on Finite Element Analysis (FEA) data for other θ angle SPAs not included in the training data. This methodology could offer a general approach to modeling SPAs with varying design parameters. | 2. Related WorkSeveral approaches have been investigated to achieve modeling and control for soft robots. Some methods are based on mathematical analysis approximation of the soft structure such as the piecewise constant curvature model approximation [3], the geometrical exact approach such as the Cosserat rod theory [4], and the variable-strain method that generalizes the piecewise constant-strain approach [5]. Other methods rely on data-driven techniques such as neural networks and reinforcement learning [6].However, the elastic behavior of the soft material leads to large deformation in the body of soft robots. Hence, it becomes extremely difficult to reach a general model for such robotic systems. Many attempts have been made in this area to model the deformation of soft actuators. Finite-element methods have been used by Moseley et al. to predict the displacement and force of a soft pneumatic actuator (SPA) [7]. Several attempts also investigated the modeling of a fiber-reinforced soft actuator using continuum models [8,9,10]. Another model that’s widely used is the Euler-Cantilever-Beam model, which assumes that the soft actuator behaves like a cantilever beam [11,12,13]. Other approaches tried to split the actuator into several small segments and study the bending of each segment separately, then add them together to estimate the total bending of the whole actuator [14].Furthermore, several groups have been investigating data-driven approaches. Most notably, the use of different machine learning and deep learning models is showing promising results. These models rely on training neural networks that are capable of predicting the deformation of the soft material and the position of the actuator tip or end-effector. Some proposed models are linear regression models [15]. 
One approach used simulation data from a Finite Element Method (FEM) hyperelastic material model to train an Artificial Neural Network (ANN) to predict the bending angles of SPAs with variable geometrical parameters [16]. However, the most commonly used networks to model the time series data obtained from the actuator are Recurrent Neural Networks (RNNs). Thuruthel et al. embedded soft sensors into the actuator to obtain bending data and used it to train an RNN that can predict the position of the actuator’s tip and the force applied by it [17]. Another group used a Bidirectional Long Short-Term Memory (BiLSTM) network, a type of RNN, to estimate the position of a hydraulic soft hybrid sensor-actuator [18].Despite offering reasonable accuracy, RNNs still struggle to fully map the input-output relationship of the soft actuator. ESNs provide a possible solution to this problem, due to their ability to model the non-linear dynamics of complex systems. They depend on the concept of Dynamic Reservoir (DR) computing and they tend to simulate the actual soft robotic system more accurately, due to their closeness to the real system. Some attempts have been made to use ESN to model and control complex dynamic systems, such as the CoroBot’s Arm [19]. Sakurai et al. also used an ESN to model a McKibben Pneumatic Artificial Muscle (PAM) [20].In this article, an ESN is used to model an SPA using irregular data, and to predict the position of the actuator’s tip in 3 dimensions (Figure 1). In the next section, an overview of the SPA used in the experiment is presented, including its design features. The concept behind ESNs is also discussed. In the subsequent sections, the experiment conducted is demonstrated in detail with the ESN training, and the results attained, showing the performance of the network in predicting the SPA tip position in 3D. In addition, a Long Short-Term Memory (LSTM) network is trained on the interpolated data and its performance is compared against the ESN’s. Finally, both trained networks are tested for their ability to generalize using data obtained from Finite Element Analysis (FEA) simulation. | [
"26017446",
"23373976",
"29412079",
"27625912",
"34103571",
"22116876",
"9377276"
] | [
{
"pmid": "26017446",
"title": "Design, fabrication and control of soft robots.",
"abstract": "Conventionally, engineers have employed rigid materials to fabricate precise, predictable robotic systems, which are easily modelled as rigid members connected at discrete joints. Natural systems, however, often match or exceed the performance of robotic systems with deformable bodies. Cephalopods, for example, achieve amazing feats of manipulation and locomotion without a skeleton; even vertebrates such as humans achieve dynamic gaits by storing elastic energy in their compliant bones and soft tissues. Inspired by nature, engineers have begun to explore the design and control of soft-bodied robots composed of compliant materials. This Review discusses recent developments in the emerging field of soft robotics."
},
{
"pmid": "23373976",
"title": "Growing and evolving soft robots.",
"abstract": "Completely soft and flexible robots offer to revolutionize fields ranging from search and rescue to endoscopic surgery. One of the outstanding challenges in this burgeoning field is the chicken-and-egg problem of body-brain design: Development of locomotion requires the preexistence of a locomotion-capable body, and development of a location-capable body requires the preexistence of a locomotive gait. This problem is compounded by the high degree of coupling between the material properties of a soft body (such as stiffness or damping coefficients) and the effectiveness of a gait. This article synthesizes four years of research into soft robotics, in particular describing three approaches to the co-discovery of soft robot morphology and control. In the first, muscle placement and firing patterns are coevolved for a fixed body shape with fixed material properties. In the second, the material properties of a simulated soft body coevolve alongside locomotive gaits, with body shape and muscle placement fixed. In the third, a developmental encoding is used to scalably grow elaborate soft body shapes from a small seed structure. Considerations of the simulation time and the challenges of physically implementing soft robots in the real world are discussed."
},
{
"pmid": "29412079",
"title": "Modeling and Experimental Evaluation of Bending Behavior of Soft Pneumatic Actuators Made of Discrete Actuation Chambers.",
"abstract": "In this article, we have established an analytical model to estimate the quasi-static bending displacement (i.e., angle) of the pneumatic actuators made of two different elastomeric silicones (Elastosil M4601 with a bulk modulus of elasticity of 262 kPa and Translucent Soft silicone with a bulk modulus of elasticity of 48 kPa-both experimentally determined) and of discrete chambers, partially separated from each other with a gap in between the chambers to increase the magnitude of their bending angle. The numerical bending angle results from the proposed gray-box model, and the corresponding experimental results match well that the model is accurate enough to predict the bending behavior of this class of pneumatic soft actuators. Further, by using the experimental bending angle results and blocking force results, the effective modulus of elasticity of the actuators is estimated from a blocking force model. The numerical and experimental results presented show that the bending angle and blocking force models are valid for this class of pneumatic actuators. Another contribution of this study is to incorporate a bistable flexible thin metal typified by a tape measure into the topology of the actuators to prevent the deflection of the actuators under their own weight when operating in the vertical plane."
},
{
"pmid": "27625912",
"title": "Autonomous Soft Robotic Fish Capable of Escape Maneuvers Using Fluidic Elastomer Actuators.",
"abstract": "In this work we describe an autonomous soft-bodied robot that is both self-contained and capable of rapid, continuum-body motion. We detail the design, modeling, fabrication, and control of the soft fish, focusing on enabling the robot to perform rapid escape responses. The robot employs a compliant body with embedded actuators emulating the slender anatomical form of a fish. In addition, the robot has a novel fluidic actuation system that drives body motion and has all the subsystems of a traditional robot onboard: power, actuation, processing, and control. At the core of the fish's soft body is an array of fluidic elastomer actuators. We design the fish to emulate escape responses in addition to forward swimming because such maneuvers require rapid body accelerations and continuum-body motion. These maneuvers showcase the performance capabilities of this self-contained robot. The kinematics and controllability of the robot during simulated escape response maneuvers are analyzed and compared with studies on biological fish. We show that during escape responses, the soft-bodied robot has similar input-output relationships to those observed in biological fish. The major implication of this work is that we show soft robots can be both self-contained and capable of rapid body motion."
},
{
"pmid": "34103571",
"title": "Modelling and implementation of soft bio-mimetic turtle using echo state network and soft pneumatic actuators.",
"abstract": "Advances of soft robotics enabled better mimicking of biological creatures and closer realization of animals' motion in the robotics field. The biological creature's movement has morphology and flexibility that is problematic deportation to a bio-inspired robot. This paper aims to study the ability to mimic turtle motion using a soft pneumatic actuator (SPA) as a turtle flipper limb. SPA's behavior is simulated using finite element analysis to design turtle flipper at 22 different geometrical configurations, and the simulations are conducted on a large pressure range (0.11-0.4 Mpa). The simulation results are validated using vision feedback with respect to varying the air pillow orientation angle. Consequently, four SPAs with different inclination angles are selected to build a bio-mimetic turtle, which is tested at two different driving configurations. The nonlinear dynamics of soft actuators, which is challenging to model the motion using traditional modeling techniques affect the turtle's motion. Conclusively, according to kinematics behavior, the turtle motion path is modeled using the Echo State Network (ESN) method, one of the reservoir computing techniques. The ESN models the turtle path with respect to the actuators' rotation motion angle with maximum root-mean-square error of [Formula: see text]. The turtle is designed to enhance the robot interaction with living creatures by mimicking their limbs' flexibility and the way of their motion."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
}
] |
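The soft-pneumatic-actuator record above models SPA tip position with an Echo State Network driven only by the pressure input. A standard ESN formulation, not necessarily the exact variant used in the paper, keeps a fixed random reservoir, updates it with a leaky-integrator rule, and trains only a linear readout by ridge regression. The NumPy sketch below shows that structure; the reservoir size, leak rate, spectral radius, and the synthetic pressure-to-position data are illustrative assumptions rather than the parameters or measurements used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res, n_out = 1, 200, 3          # pressure input -> 3D tip position (assumed sizes)
leak, rho, ridge = 0.3, 0.9, 1e-6

# Fixed random input and reservoir weights; rescale W to the desired spectral radius.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in + 1))          # +1 column for a bias term
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u_seq):
    """Collect extended reservoir states for an input sequence u_seq of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        pre = W_in @ np.concatenate(([1.0], u)) + W @ x
        x = (1 - leak) * x + leak * np.tanh(pre)           # leaky-integrator state update
        states.append(np.concatenate(([1.0], u, x)))       # bias + input + reservoir state
    return np.array(states)

# Toy training data: irregular pressure steps and a fake smoothed 3D tip response.
T = 2000
u_train = np.repeat(rng.uniform(0, 1, T // 20), 20)[:, None]
y_train = np.column_stack([np.convolve(u_train[:, 0], np.ones(k) / k, mode="same")
                           for k in (5, 15, 30)])          # stand-in for (x, y, z) tip data

X = run_reservoir(u_train)
# Ridge-regression readout: W_out = Y^T X (X^T X + lambda I)^-1
W_out = y_train.T @ X @ np.linalg.inv(X.T @ X + ridge * np.eye(X.shape[1]))
y_pred = X @ W_out.T
print("training RMSE per axis:", np.sqrt(np.mean((y_pred - y_train) ** 2, axis=0)))
```

Only W_out is learned; the reservoir itself stays fixed, which is what makes ESN training much cheaper than backpropagating through an LSTM on the same irregular time series.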
Journal of Imaging | 35200744 | PMC8880448 | 10.3390/jimaging8020042 | A Soft Coprocessor Approach for Developing Image and Video Processing Applications on FPGAs | Developing Field Programmable Gate Array (FPGA)-based applications is typically a slow and multi-skilled task. Research in tools to support application development has gradually reached a higher level. This paper describes an approach which aims to further raise the level at which an application developer works in developing FPGA-based implementations of image and video processing applications. The starting concept is a system of streamed soft coprocessors. We present a set of soft coprocessors which implement some of the key abstractions of Image Algebra. Our soft coprocessors are designed for easy chaining, and allow users to describe their application as a dataflow graph. A prototype implementation of a development environment, called SCoPeS, is presented. An application can be modified even during execution without requiring re-synthesis. The paper concludes with performance and resource utilization results for different implementations of a sample algorithm. We conclude that the soft coprocessor approach has the potential to deliver better performance than the soft processor approach, and can improve programmability over dedicated HDL cores for domain-specific applications while achieving competitive real time performance and utilization. | 2. Background and Related Work2.1. Current Tools for Designing FPGA Custom Cores in a High-Level EnvironmentModern FPGAs are no longer thought of as arrays of gates, but as collections of larger-scale functional blocks, integrated using programmable logic. They are still programmable but are not restricted to programmable logic (PL), and sometimes come equipped with on-chip ARM processors or embedded GPUs. When implementing an image processing system on FPGAs, the design effort is a critical project requirement. Very large image processing systems are difficult to design efficiently and require very detailed hardware knowledge. To address this challenge, vendors have released their HLS tools to reduce the design time. The syntax of design description languages has moved up from VHDL/Verilog HDL to C/C++ level because of HLS tools such as Vivado HLS and Intel HLS compilers [19,20]. Applications are becoming more complex. System-on-chip solutions are achievable through the hybrid architecture of ARM+PL and the HLS design approach.There are also some HLS tools from academia, such as LegUp [21], CyberWorkBench [22], autoBridge [23] and LeFlow [24]. autoBridge is used specifically for floor planning and pipelining high-frequency designs on multi-die FPGAs. LeFlow is designed specifically for deep learning inference implementation. LegUp can generate a hybrid system of custom cores and soft processors; the other tools only generate custom cores. With currently available HLS tools, users need to rely on the vendor’s tools to integrate the RTL design into a whole system, which is a non-trivial task. After the HLS stage, there is generally no additional help for users to integrate their resulting system.2.2. Soft ProcessorsAs an alternative to the inflexible custom core approach, it has become popular to provide cores for simple programmable processors. These allow users to program in high-level languages. A soft processor is achieved by configuring FPGA hardware resources as a processor. Soft processors can reduce the design time through using a high-level language. 
They also reduce the hardware knowledge required to design a full system. However, single core performance of a soft processor is usually poor, since soft processors go through the standard fetch-execute cycle for each instruction, and they cannot run at as high a clock rate as normal hard-core processors. For example, Xilinx Microblaze usually runs under 400 MHz, while Intel and ARM processors can run at well over 1 GHz [25,26,27,28,29].When users program these soft processor systems, they do not normally have to think in terms of the hardware but at a relatively high-level, and potentially get decent performance. Unfortunately, there are no soft processors optimized directly for image processing from vendors such as Intel (Altera) and Xilinx. Two soft processors developed specifically for image processing are, for example, IPPro [30] and a RISC-V soft processor [31]. These processors require fewer resources than Nios II and Microblaze.2.3. Image Algebra and Pixel Level AbstractionsImage Algebra (IA) [32] is a mathematical theory concerned with the transformation and analysis of digital images at the whole image (rather than pixel) level. Its main goal is the establishment of a comprehensive and unifying theory of image transformations, image analysis and image understanding. Basic IA operations can be classified as: point operations, neighborhood operations and global operations.In point operations (P2P), the same operation is applied at every input pixel position using only pixels at that position. Operations can be binary or unary; they include relational (e.g., ‘>’, ‘=’), arithmetic (e.g., ‘+’, ‘×’), and logical (e.g., ‘and’, ‘or’) operations. Normally one output pixel is generated for each corresponding input pixel position.A neighborhood operation (N2P) is applied to each (potentially overlapping) region of an image. It is most common to use a 3 × 3 or 5 × 5 window. A new pixel value will be generated for each window position. The user specifies the matrix of weights for the window which is used in calculating the result value.A global operation is a reduction operation that is applied to the whole image and produces a scalar (R2S) or a vector (R2V). For example, global maximum will produce one scalar value, whereas histogram will produce a 256-element vector (for standard grey level images).2.4. FPGA-Based Image ProcessingIn embedded systems, FPGAs are powerful tools for accelerating image processing algorithms, especially for real-time embedded applications, where latency and power are important considerations. FPGAs can be embedded in a camera to directly provide pre-processed image streams. In this way, the sensor will provide an output data stream rather than merely a sequence of images [33]. FPGAs can achieve both data parallelism and task parallelism within many image processing tasks. Unfortunately, simply putting a PC-based algorithm onto an FPGA usually gives disappointing results [34]. In addition, many image processing algorithms have been optimized for scalar processors. 
Thus, it is usually necessary to optimize the algorithm specifically for an FPGA before implementing it. There have typically been three approaches to implementing an image processing algorithm/system on FPGAs: (1) custom hardware designed using Verilog HDL or VHDL and combined with the vendor's IPs; (2) high-level synthesis tools used to convert a C-based representation of the algorithm to hardware; (3) the algorithm mapped onto a network of soft processors. When users need to implement an algorithm on FPGAs using custom cores, they need to consider the memory mapping, architecture, and algorithmic optimizations. On the other hand, when users try to use soft processors to implement their complex algorithm, they will usually be limited by the poor single core performance on the one hand, and the resource utilization of a multi-core architecture on the other. Thus, balancing programmability, resource utilization and performance is a key challenge for implementing algorithms on FPGAs. 2.5. Summary: Currently, HLS tools are the key to rapidly implementing FPGA-based image processing algorithms or systems. HLS tools can even accept different input languages, such as C/C++, Java, Python and LabVIEW. Users need to use Xilinx Vivado or Intel Quartus Prime to perform the integration. This stage usually requires detailed hardware knowledge. In terms of the efficiency of implementing image processing algorithms and systems on FPGAs, custom cores have better performance than soft processors, but require users to have detailed hardware knowledge to design efficient accelerators. Soft processors keep the high-level programming model, but single core performance is poor. Users need to use multiple soft processors in order to meet the performance requirements. Figure 1 indicates informally the programmability (ease of use) vs. performance (throughput) of the different approaches. Our goal is to move a step closer to achieving both performance and programmability at the same time. For suitable applications, our soft coprocessor approach seeks to have performance approaching HLS products and HDL custom cores, even if it is not as programmable or as general purpose as soft processors. To address these challenges and problems, in this paper we present our approach—the soft coprocessor (SCP) approach. This aims to achieve performance closer to custom cores while providing users with a higher-level programming model than the current Vivado toolchain. | [
"34434872",
"33385957",
"34465705",
"34460491",
"34111009"
] | [
{
"pmid": "34434872",
"title": "Video Surveillance Processing Algorithms utilizing Artificial Intelligent (AI) for Unmanned Autonomous Vehicles (UAVs).",
"abstract": "With the advancement of science and technology, the combination of the unmanned aerial vehicle (UAV) and camera surveillance systems (CSS) is currently a promising solution for practical applications related to security and surveillance operations. However, one of the biggest risks and challenges for the UAV-CSS is analysis, process, and transmission data, especially, the limitations of computational capacity, storage and overloading the transmission bandwidth. Regard to conventional methods, almost the data collected from UAVs is processed and transmitted that cost huge energy. A certain amount of data is redundant and not necessary to be processed or transmitted. This paper proposes an efficient algorithm to optimize the transmission and reception of data in UAV-CSS systems, based on the platforms of artificial intelligence (AI) for data processing. The algorithm creates an initial background frame and update to the complete background which is sent to server. It splits the region of interest (moving objects) in the scene and then sends only the changes. This supports the CSS to reduce significantly either data storage or data transmission. In addition, the complexity of the systems could be significantly reduced. The main contributions of the algorithm can be listed as follows;-The developed solution can reduce data transmission significantly.-The solution can empower smart manufacturing via camera surveillance.-Simulation results have validated practical viability of this approach.The experimental method results show that reducing up to 80% of storage capacity and transmission data."
},
{
"pmid": "33385957",
"title": "Safety critical event prediction through unified analysis of driver and vehicle volatilities: Application of deep learning methods.",
"abstract": "Transportation safety is highly correlated with driving behavior, especially human error playing a key role in a large portion of crashes. Modern instrumentation and computational resources allow for the monitorization of driver, vehicle, and roadway/environment to extract leading indicators of crashes from multi-dimensional data streams. To quantify variations that are beyond normal in driver behavior and vehicle kinematics, the concept of volatility is applied. The study measures driver-vehicle volatilities using the naturalistic driving data. By integrating and fusing multiple real-time streams of data, i.e., driver distraction, vehicular movements and kinematics, and instability in driving, this study aims to predict occurrence of safety critical events and generate appropriate feedback to drivers and surrounding vehicles. The naturalistic driving data is used which contains 7566 normal driving events, and 1315 severe events (i.e., crash and near-crash), vehicle kinematics, and driver behavior collected from more than 3500 drivers. In order to capture the local dependency and volatility in time-series data 1D-Convolutional Neural Network (1D-CNN), Long Short-Term Memory (LSTM), and 1DCNN-LSTM are applied. Vehicle kinematics, driving volatility, and impaired driving (in terms of distraction) are used as the input parameters. The results reveal that the 1DCNN-LSTM model provides the best performance, with 95.45% accuracy and prediction of 73.4% of crashes with a precision of 95.67%. Additional features are extracted with the CNN layers and temporal dependency between observations is addressed, which helps the network learn driving patterns and volatile behavior. The model can be used to monitor driving behavior in real-time and provide warnings and alerts to drivers in low-level automated vehicles, reducing their crash risk."
},
{
"pmid": "34465705",
"title": "FPGA-Based Processor Acceleration for Image Processing Applications.",
"abstract": "FPGA-based embedded image processing systems offer considerable computing resources but present programming challenges when compared to software systems. The paper describes an approach based on an FPGA-based soft processor called Image Processing Processor (IPPro) which can operate up to 337 MHz on a high-end Xilinx FPGA family and gives details of the dataflow-based programming environment. The approach is demonstrated for a k-means clustering operation and a traffic sign recognition application, both of which have been prototyped on an Avnet Zedboard that has Xilinx Zynq-7000 system-on-chip (SoC). A number of parallel dataflow mapping options were explored giving a speed-up of 8 times for the k-means clustering using 16 IPPro cores, and a speed-up of 9.6 times for the morphology filter operation of the traffic sign recognition using 16 IPPro cores compared to their equivalent ARM-based software implementations. We show that for k-means clustering, the 16 IPPro cores implementation is 57, 28 and 1.7 times more power efficient (fps/W) than ARM Cortex-A7 CPU, nVIDIA GeForce GTX980 GPU and ARM Mali-T628 embedded GPU respectively."
},
{
"pmid": "34460491",
"title": "Image Processing Using FPGAs.",
"abstract": "Nine articles have been published in this Special Issue on image processing using field programmable gate arrays (FPGAs). The papers address a diverse range of topics relating to the application of FPGA technology to accelerate image processing tasks. The range includes: Custom processor design to reduce the programming burden; memory management for full frames, line buffers, and image border management; image segmentation through background modelling, online K-means clustering, and generalised Laplacian of Gaussian filtering; connected components analysis; and visually lossless image compression."
},
{
"pmid": "34111009",
"title": "A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects.",
"abstract": "A convolutional neural network (CNN) is one of the most significant networks in the deep learning field. Since CNN made impressive achievements in many areas, including but not limited to computer vision and natural language processing, it attracted much attention from both industry and academia in the past few years. The existing reviews mainly focus on CNN's applications in different scenarios without considering CNN from a general perspective, and some novel ideas proposed recently are not covered. In this review, we aim to provide some novel ideas and prospects in this fast-growing field. Besides, not only 2-D convolution but also 1-D and multidimensional ones are involved. First, this review introduces the history of CNN. Second, we provide an overview of various convolutions. Third, some classic and advanced CNN models are introduced; especially those key points making them reach state-of-the-art results. Fourth, through experimental analysis, we draw some conclusions and provide several rules of thumb for functions and hyperparameter selection. Fifth, the applications of 1-D, 2-D, and multidimensional convolution are covered. Finally, some open issues and promising directions for CNN are discussed as guidelines for future work."
}
] |
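The soft-coprocessor record above builds its coprocessors around three Image Algebra abstractions: point operations (P2P), neighborhood operations (N2P), and global reductions (R2S/R2V). The NumPy/SciPy sketch below shows software equivalents of those three classes chained as a small dataflow; the particular operations and the 3x3 kernel are arbitrary examples, not the SCoPeS coprocessor set.

```python
import numpy as np
from scipy.ndimage import convolve

image = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in for a camera frame

# P2P: the same unary/binary operation applied independently at every pixel position.
thresholded = (image > 128).astype(np.uint8) * 255

# N2P: a weighted 3x3 neighborhood operation (here a simple smoothing kernel).
kernel = np.ones((3, 3), dtype=np.float32) / 9.0
smoothed = convolve(image.astype(np.float32), kernel, mode="nearest")

# R2S and R2V: global reductions producing a scalar and a 256-bin vector.
global_max = smoothed.max()
histogram = np.bincount(image.ravel(), minlength=256)

# Chaining the stages mirrors the streamed dataflow-graph idea: each stage consumes
# the previous stage's pixel stream and produces a new one.
result = (smoothed > 0.5 * global_max).astype(np.uint8) * 255
print(global_max, histogram[:5], result.mean())
```

In the hardware setting each of these stages would be a streamed soft coprocessor on the FPGA, with the chaining expressed by the user as a dataflow graph rather than as nested array loops.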
Journal of Personalized Medicine | null | PMC8880720 | 10.3390/jpm12020309 | A Multi-Agent Deep Reinforcement Learning Approach for Enhancement of COVID-19 CT Image Segmentation | Currently, most mask extraction techniques are based on convolutional neural networks (CNNs). However, there are still numerous problems that mask extraction techniques need to solve. Thus, the most advanced methods to deploy artificial intelligence (AI) techniques are necessary. The use of cooperative agents in mask extraction increases the efficiency of automatic image segmentation. Hence, we introduce a new mask extraction method that is based on multi-agent deep reinforcement learning (DRL) to minimize the long-term manual mask extraction and to enhance medical image segmentation frameworks. A DRL-based method is introduced to deal with mask extraction issues. This new method utilizes a modified version of the Deep Q-Network to enable the mask detector to select masks from the image studied. Based on COVID-19 computed tomography (CT) images, we used DRL mask extraction-based techniques to extract visual features of COVID-19 infected areas and provide an accurate clinical diagnosis while optimizing the pathogenic diagnostic test and saving time. We collected CT images of different cases (normal chest CT, pneumonia, typical viral cases, and cases of COVID-19). Experimental validation achieved a precision of 97.12% with a Dice of 80.81%, a sensitivity of 79.97%, a specificity of 99.48%, a precision of 85.21%, an F1 score of 83.01%, a structural metric of 84.38%, and a mean absolute error of 0.86%. Additionally, the results of the visual segmentation clearly reflected the ground truth. The results reveal the proof of principle for using DRL to extract CT masks for an effective diagnosis of COVID-19. | 2. Related WorksApproaches in medical image segmentation can generally be categorized into unsupervised or semi-supervised methods. These approaches are either trained on a subset of the target dataset or pre-trained on datasets that are comparable to the target dataset. Recently, several studies have been based on deep learning [16,17,18]. However, in a medical environment, such models may become semantically biased, resulting in severe performance loss. Labeling images, especially medical images, requires time and high attention to diagnose diseases. Practically, the remaining medical image segmentation techniques are based on neural networks. The last few years have brought a rapid advance in deep learning techniques for semantic segmentation with promising results. The first milestone of deep learning was the FCN by Long et al. [19]. In numerous well-known architectures, FCN was used to cast the fully convolutional layers: e.g., AlexNet [20], VGG-16 [16], GoogleLeNet [17], ResNet [18]. FCN provides a hopping architecture that allows the fine layers of the network to visualize and analyze information from the coarse layers. This increases the capacity of the model to handle a global context while improving the quality of the semantic segmentation. However, with an increasing number of layers, the receptive field of FCN filters increases linearly. This influences local predictions when integrating global knowledge [21]. Hence, later researchers improved the capacity of their image models to process the global context with different approaches.Several works have studied the division of an image into semantically meaningful regions as well as the clustering of the found regions. 
During the training phase, [22] processed noisy images while limiting the number of annotated samples (image-level annotation, not pixel-level). Label consistency and inter-label correlation were examined by training a CRF on weakly labeled images [23]. Multi-task and multiple-instance learning were employed by [24] to semantically segment weakly annotated images using geometric context estimation. Xu et al. [25] deployed weak annotations to reduce label complexity and decrease labeling cost. A neural network and iterative graphical optimization were combined by Rajchl et al. [26] to approximate pixel-wise object segmentation. FCN has been widely used either independently or in combination with other methods [27,28,29,30,31]. In some of these methods, FCNs struggle to capture local dependencies, so new approaches are still needed for semantic segmentation models.
In the context of segmentation of infections and lesions, very promising deep learning systems have been suggested for medical image analysis [32,33,34,35,36,37]. Most of this work is dedicated to segmenting the lungs and classifying regions of infection to support clinical evaluation and diagnosis [38]. In the case of COVID-19, segmentation delineates regions of interest in lung images for further evaluation. Wang et al. suggested a weakly supervised deep learning software system that uses CT images for the detection of COVID-19 [39]. Gozes et al. presented a system that studies 2D slices to analyze 3D volumes [40]. Wang et al. presented a rapid diagnostic system for COVID-19 based on segmentation for the localization of lung lesions followed by classification of the lesions [41]. Li et al. introduced COVNet to obtain visual information from volumetric chest CT scans and distinguish community-acquired pneumonia (CAP) from COVID-19 [42]. Chen et al. proposed the use of the U-Net++ network for extraction of COVID-19-affected areas and detection of suspicious lesions on CT images [43]. To extract the relevant features, Akram et al. [44] used discrete wavelet transform and extended segmentation-based fractal texture analysis methods, followed by an entropy-controlled genetic algorithm to select the best features from each feature type; the selected features were then combined serially.
Khan et al. [45] began with contrast enhancement using a combination of top-hat and Wiener filters. Two pretrained deep learning models (AlexNet and VGG16) were fine-tuned based on the target classes (COVID-19 and healthy). A parallel fusion approach (parallel positive correlation) was used to extract and fuse features, and the entropy-controlled firefly optimization method was used to select optimal features. Machine learning classifiers such as the multiclass support vector machine (MC-SVM) were used to classify the selected features, achieving 98% accuracy on the Radiopaedia database. Rehman et al. [46] proposed a framework for the detection of 15 different types of chest diseases, including COVID-19, using a chest X-ray modality. A two-stage classification was performed: the first stage used a CNN architecture with a softmax classifier, and the second used transfer learning to extract deep features from the CNN's fully connected layer.
These deep features were then fed into traditional machine learning (ML) classification methods, which improved the accuracy of COVID-19 detection and increased predictability for other chest diseases.
Reinforcement learning approaches have recently attracted more interest due to the effective combination of RL and deep networks by Mnih et al. [47]. RL has been used extensively in numerous computer vision problems, e.g., video content summarization [48], tracking [49], object localization [50], and object segmentation [51]. Numerous earlier papers highlighted the importance of reducing human effort through RL-driven interactive or automatic annotation of images or videos. For example, the GrabCut algorithm was improved by generating seed points with a Deep Q-Network (DQN) agent [52]. Acuna et al. [53] evaluated their approach in a human-in-the-loop setting. Rudovic et al. [54] integrated RL into an active learning context, so that the agent could decide whether labeling from the user is required.
Although each of these methods has shown strong performance, they are less likely to scale to new high-level semantic information in the training data. Inspired by the RL works mentioned above, our approach utilizes DRL to strengthen CT segmentation through an advanced mask extraction approach (a schematic sketch of such a DQN-style mask-selection step is given at the end of this entry). We note that the use of a multi-agent system (MAS) is ideal for extracting regions of interest to properly form the mask base for segmentation, since it provides a partitioning of the lung CT image at an arbitrary level of granularity. | [
"34840938",
"33167376",
"34511689",
"34141890",
"34359295",
"34943443",
"27845654",
"32143797",
"32143786",
"32143788",
"35161552",
"32305937",
"33156775",
"33199977",
"33154542",
"33500681",
"34770595",
"30144101"
] | [
{
"pmid": "34840938",
"title": "COVID-19 anomaly detection and classification method based on supervised machine learning of chest X-ray images.",
"abstract": "The term COVID-19 is an abbreviation of Coronavirus 2019, which is considered a global pandemic that threatens the lives of millions of people. Early detection of the disease offers ample opportunity of recovery and prevention of spreading. This paper proposes a method for classification and early detection of COVID-19 through image processing using X-ray images. A set of procedures are applied, including preprocessing (image noise removal, image thresholding, and morphological operation), Region of Interest (ROI) detection and segmentation, feature extraction, (Local binary pattern (LBP), Histogram of Gradient (HOG), and Haralick texture features) and classification (K-Nearest Neighbor (KNN) and Support Vector Machine (SVM)). The combinations of the feature extraction operators and classifiers results in six models, namely LBP-KNN, HOG-KNN, Haralick-KNN, LBP-SVM, HOG-SVM, and Haralick-SVM. The six models are tested based on test samples of 5,000 images with the percentage of training of 5-folds cross-validation. The evaluation results show high diagnosis accuracy from 89.2% up to 98.66%. The LBP-KNN model outperforms the other models in which it achieves an average accuracy of 98.66%, a sensitivity of 97.76%, specificity of 100%, and precision of 100%. The proposed method for early detection and classification of COVID-19 through image processing using X-ray images is proven to be usable in which it provides an end-to-end structure without the need for manual feature extraction and manual selection methods."
},
{
"pmid": "33167376",
"title": "CSID: A Novel Multimodal Image Fusion Algorithm for Enhanced Clinical Diagnosis.",
"abstract": "Technology-assisted clinical diagnosis has gained tremendous importance in modern day healthcare systems. To this end, multimodal medical image fusion has gained great attention from the research community. There are several fusion algorithms that merge Computed Tomography (CT) and Magnetic Resonance Images (MRI) to extract detailed information, which is used to enhance clinical diagnosis. However, these algorithms exhibit several limitations, such as blurred edges during decomposition, excessive information loss that gives rise to false structural artifacts, and high spatial distortion due to inadequate contrast. To resolve these issues, this paper proposes a novel algorithm, namely Convolutional Sparse Image Decomposition (CSID), that fuses CT and MR images. CSID uses contrast stretching and the spatial gradient method to identify edges in source images and employs cartoon-texture decomposition, which creates an overcomplete dictionary. Moreover, this work proposes a modified convolutional sparse coding method and employs improved decision maps and the fusion rule to obtain the final fused image. Simulation results using six datasets of multimodal images demonstrate that CSID achieves superior performance, in terms of visual quality and enriched information extraction, in comparison with eminent image fusion algorithms."
},
{
"pmid": "34511689",
"title": "Review on COVID-19 diagnosis models based on machine learning and deep learning approaches.",
"abstract": "COVID-19 is the disease evoked by a new breed of coronavirus called the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Recently, COVID-19 has become a pandemic by infecting more than 152 million people in over 216 countries and territories. The exponential increase in the number of infections has rendered traditional diagnosis techniques inefficient. Therefore, many researchers have developed several intelligent techniques, such as deep learning (DL) and machine learning (ML), which can assist the healthcare sector in providing quick and precise COVID-19 diagnosis. Therefore, this paper provides a comprehensive review of the most recent DL and ML techniques for COVID-19 diagnosis. The studies are published from December 2019 until April 2021. In general, this paper includes more than 200 studies that have been carefully selected from several publishers, such as IEEE, Springer and Elsevier. We classify the research tracks into two categories: DL and ML and present COVID-19 public datasets established and extracted from different countries. The measures used to evaluate diagnosis methods are comparatively analysed and proper discussion is provided. In conclusion, for COVID-19 diagnosing and outbreak prediction, SVM is the most widely used machine learning mechanism, and CNN is the most widely used deep learning mechanism. Accuracy, sensitivity, and specificity are the most widely used measurements in previous studies. Finally, this review paper will guide the research community on the upcoming development of machine learning for COVID-19 and inspire their works for future development. This review paper will guide the research community on the upcoming development of ML and DL for COVID-19 and inspire their works for future development."
},
{
"pmid": "34141890",
"title": "Overview of current state of research on the application of artificial intelligence techniques for COVID-19.",
"abstract": "BACKGROUND\nUntil now, there are still a limited number of resources available to predict and diagnose COVID-19 disease. The design of novel drug-drug interaction for COVID-19 patients is an open area of research. Also, the development of the COVID-19 rapid testing kits is still a challenging task.\n\n\nMETHODOLOGY\nThis review focuses on two prime challenges caused by urgent needs to effectively address the challenges of the COVID-19 pandemic, i.e., the development of COVID-19 classification tools and drug discovery models for COVID-19 infected patients with the help of artificial intelligence (AI) based techniques such as machine learning and deep learning models.\n\n\nRESULTS\nIn this paper, various AI-based techniques are studied and evaluated by the means of applying these techniques for the prediction and diagnosis of COVID-19 disease. This study provides recommendations for future research and facilitates knowledge collection and formation on the application of the AI techniques for dealing with the COVID-19 epidemic and its consequences.\n\n\nCONCLUSIONS\nThe AI techniques can be an effective tool to tackle the epidemic caused by COVID-19. These may be utilized in four main fields such as prediction, diagnosis, drug design, and analyzing social implications for COVID-19 infected patients."
},
{
"pmid": "34359295",
"title": "Dilated Semantic Segmentation for Breast Ultrasonic Lesion Detection Using Parallel Feature Fusion.",
"abstract": "Breast cancer is becoming more dangerous by the day. The death rate in developing countries is rapidly increasing. As a result, early detection of breast cancer is critical, leading to a lower death rate. Several researchers have worked on breast cancer segmentation and classification using various imaging modalities. The ultrasonic imaging modality is one of the most cost-effective imaging techniques, with a higher sensitivity for diagnosis. The proposed study segments ultrasonic breast lesion images using a Dilated Semantic Segmentation Network (Di-CNN) combined with a morphological erosion operation. For feature extraction, we used the deep neural network DenseNet201 with transfer learning. We propose a 24-layer CNN that uses transfer learning-based feature extraction to further validate and ensure the enriched features with target intensity. To classify the nodules, the feature vectors obtained from DenseNet201 and the 24-layer CNN were fused using parallel fusion. The proposed methods were evaluated using a 10-fold cross-validation on various vector combinations. The accuracy of CNN-activated feature vectors and DenseNet201-activated feature vectors combined with the Support Vector Machine (SVM) classifier was 90.11 percent and 98.45 percent, respectively. With 98.9 percent accuracy, the fused version of the feature vector with SVM outperformed other algorithms. When compared to recent algorithms, the proposed algorithm achieves a better breast cancer diagnosis rate."
},
{
"pmid": "34943443",
"title": "VGG19 Network Assisted Joint Segmentation and Classification of Lung Nodules in CT Images.",
"abstract": "Pulmonary nodule is one of the lung diseases and its early diagnosis and treatment are essential to cure the patient. This paper introduces a deep learning framework to support the automated detection of lung nodules in computed tomography (CT) images. The proposed framework employs VGG-SegNet supported nodule mining and pre-trained DL-based classification to support automated lung nodule detection. The classification of lung CT images is implemented using the attained deep features, and then these features are serially concatenated with the handcrafted features, such as the Grey Level Co-Occurrence Matrix (GLCM), Local-Binary-Pattern (LBP) and Pyramid Histogram of Oriented Gradients (PHOG) to enhance the disease detection accuracy. The images used for experiments are collected from the LIDC-IDRI and Lung-PET-CT-Dx datasets. The experimental results attained show that the VGG19 architecture with concatenated deep and handcrafted features can achieve an accuracy of 97.83% with the SVM-RBF classifier."
},
{
"pmid": "27845654",
"title": "DeepCut: Object Segmentation From Bounding Box Annotations Using Convolutional Neural Networks.",
"abstract": "In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled weak annotations, in our case bounding boxes. It extends the approach of the well-known GrabCut [1] method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare those to a naïve approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy."
},
{
"pmid": "32143797",
"title": "An effective approach for CT lung segmentation using mask region-based convolutional neural networks.",
"abstract": "Computer vision systems have numerous tools to assist in various medical fields, notably in image diagnosis. Computed tomography (CT) is the principal imaging method used to assist in the diagnosis of diseases such as bone fractures, lung cancer, heart disease, and emphysema, among others. Lung cancer is one of the four main causes of death in the world. The lung regions in the CT images are marked manually by a specialist as this initial step is a significant challenge for computer vision techniques. Once defined, the lung regions are segmented for clinical diagnoses. This work proposes an automatic segmentation of the lungs in CT images, using the Convolutional Neural Network (CNN) Mask R-CNN, to specialize the model for lung region mapping, combined with supervised and unsupervised machine learning methods (Bayes, Support Vectors Machine (SVM), K-means and Gaussian Mixture Models (GMMs)). Our approach using Mask R-CNN with the K-means kernel produced the best results for lung segmentation reaching an accuracy of 97.68 ± 3.42% and an average runtime of 11.2 s. We compared our results against other works for validation purposes, and our approach had the highest accuracy and was faster than some state-of-the-art methods."
},
{
"pmid": "32143786",
"title": "A multi-context CNN ensemble for small lesion detection.",
"abstract": "In this paper, we propose a novel method for the detection of small lesions in digital medical images. Our approach is based on a multi-context ensemble of convolutional neural networks (CNNs), aiming at learning different levels of image spatial context and improving detection performance. The main innovation behind the proposed method is the use of multiple-depth CNNs, individually trained on image patches of different dimensions and then combined together. In this way, the final ensemble is able to find and locate abnormalities on the images by exploiting both the local features and the surrounding context of a lesion. Experiments were focused on two well-known medical detection problems that have been recently faced with CNNs: microcalcification detection on full-field digital mammograms and microaneurysm detection on ocular fundus images. To this end, we used two publicly available datasets, INbreast and E-ophtha. Statistically significantly better detection performance were obtained by the proposed ensemble with respect to other approaches in the literature, demonstrating its effectiveness in the detection of small abnormalities."
},
{
"pmid": "32143788",
"title": "Multi-planar 3D breast segmentation in MRI via deep convolutional neural networks.",
"abstract": "Nowadays, Dynamic Contrast Enhanced-Magnetic Resonance Imaging (DCE-MRI) has demonstrated to be a valid complementary diagnostic tool for early detection and diagnosis of breast cancer. However, without a CAD (Computer Aided Detection) system, manual DCE-MRI examination can be difficult and error-prone. The early stage of breast tissue segmentation, in a typical CAD, is crucial to increase reliability and reduce the computational effort by reducing the number of voxels to analyze and removing foreign tissues and air. In recent years, the deep convolutional neural networks (CNNs) enabled a sensible improvement in many visual tasks automation, such as image classification and object recognition. These advances also involved radiomics, enabling high-throughput extraction of quantitative features, resulting in a strong improvement in automatic diagnosis through medical imaging. However, machine learning and, in particular, deep learning approaches are gaining popularity in the radiomics field for tissue segmentation. This work aims to accurately segment breast parenchyma from the air and other tissues (such as chest-wall) by applying an ensemble of deep CNNs on 3D MR data. The novelty, besides applying cutting-edge techniques in the radiomics field, is a multi-planar combination of U-Net CNNs by a suitable projection-fusing approach, enabling multi-protocol applications. The proposed approach has been validated over two different datasets for a total of 109 DCE-MRI studies with histopathologically proven lesions and two different acquisition protocols. The median dice similarity index for both the datasets is 96.60 % (±0.30 %) and 95.78 % (±0.51 %) respectively with p < 0.05, and 100% of neoplastic lesion coverage."
},
{
"pmid": "35161552",
"title": "Breast Cancer Classification from Ultrasound Images Using Probability-Based Optimal Deep Learning Feature Fusion.",
"abstract": "After lung cancer, breast cancer is the second leading cause of death in women. If breast cancer is detected early, mortality rates in women can be reduced. Because manual breast cancer diagnosis takes a long time, an automated system is required for early cancer detection. This paper proposes a new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features. The proposed framework is divided into five major steps: (i) data augmentation is performed to increase the size of the original dataset for better learning of Convolutional Neural Network (CNN) models; (ii) a pre-trained DarkNet-53 model is considered and the output layer is modified based on the augmented dataset classes; (iii) the modified model is trained using transfer learning and features are extracted from the global average pooling layer; (iv) the best features are selected using two improved optimization algorithms known as reformed differential evaluation (RDE) and reformed gray wolf (RGW); and (v) the best selected features are fused using a new probability-based serial approach and classified using machine learning algorithms. The experiment was conducted on an augmented Breast Ultrasound Images (BUSI) dataset, and the best accuracy was 99.1%. When compared with recent techniques, the proposed framework outperforms them."
},
{
"pmid": "32305937",
"title": "Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19.",
"abstract": "The pandemic of coronavirus disease 2019 (COVID-19) is spreading all over the world. Medical imaging such as X-ray and computed tomography (CT) plays an essential role in the global fight against COVID-19, whereas the recently emerging artificial intelligence (AI) technologies further strengthen the power of the imaging tools and help medical specialists. We hereby review the rapid responses in the community of medical imaging (empowered by AI) toward COVID-19. For example, AI-empowered image acquisition can significantly help automate the scanning procedure and also reshape the workflow with minimal contact to patients, providing the best protection to the imaging technicians. Also, AI can improve work efficiency by accurate delineation of infections in X-ray and CT images, facilitating subsequent quantification. Moreover, the computer-aided platforms help radiologists make clinical decisions, i.e., for disease diagnosis, tracking, and prognosis. In this review paper, we thus cover the entire pipeline of medical imaging and analysis techniques involved with COVID-19, including image acquisition, segmentation, diagnosis, and follow-up. We particularly focus on the integration of AI with X-ray and CT, both of which are widely used in the frontline hospitals, in order to depict the latest progress of medical imaging and radiology fighting against COVID-19."
},
{
"pmid": "33156775",
"title": "A Weakly-Supervised Framework for COVID-19 Classification and Lesion Localization From Chest CT.",
"abstract": "Accurate and rapid diagnosis of COVID-19 suspected cases plays a crucial role in timely quarantine and medical treatment. Developing a deep learning-based model for automatic COVID-19 diagnosis on chest CT is helpful to counter the outbreak of SARS-CoV-2. A weakly-supervised deep learning framework was developed using 3D CT volumes for COVID-19 classification and lesion localization. For each patient, the lung region was segmented using a pre-trained UNet; then the segmented 3D lung region was fed into a 3D deep neural network to predict the probability of COVID-19 infectious; the COVID-19 lesions are localized by combining the activation regions in the classification network and the unsupervised connected components. 499 CT volumes were used for training and 131 CT volumes were used for testing. Our algorithm obtained 0.959 ROC AUC and 0.976 PR AUC. When using a probability threshold of 0.5 to classify COVID-positive and COVID-negative, the algorithm obtained an accuracy of 0.901, a positive predictive value of 0.840 and a very high negative predictive value of 0.982. The algorithm took only 1.93 seconds to process a single patient's CT volume using a dedicated GPU. Our weakly-supervised deep learning model can accurately predict the COVID-19 infectious probability and discover lesion regions in chest CT without the need for annotating the lesions for training. The easily-trained and high-performance deep learning algorithm provides a fast way to identify COVID-19 patients, which is beneficial to control the outbreak of SARS-CoV-2. The developed deep learning software is available at https://github.com/sydney0zq/covid-19-detection."
},
{
"pmid": "33199977",
"title": "AI-assisted CT imaging analysis for COVID-19 screening: Building and deploying a medical AI system.",
"abstract": "The sudden outbreak of novel coronavirus 2019 (COVID-19) increased the diagnostic burden of radiologists. In the time of an epidemic crisis, we hope artificial intelligence (AI) to reduce physician workload in regions with the outbreak, and improve the diagnosis accuracy for physicians before they could acquire enough experience with the new disease. In this paper, we present our experience in building and deploying an AI system that automatically analyzes CT images and provides the probability of infection to rapidly detect COVID-19 pneumonia. The proposed system which consists of classification and segmentation will save about 30%-40% of the detection time for physicians and promote the performance of COVID-19 detection. Specifically, working in an interdisciplinary team of over 30 people with medical and/or AI background, geographically distributed in Beijing and Wuhan, we are able to overcome a series of challenges (e.g. data discrepancy, testing time-effectiveness of model, data security, etc.) in this particular situation and deploy the system in four weeks. In addition, since the proposed AI system provides the priority of each CT image with probability of infection, the physicians can confirm and segregate the infected patients in time. Using 1,136 training cases (723 positives for COVID-19) from five hospitals, we are able to achieve a sensitivity of 0.974 and specificity of 0.922 on the test dataset, which included a variety of pulmonary diseases."
},
{
"pmid": "33154542",
"title": "Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography.",
"abstract": "Computed tomography (CT) is the preferred imaging method for diagnosing 2019 novel coronavirus (COVID19) pneumonia. We aimed to construct a system based on deep learning for detecting COVID-19 pneumonia on high resolution CT. For model development and validation, 46,096 anonymous images from 106 admitted patients, including 51 patients of laboratory confirmed COVID-19 pneumonia and 55 control patients of other diseases in Renmin Hospital of Wuhan University were retrospectively collected. Twenty-seven prospective consecutive patients in Renmin Hospital of Wuhan University were collected to evaluate the efficiency of radiologists against 2019-CoV pneumonia with that of the model. An external test was conducted in Qianjiang Central Hospital to estimate the system's robustness. The model achieved a per-patient accuracy of 95.24% and a per-image accuracy of 98.85% in internal retrospective dataset. For 27 internal prospective patients, the system achieved a comparable performance to that of expert radiologist. In external dataset, it achieved an accuracy of 96%. With the assistance of the model, the reading time of radiologists was greatly decreased by 65%. The deep learning model showed a comparable performance with expert radiologist, and greatly improved the efficiency of radiologists in clinical practice."
},
{
"pmid": "33500681",
"title": "A novel framework for rapid diagnosis of COVID-19 on computed tomography scans.",
"abstract": "Since the emergence of COVID-19, thousands of people undergo chest X-ray and computed tomography scan for its screening on everyday basis. This has increased the workload on radiologists, and a number of cases are in backlog. This is not only the case for COVID-19, but for the other abnormalities needing radiological diagnosis as well. In this work, we present an automated technique for rapid diagnosis of COVID-19 on computed tomography images. The proposed technique consists of four primary steps: (1) data collection and normalization, (2) extraction of the relevant features, (3) selection of the most optimal features and (4) feature classification. In the data collection step, we collect data for several patients from a public domain website, and perform preprocessing, which includes image resizing. In the successive step, we apply discrete wavelet transform and extended segmentation-based fractal texture analysis methods for extracting the relevant features. This is followed by application of an entropy controlled genetic algorithm for selection of the best features from each feature type, which are combined using a serial approach. In the final phase, the best features are subjected to various classifiers for the diagnosis. The proposed framework, when augmented with the Naive Bayes classifier, yields the best accuracy of 92.6%. The simulation results are supported by a detailed statistical analysis as a proof of concept."
},
{
"pmid": "34770595",
"title": "COVID-19 Case Recognition from Chest CT Images by Deep Learning, Entropy-Controlled Firefly Optimization, and Parallel Feature Fusion.",
"abstract": "In healthcare, a multitude of data is collected from medical sensors and devices, such as X-ray machines, magnetic resonance imaging, computed tomography (CT), and so on, that can be analyzed by artificial intelligence methods for early diagnosis of diseases. Recently, the outbreak of the COVID-19 disease caused many deaths. Computer vision researchers support medical doctors by employing deep learning techniques on medical images to diagnose COVID-19 patients. Various methods were proposed for COVID-19 case classification. A new automated technique is proposed using parallel fusion and optimization of deep learning models. The proposed technique starts with a contrast enhancement using a combination of top-hat and Wiener filters. Two pre-trained deep learning models (AlexNet and VGG16) are employed and fine-tuned according to target classes (COVID-19 and healthy). Features are extracted and fused using a parallel fusion approach-parallel positive correlation. Optimal features are selected using the entropy-controlled firefly optimization method. The selected features are classified using machine learning classifiers such as multiclass support vector machine (MC-SVM). Experiments were carried out using the Radiopaedia database and achieved an accuracy of 98%. Moreover, a detailed analysis is conducted and shows the improved performance of the proposed scheme."
},
{
"pmid": "30144101",
"title": "Autosegmentation for thoracic radiation treatment planning: A grand challenge at AAPM 2017.",
"abstract": "PURPOSE\nThis report presents the methods and results of the Thoracic Auto-Segmentation Challenge organized at the 2017 Annual Meeting of American Association of Physicists in Medicine. The purpose of the challenge was to provide a benchmark dataset and platform for evaluating performance of autosegmentation methods of organs at risk (OARs) in thoracic CT images.\n\n\nMETHODS\nSixty thoracic CT scans provided by three different institutions were separated into 36 training, 12 offline testing, and 12 online testing scans. Eleven participants completed the offline challenge, and seven completed the online challenge. The OARs were left and right lungs, heart, esophagus, and spinal cord. Clinical contours used for treatment planning were quality checked and edited to adhere to the RTOG 1106 contouring guidelines. Algorithms were evaluated using the Dice coefficient, Hausdorff distance, and mean surface distance. A consolidated score was computed by normalizing the metrics against interrater variability and averaging over all patients and structures.\n\n\nRESULTS\nThe interrater study revealed highest variability in Dice for the esophagus and spinal cord, and in surface distances for lungs and heart. Five out of seven algorithms that participated in the online challenge employed deep-learning methods. Although the top three participants using deep learning produced the best segmentation for all structures, there was no significant difference in the performance among them. The fourth place participant used a multi-atlas-based approach. The highest Dice scores were produced for lungs, with averages ranging from 0.95 to 0.98, while the lowest Dice scores were produced for esophagus, with a range of 0.55-0.72.\n\n\nCONCLUSION\nThe results of the challenge showed that the lungs and heart can be segmented fairly accurately by various algorithms, while deep-learning methods performed better on the esophagus. Our dataset together with the manual contours for all training cases continues to be available publicly as an ongoing benchmarking resource."
}
] |
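Illustrative sketch for the DQN-style mask-selection idea discussed in the related work of the preceding entry. This is a minimal, hypothetical PyTorch example, not the authors' implementation: it assumes a fixed set of candidate masks per CT slice, a small convolutional Q-network that scores each candidate as an action, an epsilon-greedy choice, and a single-step reward equal to the Dice overlap with the ground-truth mask. The network shape, number of candidates, and image size are arbitrary assumptions.

```python
# Hypothetical sketch, not the paper's code: a DQN-style agent that learns to
# pick one of several candidate infection masks for a CT slice, rewarded by
# Dice overlap with the ground-truth mask (single-step episodes).
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CANDIDATES = 8   # assumed number of candidate masks proposed per slice
IMG_SIZE = 64      # assumed (downsampled) CT slice resolution

class MaskQNet(nn.Module):
    """Q-network: CT slice -> one Q-value per candidate mask (the 'actions')."""
    def __init__(self, n_actions: int = N_CANDIDATES):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32 * (IMG_SIZE // 4) ** 2, n_actions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.conv(x).flatten(1))

def dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def train_step(qnet, optimizer, slices, candidate_masks, gt_masks, eps_greedy=0.1):
    """One Q-learning update. slices: (B,1,H,W); candidate_masks: (B,N,H,W); gt_masks: (B,H,W)."""
    q = qnet(slices)                                   # (B, N_CANDIDATES)
    greedy = q.argmax(dim=1)
    random_actions = torch.randint(0, N_CANDIDATES, greedy.shape)
    explore = torch.rand(greedy.shape) < eps_greedy
    actions = torch.where(explore, random_actions, greedy)
    # Reward: Dice between the chosen candidate mask and the ground truth.
    rewards = torch.stack([dice(candidate_masks[i, a], gt_masks[i])
                           for i, a in enumerate(actions)])
    # Single-step episodes: the target Q-value is simply the reward.
    chosen_q = q.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = F.mse_loss(chosen_q, rewards.detach())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random data (4 slices):
qnet = MaskQNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-4)
x = torch.randn(4, 1, IMG_SIZE, IMG_SIZE)
cands = (torch.rand(4, N_CANDIDATES, IMG_SIZE, IMG_SIZE) > 0.5).float()
gt = (torch.rand(4, IMG_SIZE, IMG_SIZE) > 0.5).float()
print(train_step(qnet, opt, x, cands, gt))
```

In the paper's full setting, the state representation, reward, and multi-agent coordination are more elaborate; this sketch only shows the core Q-learning-over-candidate-masks loop.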
PLoS Computational Biology | 35157695 | PMC8880931 | 10.1371/journal.pcbi.1009862 | Patient contrastive learning: A performant, expressive, and practical approach to electrocardiogram modeling | Supervised machine learning applications in health care are often limited due to a scarcity of labeled training data. To mitigate the effect of small sample size, we introduce a pre-training approach, Patient Contrastive Learning of Representations (PCLR), which creates latent representations of electrocardiograms (ECGs) from a large number of unlabeled examples using contrastive learning. The resulting representations are expressive, performant, and practical across a wide spectrum of clinical tasks. We develop PCLR using a large health care system with over 3.2 million 12-lead ECGs and demonstrate that training linear models on PCLR representations achieves a 51% performance increase, on average, over six training set sizes and four tasks (sex classification, age regression, and the detection of left ventricular hypertrophy and atrial fibrillation), relative to training neural network models from scratch. We also compared PCLR to three other ECG pre-training approaches (supervised pre-training, unsupervised pre-training with an autoencoder, and pre-training using a contrastive multi ECG-segment approach), and show significant performance benefits in three out of four tasks. We found an average performance benefit of 47% over the other models and an average 9% performance benefit compared to the best model for each task. We release PCLR to enable others to extract ECG representations at https://github.com/broadinstitute/ml4h/tree/master/model_zoo/PCLR. | Related work
Deep learning on ECGs
There is a growing body of work that applies techniques from deep image classification to 12-lead ECGs in the presence of large labeled datasets. [8] train a residual network in a cohort containing millions of labeled ECGs to classify cardiac blocks and other arrhythmias with high accuracy, and we use their model as a baseline for comparison. Residual networks have also been shown to outperform automatic labeling systems [9] and even physician labels [10], and to triage patients [11]. Latent features of the ECG have also been shown to be useful for a wide range of tasks, such as regressing age from the ECG as a marker of cardiac health [12], predicting incident atrial fibrillation (AF) [13], and predicting one-year mortality [14]. We contribute by reducing the need for labels, and by focusing on extracting generally expressive representations from ECGs rather than a representation optimized for a single task.
Contrastive learning
Contrastive learning is a self-supervised learning method that requires training data only to be labeled with notions of positive pairs (data that go together) and negative pairs (data that are distinct). The SimCLR procedure introduced by [7] shows that contrastive learning yields reduced representations of data that are linearly reusable in new tasks. Many papers have recently experimented with the SimCLR procedure in the medical domain. [5] used the SimCLR procedure in dermatology and X-ray classification tasks. They defined positive pairs as both modified versions of the same image and images from different views of the same skin condition. [15] experimented with SimCLR using many different definitions of a positive pair, including MRI images from different patients that show the same area of the body.
[16] defines positive pairs across modalities, between an X-ray image and its associated text report.
A few works utilize the value of subject-specific information by defining positive pairs using non-overlapping segments of temporal signals [17], [18]. These approaches are shown to be especially beneficial in the low-label regime. [6] apply the SimCLR procedure to 12-lead ECGs, defining positive pairs as different leads from the same ECG or as different non-overlapping time segments within a single ECG. They show improved performance in rhythm classification tasks compared to other pre-training strategies in both transfer learning and representation learning regimes. PCLR builds on these works by defining positive pairs across different ascertainments from the same patient rather than segments of the same ascertainment. PCLR does not require segments to be taken from the ECGs, or augmentations that modify the ECGs, which means that the model trains on the unmodified data seen at evaluation time. Furthermore, compared to other ECG contrastive pre-training work, PCLR was trained with millions rather than tens of thousands of ECGs.
[19] apply a different contrastive learning procedure, introduced by [20], to lumbar MRIs, but notably define positive pairs in the same way that PCLR does: as pairs of MRIs from the same patient at different times. [20] also make use of image-domain-specific data augmentations, such as random rotations. We build on this work by demonstrating that, with a large dataset of ECGs, a contrastive loss based on patient identity across different ECGs across time is highly performant, expressive, and practical. Unlike all of the other approaches, PCLR does not utilize augmentations and instead relies on the shared underlying biology of different ECGs taken from the same patient (a minimal sketch of such a patient-level contrastive loss is given at the end of this entry). | [
"11684624",
"20309766",
"32273514",
"31517038",
"30617320",
"32406296",
"31450977",
"33588584",
"1866765",
"7829796",
"32190336",
"15199036"
] | [
{
"pmid": "11684624",
"title": "Sudden cardiac death in the United States, 1989 to 1998.",
"abstract": "BACKGROUND\nSudden cardiac death (SCD) is a major clinical and public health problem.\n\n\nMETHODS AND RESULTS\nUnited States (US) vital statistics mortality data from 1989 to 1998 were analyzed. SCD is defined as deaths occurring out of the hospital or in the emergency room or as \"dead on arrival\" with an underlying cause of death reported as a cardiac disease (ICD-9 code 390 to 398, 402, or 404 to 429). Death rates were calculated for residents of the US aged >/=35 years and standardized to the 2000 US population. Of 719 456 cardiac deaths among adults aged >/=35 years in 1998, 456 076 (63%) were defined as SCD. Among decedents aged 35 to 44 years, 74% of cardiac deaths were SCD. Of all SCDs in 1998, coronary heart disease (ICD-9 codes 410 to 414) was the underlying cause on 62% of death certificates. Death rates for SCD increased with age and were higher in men than women, although there was no difference at age >/=85 years. The black population had higher death rates for SCD than white, American Indian/Alaska Native, or Asian/Pacific Islander populations. The Hispanic population had lower death rates for SCD than the non-Hispanic population. From 1989 to 1998, SCD, as the proportion of all cardiac deaths, increased 12.4% (56.3% to 63.9%), and age-adjusted SCD rates declined 11.7% in men and 5.8% in women. During the same time, age-specific death rates for SCD increased 21% among women aged 35 to 44 years.\n\n\nCONCLUSIONS\nSCD remains an important public health problem in the US. The increase in death rates for SCD among younger women warrants additional investigation."
},
{
"pmid": "20309766",
"title": "Practical issues in building risk-predicting models for complex diseases.",
"abstract": "Recent genome-wide association studies have identified many genetic variants affecting complex human diseases. It is of great interest to build disease risk prediction models based on these data. In this article, we first discuss statistical challenges in using genome-wide association data for risk predictions, and then review the findings from the literature on this topic. We also demonstrate the performance of different methods through both simulation studies and application to real-world data."
},
{
"pmid": "32273514",
"title": "Automatic diagnosis of the 12-lead ECG using a deep neural network.",
"abstract": "The role of automatic electrocardiogram (ECG) analysis in clinical practice is limited by the accuracy of existing models. Deep Neural Networks (DNNs) are models composed of stacked transformations that learn tasks by examples. This technology has recently achieved striking success in a variety of task and there are great expectations on how it might improve clinical practice. Here we present a DNN model trained in a dataset with more than 2 million labeled exams analyzed by the Telehealth Network of Minas Gerais and collected under the scope of the CODE (Clinical Outcomes in Digital Electrocardiology) study. The DNN outperform cardiology resident medical doctors in recognizing 6 types of abnormalities in 12-lead ECG recordings, with F1 scores above 80% and specificity over 99%. These results indicate ECG analysis based on DNNs, previously studied in a single-lead setup, generalizes well to 12-lead exams, taking the technology closer to the standard clinical practice."
},
{
"pmid": "31517038",
"title": "A deep neural network for 12-lead electrocardiogram interpretation outperforms a conventional algorithm, and its physician overread, in the diagnosis of atrial fibrillation.",
"abstract": "BACKGROUND\nAutomated electrocardiogram (ECG) interpretations may be erroneous, and lead to erroneous overreads, including for atrial fibrillation (AF). We compared the accuracy of the first version of a new deep neural network 12-Lead ECG algorithm (Cardiologs®) to the conventional Veritas algorithm in interpretation of AF.\n\n\nMETHODS\n24,123 consecutive 12-lead ECGs recorded over 6 months were interpreted by 1) the Veritas® algorithm, 2) physicians who overread Veritas® (Veritas® + physician), and 3) Cardiologs® algorithm. We randomly selected 500 out of 858 ECGs with a diagnosis of AF according to either algorithm, then compared the algorithms' interpretations, and Veritas® + physician, with expert interpretation. To assess sensitivity for AF, we analyzed a separate database of 1473 randomly selected ECGs interpreted by both algorithms and by blinded experts.\n\n\nRESULTS\nAmong the 500 ECGs selected, 399 had a final classification of AF; 101 (20.2%) had ≥1 false positive automated interpretation. Accuracy of Cardiologs® (91.2%; CI: 82.4-94.4) was higher than Veritas® (80.2%; CI: 76.5-83.5) (p < 0.0001), and equal to Veritas® + physician (90.0%, CI:87.1-92.3) (p = 0.12). When Veritas® was incorrect, accuracy of Veritas® + physician was only 62% (CI 52-71); among those ECGs, Cardiologs® accuracy was 90% (CI: 82-94; p < 0.0001). The second database had 39 AF cases; sensitivity was 92% vs. 87% (p = 0.46) and specificity was 99.5% vs. 98.7% (p = 0.03) for Cardiologs® and Veritas® respectively.\n\n\nCONCLUSION\nCardiologs® 12-lead ECG algorithm improves the interpretation of atrial fibrillation."
},
{
"pmid": "30617320",
"title": "Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network.",
"abstract": "Computerized electrocardiogram (ECG) interpretation plays a critical role in the clinical ECG workflow1. Widely available digital ECG data and the algorithmic paradigm of deep learning2 present an opportunity to substantially improve the accuracy and scalability of automated ECG analysis. However, a comprehensive evaluation of an end-to-end deep learning approach for ECG analysis across a wide variety of diagnostic classes has not been previously reported. Here, we develop a deep neural network (DNN) to classify 12 rhythm classes using 91,232 single-lead ECGs from 53,549 patients who used a single-lead ambulatory ECG monitoring device. When validated against an independent test dataset annotated by a consensus committee of board-certified practicing cardiologists, the DNN achieved an average area under the receiver operating characteristic curve (ROC) of 0.97. The average F1 score, which is the harmonic mean of the positive predictive value and sensitivity, for the DNN (0.837) exceeded that of average cardiologists (0.780). With specificity fixed at the average specificity achieved by cardiologists, the sensitivity of the DNN exceeded the average cardiologist sensitivity for all rhythm classes. These findings demonstrate that an end-to-end deep learning approach can classify a broad range of distinct arrhythmias from single-lead ECGs with high diagnostic performance similar to that of cardiologists. If confirmed in clinical settings, this approach could reduce the rate of misdiagnosed computerized ECG interpretations and improve the efficiency of expert human ECG interpretation by accurately triaging or prioritizing the most urgent conditions."
},
{
"pmid": "32406296",
"title": "Automatic Triage of 12-Lead ECGs Using Deep Convolutional Neural Networks.",
"abstract": "BACKGROUND The correct interpretation of the ECG is pivotal for the accurate diagnosis of many cardiac abnormalities, and conventional computerized interpretation has not been able to reach physician-level accuracy in detecting (acute) cardiac abnormalities. This study aims to develop and validate a deep neural network for comprehensive automated ECG triage in daily practice. METHODS AND RESULTS We developed a 37-layer convolutional residual deep neural network on a data set of free-text physician-annotated 12-lead ECGs. The deep neural network was trained on a data set with 336.835 recordings from 142.040 patients and validated on an independent validation data set (n=984), annotated by a panel of 5 cardiologists electrophysiologists. The 12-lead ECGs were acquired in all noncardiology departments of the University Medical Center Utrecht. The algorithm learned to classify these ECGs into the following 4 triage categories: normal, abnormal not acute, subacute, and acute. Discriminative performance is presented with overall and category-specific concordance statistics, polytomous discrimination indexes, sensitivities, specificities, and positive and negative predictive values. The patients in the validation data set had a mean age of 60.4 years and 54.3% were men. The deep neural network showed excellent overall discrimination with an overall concordance statistic of 0.93 (95% CI, 0.92-0.95) and a polytomous discriminatory index of 0.83 (95% CI, 0.79-0.87). CONCLUSIONS This study demonstrates that an end-to-end deep neural network can be accurately trained on unstructured free-text physician annotations and used to consistently triage 12-lead ECGs. When further fine-tuned with other clinical outcomes and externally validated in clinical practice, the demonstrated deep learning-based ECG interpretation can potentially improve time to treatment and decrease healthcare burden."
},
{
"pmid": "31450977",
"title": "Age and Sex Estimation Using Artificial Intelligence From Standard 12-Lead ECGs.",
"abstract": "BACKGROUND\nSex and age have long been known to affect the ECG. Several biologic variables and anatomic factors may contribute to sex and age-related differences on the ECG. We hypothesized that a convolutional neural network (CNN) could be trained through a process called deep learning to predict a person's age and self-reported sex using only 12-lead ECG signals. We further hypothesized that discrepancies between CNN-predicted age and chronological age may serve as a physiological measure of health.\n\n\nMETHODS\nWe trained CNNs using 10-second samples of 12-lead ECG signals from 499 727 patients to predict sex and age. The networks were tested on a separate cohort of 275 056 patients. Subsequently, 100 randomly selected patients with multiple ECGs over the course of decades were identified to assess within-individual accuracy of CNN age estimation.\n\n\nRESULTS\nOf 275 056 patients tested, 52% were males and mean age was 58.6±16.2 years. For sex classification, the model obtained 90.4% classification accuracy with an area under the curve of 0.97 in the independent test data. Age was estimated as a continuous variable with an average error of 6.9±5.6 years (R-squared =0.7). Among 100 patients with multiple ECGs over the course of at least 2 decades of life, most patients (51%) had an average error between real age and CNN-predicted age of <7 years. Major factors seen among patients with a CNN-predicted age that exceeded chronologic age by >7 years included: low ejection fraction, hypertension, and coronary disease (P<0.01). In the 27% of patients where correlation was >0.8 between CNN-predicted and chronologic age, no incident events occurred over follow-up (33±12 years).\n\n\nCONCLUSIONS\nApplying artificial intelligence to the ECG allows prediction of patient sex and estimation of age. The ability of an artificial intelligence algorithm to determine physiological age, with further validation, may serve as a measure of overall health."
},
{
"pmid": "33588584",
"title": "Deep Neural Networks Can Predict New-Onset Atrial Fibrillation From the 12-Lead ECG and Help Identify Those at Risk of Atrial Fibrillation-Related Stroke.",
"abstract": "BACKGROUND\nAtrial fibrillation (AF) is associated with substantial morbidity, especially when it goes undetected. If new-onset AF could be predicted, targeted screening could be used to find it early. We hypothesized that a deep neural network could predict new-onset AF from the resting 12-lead ECG and that this prediction may help identify those at risk of AF-related stroke.\n\n\nMETHODS\nWe used 1.6 M resting 12-lead digital ECG traces from 430 000 patients collected from 1984 to 2019. Deep neural networks were trained to predict new-onset AF (within 1 year) in patients without a history of AF. Performance was evaluated using areas under the receiver operating characteristic curve and precision-recall curve. We performed an incidence-free survival analysis for a period of 30 years following the ECG stratified by model predictions. To simulate real-world deployment, we trained a separate model using all ECGs before 2010 and evaluated model performance on a test set of ECGs from 2010 through 2014 that were linked to our stroke registry. We identified the patients at risk for AF-related stroke among those predicted to be high risk for AF by the model at different prediction thresholds.\n\n\nRESULTS\nThe area under the receiver operating characteristic curve and area under the precision-recall curve were 0.85 and 0.22, respectively, for predicting new-onset AF within 1 year of an ECG. The hazard ratio for the predicted high- versus low-risk groups over a 30-year span was 7.2 (95% CI, 6.9-7.6). In a simulated deployment scenario, the model predicted new-onset AF at 1 year with a sensitivity of 69% and specificity of 81%. The number needed to screen to find 1 new case of AF was 9. This model predicted patients at high risk for new-onset AF in 62% of all patients who experienced an AF-related stroke within 3 years of the index ECG.\n\n\nCONCLUSIONS\nDeep learning can predict new-onset AF from the 12-lead ECG in patients with no previous history of AF. This prediction may help identify patients at risk for AF-related strokes."
},
{
"pmid": "1866765",
"title": "Atrial fibrillation as an independent risk factor for stroke: the Framingham Study.",
"abstract": "The impact of nonrheumatic atrial fibrillation, hypertension, coronary heart disease, and cardiac failure on stroke incidence was examined in 5,070 participants in the Framingham Study after 34 years of follow-up. Compared with subjects free of these conditions, the age-adjusted incidence of stroke was more than doubled in the presence of coronary heart disease (p less than 0.001) and more than trebled in the presence of hypertension (p less than 0.001). There was a more than fourfold excess of stroke in subjects with cardiac failure (p less than 0.001) and a near fivefold excess when atrial fibrillation was present (p less than 0.001). In persons with coronary heart disease or cardiac failure, atrial fibrillation doubled the stroke risk in men and trebled the risk in women. With increasing age the effects of hypertension, coronary heart disease, and cardiac failure on the risk of stroke became progressively weaker (p less than 0.05). Advancing age, however, did not reduce the significant impact of atrial fibrillation. For persons aged 80-89 years, atrial fibrillation was the sole cardiovascular condition to exert an independent effect on stroke incidence (p less than 0.001). The attributable risk of stroke for all cardiovascular contributors decreased with age except for atrial fibrillation, for which the attributable risk increased significantly (p less than 0.01), rising from 1.5% for those aged 50-59 years to 23.5% for those aged 80-89 years. While these findings highlight the impact of each cardiovascular condition on the risk of stroke, the data suggest that the elderly are particularly vulnerable to stroke when atrial fibrillation is present.(ABSTRACT TRUNCATED AT 250 WORDS)"
},
{
"pmid": "7829796",
"title": "Electrocardiographic identification of increased left ventricular mass by simple voltage-duration products.",
"abstract": "OBJECTIVES\nThis study was conducted to validate the hypothesis that the product of QRS voltage and duration, as an approximation of the time-voltage area of the QRS complex, can improve the electrocardiographic (ECG) detection of echocardiographically determined left ventricular hypertrophy and to further assess the relative contribution of QRS duration to the ECG detection of hypertrophy.\n\n\nBACKGROUND\nThe ECG identification of left ventricular hypertrophy has been limited by the poor sensitivity of standard voltage criteria alone. However, increases in left ventricular mass can be more accurately related to increases in the time-voltage area of the QRS complex than to changes in QRS voltage or duration alone.\n\n\nMETHODS\nStandard 12-lead ECGs and echocardiograms were obtained for 389 patients, including 116 patients with left ventricular hypertrophy. Simple voltage-duration products were calculated by multiplying Cornell voltage by QRS duration (Cornell product) and the 12-lead sum of voltage by QRS duration (12-lead product).\n\n\nRESULTS\nIn a stepwise logistic regression model that also included Cornell voltage, Sokolow-Lyon voltage, age and gender, QRS duration remained a highly significant predictor of the presence of left ventricular hypertrophy (chi-square 26.9, p < 0.0001). At a matched specificity of 96%, each voltage-duration product significantly improved sensitivity for the detection of left ventricular hypertrophy compared with simple voltage criteria alone (Cornell product 37% vs. Cornell voltage 28%, p < 0.02, and 12-lead product 50% vs. 12-lead voltage 43%, p < 0.005). Sensitivities of both the Cornell product and the 12-lead product were significantly greater than the 27% sensitivity of QRS duration alone (p < 0.01 vs. p < 0.001), the 20% sensitivity of a Romhilt-Estes point score > 4 (p < 0.001) and the 33% sensitivity of the best-fit logistic regression model in this cohort (p < 0.05 vs. p < 0.001).\n\n\nCONCLUSIONS\nQRS duration is an independent ECG predictor of the presence of left ventricular hypertrophy, and the simple product of either Cornell voltage or 12-lead voltage and QRS duration significantly improves identification of left ventricular hypertrophy relative to other ECG criteria that use QRS duration and voltages in linear combinations."
},
{
"pmid": "32190336",
"title": "Detection and classification of ECG noises using decomposition on mixed codebook for quality analysis.",
"abstract": "In this Letter, a robust technique is presented to detect and classify different electrocardiogram (ECG) noises including baseline wander (BW), muscle artefact (MA), power line interference (PLI) and additive white Gaussian noise (AWGN) based on signal decomposition on mixed codebooks. These codebooks employ temporal and spectral-bound waveforms which provide sparse representation of ECG signals and can extract ECG local waves as well as ECG noises including BW, PLI, MA and AWGN simultaneously. Further, different statistical approaches and temporal features are applied on decomposed signals for detecting the presence of the above mentioned noises. The accuracy and robustness of the proposed technique are evaluated using a large set of noise-free and noisy ECG signals taken from the Massachusetts Institute of Technology-Boston's Beth Israel Hospital (MIT-BIH) arrhythmia database, MIT-BIH polysmnographic database and Fantasia database. It is shown from the results that the proposed technique achieves an average detection accuracy of above 99% in detecting all kinds of ECG noises. Furthermore, average results show that the technique can achieve an average sensitivity of 98.55%, positive productivity of 98.6% and classification accuracy of 97.19% for ECG signals taken from all three databases."
},
{
"pmid": "15199036",
"title": "Parental atrial fibrillation as a risk factor for atrial fibrillation in offspring.",
"abstract": "CONTEXT\nAtrial fibrillation (AF) is the most common cardiac dysrhythmia in the United States. Whereas rare cases of familial AF have been reported, it is unknown if AF among unselected individuals is a heritable condition.\n\n\nOBJECTIVE\nTo determine whether parental AF increases the risk for the development of offspring AF.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nProspective cohort study (1983-2002) within the Framingham Heart Study, a population-based epidemiologic study. Participants were 2243 offspring (1165 women, 1078 men) at least 30 years of age and free of AF whose parents had both been evaluated in the original cohort.\n\n\nMAIN OUTCOME MEASURES\nDevelopment of new-onset AF in the offspring was prospectively examined in association with previously documented parental AF.\n\n\nRESULTS\nAmong 2243 offspring participants, 681 (30%) had at least 1 parent with documented AF; 70 offspring participants (23 women; mean age, 62 [range, 40-81] years) developed AF in follow-up. Compared with no parental AF, AF in at least 1 parent increased the risk of offspring AF (multivariable-adjusted odds ratio [OR], 1.85; 95% confidence interval [CI], 1.12-3.06; P =.02). These results were stronger when age was limited to younger than 75 years in both parents and offspring (multivariable-adjusted OR, 3.23; 95% CI, 1.87-5.58; P<.001) and when the sample was further limited to those without antecedent myocardial infarction, heart failure, or valve disease (multivariable-adjusted OR, 3.17; 95% CI, 1.71-5.86; P<.001).\n\n\nCONCLUSIONS\nParental AF increases the future risk for offspring AF, an observation supporting a genetic susceptibility to developing this dysrhythmia. Further research into the genetic factors predisposing to AF is warranted."
}
] |
JMIR Mental Health | 35147507 | PMC8881775 | 10.2196/31724 | In Search of State and Trait Emotion Markers in Mobile-Sensed Language: Field Study | BackgroundEmotions and mood are important for overall well-being. Therefore, the search for continuous, effortless emotion prediction methods is an important field of study. Mobile sensing provides a promising tool and can capture one of the most telling signs of emotion: language.ObjectiveThe aim of this study is to examine the separate and combined predictive value of mobile-sensed language data sources for detecting both momentary emotional experience as well as global individual differences in emotional traits and depression.MethodsIn a 2-week experience sampling method study, we collected self-reported emotion ratings and voice recordings 10 times a day, continuous keyboard activity, and trait depression severity. We correlated state and trait emotions and depression and language, distinguishing between speech content (spoken words), speech form (voice acoustics), writing content (written words), and writing form (typing dynamics). We also investigated how well these features predicted state and trait emotions using cross-validation to select features and a hold-out set for validation.ResultsOverall, the reported emotions and mobile-sensed language demonstrated weak correlations. The most significant correlations were found between speech content and state emotions and between speech form and state emotions, ranging up to 0.25. Speech content provided the best predictions for state emotions. None of the trait emotion–language correlations remained significant after correction. Among the emotions studied, valence and happiness displayed the most significant correlations and the highest predictive performance.ConclusionsAlthough using mobile-sensed language as an emotion marker shows some promise, correlations and predictive R2 values are low. | Previous Related WorkSpeech ContentStudies on speech and emotional word use have generally focused on positive or negative emotions. Induced positive emotions coincide with more positive and less negative emotions between persons [27,28]. In addition, in natural language snippets, a positive association between trait positive affectivity and positive emotion words was found [29]. Higher trait negative affectivity and higher within-person negative emotions coincided with more negative emotions and more sadness-related words in experimental and natural settings [27-29]. However, a recent study did not find any significant correlations between emotion words and self-reported emotions either within or between persons [30].Because of these inconsistencies, The Secret Life of Pronouns supports the use of nonemotion words to assess emotional tone. In particular, depression and negative emotionality show a small correlation with first-person pronouns [31,32]. A larger variety of studies was conducted with writing, which will be further addressed in the Writing Content section.Speech FormEach voice has a unique sound because of age, gender, and accent. However, psychological features such as attitudes, intentions, and emotions also affect our sound [26]. Johnstone and Scherer [33] discern three types of features: time-related (eg, speech rate and speech duration), intensity-related (eg, speaking intensity and loudness), and features related to the fundamental frequency (F0; eg, F0 floor and F0 range). A fourth type could be timbre-related features (eg, jitter, shimmer, and formants). 
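As a concrete illustration of the speech form features described above, the following minimal Python sketch (not part of the original study) estimates F0 statistics and a loudness proxy from a single recording using the librosa library; the file name and pitch range are illustrative assumptions, and jitter, shimmer, and HNR would typically be obtained from a dedicated tool such as Praat rather than computed here.

```python
# Sketch: a few speech-form features (F0 statistics, loudness proxy) from a
# voice note. Assumes librosa and numpy are installed; "clip.wav" is a
# hypothetical recording.
import librosa
import numpy as np

y, sr = librosa.load("clip.wav", sr=None)           # waveform and sampling rate
f0, voiced_flag, _ = librosa.pyin(                   # frame-wise F0 estimate
    y, fmin=librosa.note_to_hz("C2"),
    fmax=librosa.note_to_hz("C7"), sr=sr)
f0_voiced = f0[~np.isnan(f0)]                        # keep voiced frames only

rms = librosa.feature.rms(y=y)[0]                    # frame-wise energy as a loudness proxy

features = {
    "f0_mean": float(np.mean(f0_voiced)),
    "f0_sd": float(np.std(f0_voiced)),
    "f0_range": float(np.max(f0_voiced) - np.min(f0_voiced)),
    "loudness_mean": float(np.mean(rms)),
    "voiced_fraction": float(np.mean(voiced_flag)),  # rough speech-rate / pause cue
}
print(features)
```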
(Mobile-sensed) voice features have repeatedly been used in affective computing for the automatic classification of depression, bipolar disorder, and Parkinson disease [34-38]. Higher-arousal emotions (eg, fear, anger, and joy) generally induce a higher speech intensity, F0, and speech rate, whereas lower-arousal emotions (eg, sadness and boredom) induce a lower speech intensity, F0, and speech rate (Table 1) [33,39-43]. Other features include a harmonics to noise ratio, which was found unrelated to arousal [44], and jitter, which showed a positive correlation with depression [45]. Arousal has been easiest to detect based on voice acoustics [46]. Discrete emotion recognition based on these features in deep neural networks has also been successful [47]. It is not yet clear whether these features could also discriminate between discrete emotions in simple models [48].
Table 1. Expected emotion–speech form correlations. Columns: F0 mean, F0 SD, F0 range, F0 rise, F0 fall, loudness mean, loudness rise, loudness fall, jitter, shimmer, HNR, speech rate, and pause duration; rows: valence, arousal, anger, anxiety, sadness or depression, stress, and happiness. Entries mark expected correlations as positive (+), positive or absent (+=), negative (−), or undirected (+−). F0: fundamental frequency; jitter: deviations in individual consecutive fundamental frequency period lengths; shimmer: difference in the peak amplitudes of consecutive fundamental frequency periods; HNR: harmonics to noise ratio (energy in harmonic components and energy in noise components).
Writing Content
Higher valence has repeatedly been associated with more positive and less negative emotion words on a within- and between-person level, along with a higher word count, in both natural and induced emotion conditions (Table 2) [28,49-51]. Other studies have demonstrated 1-time links between higher valence and more exclamation marks and fewer negations between persons, and between higher valence and less sadness-related words within persons [50,51], although the latter 2 have also been found to be unrelated [28,49]. Pennebaker [52] states that people use more first-person plural pronouns when they are happy.
Table 2. Expected emotion–speech and writing content correlations. Columns: word count (WC), I, we, you, negations (Negate), positive emotion words (Posemo), negative emotion words (Negemo), anxiety words (Anx), anger words, sadness words, absolutist words (Certain), swear words, and exclamation marks (Exclam); rows: valence, arousal, anger, anxiety, sadness, stress, happiness, and depression. Entries mark expected positive (+) and negative (−) correlations.
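The word categories summarized in Table 2 are typically obtained by closed-vocabulary counting; the sketch below shows one hedged way this could be done, using tiny placeholder dictionaries rather than the proprietary LIWC lexicon.

```python
# Sketch: LIWC-style category proportions for a text sample. The tiny
# dictionaries below are illustrative placeholders, not the LIWC lexicon.
import re

CATEGORIES = {
    "i": {"i", "me", "my", "mine", "myself"},
    "posemo": {"happy", "good", "love", "nice", "great"},
    "negemo": {"sad", "bad", "hate", "angry", "awful"},
    "negate": {"no", "not", "never", "none"},
}

def category_proportions(text: str) -> dict:
    tokens = re.findall(r"[a-z']+", text.lower())
    total = max(len(tokens), 1)                    # avoid division by zero
    props = {"word_count": len(tokens)}
    for name, lexicon in CATEGORIES.items():
        hits = sum(token in lexicon for token in tokens)
        props[name] = hits / total                 # proportion of all words
    return props

print(category_proportions("I am not happy today, I feel sad and bad."))
```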
Negative emotion, anxiety, and anger words recur as linguistic markers of anger within and between persons [49,51]. Pennebaker [52] adds to that the use of second-person pronouns. Recurrent linguistic markers of trait anxiety include negative emotion, sadness, and anger words [53,54]. The results with explicit anxiety words are mixed, and some isolated findings suggest a relationship with first-person, negation, swear, and certainty words [53,54]. Momentary and trait sadness have been linked to more negative emotion, sadness, and anger words in multiple studies [28,49,51]. In contrast, they were unrelated to sadness words in daily diaries [51]. A positive correlation existed between stress on one side and negative emotion and anger words between and within persons on the other [51,54]. Anxiety words have been related to stress both on a weekly and daily level [51], but this could not be replicated with trait stress [54]. Apart from the explicit emotion categories, several studies have linked depressive symptoms to the use of I words [23,55-58]. Other correlations include more negative emotion words, more swear words, and more negations [53,59,60]. More anxiety, sadness, and anger words were found in 1 study but were not significant in all studies [51,54]. In fact, Capecelatro et al [31] found depression to be unrelated to all Linguistic Inquiry and Word Count (LIWC) emotion categories.
Writing Form
Initially, studies concerning typing dynamics used external computer keyboards to predict stress and depression, among other emotions [61-65]. More recent studies have tried to use soft keyboards on smartphones for emotion, depression, and bipolar disorder detection [66-69]. It has been easier to distinguish between broad emotion dimensions (valence in 1 study and arousal in another) [66,70]. Despite the high predictive accuracies of deep learning models, separate correlations between emotional states and typing dynamics are small (Table 3). They exist between increased arousal and decreased keystroke duration and latency [70]. The dynamics used in depression detection include a shorter key press duration and latency, with a medium reduction in duration for severe depression but a high reduction for mild depression [61]. No correlation was found between depression and the number of backspaces. For emotions, typing speed was the most predictive feature [66].
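As an illustration of the writing form features just described, the following sketch derives key press duration, inter-key latency, typing speed, and backspace counts from a hypothetical timestamped keystroke log; the log format and values are assumptions for demonstration, not the schema used in the study.

```python
# Sketch: typing-dynamics features from a keystroke log. Each event is
# (key, press_time_s, release_time_s); the example log is made up.
import statistics

log = [
    ("h", 0.00, 0.09), ("e", 0.25, 0.33), ("l", 0.50, 0.57),
    ("BACKSPACE", 0.90, 0.98), ("l", 1.20, 1.28), ("o", 1.45, 1.52),
]

durations = [release - press for _, press, release in log]             # key press duration
latencies = [log[i + 1][1] - log[i][1] for i in range(len(log) - 1)]   # press-to-press latency
elapsed_minutes = (log[-1][2] - log[0][1]) / 60.0

features = {
    "n_characters": sum(key != "BACKSPACE" for key, _, _ in log),
    "n_backspaces": sum(key == "BACKSPACE" for key, _, _ in log),
    "mean_press_duration": statistics.mean(durations),
    "mean_interkey_latency": statistics.mean(latencies),
    "typing_speed_cpm": len(log) / elapsed_minutes,                    # characters per minute
}
print(features)
```

Features of this kind are usually aggregated over a typing session or a day before being correlated with emotion ratings.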
Table 3. Expected emotion–writing form correlations. Columns: number of characters, typing speed, average key press duration, number of entries, backspaces, and typing duration; rows: valence, arousal, anger, anxiety, sadness, stress, happiness, and depression. Entries mark expected correlations as (+)a or (−)b.
aPositive correlation.bNegative correlation. | [
"3389582",
"25822133",
"28950968",
"19222325",
"15576620",
"21534661",
"26818126",
"28375728",
"27899727",
"32865506",
"32990638",
"22734058",
"12185209",
"17650921",
"18925496",
"30945904",
"23510497",
"29504797",
"27434490",
"10916253",
"8445120",
"28747478",
"15376501",
"23730828",
"30886766",
"26257692",
"26500601",
"32072331",
"31527640",
"26065902",
"31898294"
] | [
{
"pmid": "25822133",
"title": "The relation between short-term emotion dynamics and psychological well-being: A meta-analysis.",
"abstract": "Not only how good or bad people feel on average, but also how their feelings fluctuate across time is crucial for psychological health. The last 2 decades have witnessed a surge in research linking various patterns of short-term emotional change to adaptive or maladaptive psychological functioning, often with conflicting results. A meta-analysis was performed to identify consistent relationships between patterns of short-term emotion dynamics-including patterns reflecting emotional variability (measured in terms of within-person standard deviation of emotions across time), emotional instability (measured in terms of the magnitude of consecutive emotional changes), and emotional inertia of emotions over time (measured in terms of autocorrelation)-and relatively stable indicators of psychological well-being or psychopathology. We determined how such relationships are moderated by the type of emotional change, type of psychological well-being or psychopathology involved, valence of the emotion, and methodological factors. A total of 793 effect sizes were identified from 79 articles (N = 11,381) and were subjected to a 3-level meta-analysis. The results confirmed that overall, low psychological well-being co-occurs with more variable (overall ρ̂ = -.178), unstable (overall ρ̂ = -.205), but also more inert (overall ρ̂ = -.151) emotions. These effect sizes were stronger when involving negative compared with positive emotions. Moreover, the results provided evidence for consistency across different types of psychological well-being and psychopathology in their relation with these dynamical patterns, although specificity was also observed. The findings demonstrate that psychological flourishing is characterized by specific patterns of emotional fluctuations across time, and provide insight into what constitutes optimal and suboptimal emotional functioning. (PsycINFO Database Record"
},
{
"pmid": "28950968",
"title": "Emotion dynamics.",
"abstract": "The study of emotion dynamics involves the study of the trajectories, patterns, and regularities with which emotions (or rather, the experiential, physiological, and behavioral elements that constitute an emotion) fluctuate across time, their underlying processes, and downstream consequences. Here, we formulate some of the basic principles underlying emotional change over time, discuss methods to study emotion dynamics, their relevance for psychological well-being, and a number of challenges and opportunities for the future."
},
{
"pmid": "19222325",
"title": "Analytic strategies for understanding affective (in)stability and other dynamic processes in psychopathology.",
"abstract": "The dynamics of psychopathological symptoms as a topic of research has been neglected for some time, likely because of the inability of cross-sectional and retrospective reports to uncover the ebb and flow of symptoms. Data gathered with the experience sampling method (ESM) enable researchers to study symptom variability and instability over time as well as the dynamic interplay between the environment, personal experiences, and psychopathological symptoms. ESM data can illuminate these dynamic processes, if time is both considered and integrated into (a) the research question itself, (b) the assessment or sampling method, and (c) the data analytic strategy. The authors highlight the complexity of assessing affective instability and unstable interpersonal relationships and explore sampling and analytic methods. Finally, they propose guidelines for future investigations. For the assessment of affective instability, the authors endorse the use of time-contingent recordings and of instability indices that address temporal dependency. For the assessment of unstable interpersonal relationships, they advocate the use of event-contingent recordings and separate analyses within and across dyads."
},
{
"pmid": "15576620",
"title": "A survey method for characterizing daily life experience: the day reconstruction method.",
"abstract": "The Day Reconstruction Method (DRM) assesses how people spend their time and how they experience the various activities and settings of their lives, combining features of time-budget measurement and experience sampling. Participants systematically reconstruct their activities and experiences of the preceding day with procedures designed to reduce recall biases. The DRM's utility is shown by documenting close correspondences between the DRM reports of 909 employed women and established results from experience sampling. An analysis of the hedonic treadmill shows the DRM's potential for well-being research."
},
{
"pmid": "21534661",
"title": "Subjective responses to emotional stimuli during labeling, reappraisal, and distraction.",
"abstract": "Although multiple neuroimaging studies suggest that affect labeling (i.e., putting feelings into words) can dampen affect-related responses in the amygdala, the consequences of affect labeling have not been examined in other channels of emotional responding. We conducted four studies examining the effect of affect labeling on self-reported emotional experience. In study one, self-reported distress was lower during affect labeling, compared to passive watching, of negative emotional pictures. Studies two and three added reappraisal and distraction conditions, respectively. Affect labeling showed similar effects on self-reported distress as both of these intentional emotion regulation strategies. In each of the first three studies, however, participant predictions about the effects of affect labeling suggest that unlike reappraisal and distraction, people do not believe affect labeling to be an effective emotion regulation strategy. Even after having the experience of affect labels leading to lower distress, participants still predicted that affect labeling would increase distress in the future. Thus, affect labeling is best described as an incidental emotion regulation process. Finally, study four employed positive emotional pictures and here, affect labeling was associated with diminished self-reported pleasure, relative to passive watching. This suggests that affect labeling tends to dampen affective responses in general, rather than specifically alleviating negative affect."
},
{
"pmid": "28375728",
"title": "Personal Sensing: Understanding Mental Health Using Ubiquitous Sensors and Machine Learning.",
"abstract": "Sensors in everyday devices, such as our phones, wearables, and computers, leave a stream of digital traces. Personal sensing refers to collecting and analyzing data from sensors embedded in the context of daily life with the aim of identifying human behaviors, thoughts, feelings, and traits. This article provides a critical review of personal sensing research related to mental health, focused principally on smartphones, but also including studies of wearables, social media, and computers. We provide a layered, hierarchical model for translating raw sensor data into markers of behaviors and states related to mental health. Also discussed are research methods as well as challenges, including privacy and problems of dimensionality. Although personal sensing is still in its infancy, it holds great promise as a method for conducting mental health research and as a clinical tool for monitoring at-risk populations and providing the foundation for the next generation of mobile health (or mHealth) interventions."
},
{
"pmid": "27899727",
"title": "Using Smartphones to Collect Behavioral Data in Psychological Science: Opportunities, Practical Considerations, and Challenges.",
"abstract": "Smartphones now offer the promise of collecting behavioral data unobtrusively, in situ, as it unfolds in the course of daily life. Data can be collected from the onboard sensors and other phone logs embedded in today's off-the-shelf smartphone devices. These data permit fine-grained, continuous collection of people's social interactions (e.g., speaking rates in conversation, size of social groups, calls, and text messages), daily activities (e.g., physical activity and sleep), and mobility patterns (e.g., frequency and duration of time spent at various locations). In this article, we have drawn on the lessons from the first wave of smartphone-sensing research to highlight areas of opportunity for psychological research, present practical considerations for designing smartphone studies, and discuss the ongoing methodological and ethical challenges associated with research in this domain. It is our hope that these practical guidelines will facilitate the use of smartphones as a behavioral observation tool in psychological science."
},
{
"pmid": "32865506",
"title": "Predicting Early Warning Signs of Psychotic Relapse From Passive Sensing Data: An Approach Using Encoder-Decoder Neural Networks.",
"abstract": "BACKGROUND\nSchizophrenia spectrum disorders (SSDs) are chronic conditions, but the severity of symptomatic experiences and functional impairments vacillate over the course of illness. Developing unobtrusive remote monitoring systems to detect early warning signs of impending symptomatic relapses would allow clinicians to intervene before the patient's condition worsens.\n\n\nOBJECTIVE\nIn this study, we aim to create the first models, exclusively using passive sensing data from a smartphone, to predict behavioral anomalies that could indicate early warning signs of a psychotic relapse.\n\n\nMETHODS\nData used to train and test the models were collected during the CrossCheck study. Hourly features derived from smartphone passive sensing data were extracted from 60 patients with SSDs (42 nonrelapse and 18 relapse >1 time throughout the study) and used to train models and test performance. We trained 2 types of encoder-decoder neural network models and a clustering-based local outlier factor model to predict behavioral anomalies that occurred within the 30-day period before a participant's date of relapse (the near relapse period). Models were trained to recreate participant behavior on days of relative health (DRH, outside of the near relapse period), following which a threshold to the recreation error was applied to predict anomalies. The neural network model architecture and the percentage of relapse participant data used to train all models were varied.\n\n\nRESULTS\nA total of 20,137 days of collected data were analyzed, with 726 days of data (0.037%) within any 30-day near relapse period. The best performing model used a fully connected neural network autoencoder architecture and achieved a median sensitivity of 0.25 (IQR 0.15-1.00) and specificity of 0.88 (IQR 0.14-0.96; a median 108% increase in behavioral anomalies near relapse). We conducted a post hoc analysis using the best performing model to identify behavioral features that had a medium-to-large effect (Cohen d>0.5) in distinguishing anomalies near relapse from DRH among 4 participants who relapsed multiple times throughout the study. Qualitative validation using clinical notes collected during the original CrossCheck study showed that the identified features from our analysis were presented to clinicians during relapse events.\n\n\nCONCLUSIONS\nOur proposed method predicted a higher rate of anomalies in patients with SSDs within the 30-day near relapse period and can be used to uncover individual-level behaviors that change before relapse. This approach will enable technologists and clinicians to build unobtrusive digital mental health tools that can predict incipient relapse in SSDs."
},
{
"pmid": "32990638",
"title": "Using Machine Learning and Smartphone and Smartwatch Data to Detect Emotional States and Transitions: Exploratory Study.",
"abstract": "BACKGROUND\nEmotional state in everyday life is an essential indicator of health and well-being. However, daily assessment of emotional states largely depends on active self-reports, which are often inconvenient and prone to incomplete information. Automated detection of emotional states and transitions on a daily basis could be an effective solution to this problem. However, the relationship between emotional transitions and everyday context remains to be unexplored.\n\n\nOBJECTIVE\nThis study aims to explore the relationship between contextual information and emotional transitions and states to evaluate the feasibility of detecting emotional transitions and states from daily contextual information using machine learning (ML) techniques.\n\n\nMETHODS\nThis study was conducted on the data of 18 individuals from a publicly available data set called ExtraSensory. Contextual and sensor data were collected using smartphone and smartwatch sensors in a free-living condition, where the number of days for each person varied from 3 to 9. Sensors included an accelerometer, a gyroscope, a compass, location services, a microphone, a phone state indicator, light, temperature, and a barometer. The users self-reported approximately 49 discrete emotions at different intervals via a smartphone app throughout the data collection period. We mapped the 49 reported discrete emotions to the 3 dimensions of the pleasure, arousal, and dominance model and considered 6 emotional states: discordant, pleased, dissuaded, aroused, submissive, and dominant. We built general and personalized models for detecting emotional transitions and states every 5 min. The transition detection problem is a binary classification problem that detects whether a person's emotional state has changed over time, whereas state detection is a multiclass classification problem. In both cases, a wide range of supervised ML algorithms were leveraged, in addition to data preprocessing, feature selection, and data imbalance handling techniques. Finally, an assessment was conducted to shed light on the association between everyday context and emotional states.\n\n\nRESULTS\nThis study obtained promising results for emotional state and transition detection. The best area under the receiver operating characteristic (AUROC) curve for emotional state detection reached 60.55% in the general models and an average of 96.33% across personalized models. Despite the highly imbalanced data, the best AUROC curve for emotional transition detection reached 90.5% in the general models and an average of 88.73% across personalized models. In general, feature analyses show that spatiotemporal context, phone state, and motion-related information are the most informative factors for emotional state and transition detection. Our assessment showed that lifestyle has an impact on the predictability of emotion.\n\n\nCONCLUSIONS\nOur results demonstrate a strong association of daily context with emotional states and transitions as well as the feasibility of detecting emotional states and transitions using data from smartphone and smartwatch sensors."
},
{
"pmid": "22734058",
"title": "The co-evolution of language and emotions.",
"abstract": "We argue that language evolution started like the evolution of reading and writing, through cultural evolutionary processes. Genuinely new behavioural patterns emerged from collective exploratory processes that individuals could learn because of their brain plasticity. Those cultural-linguistic innovative practices that were consistently socially and culturally selected drove a process of genetic accommodation of both general and language-specific aspects of cognition. We focus on the affective facet of this culture-driven cognitive evolution, and argue that the evolution of human emotions co-evolved with that of language. We suggest that complex tool manufacture and alloparenting played an important role in the evolution of emotions, by leading to increased executive control and inter-subjective sensitivity. This process, which can be interpreted as a special case of self-domestication, culminated in the construction of human-specific social emotions, which facilitated information-sharing. Once in place, language enhanced the inhibitory control of emotions, enabled the development of novel emotions and emotional capacities, and led to a human mentality that departs in fundamental ways from that of other apes. We end by suggesting experimental approaches that can help in evaluating some of these proposals and hence lead to better understanding of the evolutionary biology of language and emotions."
},
{
"pmid": "12185209",
"title": "Psychological aspects of natural language. use: our words, our selves.",
"abstract": "The words people use in their daily lives can reveal important aspects of their social and psychological worlds. With advances in computer technology, text analysis allows researchers to reliably and quickly assess features of what people say as well as subtleties in their linguistic styles. Following a brief review of several text analysis programs, we summarize some of the evidence that links natural word use to personality, social and situational fluctuations, and psychological interventions. Of particular interest are findings that point to the psychological value of studying particles-parts of speech that include pronouns, articles, prepositions, conjunctives, and auxiliary verbs. Particles, which serve as the glue that holds nouns and regular verbs together, can serve as markers of emotional state, social identity, and cognitive styles."
},
{
"pmid": "17650921",
"title": "Measuring emotional expression with the Linguistic Inquiry and Word Count.",
"abstract": "The Linguistic Inquiry and Word Count (LIWC) text analysis program often is used as a measure of emotion expression, yet the construct validity of its use for this purpose has not been examined. Three experimental studies assessed whether the LIWC counts of emotion processes words are sensitive to verbal expression of sadness and amusement. Experiment 1 determined that sad and amusing written autobiographical memories differed in LIWC emotion counts in expected ways. Experiment 2 revealed that reactions to emotionally provocative film clips designed to manipulate the momentary experience of sadness and amusement differed in LIWC counts. Experiment 3 replicated the findings of Experiment 2 and found generally weak relations between LIWC emotion counts and individual differences in emotional reactivity, dispositional expressivity, and personality. The LIWC therefore appears to be a valid method for measuring verbal expression of emotion."
},
{
"pmid": "18925496",
"title": "Clarifying the linguistic signature: measuring personality from natural speech.",
"abstract": "In this study, we examined the viability of measuring personality using computerized lexical analysis of natural speech. Two well-validated models of personality were measured, one involving trait positive affectivity (PA) and negative affectivity (NA) dimensions and the other involving a separate behavioral inhibition motivational system (BIS) and a behavioral activation motivational system (BAS). Individuals with high levels of trait PA and sensitive BAS expressed high levels of positive emotion in their natural speech, whereas individuals with high levels of trait NA and sensitive BIS tended to express high levels of negative emotion. The personality variables accounted for almost a quarter of the variance in emotional expressivity."
},
{
"pmid": "30945904",
"title": "The language of well-being: Tracking fluctuations in emotion experience through everyday speech.",
"abstract": "The words that people use have been found to reflect stable psychological traits, but less is known about the extent to which everyday fluctuations in spoken language reflect transient psychological states. We explored within-person associations between spoken words and self-reported state emotion among 185 participants who wore the Electronically Activated Recorder (EAR; an unobtrusive audio recording device) and completed experience sampling reports of their positive and negative emotions 4 times per day for 7 days (1,579 observations). We examined language using the Linguistic Inquiry and Word Count program (LIWC; theoretically created dictionaries) and open-vocabulary themes (clusters of data-driven semantically-related words). Although some studies give the impression that LIWC's positive and negative emotion dictionaries can be used as indicators of emotion experience, we found that when computed on spoken language, LIWC emotion scores were not significantly associated with self-reports of state emotion experience. Exploration of other categories of language variables suggests a number of hypotheses about substantive everyday correlates of momentary positive and negative emotion that can be tested in future studies. These findings (a) suggest that LIWC positive and negative emotion dictionaries may not capture self-reported subjective emotion experience when applied to everyday speech, (b) emphasize the importance of establishing the validity of language-based measures within one's target domain, (c) demonstrate the potential for developing new hypotheses about personality processes from the open-ended words that are used in everyday speech, and (d) extend perspectives on intraindividual variability to the domain of spoken language. (PsycINFO Database Record (c) 2020 APA, all rights reserved)."
},
{
"pmid": "23510497",
"title": "Major depression duration reduces appetitive word use: an elaborated verbal recall of emotional photographs.",
"abstract": "INTRODUCTION\nMajor depressive disorder (MDD) is characterized by cognitive biases in attention, memory and language use. Language use biases often parallel depression symptoms, and contain over-representations of both negative emotive and death words as well as low levels of positive emotive words. This study further explores cognitive biases in depression by comparing the effect of current depression status to cumulative depression history on an elaborated verbal recall of emotional photographs.\n\n\nMETHODS\nFollowing a negative mood induction, fifty-two individuals (42 women) with partially-remitted depression viewed - then recalled and verbally described - slides from the International Affective Picture System (IAPS). Descriptions were transcribed and frequency of depression-related word use (positive emotion, negative emotion, sex, ingestion and death) was analyzed using the Linguistic Inquiry and Word Count program (LIWC).\n\n\nRESULTS\nContrary to expectations and previous findings, current depression status did not affect word use in any categories of interest. However, individuals with more than 5 years of previous depression used fewer words related to positive emotion (t(50) = 2.10, p = .04, (d = 0.57)), and sex (t(48) = 2.50, p = .013 (d = 0.81)), and there was also a trend for these individuals to use fewer ingestion words (t(50) = 1.95, p = .057 (d = 0.58)), suggesting a deficit in appetitive processing.\n\n\nCONCLUSIONS\nOur findings suggest that depression duration affects appetitive information processing and that appetitive word use may be a behavioral marker for duration related brain changes which may be used to inform treatment."
},
{
"pmid": "29504797",
"title": "Depression, negative emotionality, and self-referential language: A multi-lab, multi-measure, and multi-language-task research synthesis.",
"abstract": "Depressive symptomatology is manifested in greater first-person singular pronoun use (i.e., I-talk), but when and for whom this effect is most apparent, and the extent to which it is specific to depression or part of a broader association between negative emotionality and I-talk, remains unclear. Using pooled data from N = 4,754 participants from 6 labs across 2 countries, we examined, in a preregistered analysis, how the depression-I-talk effect varied by (a) first-person singular pronoun type (i.e., subjective, objective, and possessive), (b) the communication context in which language was generated (i.e., personal, momentary thought, identity-related, and impersonal), and (c) gender. Overall, there was a small but reliable positive correlation between depression and I-talk (r = .10, 95% CI [.07, .13]). The effect was present for all first-person singular pronouns except the possessive type, in all communication contexts except the impersonal one, and for both females and males with little evidence of gender differences. Importantly, a similar pattern of results emerged for negative emotionality. Further, the depression-I-talk effect was substantially reduced when controlled for negative emotionality but this was not the case when the negative emotionality-I-talk effect was controlled for depression. These results suggest that the robust empirical link between depression and I-talk largely reflects a broader association between negative emotionality and I-talk. Self-referential language using first-person singular pronouns may therefore be better construed as a linguistic marker of general distress proneness or negative emotionality rather than as a specific marker of depression. (PsycINFO Database Record (c) 2019 APA, all rights reserved)."
},
{
"pmid": "27434490",
"title": "Voice analysis as an objective state marker in bipolar disorder.",
"abstract": "Changes in speech have been suggested as sensitive and valid measures of depression and mania in bipolar disorder. The present study aimed at investigating (1) voice features collected during phone calls as objective markers of affective states in bipolar disorder and (2) if combining voice features with automatically generated objective smartphone data on behavioral activities (for example, number of text messages and phone calls per day) and electronic self-monitored data (mood) on illness activity would increase the accuracy as a marker of affective states. Using smartphones, voice features, automatically generated objective smartphone data on behavioral activities and electronic self-monitored data were collected from 28 outpatients with bipolar disorder in naturalistic settings on a daily basis during a period of 12 weeks. Depressive and manic symptoms were assessed using the Hamilton Depression Rating Scale 17-item and the Young Mania Rating Scale, respectively, by a researcher blinded to smartphone data. Data were analyzed using random forest algorithms. Affective states were classified using voice features extracted during everyday life phone calls. Voice features were found to be more accurate, sensitive and specific in the classification of manic or mixed states with an area under the curve (AUC)=0.89 compared with an AUC=0.78 for the classification of depressive states. Combining voice features with automatically generated objective smartphone data on behavioral activities and electronic self-monitored data increased the accuracy, sensitivity and specificity of classification of affective states slightly. Voice features collected in naturalistic settings using smartphones may be used as objective state markers in patients with bipolar disorder."
},
{
"pmid": "10916253",
"title": "Acoustical properties of speech as indicators of depression and suicidal risk.",
"abstract": "Acoustic properties of speech have previously been identified as possible cues to depression, and there is evidence that certain vocal parameters may be used further to objectively discriminate between depressed and suicidal speech. Studies were performed to analyze and compare the speech acoustics of separate male and female samples comprised of normal individuals and individuals carrying diagnoses of depression and high-risk, near-term suicidality. The female sample consisted of ten control subjects, 17 dysthymic patients, and 21 major depressed patients. The male sample contained 24 control subjects, 21 major depressed patients, and 22 high-risk suicidal patients. Acoustic analyses of voice fundamental frequency (Fo), amplitude modulation (AM), formants, and power distribution were performed on speech samples extracted from audio recordings collected from the sample members. Multivariate feature and discriminant analyses were performed on feature vectors representing the members of the control and disordered classes. Features derived from the formant and power spectral density measurements were found to be the best discriminators of class membership in both the male and female studies. AM features emerged as strong class discriminators of the male classes. Features describing Fo were generally ineffective discriminators in both studies. The results support theories that identify psychomotor disturbances as central elements in depression and suicidality."
},
{
"pmid": "8445120",
"title": "Toward the simulation of emotion in synthetic speech: a review of the literature on human vocal emotion.",
"abstract": "There has been considerable research into perceptible correlates of emotional state, but a very limited amount of the literature examines the acoustic correlates and other relevant aspects of emotion effects in human speech; in addition, the vocal emotion literature is almost totally separate from the main body of speech analysis literature. A discussion of the literature describing human vocal emotion, and its principal findings, are presented. The voice parameters affected by emotion are found to be of three main types: voice quality, utterance timing, and utterance pitch contour. These parameters are described both in general and in detail for a range of specific emotions. Current speech synthesizer technology is such that many of the parameters of human speech affected by emotion could be manipulated systematically in synthetic speech to produce a simulation of vocal emotion; application of the literature to construction of a system capable of producing synthetic speech with emotion is discussed."
},
{
"pmid": "28747478",
"title": "Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: evidence for acoustic universals.",
"abstract": "Writing over a century ago, Darwin hypothesized that vocal expression of emotion dates back to our earliest terrestrial ancestors. If this hypothesis is true, we should expect to find cross-species acoustic universals in emotional vocalizations. Studies suggest that acoustic attributes of aroused vocalizations are shared across many mammalian species, and that humans can use these attributes to infer emotional content. But do these acoustic attributes extend to non-mammalian vertebrates? In this study, we asked human participants to judge the emotional content of vocalizations of nine vertebrate species representing three different biological classes-Amphibia, Reptilia (non-aves and aves) and Mammalia. We found that humans are able to identify higher levels of arousal in vocalizations across all species. This result was consistent across different language groups (English, German and Mandarin native speakers), suggesting that this ability is biologically rooted in humans. Our findings indicate that humans use multiple acoustic parameters to infer relative arousal in vocalizations for each species, but mainly rely on fundamental frequency and spectral centre of gravity to identify higher arousal vocalizations across species. These results suggest that fundamental mechanisms of vocal emotional expression are shared among vertebrates and could represent a homologous signalling system."
},
{
"pmid": "15376501",
"title": "Investigation of vocal jitter and glottal flow spectrum as possible cues for depression and near-term suicidal risk.",
"abstract": "Among the many clinical decisions that psychiatrists must make, assessment of a patient's risk of committing suicide is definitely among the most important, complex, and demanding. When reviewing his clinical experience, one of the authors observed that successful predictions of suicidality were often based on the patient's voice independent of content. The voices of suicidal patients judged to be high-risk near-term exhibited unique qualities, which distinguished them from nonsuicidal patients. We investigated the discriminating power of two excitation-based speech parameters, vocal jitter and glottal flow spectrum, for distinguishing among high-risk near-term suicidal, major depressed, and nonsuicidal patients. Our sample consisted of ten high-risk near-term suicidal patients, ten major depressed patients, and ten nondepressed control subjects. As a result of two sample statistical analyses, mean vocal jitter was found to be a significant discriminator only between suicidal and nondepressed control groups (p < 0.05). The slope of the glottal flow spectrum, on the other hand, was a significant discriminator between all three groups (p < 0.05). A maximum likelihood classifier, developed by combining the a posteriori probabilities of these two features, yielded correct classification scores of 85% between near-term suicidal patients and nondepressed controls, 90% between depressed patients and nondepressed controls, and 75% between near-term suicidal patients and depressed patients. These preliminary classification results support the hypothesized link between phonation and near-term suicidal risk. However, validation of the proposed measures on a larger sample size is necessary."
},
{
"pmid": "23730828",
"title": "Detecting well-being via computerized content analysis of brief diary entries.",
"abstract": "Two studies evaluated the correspondence between self-reported well-being and codings of emotion and life content by the Linguistic Inquiry and Word Count (LIWC; Pennebaker, Booth, & Francis, 2011). Open-ended diary responses were collected from 206 participants daily for 3 weeks (Study 1) and from 139 participants twice a week for 8 weeks (Study 2). LIWC negative emotion consistently correlated with self-reported negative emotion. LIWC positive emotion correlated with self-reported positive emotion in Study 1 but not in Study 2. No correlations were observed with global life satisfaction. Using a co-occurrence coding method to combine LIWC emotion codings with life-content codings, we estimated the frequency of positive and negative events in 6 life domains (family, friends, academics, health, leisure, and money). Domain-specific event frequencies predicted self-reported satisfaction in all domains in Study 1 but not consistently in Study 2. We suggest that the correspondence between LIWC codings and self-reported well-being is affected by the number of writing samples collected per day as well as the target period (e.g., past day vs. past week) assessed by the self-report measure. Extensions and possible implications for the analyses of similar types of open-ended data (e.g., social media messages) are discussed. (PsycINFO Database Record (c) 2013 APA, all rights reserved)."
},
{
"pmid": "30886766",
"title": "In an Absolute State: Elevated Use of Absolutist Words Is a Marker Specific to Anxiety, Depression, and Suicidal Ideation.",
"abstract": "Absolutist thinking is considered a cognitive distortion by most cognitive therapies for anxiety and depression. Yet, there is little empirical evidence of its prevalence or specificity. Across three studies, we conducted a text analysis of 63 Internet forums (over 6,400 members) using the Linguistic Inquiry and Word Count software to examine absolutism at the linguistic level. We predicted and found that anxiety, depression, and suicidal ideation forums contained more absolutist words than control forums (ds > 3.14). Suicidal ideation forums also contained more absolutist words than anxiety and depression forums (ds > 1.71). We show that these differences are more reflective of absolutist thinking than psychological distress. It is interesting that absolutist words tracked the severity of affective disorder forums more faithfully than negative emotion words. Finally, we found elevated levels of absolutist words in depression recovery forums. This suggests that absolutist thinking may be a vulnerability factor."
},
{
"pmid": "26257692",
"title": "Sharing feelings online: studying emotional well-being via automated text analysis of Facebook posts.",
"abstract": "Digital traces of activity on social network sites represent a vast source of ecological data with potential connections with individual behavioral and psychological characteristics. The present study investigates the relationship between user-generated textual content shared on Facebook and emotional well-being. Self-report measures of depression, anxiety, and stress were collected from 201 adult Facebook users from North Italy. Emotion-related textual indicators, including emoticon use, were extracted form users' Facebook posts via automated text analysis. Correlation analyses revealed that individuals with higher levels of depression, anxiety expressed negative emotions on Facebook more frequently. In addition, use of emoticons expressing positive emotions correlated negatively with stress level. When comparing age groups, younger users reported higher frequency of both emotion-related words and emoticon use in their posts. Also, the relationship between online emotional expression and self-report emotional well-being was generally stronger in the younger group. Overall, findings support the feasibility and validity of studying individual emotional well-being by means of examination of Facebook profiles. Implications for online screening purposes and future research directions are discussed."
},
{
"pmid": "26500601",
"title": "Me, myself, and I: self-referent word use as an indicator of self-focused attention in relation to depression and anxiety.",
"abstract": "Self-focused attention (SFA) is considered a cognitive bias that is closely related to depression. However, it is not yet well understood whether it represents a disorder-specific or a trans-diagnostic phenomenon and which role the valence of a given context is playing in this regard. Computerized quantitative text-analysis offers an integrative psycho-linguistic approach that may help to provide new insights into these complex relationships. The relative frequency of first-person singular pronouns in natural language is regarded as an objective, linguistic marker of SFA. Here we present two studies that examined the associations between SFA and symptoms of depression and anxiety in two different contexts (positive vs. negative valence), as well as the convergence between pronoun-use and self-reported aspects of SFA. In the first study, we found that the use of first-person singular pronouns during negative but not during positive memory recall was positively related to symptoms of depression and anxiety in patients with anorexia nervosa with varying levels of co-morbid depression and anxiety. In the second study, we found the same pattern of results in non-depressed individuals. In addition, use of first-person singular pronouns during negative memory recall was positively related to brooding (i.e., the assumed maladaptive sub-component of rumination) but not to reflection. These findings could not be replicated in two samples of depressed patients. However, non-chronically depressed patients used more first-person singular pronouns than healthy controls, irrespective of context. Taken together, the findings lend partial support to theoretical models that emphasize the effects of context on self-focus and consider SFA as a relevant trans-diagnostic phenomenon. In addition, the present findings point to the construct validity of pronoun-use as a linguistic marker of maladaptive self-focus."
},
{
"pmid": "32072331",
"title": "Stress Detection via Keyboard Typing Behaviors by Using Smartphone Sensors and Machine Learning Techniques.",
"abstract": "Stress is one of the biggest problems in modern society. It may not be possible for people to perceive if they are under high stress or not. It is important to detect stress early and unobtrusively. In this context, stress detection can be considered as a classification problem. In this study, it was investigated the effects of stress by using accelerometer and gyroscope sensor data of the writing behavior on a smartphone touchscreen panel. For this purpose, smartphone data including two states (stress and calm) were collected from 46 participants. The obtained sensor signals were divided into 5, 10 and 15 s interval windows to create three different data sets and 112 different features were defined from the raw data. To obtain more effective feature subsets, these features were ranked by using Gain Ratio feature selection algorithm. Afterwards, writing behaviors were classified by C4.5 Decision Trees, Bayesian Networks and k-Nearest Neighbor methods. As a result of the experiments, 74.26%, 67.86%, and 87.56% accuracy classification results were obtained respectively."
},
{
"pmid": "31527640",
"title": "Touchscreen typing pattern analysis for remote detection of the depressive tendency.",
"abstract": "Depressive disorder (DD) is a mental illness affecting more than 300 million people worldwide, whereas social stigma and subtle, variant symptoms impede diagnosis. Psychomotor retardation is a common component of DD with a negative impact on motor function, usually reflected on patients' routine activities, including, nowadays, their interaction with mobile devices. Therefore, such interactions constitute an enticing source of information towards unsupervised screening for DD symptoms in daily life. In this vein, this paper proposes a machine learning-based method for discriminating between subjects with depressive tendency and healthy controls, as denoted by self-reported Patient Health Questionnaire-9 (PHQ-9) compound scores, based on typing patterns captured in-the-wild. The latter consisted of keystroke timing sequences and typing metadata, passively collected during natural typing on touchscreen smartphones by 11/14 subjects with/without depressive tendency. Statistical features were extracted and tested in univariate and multivariate classification pipelines to reach a decision on subjects' status. The best-performing pipeline achieved an AUC = 0.89 (0.72-1.00; 95% Confidence Interval) and 0.82/0.86 sensitivity/specificity, with the outputted probabilities significantly correlating (>0.60) with the respective PHQ-9 scores. This work adds to the findings of previous research associating typing patterns with psycho-motor impairment and contributes to the development of an unobtrusive, high-frequency monitoring of depressive tendency in everyday living."
},
{
"pmid": "26065902",
"title": "The Influence of Emotion on Keyboard Typing: An Experimental Study Using Auditory Stimuli.",
"abstract": "In recent years, a novel approach for emotion recognition has been reported, which is by keystroke dynamics. The advantages of using this approach are that the data used is rather non-intrusive and easy to obtain. However, there were only limited investigations about the phenomenon itself in previous studies. Hence, this study aimed to examine the source of variance in keyboard typing patterns caused by emotions. A controlled experiment to collect subjects' keystroke data in different emotional states induced by International Affective Digitized Sounds (IADS) was conducted. Two-way Valence (3) x Arousal (3) ANOVAs was used to examine the collected dataset. The results of the experiment indicate that the effect of arousal is significant in keystroke duration (p < .05), keystroke latency (p < .01), but not in the accuracy rate of keyboard typing. The size of the emotional effect is small, compared to the individual variability. Our findings support the conclusion that the keystroke duration and latency are influenced by arousal. The finding about the size of the effect suggests that the accuracy rate of emotion recognition technology could be further improved if personalized models are utilized. Notably, the experiment was conducted using standard instruments and hence is expected to be highly reproducible."
},
{
"pmid": "31898294",
"title": "mobileQ: A free user-friendly application for collecting experience sampling data.",
"abstract": "In this article we introduce mobileQ, which is a free, open-source platform that our lab has developed to use in experience sampling studies. Experience sampling has several strengths and is becoming more widely conducted, but there are few free software options. To address this gap, mobileQ has freely available servers, a web interface, and an Android app. To reduce the barrier to entry, it requires no high-level programming and uses an easy, point-and-click interface. It is designed to be used on dedicated research phones, allowing for experimenter control and eliminating selection bias. In this article, we introduce setting up a study in mobileQ, outline the set of help resources available for new users, and highlight the success with which mobileQ has been used in our lab."
}
] |
BMC Medical Informatics and Decision Making | null | PMC8881866 | 10.1186/s12911-022-01788-8 | Interpretable instance disease prediction based on causal feature selection and effect analysis | BackgroundIn the big wave of artificial intelligence sweeping the world, machine learning has made great achievements in healthcare in the past few years; however, these methods are based only on correlation, not causation. The particularities of healthcare mean that research methods must comply with causal norms; otherwise, wrong intervention measures may bring patients a lifetime of misfortune.MethodsWe propose a two-stage prediction method (instance feature selection prediction and causal effect analysis) for instance-level disease prediction. Feature selection is based on counterfactuals and uses a reinforcement learning framework to design an interpretable, qualitative instance feature selection prediction. The model is composed of three neural networks (a counterfactual prediction network, a fact prediction network, and a counterfactual feature selection network), and the actor-critic method is used to train them. We then take the counterfactual prediction network as a structural causal model and improve a neural network attribution algorithm based on gradient integration to quantitatively calculate the causal effect of the selected features on the output results.ResultsThe results of our experiments on synthetic data, open-source data, and real medical data show that our proposed method can provide qualitative and quantitative causal explanations for the model while giving prediction results.ConclusionsThe experimental results demonstrate that causality can uncover more essential relationships between variables, and that a prediction method based on causal feature selection and effect analysis can build a more reliable disease prediction model.Supplementary InformationThe online version contains supplementary material available at 10.1186/s12911-022-01788-8. | Related workMachine learning has made great progress in healthcare [11–13]. These applications must satisfy two conditions: (1) they must be causal and (2) they must be explainable. For example, to find the effect of a drug on a patient's health, it is necessary to estimate the causal relationship between the drug and the patient's health status. Moreover, for the results to be trusted by the doctor, it is necessary to explain how the decision was made. Recent work on model interpretability falls along the following lines. Attention networks: neural network models based on attention mechanisms can not only improve prediction accuracy but also show which input features or learned representations matter most for a specific prediction, for example in graph embedding [14] and machine translation [15, 16]. Representation learning: one goal of representation learning is to decompose features into independent latent variables that are highly correlated with meaningful patterns [11]. In traditional machine learning, methods such as PCA [17], ICA [18], and spectral analysis [19] have been proposed to discover the entangled components of data. More recently, researchers have developed deep latent variable models such as VAE [20], InfoGAN [10], and β-VAE [21] that learn to disentangle the latent variables through variational inference.
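As a small, self-contained illustration of the classical disentangling baselines mentioned above (and not of the paper's own method), the sketch below applies scikit-learn's PCA and FastICA to a synthetic two-source mixture.

```python
# Sketch: recovering latent components from mixed signals with PCA and ICA.
# Synthetic data; illustrates the classical baselines cited above.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t))]   # two independent latent signals
sources += 0.05 * rng.standard_normal(sources.shape)     # small observation noise

mixing = np.array([[1.0, 0.5], [0.4, 1.2]])              # unknown mixing process
observed = sources @ mixing.T                            # what we actually measure

pca_components = PCA(n_components=2).fit_transform(observed)                      # decorrelates only
ica_components = FastICA(n_components=2, random_state=0).fit_transform(observed)  # seeks independence

print(pca_components.shape, ica_components.shape)        # (2000, 2) each
```

ICA goes beyond PCA's decorrelation by seeking statistically independent components, which is the classical analogue of the disentanglement objective pursued by VAE-style models.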
Locally interpretable models: LIME [9] is a representative and pioneering framework that can approximate any black-box prediction through a local interpretable surrogate model. Saliency mapping: originally developed by Simonyan et al. [22] as a "category saliency map for a particular image", it highlights the pixels of a given input image that matter most for identifying a particular class label. To extract these pixels, backpropagation is used to compute the derivative of the class score with respect to the input image, and the magnitude of this derivative indicates the importance of each pixel to the category score. Other researchers have used similar concepts to deconvolve predictions and show the locations in input images that strongly influence neuronal activation [23–25]. Although these methods are popular tools for interpretability, Adebayo et al. [26] and Ghorbani et al. [27] argue that relying on visual assessments is insufficient and may be misleading. In addition, there is related work on feature selection based on information theory. The fast correlation-based filter (FCBF) was proposed by Lei Yu and Huan Liu [33]; it uses symmetric uncertainty instead of information gain to measure whether a feature is relevant to the class C or redundant. The minimum-redundancy maximum-relevance (MRMR) algorithm [34] is a feature selection algorithm for single-label data; its purpose is to select m of the n features such that the selected subset yields classification results close to, or even better than, those obtained with all features (an illustrative sketch of this criterion follows the reference entries below). Brown et al. [35] present a unifying framework for information-theoretic feature selection, bringing almost two decades of research on heuristic filter criteria under a single theoretical interpretation. This paper, in contrast, focuses on causal feature selection. Counterfactual analysis and causal inference have gained a lot of attention in the interpretable machine learning field. Research in this area has mainly focused on generating counterfactual explanations from the data perspective [28, 29] as well as from the components of a model [30, 31]. Pearl [32] introduces different levels of interpretability and argues that generating counterfactual explanations is the way to achieve the highest level of interpretability. Therefore, this paper attempts to select causal features based on neural networks and causal reasoning. The relevant methods are described as follows. | [
"27898976",
"10946390"
] | [
{
"pmid": "27898976",
"title": "Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.",
"abstract": "Importance\nDeep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation.\n\n\nObjective\nTo apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs.\n\n\nDesign and Setting\nA specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency.\n\n\nExposure\nDeep learning-trained algorithm.\n\n\nMain Outcomes and Measures\nThe sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity.\n\n\nResults\nThe EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%). Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4% and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%.\n\n\nConclusions and Relevance\nIn this evaluation of retinal fundus photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy. Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment."
},
{
"pmid": "10946390",
"title": "Independent component analysis: algorithms and applications.",
"abstract": "A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, and our recent work on the subject."
}
] |
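The MRMR criterion discussed in the related-work passage above greedily adds, at each step, the feature whose mutual information with the label is highest after subtracting its average mutual information with the already-selected features. The following sketch is only an illustration of that criterion using scikit-learn's generic mutual-information estimators; the estimator choice, the toy data, and the function name are assumptions, not the cited algorithm's reference implementation.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, m, random_state=0):
    """Greedy MRMR: maximize relevance I(f; y) minus mean redundancy I(f; f_selected)."""
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=random_state)  # I(f_i; y)
    selected, remaining = [], list(range(n_features))
    while len(selected) < m and remaining:
        scores = []
        for j in remaining:
            if selected:
                redundancy = np.mean([
                    mutual_info_regression(X[:, [j]], X[:, k], random_state=random_state)[0]
                    for k in selected
                ])
            else:
                redundancy = 0.0
            scores.append(relevance[j] - redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: two informative features (0 and 3) plus noise columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)
print(mrmr_select(X, y, m=2))
```

With this toy data the two informative features (indices 0 and 3) are the expected selections, although mutual-information estimates on small samples are noisy.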
Frontiers in Bioengineering and Biotechnology | null | PMC8882909 | 10.3389/fbioe.2022.804454 | Traffic Flow Prediction Model Based on the Combination of Improved Gated Recurrent Unit and Graph Convolutional Network | With rapid economic growth and the continuous increase in population, cars have become a necessity for most people's travel. The increase in the number of cars is accompanied by serious traffic congestion. In order to alleviate traffic congestion, many places have introduced policies such as vehicle restriction, and intelligent transportation systems have gradually been put into use. Due to the chaotic complexity of the traffic road network and the short-term mobility of the population, traffic flow prediction is affected by many complex factors, and building an effective traffic flow forecasting system is very challenging. This paper proposes a model to predict the traffic flow of Wenyi Road in Hangzhou. Wenyi Road contains four intersections. The four intersections show the same traffic flow trend at the same times, which indicates that the roads influence each other spatially and that the traffic flow has spatial and temporal correlation. Based on this feature of traffic flow, we propose the IMgru model to better extract the temporal characteristics of traffic flow. In addition, the IMgruGcn model is proposed, which combines the graph convolutional network (GCN) module and the IMgru module to extract the spatiotemporal features of traffic flow simultaneously. Finally, according to the morning and evening peak characteristics of Hangzhou, the Wenyi Road dataset is divided into peak and off-peak periods for prediction. Compared with five baseline models and a state-of-the-art method, the IMgruGcn model achieves better results. The best results were also achieved on a public dataset, demonstrating the generalization ability of the IMgruGcn model. | Related work: Traditional traffic flow forecasting methods are mainly based on linear statistical models and nonlinear theory-based models. Linear statistical models use mathematical statistical theory to analyze historical traffic and predict future traffic; they include autoregressive integrated moving average (ARIMA) models (Williams et al., 1998), historical average (HA) models (Chang et al., 2011), Kalman filter prediction models (Kalman, 1960), and support vector regression (SVR) models (Smola and Scholkopf, 2004). The historical average model is simple and can account for traffic flow variations at different times to some extent, but its predictions are static and cannot handle sudden traffic accidents or unconventional traffic conditions. Although the equipment used for linear statistical models is relatively simple and low cost, their real-time performance is poor. Nonlinear theoretical models, which predict traffic flow by recovering the original features of the traffic system in high-dimensional space through phase-space reconstruction, include wavelet analysis models (Ouyang et al., 2017), chaos theory models (Jieni and Zhongke, 2008), and catastrophe theory-based models. Nonlinear models have some advantages in processing time series, but they also have disadvantages, such as greater model complexity and large computational cost. The current mainstream models are neural networks and deep learning models (Yi et al., 2017), among which the BP neural network is commonly used.
Methods such as deep long short-term memory networks (LSTM) (Kang and Zhang, 2020) and MF-CNN (Yang et al., 2019) extract traffic flow features from the temporal perspective for prediction, while DMVST networks (Yao et al., 2018) and ST-ResNet (Zhang et al., 2017) mine traffic flow features from the spatial perspective using convolutional neural networks (CNNs). Currently, more and more researchers are studying the characteristics of traffic flow from both spatial and temporal perspectives to make more accurate predictions. For example, the ASTGCN model (Guo et al., 2019), an attention-based spatiotemporal graph convolutional network, consists of three independent components that model three temporal characteristics of traffic flow: recent dependence, daily-cycle dependence, and weekly-cycle dependence. The STGCN model (Yu et al., 2017) effectively captures comprehensive spatiotemporal correlations by modeling multiscale traffic networks. The DCRNN model (Li et al., 2017) models traffic flow as a diffusion process on a directed graph and introduces a diffusion convolutional recurrent neural network that captures spatial correlation using bidirectional random walks on the graph and temporal correlation using an encoder–decoder architecture with scheduled sampling. Traditional convolutional networks such as CNNs have strong feature extraction and integration capabilities: by iteratively updating the convolutional kernel parameters, they learn the pixel arrangement patterns in images and thus different shape and spatial features. However, CNNs operate on regularly structured data, i.e., neatly arranged matrices, and have difficulty processing data with a topological graph structure. The traffic flow data we study contain many irregular structures and therefore require a graph convolutional network, whose essence and purpose is to mine the spatial features of a topological graph. In real life there are many irregularly shaped data structures; typical graph structures, such as traffic road networks, social networks, and chemical structures, do not have a regular internal structure like images or language. Graph structures are generally irregular, and the neighborhood of each node is unique, so such data cannot be handled well by traditional CNN or RNN networks. Inspired by the above research, this paper uses both a graph convolutional network (GCN) (Kipf and Welling, 2016) and a gated recurrent unit (GRU) (Cho et al., 2014) to mine the spatiotemporal characteristics of traffic flow, and improves the GRU network by proposing the IM module, which enables a richer interactive representation between the input at the current moment and the hidden state passed down from the previous moment.
The resulting model is called the IMgru model; it strengthens significant information, weakens secondary information, and improves the modeling capability for predicting the traffic flow at the next moment. In addition, we found that the traffic flow at the four intersections in the Wenyi Road dataset follows the same trend at the same moments, owing to the interaction of traffic flow between upstream and downstream roads; this indicates that the traffic flow is spatially correlated, so the IMgruGcn model was proposed, which combines the GCN module and the IMgru module to obtain the spatial and temporal characteristics of traffic flow and make the prediction results more accurate. Comparing the GruGcn model with the IMgruGcn model demonstrates the effectiveness of the proposed IM module, and comparing the IMgru model with the IMgruGcn model demonstrates the effectiveness of combining the spatial module with the temporal module. By exploiting the temporal and spatial correlation of the traffic flow in this way, prediction of the traffic flow at the next moment becomes more effective. | [] | [] |
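To make the GCN-plus-GRU combination discussed above more concrete, the following minimal sketch applies one Kipf–Welling-style graph convolution per time step and feeds the result into a GRU cell shared across nodes. It illustrates the general architecture only; the layer sizes, the symmetric normalization, and the use of PyTorch's built-in `GRUCell` are assumptions, not the paper's IMgru/IMgruGcn implementation.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One GCN propagation step: H' = relu(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):                 # x: (nodes, in_dim), adj: (nodes, nodes)
        a_hat = adj + torch.eye(adj.size(0))   # add self-loops
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(self.lin(norm @ x))

class GcnGru(nn.Module):
    """Spatial aggregation with a GCN, temporal modeling with a GRU shared across nodes."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.gcn = GraphConv(in_dim, hidden_dim)
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, seq, adj):               # seq: (time, nodes, in_dim)
        h = torch.zeros(seq.size(1), self.gru.hidden_size)
        for t in range(seq.size(0)):
            spatial = self.gcn(seq[t], adj)    # mix information between neighboring roads
            h = self.gru(spatial, h)           # update each node's temporal state
        return self.out(h)                     # next-step flow prediction per node

# Toy usage: 4 intersections in a line, 12 past time steps, 2 input features per node.
adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float)
model = GcnGru(in_dim=2, hidden_dim=16)
print(model(torch.randn(12, 4, 2), adj).shape)   # -> torch.Size([4, 1])
```

Sharing the GRU cell across nodes keeps the parameter count independent of the number of intersections, which is a common design choice when the road graph is small and fixed.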
Frontiers in Robotics and AI | null | PMC8882984 | 10.3389/frobt.2021.725780 | How to be Helpful? Supportive Behaviors and Personalization for Human-Robot Collaboration | The field of Human-Robot Collaboration (HRC) has seen a considerable amount of progress in recent years. Thanks in part to advances in control and perception algorithms, robots have started to work in increasingly unstructured environments, where they operate side by side with humans to achieve shared tasks. However, little progress has been made toward the development of systems that are truly effective in supporting the human, proactive in their collaboration, and that can autonomously take care of part of the task. In this work, we present a collaborative system capable of assisting a human worker despite limited manipulation capabilities, incomplete model of the task, and partial observability of the environment. Our framework leverages information from a high-level, hierarchical model that is shared between the human and robot and that enables transparent synchronization between the peers and mutual understanding of each other’s plan. More precisely, we firstly derive a partially observable Markov model from the high-level task representation; we then use an online Monte-Carlo solver to compute a short-horizon robot-executable plan. The resulting policy is capable of interactive replanning on-the-fly, dynamic error recovery, and identification of hidden user preferences. We demonstrate that the system is capable of robustly providing support to the human in a realistic furniture construction task. | 2 Background and Related WorkThis work capitalizes on past research in the field of high-level task reasoning and representation. As detailed in Section 3, the core contribution of this paper is a system able to convert human-understandable hierarchical task models into robot-executable planners capable of online interaction with the human. Contrarily to more traditional techniques that leverage full observability in the context of HRC applications [e.g., Kaelbling and Lozano-Pérez (2010); Toussaint et al. (2016)], our system deliberately optimizes its actions based on the interaction dynamics between the human and the robot. We explicitly account for uncertainty in the state of the world (e.g., task progression, availability of objects in the workspace) as well as in the state of the human partner (i.e., their beliefs, intents, and preferences). To this end, we employ a Partially Observable Markov Decision Process (POMDP) that plans optimal actions in the belief space.To some extent, this approach builds on top of results in the field of task and motion planning [TAMP, see e.g., Kaelbling et al. (1998); Kaelbling and Lozano-Pérez (2013); Koval et al. (2016)]. Indeed, similarly to Kaelbling and Lozano-Pérez (2013) we find approximate solutions to large POMDP problems through planning in belief space combined with just-in-time re-planning. However, our work differs from the literature in a number of ways: 1) the hierarchical nature of the task is not explicitly dealt with in the POMDP model, but rather at a higher level of abstraction (that of the task representation, cf. 
Section 3.1), which reduces complexity at the planning stage; 2) we encapsulate the complexity of physically interacting with the environment away from the POMDP model, which results in broader applicability and ease of deployment compared with standard TAMP methods; 3) most notably, we handle uncertainty in the human-robot interaction rather than in the physical interaction between the robot and the environment. That is, our domain of application presents fundamental differences from that targeted by TAMP techniques; there is still no shared consensus in the literature on how to model uncertainty about the human's beliefs and intents in general, and about the collaboration in particular. Our work contributes to filling this gap. Planning techniques can enable human-robot collaboration when a precise model of the task is known, and might adapt to hidden user preferences as demonstrated by Wilcox et al. (2012). Similarly, partially observable models can provide robustness to unpredicted events and account for unobservable states. Of particular note is the work by Gopalan and Tellex (2015) which, similarly to the approach presented in this paper, uses a POMDP to model a collaborative task. Indeed, POMDPs and similar models (e.g., MOMDPs) have been shown to improve robot assistance (Hoey et al., 2010) and team efficiency (Nikolaidis et al., 2015) in related works. Such models of the task are, however, computationally expensive and not transparent to the user. Hence, a significant body of work in the fields of human-robot collaboration and physical human-robot interaction focuses on how best to take over from the human partner by learning the parts of the task that are burdensome in terms of physical safety or cognitive load. Under this perspective, the majority of the research in the field has focused on frameworks for learning new skills from human demonstration [LfD, Billard et al. (2008)], efficiently learning or modeling task representations (Ilghami et al., 2005; Gombolay et al., 2013; Hayes and Scassellati, 2016; Toussaint et al., 2016), or interpreting the human partner's actions and social signals (Grizou et al., 2013). No matter how efficient such models are at exhibiting the intended behavior, they are often limited to simple tasks and are not transparent to the human peer. Indeed, evidence from the study of human-human interactions has demonstrated the importance of sharing mental task models to improve the efficiency of the collaboration (Shah and Breazeal, 2010; Johnson et al., 2014). Similarly, studies on human-robot interactions show that an autonomous robot with a model of the task shared with a human peer can decrease the idle time for the human during the collaboration (Shah et al., 2011). Without enabling the robot to learn the task, other approaches have demonstrated the essential capability for collaborative robots to dynamically adapt their plans with respect to the task in order to accommodate the human's actions or unforeseen events (Hoffman and Breazeal, 2004). Likewise, rich task models can also enable optimization of decisions with respect to extrinsic metrics such as risk to the human (Hoffman and Breazeal, 2007) or completion time (Roncone et al., 2017). Our paper is positioned within this growing body of work related to task representations in HRC. We build on a large body of literature on task and motion planning and POMDP planning with the goal of designing novel human–robot interactions.
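Since the discussion above centers on planning in belief space, the following sketch shows the standard discrete POMDP belief update that such planners (including online Monte-Carlo solvers) rely on: after taking an action and receiving an observation, the belief over hidden states is propagated through the transition model and re-weighted by the observation likelihood. The tiny two-state example models a hidden user preference; that scenario, the action names, and the probability values are assumptions made purely for illustration, not taken from the paper.

```python
import numpy as np

def belief_update(belief, action, observation, T, Z):
    """b'(s') ∝ Z[a][s'][o] * sum_s T[a][s][s'] * b(s)."""
    predicted = belief @ T[action]                    # propagate through transition model
    updated = predicted * Z[action][:, observation]   # weight by observation likelihood
    return updated / updated.sum()                    # normalize

# Hidden state: does the user prefer the robot to fetch parts (0) or stay back (1)?
# The preference is assumed static, so transitions are the identity for every action.
T = {"offer_part": np.eye(2), "wait": np.eye(2)}
# Observation model: the reaction to an offered part is informative, idling is not.
#   rows = hidden state, columns = observation (0: accepts the part, 1: ignores the robot)
Z = {"offer_part": np.array([[0.8, 0.2],
                             [0.3, 0.7]]),
     "wait":       np.array([[0.5, 0.5],
                             [0.5, 0.5]])}

b = np.array([0.5, 0.5])                              # start maximally uncertain
for obs in [0, 0, 1, 0]:                              # observed reactions to offered parts
    b = belief_update(b, "offer_part", obs, T, Z)
    print(np.round(b, 3))
```

In a full solver this update is embedded inside the search over action sequences; the point here is only that uncertainty about the partner is carried explicitly as a probability distribution rather than as a point estimate.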
Unfortunately, little attention has been given to the issue of explicitly tackling the problem of effectively supporting the human partner, and only a few works go in this direction. Hayes and Scassellati (2015) present an algorithm to generate supportive behaviors during collaborative activity, although its results in simulation fall short in terms of providing practical demonstrations of the technique. Grigore et al. (2018) propose a model to predict supportive behaviors from observed demonstration trajectories and hidden human preferences, but the results are not operationalized and evaluated within a full HRC system. On the other side of the spectrum, a number of works cited above achieve supportive behaviors to a certain extent without explicitly targeting them (Hoffman and Breazeal, 2007; Shah et al., 2011; Gopalan and Tellex, 2015; Toussaint et al., 2016). A limitation of these approaches is that, as mentioned previously, they rely on exact task knowledge that is not always available for complex tasks in practical applications. In this work, we incorporate unknown human preferences (i.e., they are not directly provided to the system and need to be inferred via interaction) while automatically generating a complex POMDP from a minimal data abstraction (i.e., the HTM) and forbidding the robot from explicitly communicating with the human (which we allowed in our prior work, Roncone et al. (2017)). To the best of our knowledge, to date no work in human-robot collaboration has tackled the problem of high-level decision making in collaborative scenarios with this level of uncertainty about the human (in terms of human state, human beliefs, and human preferences). | [
"20942253",
"23757544"
] | [
{
"pmid": "20942253",
"title": "An empirical analysis of team coordination behaviors and action planning with application to human-robot teaming.",
"abstract": "OBJECTIVE\nWe conducted an empirical analysis of human teamwork to investigate the ways teammates incorporate coordination behaviors, including verbal and nonverbal cues, into their action planning.\n\n\nBACKGROUND\nIn space, military, aviation, and medical industries, teams of people effectively coordinate to perform complex tasks under stress induced by uncertainty, ambiguity, and time pressure. As robots increasingly are introduced into these domains, we seek to understand effective human-team coordination to inform natural and effective human-robot coordination.\n\n\nMETHOD\nWe conducted teamwork experiments in which teams of two people performed a complex task, involving ordering, timing, and resource constraints. Half the teams performed under time pressure, and half performed without time pressure. We cataloged the coordination behaviors used by each team and analyzed the speed of response and specificity of each coordination behavior.\n\n\nRESULTS\nAnalysis shows that teammates respond to explicit cues, including commands meant to control actions, more quickly than implicit cues, which include short verbal and gestural attention getters and status updates. Analysis also shows that nearly all explicit cues and implicit gestural cues were used to refer to one specific action, whereas approximately half of implicit cues did not often refer to one specific action.\n\n\nCONCLUSION\nThese results provide insight into how human teams use coordination behaviors in their action planning. For example, implicit cues seem to offer the teammate flexibility on when to perform the indicated action, whereas explicit cues seem to demand immediate response.\n\n\nAPPLICATION\nWe discuss how these findings inform the design of more natural and fluid human-robot teaming."
},
{
"pmid": "23757544",
"title": "Task-based decomposition of factored POMDPs.",
"abstract": "Recently, partially observable Markov decision processes (POMDP) solvers have shown the ability to scale up significantly using domain structure, such as factored representations. In many domains, the agent is required to complete a set of independent tasks. We propose to decompose a factored POMDP into a set of restricted POMDPs over subsets of task relevant state variables. We solve each such model independently, acquiring a value function. The combination of the value functions of the restricted POMDPs is then used to form a policy for the complete POMDP. We explain the process of identifying variables that correspond to tasks, and how to create a model restricted to a single task, or to a subset of tasks. We demonstrate our approach on a number of benchmarks from the factored POMDP literature, showing that our methods are applicable to models with more than 100 state variables."
}
] |
Frontiers in Robotics and AI | null | PMC8883277 | 10.3389/frobt.2022.762051 | Exploratory State Representation Learning | Not having access to compact and meaningful representations is known to significantly increase the complexity of reinforcement learning (RL). For this reason, it can be useful to perform state representation learning (SRL) before tackling RL tasks. However, obtaining a good state representation can only be done if a large diversity of transitions is observed, which can require a difficult exploration, especially if the environment is initially reward-free. To solve the problems of exploration and SRL in parallel, we propose a new approach called XSRL (eXploratory State Representation Learning). On one hand, it jointly learns compact state representations and a state transition estimator which is used to remove unexploitable information from the representations. On the other hand, it continuously trains an inverse model, and adds to the prediction error of this model a k-step learning progress bonus to form the maximization objective of a discovery policy. This results in a policy that seeks complex transitions from which the trained models can effectively learn. Our experimental results show that the approach leads to efficient exploration in challenging environments with image observations, and to state representations that significantly accelerate learning in RL tasks. | 2 Related WorkSeveral other SRL algorithms with a near-future prediction objective have been proposed recently (Assael et al., 2015; Böhmer et al., 2015; Wahlström et al., 2015; Watter et al., 2015; van Hoof et al., 2016; Jaderberg et al., 2017; Shelhamer et al., 2017; de Bruin et al., 2018). However, they separately learn state representations from which current observations can be reconstructed, and train a forward model on the learned states. The main limitation of these approaches is the inefficiency of the reconstruction objective, which leads to representations that contain unnecessary information about the observations. Instead, XSRL jointly learns a state transition estimator with the next observation prediction objective. On the one hand, this forces the learned state representations to retrieve information and memorize it through the recursive loop in order to restore the observability of the environment (in this work, the partial observability is due to image observations) and to verify the Markovian property. On the other hand, this forces the learned state representations to filter out unnecessary information, in particular information about distractors (i.e. elements that are not controllable or do not affect an agent).The XSRL exploration strategy is inspired by the line of work that maximizes intrinsic rewards corresponding to prediction errors of a trained forward model, which is a form of dynamics-based curiosity (Hester and Stone, 2012; Pathak et al., 2017; Burda et al., 2018). These strategies often combine intrinsic rewards with extrinsic rewards to solve the complex exploration/exploitation tradeoff. Instead, the first phase of XSRL ignores extrinsic reward to focus on SRL and prediction model learning. Extrinsic reward only comes in a second step (the RL tasks). In addition, for intrinsic motivation XSRL relies on prediction errors of an inverse model instead of those of a forward model. Prediction errors of an inverse model have the advantage of depending only on elements of the environment controllable by an agent (assuming there are no surjective transitions). 
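As a concrete illustration of the exploration bonus discussed here, the sketch below computes an intrinsic reward as the prediction error of an inverse model (how poorly the action can be recovered from consecutive learned states) plus a simple k-step learning-progress term (how much that error has dropped between an older snapshot of the model and the current one). The network sizes, the use of a frozen snapshot, and the weighting coefficient are illustrative assumptions, not XSRL's exact formulation.

```python
import copy
import torch
import torch.nn as nn

class InverseModel(nn.Module):
    """Predicts the (discrete) action that led from state s_t to state s_{t+1}."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))   # action logits

def intrinsic_reward(model, old_model, s, s_next, a, beta=1.0):
    """Prediction error of the current inverse model + k-step learning progress."""
    ce = nn.CrossEntropyLoss(reduction="none")
    with torch.no_grad():
        err_now = ce(model(s, s_next), a)        # hard-to-predict transitions are rewarded
        err_old = ce(old_model(s, s_next), a)    # snapshot of the model from k updates ago
        progress = torch.clamp(err_old - err_now, min=0.0)
    return err_now + beta * progress

# Toy usage: 8-dimensional learned states, 4 discrete actions, batch of 16 transitions.
model = InverseModel(state_dim=8, n_actions=4)
old_model = copy.deepcopy(model)                  # stands in for the model k steps ago
s, s_next = torch.randn(16, 8), torch.randn(16, 8)
a = torch.randint(0, 4, (16,))
print(intrinsic_reward(model, old_model, s, s_next, a).shape)   # torch.Size([16])
```

Transitions that the inverse model already predicts perfectly yield a near-zero bonus, which is what pushes the discovery policy toward complex, controllable transitions.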
Relying on the inverse model makes it possible to discard the rest and thus significantly reduces the size of the acquired state representation. Finally, a variant of the k-step learning progress bonus is used to focus on transitions for which the forward model predictions are changing. Learning progress estimation was initially proposed in the field of developmental robotics (Oudeyer et al., 2007). Lopes et al. (2012) initiated the estimation of learning progress bonuses to solve the exploitation/exploration tradeoff in the model-based RL domain with finite MDPs. Achiam and Sastry (2017) scaled this approach to continuous MDPs with compact observations of several dozen dimensions. We apply the approach of Achiam and Sastry (2017) to image observations and in the SRL context. | [
"30268059"
] | [
{
"pmid": "30268059",
"title": "State representation learning for control: An overview.",
"abstract": "Representation learning algorithms are designed to learn abstract features that characterize data. State representation learning (SRL) focuses on a particular kind of representation learning where learned features are in low dimension, evolve through time, and are influenced by actions of an agent. The representation is learned to capture the variation in the environment generated by the agent's actions; this kind of representation is particularly suitable for robotics and control scenarios. In particular, the low dimension characteristic of the representation helps to overcome the curse of dimensionality, provides easier interpretation and utilization by humans and can help improve performance and speed in policy learning algorithms such as reinforcement learning. This survey aims at covering the state-of-the-art on state representation learning in the most recent years. It reviews different SRL methods that involve interaction with the environment, their implementations and their applications in robotics control tasks (simulated or real). In particular, it highlights how generic learning objectives are differently exploited in the reviewed algorithms. Finally, it discusses evaluation methods to assess the representation learned and summarizes current and future lines of research."
}
] |
Soft Robotics | 33464996 | PMC8885438 | 10.1089/soro.2020.0056 | Mechanics and Morphological Compensation Strategy for Trimmed Soft Whisker Sensor | Recent studies have taken inspiration from natural whiskers to propose tactile sensing systems that augment the sensory ability of autonomous robots. In this study, we propose a novel artificial soft whisker sensor that is not only flexible but also adapts and compensates for being trimmed or broken during operation. In this morphological compensation, designed from an analytical model of the whisker, our sensing device actively adjusts its morphology to regain sensitivity close to that of its original form (before being broken). To serve this purpose, the body of the whisker comprises a silicone-rubber truncated cone with an air chamber inside as the medulla layer, which is inflated to achieve rigidity. A small strain gauge is attached to the outer wall of the chamber for recording strain variation upon contact of the whisker. The chamber wall is reinforced by two inextensible nylon fibers wound around it to ensure that morphology change occurs only in the measuring direction of the strain gauge, by compressing or releasing the pressurized air contained in the chamber. We investigated an analytical model for the regulation of whisker sensitivity by changing the chamber morphology. Experimental results showed good agreement with the numerical results for the performance of an intact whisker in normal mode, as well as in compensation mode. Finally, adaptive functionality was tested in two separate scenarios for thorough evaluation: (1) a short whisker (65 mm) compensating for a longer one (70 mm), combined with a special case (self-compensation), and (2) vice versa. Preliminary results showed good feasibility of the idea and efficiency of the analytical model in the compensation process, in which the compensator in the typical scenario performed with a 20.385% average compensation error. Implementation of the concept in the present study embodies morphological computation in soft robotics and paves the way toward an active sensing system that overcomes a critical event (a broken whisker) through optimized morphological compensation. | Related Works: Bioinspired active vibrissal sensor in robotics. Haptic sensing through touch offers enormous potential to robots in assessing their surrounding environment, including interaction with human beings. There has been a great deal of research regarding proposals of new tactile sensing systems based on a variety of touch phenomena. Vibrissa sensory systems are typical examples that were inspired by the tactile discriminatory abilities of natural animals. The first implementation, by Russell and Wijaya,9 involved a continuum rigid wire to extract contact location, shape, and contour information regarding a target object through contact interaction. Tactile responses were collected by a simple mechanical system comprising a servo-potentiometer and springs. In the following years, a series of studies focused on the application of biomimetic whiskers in robotics. Kaneko et al.10 established the first idea of investigating the correlation between the curvature of a flexible wire and tactile information.
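The link between wire curvature and tactile information mentioned above comes from elementary beam mechanics. The relations below are written as a reminder under the usual Euler–Bernoulli small-deflection assumptions for a solid, linearly tapered, circular cross-section; they are standard textbook results rather than the paper's specific analytical model of the air-chambered whisker.

```latex
\[
\kappa(x) \;=\; \frac{M(x)}{E\,I(x)}, \qquad
I(x) \;=\; \frac{\pi\, d(x)^{4}}{64}, \qquad
d(x) \;=\; d_{b} - \bigl(d_{b} - d_{t}\bigr)\frac{x}{L}.
\]
```

Here kappa(x) is the local curvature sensed through bending, M(x) the bending moment produced by a contact force, E the Young's modulus, E I(x) the local bending stiffness, and d_b, d_t, L the base diameter, tip diameter, and length of the whisker. Trimming the whisker changes L and therefore the stiffness profile along the remaining length, which is the geometric reason a compensation strategy is needed once a whisker is shortened.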
The term rotational compliance, introduced in the work of Kaneko et al.,10 was also used as a fundamental hypothesis to explain how rats convert the perceptible mechanical signals received during the sensing process into perceivable tactile information.11 Ahn and Kim12 proposed another example of a bio-whisker system integrated into the Koala robot platform. A Hall-effect magnetic sensor was used to measure the protraction angle to detect contact location, as well as the texture of the object. Emnett et al.2 designed an artificial follicle containing four pairs of strain gauges that functioned as mechanoreceptors at the follicle-sinus complex. By collecting and analyzing a set of three different mechanical components related to the bending moments and axial force along the length of the whisker, the location of the whisker-object contact point in three-dimensional working space could be determined. Anderson et al.13 suggested an efficient solution to the problem of discriminating sensory responses caused by self-active movement, based on the SCRATCHbot robotic platform. More recently, TacWhiskers were proposed by Lepora et al.,14 in which a vision-based vibrissal sensor utilizes a camera to capture the deformation of inner nodular pins upon interactions with obstacles. Besides inspiration from land mammals, systems inspired by the whiskers of aquatic animals have also been proposed, such as a flow-whisker sensor.15 One system mimicked the seal's hunting ability of using its whiskers to track vortices left by moving prey.16 A similar effort inspired by seal whiskers was introduced by Zhu et al.,17 which was able to work on land (exploring an obstacle and its surface) as well as underwater (detecting water flow direction and velocity). In general, most of these whisker sensors focus on embedding sensing components into the base, similar to the follicle-sinus complex in mammals, where external impacts along the length are transferred to the base.2,18 Such models were considered efficient copies of rodents' natural behavior in environment exploration and proved successful in sensing and navigation tasks performed by robotic systems. Nevertheless, consideration of systems that can adapt to change their mechanical properties (elasticity, stiffness, moment of inertia) with variation in whisker geometry is still limited. The novelty of our study lies in drawing inspiration from the structural analysis of biological whiskers and from plasticity in neural circuits to create advantageous adaptability in an artificial whisker sensory system. Morphological computation and soft active tactile sensing. Morphological computation is a method that exploits the structure, material characteristics, and dynamics of a flexible body during interaction with its environment and thus outsources relevant computational functions to the "body" rather than assigning them to a "central nervous system."19 This concept has been widely applied to the design of bioinspired robots and related applications, especially in soft robotics. Hauser et al.20 exploited static feedback generated from a muscle-skeleton system (equivalent to a mass-spring system) to generate periodic locomotion. Nakajima et al.21 proposed a soft arm inspired by the octopus and demonstrated that a high-dimensional nonlinear system could be partially emulated using the dynamics of a physical body. Recently, the demand for improving the interaction of robots with humans and the surrounding environments has been growing.
Since soft-bodied robots are inherently soft and deformable, it is expected that they can not only efficiently accommodate their environment but also actively change their own morphology to realize various sensing tasks. For example, Ho et al.22 introduced a simple active sensing platform, Wrin'Tac, which mimicked the water-induced wrinkles that appear on a human finger after extended periods in a wet environment. They argued that changing the morphology of the contact surface is beneficial for varying the stimuli perceived from the environment and allows the agent to actively select the sensing modality. Similarly, Qi et al.,23 also inspired by human wrinkles, addressed the idea of forming wrinkles to localize sliding movement on a soft path, as well as to discriminate a surface profile. Further research highlighted the potential of using embodiment to explain the behavior of a robot24 and gave some clues toward a theoretical understanding of how an organ's morphology defines the agent's behavior. Regarding the whisker sensor, most recent studies have used steel or aluminum wires to fabricate such devices. However, such materials may experience unexpected oscillation of the whole structure during strong external interactions. Moreover, there is little work considering the medulla layer, which plays an important role in the mechanical properties of the vibrissae,25,26 in designing robotic whiskers. In this study, we propose a soft whisker sensor design that newly includes the medulla layer, which plays a crucial role in the variation of whisker sensitivity upon morphological change. The main contributions of this article can be summarized as follows:
(1) Proposal of a novel structure sensory system mimicking the geometry of a rodent whisker with the ability to compensate if broken/trimmed or eroded during operation.
(2) Construction of a detailed analytical model to describe and compute morphological deformation. Through this model, we can predict the required chamber morphology for the compensation process, namely a morphological compensation strategy.
(3) A suggestion of a new platform for soft tactile sensing systems that can actively change its morphology to facilitate desired responses. | [
"27260019",
"28701930",
"23595731",
"29297780",
"23186344",
"22956025",
"23847526",
"27881718",
"21993474",
"25823761",
"7965093",
"19693029",
"16207785",
"20098714"
] | [
{
"pmid": "27260019",
"title": "The mathematical whisker: A review of numerical models of the rat׳s vibrissa biomechanics.",
"abstract": "The vibrissal system of the rat refers to specialized hairs the animal uses for tactile sensory perception. Rats actively move their whiskers in a characteristic way called \"whisking\". Interaction with the environment produces elastic deformation of the whiskers, generating mechanical signals in the whisker-follicle complex. Advances in our understanding of the vibrissal complex biomechanics is of interest not only for the biological research field, but also for biomimetic approaches. The recent development of whisker numerical models has contributed to comprehending its sophisticated movements and its interactions with the follicle. The great diversity of behavioral patterns and complexities of the whisker-follicle ensemble encouraged the creation of many different biomechanical models. This review analyzes most of the whisker biomechanical models that have been developed so far. This review was written so as to render it accessible to readers coming from different research areas."
},
{
"pmid": "28701930",
"title": "Biogenic Amines in Insect Antennae.",
"abstract": "Insect antenna is a multisensory organ, each modality of which can be modulated by biogenic amines. Octopamine (OA) and its metabolic precursor tyramine (TA) affect activity of antennal olfactory receptor neurons. There is some evidence that dopamine (DA) modulates gustatory neurons. Serotonin can serve as a neurotransmitter in some afferent mechanosensory neurons and both as a neurotransmitter and neurohormone in efferent fibers targeted at the antennal vessel and mechanosensory organs. As a neurohormone, serotonin affects the generation of the transepithelial potential by sensillar accessory cells. Other possible targets of biogenic amines in insect antennae are hygro- and thermosensory neurons and epithelial cells. We suggest that the insect antenna is partially autonomous in the sense that biologically active substances entering its hemolymph may exert their effects and be cleared from this compartment without affecting other body parts."
},
{
"pmid": "23595731",
"title": "The mechanical variables underlying object localization along the axis of the whisker.",
"abstract": "Rodents move their whiskers to locate objects in space. Here we used psychophysical methods to show that head-fixed mice can localize objects along the axis of a single whisker, the radial dimension, with one-millimeter precision. High-speed videography allowed us to estimate the forces and bending moments at the base of the whisker, which underlie radial distance measurement. Mice judged radial object location based on multiple touches. Both the number of touches (1-17) and the forces exerted by the pole on the whisker (up to 573 μN; typical peak amplitude, 100 μN) varied greatly across trials. We manipulated the bending moment and lateral force pressing the whisker against the sides of the follicle and the axial force pushing the whisker into the follicle by varying the compliance of the object during behavior. The behavioral responses suggest that mice use multiple variables (bending moment, axial force, lateral force) to extract radial object localization. Characterization of whisker mechanics revealed that whisker bending stiffness decreases gradually with distance from the face over five orders of magnitude. As a result, the relative amplitudes of different stress variables change dramatically with radial object distance. Our data suggest that mice use distance-dependent whisker mechanics to estimate radial object location using an algorithm that does not rely on precise control of whisking, is robust to variability in whisker forces, and is independent of object compliance and object movement. More generally, our data imply that mice can measure the amplitudes of forces in the sensory follicles for tactile sensation."
},
{
"pmid": "29297780",
"title": "Fully 3D Printed Multi-Material Soft Bio-Inspired Whisker Sensor for Underwater-Induced Vortex Detection.",
"abstract": "Bio-mimicking the underwater sensors has tremendous potential in soft robotics, under water exploration and human interfaces. Pinniped are semiaquatic carnivores that use their whiskers to sense food by tracking the vortices left by potential prey. To detect and track the vortices inside the water, a fully 3D printed pinniped inspired multi-material whisker sensor is fabricated and characterized. The fabricated whisker is composed of a polyurethane rod with a length-to-diameter ratio (L/d) of 20:1 with four graphene patterns (length × diameter: 60 × 0.3 mm) perpendicular to each other. The graphene patterns are further connected with output signal wires via copper tape. The displacement (∼5 mm) of the whisker rod in any direction (0-360°) causes the change in resistance [Formula: see text] because of generated tensile. The analog signals (resistance change) are digitalized by using analog to digital modules and fed to a microcontroller to detect the vortex. A virtual environment is designed such that it consists of a 3D printed fish fin, a water tank, a camera, and data loggers to study the response of fabricated whisker. The underwater sensitivity of the whisker sensor in any direction is detectable and remarkably high ([Formula: see text]% ∼1180). The mechanical reliability of the whisker sensor is tested by bending it up to 2000 cycles. The fabricated whisker's structure and material are unique, and no one has fabricated them by using cost-effective 3D printing methods earlier. This fully 3D printable flexible whisker sensor should be applicable to a wide range of soft robotic applications."
},
{
"pmid": "23186344",
"title": "Morphological computation and morphological control: steps toward a formal theory and applications.",
"abstract": "Morphological computation can be loosely defined as the exploitation of the shape, material properties, and physical dynamics of a physical system to improve the efficiency of a computation. Morphological control is the application of morphological computing to a control task. In its theoretical part, this article sharpens and extends these definitions by suggesting new formalized definitions and identifying areas in which the definitions we propose are still inadequate. We go on to describe three ongoing studies, in which we are applying morphological control to problems in medicine and in chemistry. The first involves an inflatable support system for patients with impaired movement, and is based on macroscopic physics and concepts already tested in robotics. The two other case studies (self-assembly of chemical microreactors; models of induced cell repair in radio-oncology) describe processes and devices on the micrometer scale, in which the emergent dynamics of the underlying physical system (e.g., phase transitions) are dominated by stochastic processes such as diffusion."
},
{
"pmid": "22956025",
"title": "The role of feedback in morphological computation with compliant bodies.",
"abstract": "The generation of robust periodic movements of complex nonlinear robotic systems is inherently difficult, especially, if parts of the robots are compliant. It has previously been proposed that complex nonlinear features of a robot, similarly as in biological organisms, might possibly facilitate its control. This bold hypothesis, commonly referred to as morphological computation, has recently received some theoretical support by Hauser et al. (Biol Cybern 105:355-370, doi: 10.1007/s00422-012-0471-0 , 2012). We show in this article that this theoretical support can be extended to cover not only the case of fading memory responses to external signals, but also the essential case of autonomous generation of adaptive periodic patterns, as, e.g., needed for locomotion. The theory predicts that feedback into the morphological computing system is necessary and sufficient for such tasks, for which a fading memory is insufficient. We demonstrate the viability of this theoretical analysis through computer simulations of complex nonlinear mass-spring systems that are trained to generate a large diversity of periodic movements by adapting the weights of a simple linear feedback device. Hence, the results of this article substantially enlarge the theoretically tractable application domain of morphological computation in robotics, and also provide new paradigms for understanding control principles of biological organisms."
},
{
"pmid": "23847526",
"title": "A soft body as a reservoir: case studies in a dynamic model of octopus-inspired soft robotic arm.",
"abstract": "The behaviors of the animals or embodied agents are characterized by the dynamic coupling between the brain, the body, and the environment. This implies that control, which is conventionally thought to be handled by the brain or a controller, can partially be outsourced to the physical body and the interaction with the environment. This idea has been demonstrated in a number of recently constructed robots, in particular from the field of \"soft robotics\". Soft robots are made of a soft material introducing high-dimensionality, non-linearity, and elasticity, which often makes the robots difficult to control. Biological systems such as the octopus are mastering their complex bodies in highly sophisticated manners by capitalizing on their body dynamics. We will demonstrate that the structure of the octopus arm cannot only be exploited for generating behavior but also, in a sense, as a computational resource. By using a soft robotic arm inspired by the octopus we show in a number of experiments how control is partially incorporated into the physical arm's dynamics and how the arm's dynamics can be exploited to approximate non-linear dynamical systems and embed non-linear limit cycles. Future application scenarios as well as the implications of the results for the octopus biology are also discussed."
},
{
"pmid": "27881718",
"title": "Variations in vibrissal geometry across the rat mystacial pad: base diameter, medulla, and taper.",
"abstract": "Many rodents tactually sense the world through active motions of their vibrissae (whiskers), which are regularly arranged in rows and columns (arcs) on the face. The present study quantifies several geometric parameters of rat whiskers that determine the tactile information acquired. Findings include the following. 1) A meta-analysis of seven studies shows that whisker base diameter varies with arc length with a surprisingly strong dependence on the whisker's row position within the array. 2) The length of the whisker medulla varies linearly with whisker length, and the medulla's base diameter varies linearly with whisker base diameter. 3) Two parameters are required to characterize whisker \"taper\": radius ratio (base radius divided by tip radius) and radius slope (the difference between base and tip radius, divided by arc length). A meta-analysis of five studies shows that radius ratio exhibits large variability due to variations in tip radius, while radius slope varies systematically across the array. 4) Within the resolution of the present study, radius slope does not differ between the proximal and distal segments of the whisker, where \"proximal\" is defined by the presence of the medulla. 5) Radius slope of the medulla is offset by a constant value from radius slope of the proximal portion of the whisker. We conclude with equations for all geometric parameters as functions of row and column position.NEW & NOTEWORTHY Rats tactually explore their world by brushing and tapping their whiskers against objects. Each whisker's geometry will have a large influence on its mechanics and thus on the tactile signals the rat obtains. We performed a meta-analysis of seven studies to generate equations that describe systematic variations in whisker geometry across the rat's face. We also quantified the geometry of the whisker medulla. A database provides access to geometric parameters of over 500 rat whiskers."
},
{
"pmid": "21993474",
"title": "Variation in Young's modulus along the length of a rat vibrissa.",
"abstract": "Rats use specialized tactile hairs on their snout, called vibrissae (whiskers), to explore their surroundings. Vibrissae have no sensors along their length, but instead transmit mechanical information to receptors embedded in the follicle at the vibrissa base. The transmission of mechanical information along the vibrissa, and thus the tactile information ultimately received by the nervous system, depends critically on the mechanical properties of the vibrissa. In particular, transmission depends on the bending stiffness of the vibrissa, defined as the product of the area moment of inertia and Young's modulus. To date, Young's modulus of the rat vibrissa has not been measured in a uniaxial tensile test. We performed tensile tests on 22 vibrissae cut into two halves: a tip-segment and a base-segment. The average Young's modulus across all segments was 3.34±1.48GPa. The average modulus of a tip-segment was 3.96±1.60GPa, and the average modulus of a base-segment was 2.90±1.25GPa. Thus, on average, tip-segments had a higher Young's modulus than base-segments. High-resolution images of vibrissae were taken to seek structural correlates of this trend. The fraction of the cross-sectional area occupied by the vibrissa cuticle was found to increase along the vibrissa length, and may be responsible for the increase in Young's modulus near the tip."
},
{
"pmid": "25823761",
"title": "Role of whiskers in sensorimotor development of C57BL/6 mice.",
"abstract": "The mystacial vibrissae (whiskers) of nocturnal rodents play a major role in their sensorimotor behaviors. Relatively little information exists on the role of whiskers during early development. We characterized the contribution of whiskers to sensorimotor development in postnatal C57BL/6 mice. A comparison between intact and whisker-clipped mice in a battery of behavioral tests from postnatal day (P) 4-17 revealed that both male and female pups develop reflexive motor behavior even when the whiskers are clipped. Daily whisker trimming from P3 onwards results in diminished weight gain by P17, and impairment in whisker sensorimotor coordination behaviors, such as cliff avoidance and littermate huddling from P4 to P17, while facilitation of righting reflex at P4 and grasp response at P12. Since active whisker palpation does not start until 2 weeks of age, passive whisker touch during early neonatal stage must play a role in regulating these behaviors. Around the onset of exploratory behaviors (P12) neonatal whisker-clipped pups also display persistent searching movements when they encounter cage walls as a compensatory mechanism of sensorimotor development. Spontaneous whisker motion (whisking) is distinct from respiratory fluttering of whiskers. It is a symmetrical vibration of whiskers at a rate of approximately ∼8 Hz and begins around P10. Oriented, bundled movements of whiskers at higher frequencies of ∼12 Hz during scanning object surfaces, i.e., palpation whisking, emerges at P14. The establishment of locomotive body coordination before eyes open accompanies palpation whisking, indicating an important role in the guidance of exploratory motor behaviors."
},
{
"pmid": "7965093",
"title": "An innocuous bias in whisker use in adult rats modifies receptive fields of barrel cortex neurons.",
"abstract": "The effect of innocuously biasing the flow of sensory activity from the whiskers for periods of 3-30 d in awake, behaving adult rats on the receptive field organization of rat SI barrel cortex neurons was studied. One pair of adjacent whiskers, D2 and either D1 or D3, remained intact unilaterally (whisker pairing), all others being trimmed throughout the period of altered sensation. Receptive fields of single cells in the contralateral D2 barrel were analyzed under urethane anesthesia by peristimulus time histogram (PSTH) and latency histogram analysis after 3, 7-10, and 30 d of pairing and compared with controls, testing all whiskers cut to the same length. Response magnitudes to surround receptive field in-row whiskers D1 and D3 were not significantly different for control animals. The same was found for surround in-arc whiskers C2 and E2. However, after 3 d of whisker pairing a profound bias occurred in response to the paired D-row surround whisker relative to the opposite trimmed surround D-row whisker and to the C2 and E2 whiskers. This bias increased with the duration of pairing, regardless of which surround whisker (D1 or D3) was paired with D2. For all three periods of pairing the mean response to the paired surround whisker was increased relative to controls, but peaked at 7-10 d. Response to the principal center-receptive (D2) whisker was increased for the 3 and 7-10 d groups and then decreased at 30 d. Responses to trimmed arc surround whiskers (C2 and E2) were decreased in proportion to the duration of changed experience. Analysis of PSTH data showed that earliest discharges (5-10 msec poststimulus) to the D2 whisker increased progressively in magnitude with duration of pairing. For the paired surround whisker similar early discharges newly appeared after 30 d of pairing. At 3 and 7-10 d of pairing, increases in response to paired whiskers and decreases to cut surround whiskers were confined to late portions of the PSTH (10-100 msec poststimulus). Changes at 3-10 d can be attributed to alterations in intracortical synaptic relay between barrels. Longer-term changes in response to both paired whisker inputs (30 d) largely appear to reflect increases in thalamocortical synaptic efficacy. Our findings suggest that novel innocuous somatosensory experiences produce changes in the receptive field configuration of cortical cells that are consistent with Hebbian theories of experience-dependent potentiation and weakening of synaptic efficacy within SI neocortical circuitry, for correlated and uncorrelated sensory inputs, respectively."
},
{
"pmid": "19693029",
"title": "Experience-dependent structural synaptic plasticity in the mammalian brain.",
"abstract": "Synaptic plasticity in adult neural circuits may involve the strengthening or weakening of existing synapses as well as structural plasticity, including synapse formation and elimination. Indeed, long-term in vivo imaging studies are beginning to reveal the structural dynamics of neocortical neurons in the normal and injured adult brain. Although the overall cell-specific morphology of axons and dendrites, as well as of a subpopulation of small synaptic structures, are remarkably stable, there is increasing evidence that experience-dependent plasticity of specific circuits in the somatosensory and visual cortex involves cell type-specific structural plasticity: some boutons and dendritic spines appear and disappear, accompanied by synapse formation and elimination, respectively. This Review focuses on recent evidence for such structural forms of synaptic plasticity in the mammalian cortex and outlines open questions."
},
{
"pmid": "16207785",
"title": "Responses of trigeminal ganglion neurons to the radial distance of contact during active vibrissal touch.",
"abstract": "Rats explore their environment by actively moving their whiskers. Recently, we described how object location in the horizontal (front-back) axis is encoded by first-order neurons in the trigeminal ganglion (TG) by spike timing. Here we show how TG neurons encode object location along the radial coordinate, i.e., from the snout outward. Using extracellular recordings from urethane-anesthetized rats and electrically induced whisking, we found that TG neurons encode radial distance primarily by the number of spikes fired. When an object was positioned closer to the whisker root, all touch-selective neurons recorded fired more spikes. Some of these cells responded exclusively to objects located near the base of whiskers, signaling proximal touch by an identity (labeled-line) code. A number of tonic touch-selective neurons also decreased delays from touch to the first spike and decreased interspike intervals for closer object positions. Information theory analysis revealed that near-certainty discrimination between two objects separated by 30% of the length of whiskers was possible for some single cells. However, encoding reliability was usually lower as a result of large trial-by-trial response variability. Our current findings, together with the identity coding suggested by anatomy for the vertical dimension and the temporal coding of the horizontal dimension, suggest that object location is encoded by separate neuronal variables along the three spatial dimensions: temporal for the horizontal, spatial for the vertical, and spike rate for the radial dimension."
},
{
"pmid": "20098714",
"title": "The advantages of a tapered whisker.",
"abstract": "The role of facial vibrissae (whiskers) in the behavior of terrestrial mammals is principally as a supplement or substitute for short-distance vision. Each whisker in the array functions as a mechanical transducer, conveying forces applied along the shaft to mechanoreceptors in the follicle at the whisker base. Subsequent processing of mechanoreceptor output in the trigeminal nucleus and somatosensory cortex allows high accuracy discriminations of object distance, direction, and surface texture. The whiskers of terrestrial mammals are tapered and approximately circular in cross section. We characterize the taper of whiskers in nine mammal species, measure the mechanical deflection of isolated felid whiskers, and discuss the mechanics of a single whisker under static and oscillatory deflections. We argue that a tapered whisker provides some advantages for tactile perception (as compared to a hypothetical untapered whisker), and that this may explain why the taper has been preserved during the evolution of terrestrial mammals."
}
] |
Frontiers in Artificial Intelligence | null | PMC8886211 | 10.3389/frai.2022.807320 | Exploring Behavioral Patterns for Data-Driven Modeling of Learners' Individual Differences | Educational data mining research has demonstrated that the large volume of learning data collected by modern e-learning systems could be used to recognize student behavior patterns and group students into cohorts with similar behavior. However, few attempts have been done to connect and compare behavioral patterns with known dimensions of individual differences. To what extent learner behavior is defined by known individual differences? Which of them could be a better predictor of learner engagement and performance? Could we use behavior patterns to build a data-driven model of individual differences that could be more useful for predicting critical outcomes of the learning process than traditional models? Our paper attempts to answer these questions using a large volume of learner data collected in an online practice system. We apply a sequential pattern mining approach to build individual models of learner practice behavior and reveal latent student subgroups that exhibit considerably different practice behavior. Using these models we explored the connections between learner behavior and both, the incoming and outgoing parameters of the learning process. Among incoming parameters we examined traditionally collected individual differences such as self-esteem, gender, and knowledge monitoring skills. We also attempted to bridge the gap between cluster-based behavior pattern models and traditional scale-based models of individual differences by quantifying learner behavior on a latent data-driven scale. Our research shows that this data-driven model of individual differences performs significantly better than traditional models of individual differences in predicting important parameters of the learning process, such as performance and engagement. | 2. Related Work2.1. Individual Differences and Academic AchievementIndividual differences have been the focus of research on educational psychology and learning technology (Jonassen and Grabowski, 1993). Numerous works have attempted to discover and examine various dimensions of individual differences, find their connections to academic achievement, and address these differences in order to better support teaching and learning. A learner's position within a specific dimension of individual differences is usually determined by processing carefully calibrated questionnaires and placing the learner on a linear scale, frequently between two extreme ends. In this section, we briefly review several dimensions of individual differences that are frequently used in learning technology research.Self-efficacy refers to one's evaluation of their ability to perform a future task (Bandura, 1982) and is shown to be a good predictor of educational performance (Multon et al., 1991; Britner and Pajares, 2006). Students with higher self-efficacy beliefs are more willing to put effort into learning tasks and persist more, as compared to students with lower self-efficacy. Self-esteem represents individuals' beliefs about their self-worth and competence (Matthews et al., 2003). Some studies have shown the positive effect of self-esteem on academic achievement, while other studies have pointed out how academic achievement affects self-esteem (Baumeister et al., 2003; Di Giunta et al., 2013). 
Researchers also stated the indirect effect of low self-esteem on achievement through distress and decreased motivation (Liu et al., 1992). Learners can also differ by their achievement goals, which guide their learning behaviors and performance by defining the expectations used to evaluate success (Linnenbrink and Pintrich, 2001). Studies have demonstrated the positive effects of achievement goals on performance (Harackiewicz et al., 2002; Linnenbrink and Pintrich, 2002). There are several known questionnaire-based instruments to capture achievement goals (Midgley et al., 1998; Elliot and McGregor, 2001).Another important group of individual differences is related to metacognition, which plays an important role in academic performance (Dunning et al., 2003). In particular, students who successfully distinguish what they know and do not know can expand their knowledge instead of concentrating on already mastered concepts (Tobias and Everson, 2009). It has been shown that high-achieving students are more accurate in assessing their knowledge (DiFrancesca et al., 2016). To measure some metacognitive differences, Tobias and Everson (Tobias and Everson, 1996) proposed a knowledge monitoring assessment instrument to evaluate the discrepancy between the actual performance of students and their own estimates of their knowledge in a specific domain.2.2. User Behavior Modeling and Performance PredictionThe rise of interest to modeling learner behavior in online learning system is associated with attempts to understand learner behavior in early Massive Open Online Courses (MOOCs) with their surprisingly high dropout rate. Since MOOCs usually recorded full traces of learner behavior producing rich data for a large number of students, it was natural to explore this data to predict dropouts (Balakrishnan, 2013) and performance (Anderson et al., 2014; Champaign et al., 2014). This appealing research direction quickly engaged researchers from the educational datamining community who were working on log mining and performance prediction in other educational contexts and led to a rapid expansion of research that connected learner behavior with learning outcomes in MOOCs and beyond.While the first generation of this research focused on one-step MOOC performance prediction from learning data (Anderson et al., 2014; Champaign et al., 2014; Boyer and Veeramachaneni, 2015; Brinton and Chiang, 2015), the second generation attempted to uncover the roots of performance differences to better understand the process and improve predictions. The core assumption of this stream of work was the presence of latent learner cohorts composed of students who exhibit similar patterns. By examining connections between these cohorts and learning outcomes, the researchers expected to identify positive and negative patterns and advance from simple prediction of learner behavior to possible interventions. While the idea of cohorts was pioneered by the first generation research, the early work on cohorts attempted to define them using either learner demographic (Guo and Reinecke, 2014) or simple activity measures (Anderson et al., 2014; Sharma et al., 2015). In contrast, the second generation research attempted to automatically discover these cohorts from available data. Over just a few years, a range of approaches to discover behavior patterns and use them to cluster learners into similarly-behaving cohorts were explored. 
This included various combinations of simple behavior clustering (Hosseini et al., 2017; Boubekki et al., 2018), transition analysis (Boubekki et al., 2018; Gitinabard et al., 2019), Markov models (Sharma et al., 2015; Geigle and Zhai, 2017; Hansen et al., 2017), matrix factorization (Lorenzen et al., 2018; Mouri et al., 2018; Mirzaei et al., 2020), tensor factorization (Wen et al., 2019), sequence mining (Hansen et al., 2017; Hosseini et al., 2017; Venant et al., 2017; Boroujeni and Dillenbourg, 2018; Mirzaei et al., 2020), random forests (Pinto et al., 2021), and deep learning (Loginova and Benoit, 2021). In this paper, we focus on the sequence mining approach to behavior modeling, which is reviewed in more detail in the next section.
2.3. Sequential Pattern Mining
In educational research, mining sequential patterns has become one of the common techniques to analyze and model students' activity sequences. This technique has helped researchers to find student learning behaviors in different learning environments. Nesbit et al. (2007) applied this technique to find self-regulated behaviors in a multimedia learning environment. In Maldonado et al. (2011), the authors identified the most frequent usage interactions to detect high/low performing students in collaborative learning activities. To find differences among predefined groups (e.g., high-performing/low-performing), Kinnebrew and Biswas (2012) proposed a differential sequence mining procedure by analyzing the students' frequent patterns. Herold et al. (2013) used sequential pattern mining to predict course performance, based on sequences of handwritten tasks. Guerra et al. (2014) examined the students' problem solving patterns to detect stable and distinguishable student behaviors. In addition, Hosseini et al. (2017) used a similar approach to Guerra et al. (2014) and detected different student coding behaviors on mandatory programming assignments, as well as their impact on student performance. Venant et al. (2017) discovered frequent sequential patterns of students' learning actions in a laboratory environment and identified learning strategies associated with learners' performance. Mirzaei et al. (2020) explored specific patterns in learner behavior by applying both sequential pattern mining and matrix factorization approaches. In the earlier version of this paper (Akhuseyinoglu and Brusilovsky, 2021), the authors applied the sequence mining approach to analyzing student behavior in an online practice system (a minimal illustrative sketch of this general approach appears after the reference list below). | [
"26151640",
"27914356",
"20509061",
"21199485",
"11300582",
"26151468",
"9576837"
] | [
{
"pmid": "26151640",
"title": "Does High Self-Esteem Cause Better Performance, Interpersonal Success, Happiness, or Healthier Lifestyles?",
"abstract": "Self-esteem has become a household word. Teachers, parents, therapists, and others have focused efforts on boosting self-esteem, on the assumption that high self-esteem will cause many positive outcomes and benefits-an assumption that is critically evaluated in this review. Appraisal of the effects of self-esteem is complicated by several factors. Because many people with high self-esteem exaggerate their successes and good traits, we emphasize objective measures of outcomes. High self-esteem is also a heterogeneous category, encompassing people who frankly accept their good qualities along with narcissistic, defensive, and conceited individuals. The modest correlations between self-esteem and school performance do not indicate that high self-esteem leads to good performance. Instead, high self-esteem is partly the result of good school performance. Efforts to boost the self-esteem of pupils have not been shown to improve academic performance and may sometimes be counterproductive. Job performance in adults is sometimes related to self-esteem, although the correlations vary widely, and the direction of causality has not been established. Occupational success may boost self-esteem rather than the reverse. Alternatively, self-esteem may be helpful only in some job contexts. Laboratory studies have generally failed to find that self-esteem causes good task performance, with the important exception that high self-esteem facilitates persistence after failure. People high in self-esteem claim to be more likable and attractive, to have better relationships, and to make better impressions on others than people with low self-esteem, but objective measures disconfirm most of these beliefs. Narcissists are charming at first but tend to alienate others eventually. Self-esteem has not been shown to predict the quality or duration of relationships. High self-esteem makes people more willing to speak up in groups and to criticize the group's approach. Leadership does not stem directly from self-esteem, but self-esteem may have indirect effects. Relative to people with low self-esteem, those with high self-esteem show stronger in-group favoritism, which may increase prejudice and discrimination. Neither high nor low self-esteem is a direct cause of violence. Narcissism leads to increased aggression in retaliation for wounded pride. Low self-esteem may contribute to externalizing behavior and delinquency, although some studies have found that there are no effects or that the effect of self-esteem vanishes when other variables are controlled. The highest and lowest rates of cheating and bullying are found in different subcategories of high self-esteem. Self-esteem has a strong relation to happiness. Although the research has not clearly established causation, we are persuaded that high self-esteem does lead to greater happiness. Low self-esteem is more likely than high to lead to depression under some circumstances. Some studies support the buffer hypothesis, which is that high self-esteem mitigates the effects of stress, but other studies come to the opposite conclusion, indicating that the negative effects of low self-esteem are mainly felt in good times. Still others find that high self-esteem leads to happier outcomes regardless of stress or other circumstances. High self-esteem does not prevent children from smoking, drinking, taking drugs, or engaging in early sex. 
If anything, high self-esteem fosters experimentation, which may increase early sexual activity or drinking, but in general effects of self-esteem are negligible. One important exception is that high self-esteem reduces the chances of bulimia in females. Overall, the benefits of high self-esteem fall into two categories: enhanced initiative and pleasant feelings. We have not found evidence that boosting self-esteem (by therapeutic interventions or school programs) causes benefits. Our findings do not support continued widespread efforts to boost self-esteem in the hope that it will by itself foster improved outcomes. In view of the heterogeneity of high self-esteem, indiscriminate praise might just as easily promote narcissism, with its less desirable consequences. Instead, we recommend using praise to boost self-esteem as a reward for socially desirable behavior and self-improvement."
},
{
"pmid": "27914356",
"title": "Can podcasts for assessment guidance and feedback promote self-efficacy among undergraduate nursing students? A qualitative study.",
"abstract": "BACKGROUND\nImproving assessment guidance and feedback for students has become an international priority within higher education. Podcasts have been proposed as a tool for enhancing teaching, learning and assessment. However, a stronger theory-based rationale for using podcasts, particularly as a means of facilitating assessment guidance and feedback, is required.\n\n\nOBJECTIVE\nTo explore students' experiences of using podcasts for assessment guidance and feedback. To consider how these podcasts shaped beliefs about their ability to successfully engage with, and act on, assessment guidance and feedback Design Exploratory qualitative study. Setting Higher education institution in North-East Scotland. Participants Eighteen third year undergraduate nursing students who had utilised podcasts for assessment guidance and feedback within their current programme of study.\n\n\nMETHODS\nParticipants took part in one of four focus groups, conducted between July and September 2013. Purposive sampling was utilised to recruit participants of different ages, gender, levels of self-assessed information technology skills and levels of academic achievement. Data analysis was guided by the framework approach.\n\n\nFINDINGS\nThematic analysis highlighted similarities and differences in terms of students' experiences of using podcasts for assessment guidance and feedback. Further analysis revealed that Self-Efficacy Theory provided deeper theoretical insights into how the content, structure and delivery of podcasts can be shaped to promote more successful engagement with assessment guidance and feedback from students. The structured, logical approach of assessment guidance podcasts appeared to strengthen self-efficacy by providing readily accessible support and by helping students convert intentions into action. Students with high self-efficacy in relation to tasks associated with assessment were more likely to engage with feedback, whereas those with low self-efficacy tended to overlook opportunities to access feedback due to feelings of helplessness and futility.\n\n\nCONCLUSIONS\nAdopting well-structured podcasts as an educational tool, based around the four major sources of information (performance accomplishments, vicarious experience, social persuasion, and physiological and emotional states), has potential to promote self efficacy for individuals, as well as groups of students, in terms of assessment guidance and feedback."
},
{
"pmid": "20509061",
"title": "Social learning: medical student perceptions of geriatric house calls.",
"abstract": "Bandura's social learning theory provides a useful conceptual framework to understand medical students' perceptions of a house calls experience at Virginia Commonwealth University School of Medicine. Social learning and role modeling reflect Liaison Committee on Medical Education guidelines for \"Medical schools (to) ensure that the learning environment for medical students promotes the development of explicit and appropriate professional attributes (attitudes, behaviors, and identity) in their medical students.\" This qualitative study reports findings from open-ended survey questions from 123 medical students who observed a preceptor during house calls to elderly homebound patients. Their comments included reflections on the medical treatment as well as interactions with family and professional care providers. Student insights about the social learning process they experienced during house calls to geriatric patients characterized physician role models as dedicated, compassionate, and communicative. They also described patient care in the home environment as comprehensive, personalized, more relaxed, and comfortable. Student perceptions reflect an appreciation of the richness and complexity of details learned from home visits and social interaction with patients, families, and caregivers."
},
{
"pmid": "21199485",
"title": "The contribution of personality traits and self-efficacy beliefs to academic achievement: a longitudinal study.",
"abstract": "BACKGROUND. The personal determinants of academic achievement and success have captured the attention of many scholars for the last decades. Among other factors, personality traits and self-efficacy beliefs have proved to be important predictors of academic achievement. AIMS. The present study examines the unique contribution and the pathways through which traits (i.e., openness and conscientiousness) and academic self-efficacy beliefs are conducive to academic achievement at the end of junior and senior high school. SAMPLE. Participants were 412 Italian students, 196 boys and 216 girls, ranging in age from 13 to 19 years. METHODS. The hypothesized relations among the variables were tested within the framework of structural equation model. RESULTS AND CONCLUSIONS. Openness and academic self-efficacy at the age of 13 contributed to junior high-school grades, after controlling for socio-economic status (SES). Junior high-school grades contribute to academic self-efficacy beliefs at the age of 16, which in turn contributed to high-school grades, over and above the effects of SES and prior academic achievement. In accordance with the posited hypothesis, academic self-efficacy beliefs partially mediated the contribution of traits to later academic achievement. In particular, conscientiousness at the age of 13 affected high-school grades indirectly, through its effect on academic self-efficacy beliefs at the age of 16. These findings have broad implications for interventions aimed to enhance children's academic pursuits. Whereas personality traits represent stable individual characteristics that mostly derive from individual genetic endowment, social cognitive theory provides guidelines for enhancing students' efficacy to regulate their learning activities."
},
{
"pmid": "11300582",
"title": "A 2 X 2 achievement goal framework.",
"abstract": "A 2 x 2 achievement goal framework comprising mastery-approach, mastery-avoidance, performance-approach, and performance-avoidance goals was proposed and tested in 3 studies. Factor analytic results supported the independence of the 4 achievement goal constructs. The goals were examined with respect to several important antecedents (e.g., motive dispositions, implicit theories, socialization histories) and consequences (e.g., anticipatory test anxiety, exam performance, health center visits), with particular attention allocated to the new mastery-avoidance goal construct. The results revealed distinct empirical profiles for each of the achievement goals; the pattern for mastery-avoidance goals was, as anticipated, more negative than that for mastery-approach goals and more positive than that for performance-avoidance goals. Implications of the present work for future theoretical development in the achievement goal literature are discussed."
},
{
"pmid": "26151468",
"title": "Reciprocal Effects of Self-Concept and Performance From a Multidimensional Perspective: Beyond Seductive Pleasure and Unidimensional Perspectives.",
"abstract": "We (Marsh & Craven, 1997) have claimed that academic self-concept and achievement are mutually reinforcing, each leading to gains in the other. Baumeister, Campbell, Krueger, and Vohs (2003) have claimed that self-esteem has no benefits beyond seductive pleasure and may even be detrimental to subsequent performance. Integrating these seemingly contradictory conclusions, we distinguish between (a) older, unidimensional perspectives that focus on global self-esteem and underpin the Baumeister et al. review and (b) more recent, multidimensional perspectives that focus on specific components of self-concept and are the basis of our claim. Supporting the construct validity of a multidimensional perspective, studies show that academic achievement is substantially related to academic self-concept, but nearly unrelated to self-esteem. Consistent with this distinction, research based on our reciprocal-effects model (REM) and a recent meta-analysis show that prior academic self-concept (as opposed to self-esteem) and achievement both have positive effects on subsequent self-concept and achievement. We provide an overview of new support for the generality of the REM for young children, cross-cultural research in non-Western countries, health (physical activity), and nonelite (gymnastics) and elite (international swimming championships) sport. We conclude that future reviews elucidating the significant implications of self-concept for theory, policy, and practice need to account for current research supporting the REM and a multidimensional perspective of self-concept."
},
{
"pmid": "9576837",
"title": "The Development and Validation of Scales Assessing Students' Achievement Goal Orientations",
"abstract": "Achievement goal theory has emerged as a major new direction in motivational research. A distinction is made among conceptually different achievement goal orientations including the goal to develop ability (task goal orientation), the goal to demonstrate ability (ability-approach goal orientation), and the goal to avoid the demonstration of lack of ability (ability-avoid goal orientation). Scales assessing each of these goal orientations were developed over an eight year period by a group of researchers at the University of Michigan. The results of studies conducted with seven different samples of elementary and middle school students are used to describe the internal consistency, stability, and construct validity of the scales. Comparisons of these scales with those developed by Nicholls and his colleagues provide evidence of convergent validity. Confirmatory factor analysis attests to the discriminant validity of the scales. Copyright 1998 Academic Press."
}
] |
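The record above describes building individual models of learner practice behavior by mining frequent sequential patterns from activity logs and grouping the resulting profiles into latent behavior subgroups. The snippet below is a minimal, self-contained sketch of that general idea, assuming a simple contiguous-pattern (n-gram) miner rather than the gap-tolerant algorithms (e.g., the SPAM/PrefixSpan families) typically used in the cited studies; the action labels, support threshold, and toy data are hypothetical and are not taken from the system described above.

```python
from collections import Counter
from itertools import islice


def ngrams(seq, n):
    """Yield contiguous sub-sequences (tuples) of length n from seq."""
    return zip(*(islice(seq, i, None) for i in range(n)))


def mine_frequent_patterns(sequences, max_len=3, min_support=2):
    """Keep patterns that occur in at least `min_support` distinct student sequences."""
    support = Counter()
    for seq in sequences.values():
        patterns_in_seq = set()
        for n in range(1, max_len + 1):
            patterns_in_seq.update(ngrams(seq, n))
        support.update(patterns_in_seq)  # one vote per student, regardless of repeats
    return {p: c for p, c in support.items() if c >= min_support}


def pattern_profiles(sequences, patterns):
    """Per-student counts of each frequent pattern: a simple behavior profile."""
    if not patterns:
        return {sid: Counter() for sid in sequences}
    longest = max(len(p) for p in patterns)
    profiles = {}
    for sid, seq in sequences.items():
        counts = Counter()
        for n in range(1, longest + 1):
            for gram in ngrams(seq, n):
                if gram in patterns:
                    counts[gram] += 1
        profiles[sid] = counts
    return profiles


if __name__ == "__main__":
    # Hypothetical per-student action logs; each symbol is one logged learning action.
    student_sequences = {
        "s1": ["example", "example", "problem", "problem", "problem"],
        "s2": ["problem", "problem", "example", "problem", "problem"],
        "s3": ["example", "example", "example", "problem", "example"],
    }
    frequent = mine_frequent_patterns(student_sequences, max_len=3, min_support=2)
    for pattern, supp in sorted(frequent.items(), key=lambda x: (-x[1], x[0])):
        print(" -> ".join(pattern), f"(support={supp})")
    print(pattern_profiles(student_sequences, frequent))
```

In practice, the per-student pattern counts would be normalized and passed to a clustering algorithm, or projected onto a latent scale, to reveal the behavior subgroups discussed in the record above.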
Frontiers in Neuroinformatics | null | PMC8888432 | 10.3389/fninf.2022.771730 | Parameter Estimation of Two Spiking Neuron Models With Meta-Heuristic Optimization Algorithms | The automatic fitting of spiking neuron models to experimental data is a challenging problem. The integrate and fire model and Hodgkin–Huxley (HH) models represent the two complexity extremes of spiking neural models. Between these two extremes lies two and three differential-equation-based models. In this work, we investigate the problem of parameter estimation of two simple neuron models with a sharp reset in order to fit the spike timing of electro-physiological recordings based on two problem formulations. Five optimization algorithms are investigated; three of them have not been used to tackle this problem before. The new algorithms show improved fitting when compared with the old ones in both problems under investigation. The improvement in fitness function is between 5 and 8%, which is achieved by using the new algorithms while also being more consistent between independent trials. Furthermore, a new problem formulation is investigated that uses a lower number of search space variables when compared to the ones reported in related literature. | 2.1. Related WorksThere are a few papers that used the experimental spiking neuron data provided by the International Neuroinformatics Coordinating Facility (INCF) at their QSNMC in 2009. The details and results of this competition are outlined in Gerstner and Naud (2009) and Naud et al. (2009, 2011, 2012). The dataset was made available at the Github repository of the INCF to allow further research on the problem (Gerkin, 2009). Although many papers cite this dataset, only a few papers used it to identify the parameters of spiking neuron models (Rossant et al., 2010, 2011; Russell et al., 2010; Yamauchi et al., 2011; Mitra et al., 2013; Lynch and Houghton, 2015).One of the earliest papers to use the QSNMC2009 dataset is Rossant et al. (2010). The authors used a particle swarm optimizer (PSO) as the global optimization algorithm and the coincidence factor (Γ), defined in Section 3.2, as the objective function. They also provided a model fitting library integrated with the Brian neuron simulator and capable of running in parallel on GPUs. They proposed an online approach to calculate the Γ factor where the spike coincidences are counted as the model is simulated, and not post-simulation as usual. The model was simulated using time slicing, with an overlapping concept to parallelize the evaluation of the model further. The optimization procedure was tested on the synthetic data of the LIF model with an adaptive threshold by injecting an Ornstein-Uhlenbeck process current for 500 ms. The coincidence window was set to δ = 0.1 ms for this test. A perfect match was obtained within a few iterations. The parameter values were within ±15% of the ones used for synthesis and were found to be within ±3% when the number of particles was increased. However, the number of particles was not reported. For the experimental data, results were summarized for the dataset of Challenges A and B of QSNMC2009. Challenge A is for regular spiking neurons, and Challenge B is for fast-spiking neurons. The data of each recording in each challenge were divided into training and testing periods, each of which had a duration of 10 s. 
For Challenge A, the optimization was performed over the period from 17.5 s to 28 s, and the coincidence factor was reported for 28 s to 38 s with a coincidence window of δ = 4 ms. The intrinsic reliability (Γin), defined in Section 3.2, was explicitly reported in this paper to be Γin = 0.78 and Γin = 0.74 for Challenges A and B, respectively, but neither the source for these values nor the period over which these values were computed was provided. A time shift parameter was used to shift the model spike to align with the recorded spike. This happens because the spike times were recorded as the times when the membrane voltage crossed zero. The optimization was performed on each record independently, and the mean and standard deviation of the coincidence factor were reported. For example, the adaptive exponential IF model achieved Γ = 0.51 ± 0.04(65%) for Challenge A and Γ = 0.76 ± 0.05(102%) for Challenge B. The values in brackets are the normalized value with respect to the intrinsic reliability.In the review (Rossant et al., 2011), the authors applied their previously developed toolbox, which was based on Brian and used efficient parallelization concepts, to the QSNMC2009 dataset in order to estimate many of the parameters of the models that participated in the competition. The authors reported that their results were different from the ones reported in the competition due to the fact that they used only the available dataset and divided it into fitting and testing parts, whereas the entire available data was used for fitting in the competition. However, the authors did not specify the time periods used for fitting and testing. They developed and used a parallel version the Covariance Matrix Adaptation Evolution Strategy as the optimization algorithm suitable for GPU and multiple CPU simulations, and the coincidence factor as the objective function.An optimization method based on the maximum likelihood (ML) function was proposed for the Mihalas–Niebur spiking neuron model in Russell et al. (2010). To validate their method, the authors simulated 250 ms of tonic bursting and used it as the target spike pattern. The results of optimization of the synthetic case were described qualitatively to be an almost perfect match for spike timing. One second of the QSNMC2009 dataset was used to configure/predict the parameters of the model using the ML function. First, the spike time of each repetition of the 13 recordings was extracted as the time when the voltage trace crossed 0 mV. The optimization was then performed on the 13 recordings. However, the authors have not reported either the optimization algorithm or the period they have optimized over. They reported that their resulting voltage response was a 1.2 ms average difference in its inter-spike intervals compared with the experimental response, with a standard deviation of 1.12 ms. As the recorded data had an average inter-spike interval of 39.4 ms, this error is approximately 3%.An augmented Multi-timescale Adaptive Threshold (MAT) model was proposed in Yamauchi et al. (2011) by adding voltage dependency to the adaptive threshold in order to increase the variety of its firing patterns. The original MAT model won first place in QSNMC2009 competition, Challenge A. The authors did not use synthetic data to validate their method. The AugMAT model parameters were adjusted to match the data of 10 out of 13 voltage responses of the QSNMC2009, and then validated using the rest of the trials. 
The 10 trials were randomly selected, but the details of the parameter tuning were omitted, and the time window for evaluating Γ was not mentioned. The paper is more focused on introducing the AugMAT model and its new spiking patterns than on the parameter identification problem. This process was repeated 100 times, and the coincidence factor, Γ, was used to assess the performance of the model using a coincidence window of 4 ms. The AugMAT model achieved Γ = 0.84, while the original MAT model achieved Γ = 0.77 in their predictive performance. Augmented MAT was superior at different coincidence windows (2 ms to 10 ms). In Mitra et al. (2013), the authors used a gradient descent algorithm to estimate the parameters of the AugMAT model using a synthetic dataset. The stimulating current had a length of 1 s and was generated by a sum of different exponentials to emulate the stimulus received from a dendritic tree. The search space in the synthetic case was five-dimensional, and the gradient descent reached the minimum within a finite number of steps. The authors proposed a differentiable performance function that can be viewed as a special case of the Van-Rossum metric. The performance function is given as
(1)
\zeta = \frac{1}{t_f^{2}} \int_{0}^{t_f} \left[\psi_1(t) - \psi_2(t)\right]^{2} \, dt,
where ψi(t) is the convolution of the spike train of the neuron number i with the Heaviside unit step function. A hybrid technique was proposed for the parameter estimation of the AugMAT model applied to the experimental spike train dataset provided with QSNMC2009. They chose to use the same part of the dataset discussed in Yamauchi et al. (2011), which is a 4s window from the time instance 17.5 s to 21.5 s. However, the record number, out of 13, was not specified. The hybrid technique used a gradient descent and a Nelder–Mead algorithm. The gradient descent started at a randomly chosen initial point using the ζ performance index until a previously specified number of iterations was reached, and the Nelder–Mead then used the result of the previous phase as the initial point of its search using the coincidence factor (Γ). This hybridization was aimed to overcome the limitations of using each of these algorithms alone. The simulation was conducted 100 times with different random initial parameters for the hybrid method (GD+NM) and the NM alone. The statistics of the results are 0.65 ± 0.09 and 0.55 ± 0.1 for GD+NM and NM, respectively.The latest paper to use the QSNMC2009 dataset is Lynch and Houghton (2015). The authors used the genetic algorithm for optimization and the Van-Rossum metric as the objective function. The validity of the optimization procedure was tested on the adaptive exponential integrate and fire model (aEIF) by using 4 s of synthetic data generated from a random input current signal. The first 2 s were for training, and the last 2 s were for validation. This test was run 20 times. The population size of the genetic algorithm was set to 240, and the number of iterations was set to 1, 000. Three synthetic tests with the time constant, τ, of the Van-Rossum metric were made. The first test used varying values of τ that start with half of the simulation time and gradually decrease until it reaches the mean inter-spike interval. In the second one, τ was set to be the simulation time; in the third one, τ was set to be the mean inter-spike interval. When comparing the results based on the coincidence factor, the larger time scale had the lowest Γ, the short time scale was better, and the varying time scale was the best. It is worth mentioning that the mean Γ factor value for a varying time scale case was not equal to 1. For the experimental data, the authors used the same 20.5 s used in Rossant et al. (2010), starting at 17.5 s, where the first 10.5 s were used for fitting, and the last 10 s were used for validation. The spike time was determined to be the times in the trace where the voltage crosses a certain value, but this threshold value was not specified in the paper. The genetic algorithm was set to run for 800 iterations in the experimental data case for 10 interdependent runs. The authors tested five neuron models in that paper: the aIF, atIF, aEIF, a2EIF, and Izhikevich models. The aEIF and a2EIF models achieved the top Γ factor values. | [
"34380016",
"16014787",
"16622699",
"34383821",
"19833951",
"34358629",
"12991237",
"18244602",
"15484883",
"18160135",
"25941485",
"34149387",
"21919785",
"23749146",
"21415925",
"20224819",
"20959265",
"22787170",
"30408443",
"19011918",
"22203798",
"25855820",
"27295638"
] | [
{
"pmid": "34380016",
"title": "Single cortical neurons as deep artificial neural networks.",
"abstract": "Utilizing recent advances in machine learning, we introduce a systematic approach to characterize neurons' input/output (I/O) mapping complexity. Deep neural networks (DNNs) were trained to faithfully replicate the I/O function of various biophysical models of cortical neurons at millisecond (spiking) resolution. A temporally convolutional DNN with five to eight layers was required to capture the I/O mapping of a realistic model of a layer 5 cortical pyramidal cell (L5PC). This DNN generalized well when presented with inputs widely outside the training distribution. When NMDA receptors were removed, a much simpler network (fully connected neural network with one hidden layer) was sufficient to fit the model. Analysis of the DNNs' weight matrices revealed that synaptic integration in dendritic branches could be conceptualized as pattern matching from a set of spatiotemporal templates. This study provides a unified characterization of the computational complexity of single neurons and suggests that cortical networks therefore have a unique architecture, potentially supporting their computational power."
},
{
"pmid": "16014787",
"title": "Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.",
"abstract": "We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model predicts correctly the timing of 96% of the spikes (+/-2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro."
},
{
"pmid": "16622699",
"title": "A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input.",
"abstract": "The integrate-and-fire neuron model is one of the most widely used models for analyzing the behavior of neural systems. It describes the membrane potential of a neuron in terms of the synaptic inputs and the injected current that it receives. An action potential (spike) is generated when the membrane potential reaches a threshold, but the actual changes associated with the membrane voltage and conductances driving the action potential do not form part of the model. The synaptic inputs to the neuron are considered to be stochastic and are described as a temporally homogeneous Poisson process. Methods and results for both current synapses and conductance synapses are examined in the diffusion approximation, where the individual contributions to the postsynaptic potential are small. The focus of this review is upon the mathematical techniques that give the time distribution of output spikes, namely stochastic differential equations and the Fokker-Planck equation. The integrate-and-fire neuron model has become established as a canonical model for the description of spiking neurons because it is capable of being analyzed mathematically while at the same time being sufficiently complex to capture many of the essential features of neural processing. A number of variations of the model are discussed, together with the relationship with the Hodgkin-Huxley neuron model and the comparison with electrophysiological data. A brief overview is given of two issues in neural information processing that the integrate-and-fire neuron model has contributed to - the irregular nature of spiking in cortical neurons and neural gain modulation."
},
{
"pmid": "34383821",
"title": "Marine predators algorithm for solving single-objective optimal power flow.",
"abstract": "This study presents a nature-inspired, and metaheuristic-based Marine predator algorithm (MPA) for solving the optimal power flow (OPF) problem. The significant insight of MPA is the widespread foraging strategy called the Levy walk and Brownian movements in ocean predators, including the optimal encounter rate policy in biological interaction among predators and prey which make the method to solve the real-world engineering problems of OPF. The OPF problem has been extensively used in power system operation, planning, and management over a long time. In this work, the MPA is analyzed to solve the single-objective OPF problem considering the fuel cost, real and reactive power loss, voltage deviation, and voltage stability enhancement index as objective functions. The proposed method is tested on IEEE 30-bus test system and the obtained results by the proposed method are compared with recent literature studies. The acquired results demonstrate that the proposed method is quite competitive among the nature-inspired optimization techniques reported in the literature."
},
{
"pmid": "34358629",
"title": "Parallel and Recurrent Cascade Models as a Unifying Force for Understanding Subcellular Computation.",
"abstract": "Neurons are very complicated computational devices, incorporating numerous non-linear processes, particularly in their dendrites. Biophysical models capture these processes directly by explicitly modelling physiological variables, such as ion channels, current flow, membrane capacitance, etc. However, another option for capturing the complexities of real neural computation is to use cascade models, which treat individual neurons as a cascade of linear and non-linear operations, akin to a multi-layer artificial neural network. Recent research has shown that cascade models can capture single-cell computation well, but there are still a number of sub-cellular, regenerative dendritic phenomena that they cannot capture, such as the interaction between sodium, calcium, and NMDA spikes in different compartments. Here, we propose that it is possible to capture these additional phenomena using parallel, recurrent cascade models, wherein an individual neuron is modelled as a cascade of parallel linear and non-linear operations that can be connected recurrently, akin to a multi-layer, recurrent, artificial neural network. Given their tractable mathematical structure, we show that neuron models expressed in terms of parallel recurrent cascades can themselves be integrated into multi-layered artificial neural networks and trained to perform complex tasks. We go on to discuss potential implications and uses of these models for artificial intelligence. Overall, we argue that parallel, recurrent cascade models provide an important, unifying tool for capturing single-cell computation and exploring the algorithmic implications of physiological phenomena."
},
{
"pmid": "18244602",
"title": "Simple model of spiking neurons.",
"abstract": "A model is presented that reproduces spiking and bursting behavior of known types of cortical neurons. The model combines the biologically plausibility of Hodgkin-Huxley-type dynamics and the computational efficiency of integrate-and-fire neurons. Using this model, one can simulate tens of thousands of spiking cortical neurons in real time (1 ms resolution) using a desktop PC."
},
{
"pmid": "15484883",
"title": "Which model to use for cortical spiking neurons?",
"abstract": "We discuss the biological plausibility and computational efficiency of some of the most useful models of spiking and bursting neurons. We compare their applicability to large-scale simulations of cortical neural networks."
},
{
"pmid": "18160135",
"title": "A benchmark test for a quantitative assessment of simple neuron models.",
"abstract": "Several methods and algorithms have recently been proposed that allow for the systematic evaluation of simple neuron models from intracellular or extracellular recordings. Models built in this way generate good quantitative predictions of the future activity of neurons under temporally structured current injection. It is, however, difficult to compare the advantages of various models and algorithms since each model is designed for a different set of data. Here, we report about one of the first attempts to establish a benchmark test that permits a systematic comparison of methods and performances in predicting the activity of rat cortical pyramidal neurons. We present early submissions to the benchmark test and discuss implications for the design of future tests and simple neurons models."
},
{
"pmid": "25941485",
"title": "Parameter estimation of neuron models using in-vitro and in-vivo electrophysiological data.",
"abstract": "Spiking neuron models can accurately predict the response of neurons to somatically injected currents if the model parameters are carefully tuned. Predicting the response of in-vivo neurons responding to natural stimuli presents a far more challenging modeling problem. In this study, an algorithm is presented for parameter estimation of spiking neuron models. The algorithm is a hybrid evolutionary algorithm which uses a spike train metric as a fitness function. We apply this to parameter discovery in modeling two experimental data sets with spiking neurons; in-vitro current injection responses from a regular spiking pyramidal neuron are modeled using spiking neurons and in-vivo extracellular auditory data is modeled using a two stage model consisting of a stimulus filter and spiking neuron model."
},
{
"pmid": "34149387",
"title": "On the Use of a Multimodal Optimizer for Fitting Neuron Models. Application to the Cerebellar Granule Cell.",
"abstract": "This article extends a recent methodological workflow for creating realistic and computationally efficient neuron models whilst capturing essential aspects of single-neuron dynamics. We overcome the intrinsic limitations of the extant optimization methods by proposing an alternative optimization component based on multimodal algorithms. This approach can natively explore a diverse population of neuron model configurations. In contrast to methods that focus on a single global optimum, the multimodal method allows directly obtaining a set of promising solutions for a single but complex multi-feature objective function. The final sparse population of candidate solutions has to be analyzed and evaluated according to the biological plausibility and their objective to the target features by the expert. In order to illustrate the value of this approach, we base our proposal on the optimization of cerebellar granule cell (GrC) models that replicate the essential properties of the biological cell. Our results show the emerging variability of plausible sets of values that this type of neuron can adopt underlying complex spiking characteristics. Also, the set of selected cerebellar GrC models captured spiking dynamics closer to the reference model than the single model obtained with off-the-shelf parameter optimization algorithms used in our previous article. The method hereby proposed represents a valuable strategy for adjusting a varied population of realistic and simplified neuron models. It can be applied to other kinds of neuron models and biological contexts."
},
{
"pmid": "21919785",
"title": "Improved similarity measures for small sets of spike trains.",
"abstract": "Multiple measures have been developed to quantify the similarity between two spike trains. These measures have been used for the quantification of the mismatch between neuron models and experiments as well as for the classification of neuronal responses in neuroprosthetic devices and electrophysiological experiments. Frequently only a few spike trains are available in each class. We derive analytical expressions for the small-sample bias present when comparing estimators of the time-dependent firing intensity. We then exploit analogies between the comparison of firing intensities and previously used spike train metrics and show that improved spike train measures can be successfully used for fitting neuron models to experimental data, for comparisons of spike trains, and classification of spike train data. In classification tasks, the improved similarity measures can increase the recovered information. We demonstrate that when similarity measures are used for fitting mathematical models, all previous methods systematically underestimate the noise. Finally, we show a striking implication of this deterministic bias by reevaluating the results of the single-neuron prediction challenge."
},
{
"pmid": "23749146",
"title": "Temporal whitening by power-law adaptation in neocortical neurons.",
"abstract": "Spike-frequency adaptation (SFA) is widespread in the CNS, but its function remains unclear. In neocortical pyramidal neurons, adaptation manifests itself by an increase in the firing threshold and by adaptation currents triggered after each spike. Combining electrophysiological recordings in mice with modeling, we found that these adaptation processes lasted for more than 20 s and decayed over multiple timescales according to a power law. The power-law decay associated with adaptation mirrored and canceled the temporal correlations of input current received in vivo at the somata of layer 2/3 somatosensory pyramidal neurons. These findings suggest that, in the cortex, SFA causes temporal decorrelation of output spikes (temporal whitening), an energy-efficient coding procedure that, at high signal-to-noise ratio, improves the information transfer."
},
{
"pmid": "21415925",
"title": "Fitting neuron models to spike trains.",
"abstract": "Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model."
},
{
"pmid": "20224819",
"title": "Automatic fitting of spiking neuron models to electrophysiological recordings.",
"abstract": "Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models."
},
{
"pmid": "20959265",
"title": "Optimization methods for spiking neurons and networks.",
"abstract": "Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for configuring the neuron or neural circuit is required. Manual manipulation of parameters is both time consuming and non-intuitive due to the nonlinear relationship between parameters and the neuron's output. The complexity rises even further as the neurons are networked and the systems often become mathematically intractable. In large circuits, the desired behavior and timing of action potential trains may be known but the timing of the individual action potentials is unknown and unimportant, whereas in single neuron systems the timing of individual action potentials is critical. In this paper, we automate the process of finding parameters. To configure a single neuron we derive a maximum likelihood method for configuring a neuron model, specifically the Mihalas-Niebur Neuron. Similarly, to configure neural circuits, we show how we use genetic algorithms (GAs) to configure parameters for a network of simple integrate and fire with adaptation neurons. The GA approach is demonstrated both in software simulation and hardware implementation on a reconfigurable custom very large scale integration chip."
},
{
"pmid": "30408443",
"title": "Global and Multiplexed Dendritic Computations under In Vivo-like Conditions.",
"abstract": "Dendrites integrate inputs nonlinearly, but it is unclear how these nonlinearities contribute to the overall input-output transformation of single neurons. We developed statistically principled methods using a hierarchical cascade of linear-nonlinear subunits (hLN) to model the dynamically evolving somatic response of neurons receiving complex, in vivo-like spatiotemporal synaptic input patterns. We used the hLN to predict the somatic membrane potential of an in vivo-validated detailed biophysical model of a L2/3 pyramidal cell. Linear input integration with a single global dendritic nonlinearity achieved above 90% prediction accuracy. A novel hLN motif, input multiplexing into parallel processing channels, could improve predictions as much as conventionally used additional layers of local nonlinearities. We obtained similar results in two other cell types. This approach provides a data-driven characterization of a key component of cortical circuit computations: the input-output transformation of neurons during in vivo-like conditions."
},
{
"pmid": "19011918",
"title": "Automated neuron model optimization techniques: a review.",
"abstract": "The increase in complexity of computational neuron models makes the hand tuning of model parameters more difficult than ever. Fortunately, the parallel increase in computer power allows scientists to automate this tuning. Optimization algorithms need two essential components. The first one is a function that measures the difference between the output of the model with a given set of parameter and the data. This error function or fitness function makes the ranking of different parameter sets possible. The second component is a search algorithm that explores the parameter space to find the best parameter set in a minimal amount of time. In this review we distinguish three types of error functions: feature-based ones, point-by-point comparison of voltage traces and multi-objective functions. We then detail several popular search algorithms, including brute-force methods, simulated annealing, genetic algorithms, evolution strategies, differential evolution and particle-swarm optimization. Last, we shortly describe Neurofitter, a free software package that combines a phase-plane trajectory density fitness function with several search algorithms."
},
{
"pmid": "22203798",
"title": "Elemental spiking neuron model for reproducing diverse firing patterns and predicting precise firing times.",
"abstract": "In simulating realistic neuronal circuitry composed of diverse types of neurons, we need an elemental spiking neuron model that is capable of not only quantitatively reproducing spike times of biological neurons given in vivo-like fluctuating inputs, but also qualitatively representing a variety of firing responses to transient current inputs. Simplistic models based on leaky integrate-and-fire mechanisms have demonstrated the ability to adapt to biological neurons. In particular, the multi-timescale adaptive threshold (MAT) model reproduces and predicts precise spike times of regular-spiking, intrinsic-bursting, and fast-spiking neurons, under any fluctuating current; however, this model is incapable of reproducing such specific firing responses as inhibitory rebound spiking and resonate spiking. In this paper, we augment the MAT model by adding a voltage dependency term to the adaptive threshold so that the model can exhibit the full variety of firing responses to various transient current pulses while maintaining the high adaptability inherent in the original MAT model. Furthermore, with this addition, our model is actually able to better predict spike times. Despite the augmentation, the model has only four free parameters and is implementable in an efficient algorithm for large-scale simulation due to its linearity, serving as an element neuron model in the simulation of realistic neuronal circuitry."
},
{
"pmid": "25855820",
"title": "Erratum: Borderud SP, Li Y, Burkhalter JE, Sheffer CE and Ostroff JS. Electronic cigarette use among patients with cancer: Characteristics of electronic cigarette users and their smoking cessation outcomes. Cancer. doi: 10.1002/ cncr.28811.",
"abstract": "The authors discovered some errors regarding reference group labels in Table 2. The corrected table is attached. The authors regret these errors."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
}
] |
Scientific Reports | 35233013 | PMC8888645 | 10.1038/s41598-022-07276-3 | TCP-WBQ: a backlog-queue-based congestion control mechanism for heterogeneous wireless networks | In heterogeneous wireless networks, random packet loss and high latency lead to conventional TCP variants performing unsatisfactorily in the case of competing communications. Especially on high-latency wireless links, conventional TCP variants are unable to estimate congestion degrees accurately for fine-grained congestion control because of the effects of random packet loss and delay oscillations. This paper proposes a TCP variant at the sender side to identify congestion degrees, namely TCP-WBQ, which quickly responds to real congestion and effectively shields against random packet loss and oscillations of latency. The proposed congestion-control algorithm first constructs a backlog-queue model based on the dynamics of the congestion window, and deduces the two bounds of the model which delimit oscillations of the backlog queue for non-congestion and random packet loss, respectively. TCP-WBQ detects congestion degrees more accurately and thus applies the corresponding schemes for adjusting the congestion window, maintaining a tradeoff between high throughput and congestion avoidance. Comprehensive simulations show that TCP-WBQ uses bandwidth efficiently in single- and multiple-bottleneck scenarios, and achieves high performance and competitive fairness in heterogeneous wireless networks. | Related work
Recently, many TCP variants for congestion control have been proposed for wireless networks. We summarize the existing work on improving the performance of wireless streams.
A-CAFDSP22 considers the carrying capacity, the dissimilar characteristics, and the background traffic intensity of parallel links to select efficient links for concurrent transmissions. Early window tailoring (EWT), a new network-return technique, was proposed in23. By scaling the TCP receiver’s cwnd in accord with the gateway’s available memory space, EWT maintains a satisfactory throughput required by specific applications within a given packet-loss rate. TCP-NACK24 inserts a negative acknowledgment (NACK) flag into the TCP segment to retransmit only the lost packet without unnecessarily reducing the transmission rate. The solution mainly considers that unacknowledged packets in the receiver’s buffer affect the sender’s cwnd. Meanwhile, TCP-NACK establishes a Markov state space in the congestion avoidance phase to predict the error probability of each packet so as to increase cwnd efficiently and achieve high utilization of the link capacity. This comes at the expense of extending Rtt, thus degrading the performance of wireless networks. However, these solutions are deployed not only at the TCP sender but also at intermediate devices, and thus the deployment cost and complexity are high.
TCP-W+, inherited from TCP-W, shields against the oscillation of Rtt25 by continuously detecting the Acks’ arrival rate and thereby estimating the occupied bandwidth. TCP-W+ reduces cwnd and the slow-start threshold (ssthresh) according to the estimated bandwidth (EB) rather than recklessly cutting these values26 as loss-based variants do. Nevertheless, because it adopts additive increase of cwnd, TCP-W+ occupies the bandwidth of the bottleneck link more slowly than loss-based TCP variants do when competing with multiple flows over better wired links27.
This unfairness increases the latency on the wireless link, often causing TCP-W+ to under-estimate the available bandwidth (an illustrative sketch of this rate-based estimation idea follows this record).
In28, a scheme is proposed that discriminates the causes of packet loss in wireless networks using machine learning. The scheme learns to distinguish true congestion from random packet loss with a multi-layer perceptron, and the congestion control then classifies the causes of packet errors according to the learned model. It performs well when bandwidth utilization is low. However, as utilization increases, both noise and congestion events raise the latency, and the resulting high latency leads to inaccurate decisions.
To adjust cwnd adaptively rather than increase it linearly, TCP-ACC29 proposed a real-time reordering metric to infer the probabilities of packet-loss and RTO events. The mechanism measures and estimates Rtt according to the inferred probabilities. During congestion avoidance, it transmits as many packets as possible by setting an appropriate cwnd based on these probabilities. Although its cwnd setting addresses the problems of packet loss and reordering in wireless networks, it is not suitable for the competition of multiple flows.
Hybla, which stems from TCP-NewReno, is increasingly recognized for yielding high throughput, especially over satellite channels. Because its cwnd grows exponentially, Hybla is superior to TCP-NewReno in terms of throughput on wireless connections with longer Rtt. Nevertheless, when the Rtt and packet loss exceed a certain range, its performance drops dramatically because the cwnd threshold in the slow start phase is very low.
Although the TCP variants above have made important contributions to wireless communications, they are either difficult to deploy or lack an overall consideration of the random packet loss and high latency of wireless links. In heterogeneous wireless networks, these solutions still achieve low throughput when the wireless streams they control compete with wired streams. | [] | [] |
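Below is a minimal, hedged Python sketch of the ACK-rate bandwidth-estimation idea attributed above to TCP-W/TCP-W+ (deriving ssthresh from an estimated bandwidth instead of blindly halving it on loss). It is not the TCP-WBQ or TCP-W+ reference implementation; the class name, the smoothing constant alpha, and the event hooks are assumptions made purely for illustration.

# Illustrative sketch only (not the TCP-WBQ or TCP-W+ reference code):
# Westwood-style sender-side bandwidth estimation used to set ssthresh on loss.
class AckRateBandwidthEstimator:
    def __init__(self, mss_bytes, alpha=0.9):
        self.mss = mss_bytes          # maximum segment size, bytes
        self.alpha = alpha            # low-pass filter coefficient (assumed value)
        self.bw_est = 0.0             # smoothed bandwidth estimate, bytes per second
        self.last_ack_time = None
        self.rtt_min = float("inf")   # minimum observed RTT, seconds

    def on_ack(self, acked_bytes, now, rtt_sample):
        """Update the estimate from one ACK: sample = acked bytes / inter-ACK gap."""
        self.rtt_min = min(self.rtt_min, rtt_sample)
        if self.last_ack_time is not None:
            dt = now - self.last_ack_time
            if dt > 0:
                sample = acked_bytes / dt
                # Exponential smoothing shields the estimate from ACK-rate jitter.
                self.bw_est = self.alpha * self.bw_est + (1 - self.alpha) * sample
        self.last_ack_time = now

    def ssthresh_segments(self):
        """ssthresh = estimated bandwidth * RTT_min, expressed in segments."""
        if self.rtt_min == float("inf"):
            return None
        return max(2, int(self.bw_est * self.rtt_min / self.mss))

On a loss indication (for example, three duplicate ACKs), a Westwood-style sender would set ssthresh to ssthresh_segments() and shrink cwnd toward it rather than halving blindly; the exact reaction differs among TCP-W, TCP-W+, and TCP-WBQ.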
Scientific Reports | 35236902 | PMC8891285 | 10.1038/s41598-022-07494-9 | A hybrid with distributed pooling blockchain protocol for image storage | As a distributed storage scheme, the blockchain network's lack of storage space has been a long-term concern in this field. At present, there is relatively little research on algorithms and protocols that reduce the storage requirement of blockchain, and the existing research has limitations, such as sacrificing fault tolerance and raising time cost, that need to be further improved. Facing the above problems, this paper proposes a protocol, the Distributed Image Storage Protocol (DISP), which can effectively improve blockchain storage space and reduce computational costs with the help of the InterPlanetary File System (IPFS). In order to prove the feasibility of the protocol, we make full use of IPFS and a distributed database to design a simulation experiment for blockchain. Through the distributed pooling (DP) algorithm in this protocol, image evidence can be divided into several recognizable small files stored on several nodes, and these files can be restored to the lossless original documents by the inverse distributed pooling (IDP) algorithm after authorization. These advantages in performance create conditions for large-scale industrial and commercial applications. | Related work
The traditional data compression method is mainly based on compressed sensing theory4, a lossy compression method for sparse signals5. That is, a sparse representation of the signal is sampled below the Nyquist rate, and the signal is then restored with as little distortion as possible by reconstruction. Eldar and Kutyniok6 list several signal reconstruction algorithms that seek a local optimum by greedy pursuit, including matching pursuit (MP)7, orthogonal matching pursuit (OMP)8 and subspace pursuit (SP)9. The OMP algorithm has had the most far-reaching influence on subsequent research in compressed sensing and has inspired a variety of improved versions. For example, generalized orthogonal matching pursuit (GOMP)10 is formed by enlarging the range of residual selection in each iteration; regularized orthogonal matching pursuit (ROMP)11 is formed by replacing single-column iteration with multi-column iteration; stagewise orthogonal matching pursuit (StOMP)12 is formed by replacing single-atom iteration with multi-atom iteration; and compressive sampling matching pursuit (CoSaMP)13 is formed by allowing the atoms selected in each iteration to be discarded, rather than retained, in the next iteration. There are also dictionary learning algorithms14, such as principal component analysis (PCA), K-singular value decomposition (KSVD)15, the method of optimal directions, and the greedy adaptive dictionary (GAD)16.
Although compressed sensing can recover a close approximation of the original file through various restoration algorithms, the limitations of this method are obvious. First, the signal must be sparse, for example images in which most gray values are 0 or audio signals in which most amplitudes are 0. Second, any restoration algorithm needs to reconstruct each row and column by estimation, which takes a long time and can hardly meet the needs of large-scale industrial applications. Finally, considering the requirement of restoration quality, the compression ratio of compressed sensing is still limited.
For a blockchain network whose data volume grows geometrically, it is very difficult to meet the demand for releasing storage space reasonably.
With the development of blockchain, the data stored on chain are usually important supporting documents, such as business contracts, legal documents and voice evidence. In order to ensure the storage security and lossless restoration of files, the demand to reduce storage cost, transmission cost and node space by offloading storage has gradually appeared.
Jafari and Plumbley17 proposed a blockchain data compression scheme for IPFS, which enables most transactions to be confirmed locally. Only a small number of transactions need to access the IPFS network, and the compression ratio in the bitcoin network reaches 0.0817. This also provides a feasible idea for us to prove that the protocol can improve blockchain storage space and storage cost with the help of IPFS. In 2018, a distributed cooperative spectrum sensing scheme was proposed18, in which one-hop information fusion is realized using spatial information fusion based on average hops together with distributed computing. The fused support-estimation results are then used as prior information to guide the next local signal reconstruction, and the algorithm is implemented with a weighted binary iterative hard thresholding (BIHT) algorithm. Local signal reconstruction and distributed fusion of support information alternate until reliable spectrum detection is achieved. Yan et al.19 also compressed bitcoin transaction records by replacing the hash pointer to the previous block with an index pointer, which can reduce the storage space by 12.71%. On the basis of the synchronous OMP (SOMP) algorithm and KSVD dictionary learning, an improved adaptive joint reconstruction algorithm, DCS-SOMP, was proposed20. SOMP is used to estimate and reconstruct the sampled data, and the KSVD method updates the overcomplete dictionary many times to reduce the error between the reconstructed signal and the original signal.
Although the above-mentioned methods focus on solving the storage problem in blockchain or distributed networks, some problems remain to be studied, such as the time cost of the algorithms, the fault tolerance of storage nodes, and the redundancy of information21. Therefore, this paper proposes a solution that can reduce the storage space requirement of blockchain nodes with low time overhead, high fault tolerance and lossless restoration, so as to support wider industrial and commercial applications of blockchain networks.
Distributed image storage protocol based on blockchain
In this section, we introduce the architecture of the distributed image storage protocol based on blockchain. Through the architecture overview, the general process of compressing and storing images with this protocol is presented, as shown in Fig. 1. In addition, the community information on the blockchain is analyzed in detail.
Figure 1. Architecture of distributed image storage protocol.
Architecture overview
This protocol is designed to be optional rather than mandatory, and users who do not accept the DISP protocol can still use traditional full-redundancy storage. However, users who sign the agreement can enjoy the benefits of space saving without reducing safety performance.
In a traditional blockchain network, each user must store exactly the same data to ensure the fault tolerance of the whole network and to avoid forks caused by malicious attacks or fraud. The DISP protocol replaces full redundancy at the level of individual nodes with full redundancy at the community level; that is, there is no redundancy among the data stored by the nodes of one community.
In DISP, distributed storage ensures that no data are lost when a few nodes are attacked or fail, so data security is enhanced. The distributed pooling algorithm, in turn, reduces the data redundancy of distributed storage and greatly saves storage space.
Community formation
In the preprocessing stage, before the distributed pooling algorithm is applied, the original image is first divided into several pooling regions according to the shape of the pooling kernel, forming the set of regions to be processed. The image can then be split into several parts and stored on several nodes through the distributed pooling algorithm.
The addresses that sign the protocol are gathered to form different communities, and the number of nodes in each community is determined by the number of pooled images produced by the decomposition algorithm. Under a certain compression ratio, every piece of data remains recognizable and can be restored by compressed sensing or super-resolution representation. If all the pieces of data in the community are collected, the original data can be restored losslessly by the inverse operation of the DP algorithm.
The data saved by each node differ from those of the other nodes in the community; otherwise, the node is assigned to another community. Each node in a community can see the whole picture of the data. If the corresponding pieces of data stored by the nodes of the community are collected together, the original data can be restored losslessly in the invoking-evidence phase.
Pooling algorithm for image
In this section, we design a compression algorithm, called distributed pooling, whose output can later be restored losslessly. The idea comes from the down-sampling operation called pooling, but the distributed pooling algorithm is essentially different from any existing pooling algorithm. An image is split into several pictures that are stored at different addresses of the distributed network, so each node pays a lower storage cost while the original image can be synthesized losslessly whenever necessary.
Pooling algorithm
Early pooling algorithms simulate the receptive-field characteristics of cortical neurons in the primary visual area and extract features from original images using the principle of independence maximization in sparse coding22,23. With the rise of data-driven methods, related research began to focus on the nonlinear characteristics of optimal pooling and the probabilistic representation of subspace size. By maximizing the independence between projection norms on linear subspaces, Reference24 explained the emergence of spatial phase-shift invariance and used independent component analysis (ICA) modeling:
$$I(x,y)=\sum_{i=1}^{m} b_i(x,y)\, s_i \qquad (1)$$
where $s_i$ is the randomly generated component coefficient and $b_i(x,y)$ is the basis function representing the feature mapping. A single-channel gray image can then be expressed as a linear combination of multiple feature components. $s_i$ can be interpreted as a filter that extracts features from the image, namely the inner product of the filter coefficients and the region pixels:
$$s_i=\sum_{x,y} w_i(x,y)\, I(x,y)=\langle w_i, I\rangle \qquad (2)$$
Lp norm pooling
With further research on representation learning and signal reduction, the introduction of the Lp norm increased the interpretability of the pooling definition. The n-dimensional spherical form produced by the contour lines of the Lp norm in Euclidean space is described as an Lp ellipsoidal distribution25. This concept was introduced into Lp norm subspace analysis (LpISA), which yields a mathematical expression of pooling:
$$\left(\left|x_{I_i}\right|^{p}+\cdots+\left|x_{I_l}\right|^{p}\right)^{\frac{1}{p}} \qquad (3)$$
When $p=1$, the Lp norm corresponds to average pooling, and when $p\to\infty$, the Lp norm takes the form of max pooling, which can be expressed as:
$$\max\left(\left|x_{I_i}\right|,\cdots,\left|x_{I_l}\right|\right) \qquad (4)$$
Estrach et al.26 conducted experiments on the MNIST image data set with the L1, L2 and Lp norms. By comparing reversibility after pooling, they concluded that the difficulty of recovery is roughly the same for the three methods, and proposed a more general pooling operator over $K$ disjoint pixel blocks, where $s$ is the stride, analogous to a convolution operation, and $K$ is the size of the pooling kernel:
$$I(i,j)=\left[\sum_{x=1}^{K}\sum_{y=1}^{K} I\left(s\cdot i+x,\; s\cdot j+y\right)^{p}\right]^{\frac{1}{p}} \qquad (5)$$
Distributed pooling
The proposed method is named distributed pooling because its idea comes from the pooling algorithm, although it is essentially an image decomposition algorithm. As shown in Fig. 2, the original picture is divided into several ordered regions, and the pixels at the same position within each region are extracted to recompose a new, smaller picture. This process is repeated until all pixels of the original image have been extracted. The number of decomposed images equals the number of pixels contained in one ordered region.
Figure 2. Sketch map of the distributed pooling algorithm.
For one specific decomposed image this operation is a pooling step, but taken over all decomposed images it is a pixel-level decomposition. In addition, the decomposed images can be restored to a lossless image, which is impossible with traditional pooling algorithms because they discard pixels.
Let $X$ be an image to be decomposed for distributed storage, and let $f_{s,t}$ be the small square areas demarcated on $X$, where $s$ and $t$ index the division area in row $s$ and column $t$ of the original image. Each pixel in region $f_{s,t}$ is denoted $f_{s,t,u,v}$, where $u$ and $v$ are the row and column positions of the pixel within $f_{s,t}$. The size of $f_{s,t}$, namely $\max\{u\}\times\max\{v\}$, is called the pooling kernel size and determines the size of the decomposed images.
Let $G_{k,l}$ be a decomposed image, where $k$ and $l$ indicate which position within each division area $f_{s,t}$ its pixels come from. Each pixel of $G_{k,l}$ is denoted $G_{k,l,i,j}$, where the subscripts $i$ and $j$ are the row and column positions of the pixel in the decomposed image $G_{k,l}$. $N$ is the number of images after decomposition and satisfies $N=\max\{k\}\times\max\{l\}$. The whole pooling process follows the expression:
$$G_{k,l,i,j}=f_{i,j,k,l} \qquad (6)$$
Therefore, $G_{1,1,1,2}=f_{1,2,1,1}$ means that the pixel in the first row and second column of the first decomposed image $G_{1,1}$ is derived from the first row and first column of division area $f_{1,2}$. Equation (6) describes the operation rule of Fig. 2 as a pixel-level expression.
Unlike Lp norm pooling, the distributed pooling algorithm extracts, in order, the pixels located at the same position of every pooling area of the original tensor, which can also be understood as rule-based down-sampling. Unlike common pooling algorithms, this down-sampling is carried out multiple times, once for each position within the pooling area, until every pixel position of every pooling region has been extracted. Each pooled image is different, but all of them are down-sampled representations of the original image (a code sketch of this decomposition and its inverse is given after this record).
Main implementation steps
When building a decentralized application (DAPP), the widely adopted approach is to store hash values on the blockchain and store the bulky information in a centralized database. In this way, storage becomes a major shortcoming of decentralized applications and a vulnerable link in the blockchain network. Fortunately, IPFS offers a solution: one can use it to store files and place a unique and permanently available IPFS address into a blockchain transaction. By this means, it is not necessary to place the space-hungry data itself on the blockchain. In addition, IPFS can assist different blockchain networks in transferring information and files, which improves the scalability of blockchain. Based on these advantages of IPFS, the main contents of distributed pooling are designed as a Distributed Storage Phase and an Invoking Evidence Phase, as shown in Fig. 3.
Figure 3. Address writing and calling through IPFS.
The left part of Fig. 3 is the Distributed Storage Phase, and the right part is the Invoking Evidence Phase, which recovers the original image by the inverse distributed pooling operation. After the Distributed Storage Phase finishes, the corresponding image hash addresses can easily be queried from the database. Owing to the distributed nature of IPFS, the hash value of each address is unique and trusted. Thus, under the condition of trusted nodes, the inverse distributed pooling operation can be used to recover the original image after confirming the correctness of the node addresses. | [
"8637596",
"10935923"
] | [
{
"pmid": "8637596",
"title": "Emergence of simple-cell receptive field properties by learning a sparse code for natural images.",
"abstract": "The receptive fields of simple cells in mammalian primary visual cortex can be characterized as being spatially localized, oriented and bandpass (selective to structure at different spatial scales), comparable to the basis functions of wavelet transforms. One approach to understanding such response properties of visual neurons has been to consider their relationship to the statistical structure of natural images in terms of efficient coding. Along these lines, a number of studies have attempted to train unsupervised learning algorithms on natural images in the hope of developing receptive fields with similar properties, but none has succeeded in producing a full set that spans the image space and contains all three of the above properties. Here we investigate the proposal that a coding strategy that maximizes sparseness is sufficient to account for these properties. We show that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex. The resulting sparse image code provides a more efficient representation for later stages of processing because it possesses a higher degree of statistical independence among its outputs."
},
{
"pmid": "10935923",
"title": "Emergence of phase- and shift-invariant features by decomposition of natural images into independent feature subspaces.",
"abstract": "Olshausen and Field (1996) applied the principle of independence maximization by sparse coding to extract features from natural images. This leads to the emergence of oriented linear filters that have simultaneous localization in space and in frequency, thus resembling Gabor functions and simple cell receptive fields. In this article, we show that the same principle of independence maximization can explain the emergence of phase- and shift-invariant features, similar to those found in complex cells. This new kind of emergence is obtained by maximizing the independence between norms of projections on linear subspaces (instead of the independence of simple linear filter outputs). The norms of the projections on such \"independent feature subspaces\" then indicate the values of invariant features."
}
] |
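As a reading aid for the record above, here is a minimal NumPy sketch of the distributed pooling (DP) decomposition of Eq. (6) and its lossless inverse (IDP). It is not the authors' released code: the function names are invented, the image dimensions are assumed to be divisible by the pooling kernel size, and the IPFS/blockchain bookkeeping of the Distributed Storage and Invoking Evidence phases is omitted.

import numpy as np

def distributed_pooling(image: np.ndarray, kh: int, kw: int):
    """Split `image` (H x W, with H % kh == 0 and W % kw == 0) into kh*kw sub-images.
    Sub-image G[k][l] collects the pixel at position (k, l) of every pooling region,
    i.e. G[k][l][s, t] = image[s*kh + k, t*kw + l]  (Eq. 6, 0-indexed)."""
    h, w = image.shape[:2]
    assert h % kh == 0 and w % kw == 0, "assumed: dimensions divisible by kernel size"
    return [[image[k::kh, l::kw].copy() for l in range(kw)] for k in range(kh)]

def inverse_distributed_pooling(pieces):
    """Losslessly reassemble the original image from the kh*kw sub-images."""
    kh, kw = len(pieces), len(pieces[0])
    sub_h, sub_w = pieces[0][0].shape[:2]
    out = np.empty((sub_h * kh, sub_w * kw) + pieces[0][0].shape[2:],
                   dtype=pieces[0][0].dtype)
    for k in range(kh):
        for l in range(kw):
            # Write each sub-image back to its interleaved pixel positions.
            out[k::kh, l::kw] = pieces[k][l]
    return out

# Round-trip check on a random "image": the restoration is exact (lossless).
img = np.random.randint(0, 256, size=(6, 8), dtype=np.uint8)
pieces = distributed_pooling(img, kh=2, kw=2)     # 4 sub-images of size 3 x 4
assert np.array_equal(inverse_distributed_pooling(pieces), img)

In the protocol described above, each sub-image would then be added to IPFS and its content address recorded for one community node; collecting all kh*kw addresses in the Invoking Evidence Phase allows the exact reconstruction demonstrated by the round-trip check.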
Scientific Reports | null | PMC8891294 | 10.1038/s41598-022-07165-9 | Public communication can facilitate low-risk coordination under surveillance | Consider a sub-population of rebels aiming at initiating a revolution. To avoid initializing a failed revolution, rebels would first strive to estimate their “power”, which is often correlated with their number. However, especially in non-democratic countries, rebels avoid disclosing themselves. This poses a significant challenge for rebels: estimating their number while minimizing the risk of being identified as rebels. This paper introduces a distributed computing framework to study this question. Our main insight is that the communication pattern plays a crucial role in achieving such a task. Specifically, we distinguish between public communication, in which each message announced by an individual can be viewed by all its neighbors, and private communication, in which each message is received by one neighbor. We describe a simple protocol in the public communication model that allows rebels to estimate their number while keeping a negligible risk of being identified as rebels. The proposed protocol, inspired by historical events, can be executed covertly even under extreme conditions of surveillance. Conversely, we show that under private communication, protocols of similar simplicity are either inefficient or non-covert. These results suggest that public communication can facilitate the emergence of revolutions in non-democratic countries. | Related works
This paper studies the problem of distributively estimating the relative size of a sub-population whose members refrain from disclosing themselves as such. Unlike multiple works in the distributed computing and computer network disciplines that study size estimation14–20, our work initiates the analytical study of such procedures in the presence of surveillance, which imposes direct risk on individuals in such a sub-population who identify themselves as such.
Secure computation under cryptographic assumptions was introduced by Yao13. By now, there is a huge body of literature on secure computation, in both two-party and multi-party scenarios, including the case of tolerating a quorum of Byzantine players21 and of making known distributed algorithms secure22. The concept of covert computation was introduced in23 for two-party scenarios and in24 for multi-party scenarios. The idea behind covert protocols is that parties do not know whether other parties are also participating in the protocol or not. In general, however, most of the schemes in the cryptography literature employ sophisticated operations on both the encoding and the decoding sides (e.g., many of the protocols are based on first generating a huge random number and then manipulating it in a sophisticated manner13). While such operations can be implemented by computers, they cannot be expected to be employed directly by humans or other biological entities.
Here, we are mostly interested in simple estimation protocols, based on sampling and sensing a certain tendency. Such mechanisms are natural for humans and other biological entities25. For example, similar protocols are executed by bacteria communities aiming to identify when their density passes a certain threshold26. | [
"29880688",
"16212498",
"28867924",
"11524130",
"12261830",
"17989231",
"12176318",
"25874462",
"11544353",
"16816200",
"15466044",
"28033323",
"26927849",
"31710602",
"21807995"
] | [
{
"pmid": "29880688",
"title": "Experimental evidence for tipping points in social convention.",
"abstract": "Theoretical models of critical mass have shown how minority groups can initiate social change dynamics in the emergence of new social conventions. Here, we study an artificial system of social conventions in which human subjects interact to establish a new coordination equilibrium. The findings provide direct empirical demonstration of the existence of a tipping point in the dynamics of changing social conventions. When minority groups reached the critical mass-that is, the critical group size for initiating social change-they were consistently able to overturn the established behavior. The size of the required critical mass is expected to vary based on theoretically identifiable features of a social setting. Our results show that the theoretically predicted dynamics of critical mass do in fact emerge as expected within an empirical system of social coordination."
},
{
"pmid": "16212498",
"title": "Quorum sensing: cell-to-cell communication in bacteria.",
"abstract": "Bacteria communicate with one another using chemical signal molecules. As in higher organisms, the information supplied by these molecules is critical for synchronizing the activities of large groups of cells. In bacteria, chemical communication involves producing, releasing, detecting, and responding to small hormone-like molecules termed autoinducers . This process, termed quorum sensing, allows bacteria to monitor the environment for other bacteria and to alter behavior on a population-wide scale in response to changes in the number and/or species present in a community. Most quorum-sensing-controlled processes are unproductive when undertaken by an individual bacterium acting alone but become beneficial when carried out simultaneously by a large number of cells. Thus, quorum sensing confuses the distinction between prokaryotes and eukaryotes because it enables bacteria to act as multicellular organisms. This review focuses on the architectures of bacterial chemical communication networks; how chemical information is integrated, processed, and transduced to control gene expression; how intra- and interspecies cell-cell communication is accomplished; and the intriguing possibility of prokaryote-eukaryote cross-communication."
},
{
"pmid": "28867924",
"title": "Estimating the Size of a Large Network and its Communities from a Random Sample.",
"abstract": "Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios."
},
{
"pmid": "11524130",
"title": "Quorum-sensing in Gram-negative bacteria.",
"abstract": "It has become increasingly and widely recognised that bacteria do not exist as solitary cells, but are colonial organisms that exploit elaborate systems of intercellular communication to facilitate their adaptation to changing environmental conditions. The languages by which bacteria communicate take the form of chemical signals, excreted from the cells, which can elicit profound physiological changes. Many types of signalling molecules, which regulate diverse phenotypes across distant genera, have been described. The most common signalling molecules found in Gram-negative bacteria are N-acyl derivatives of homoserine lactone (acyl HSLs). Modulation of the physiological processes controlled by acyl HSLs (and, indeed, many of the non-acyl HSL-mediated systems) occurs in a cell density- and growth phase-dependent manner. Therefore, the term 'quorum-sensing' has been coined to describe this ability of bacteria to monitor cell density before expressing a phenotype. In this paper, we review the current state of research concerning acyl HSL-mediated quorum-sensing. We also describe two non-acyl HSL-based systems utilised by the phytopathogens Ralstonia solanacearum and Xanthomonas campestris."
},
{
"pmid": "17989231",
"title": "Distribution of node characteristics in complex networks.",
"abstract": "Our enhanced ability to map the structure of various complex networks is increasingly accompanied by the possibility of independently identifying the functional characteristics of each node. Although this led to the observation that nodes with similar characteristics have a tendency to link to each other, in general we lack the tools to quantify the interplay between node properties and the structure of the underlying network. Here we show that when nodes in a network belong to two distinct classes, two independent parameters are needed to capture the detailed interplay between the network structure and node properties. We find that the network structure significantly limits the values of these parameters, requiring a phase diagram to uniquely characterize the configurations available to the system. The phase diagram shows a remarkable independence from the network size, a finding that, together with a proposed heuristic algorithm, allows us to determine its shape even for large networks. To test the usefulness of the developed methods, we apply them to biological and socioeconomic systems, finding that protein functions and mobile phone usage occupy distinct regions of the phase diagram, indicating that the proposed parameters have a strong discriminating power."
},
{
"pmid": "12176318",
"title": "Parallel quorum sensing systems converge to regulate virulence in Vibrio cholerae.",
"abstract": "The marine bacterium Vibrio harveyi possesses two quorum sensing systems (System 1 and System 2) that regulate bioluminescence. Although the Vibrio cholerae genome sequence reveals that a V. harveyi-like System 2 exists, it does not predict the existence of a V. harveyi-like System 1 or any obvious quorum sensing-controlled target genes. In this report we identify and characterize the genes encoding an additional V. cholerae autoinducer synthase and its cognate sensor. Analysis of double mutants indicates that a third as yet unidentified sensory circuit exists in V. cholerae. This quorum sensing apparatus is unusually complex, as it is composed of at least three parallel signaling channels. We show that in V. cholerae these communication systems converge to control virulence."
},
{
"pmid": "25874462",
"title": "Quadruple quorum-sensing inputs control Vibrio cholerae virulence and maintain system robustness.",
"abstract": "Bacteria use quorum sensing (QS) for cell-cell communication to carry out group behaviors. This intercellular signaling process relies on cell density-dependent production and detection of chemical signals called autoinducers (AIs). Vibrio cholerae, the causative agent of cholera, detects two AIs, CAI-1 and AI-2, with two histidine kinases, CqsS and LuxQ, respectively, to control biofilm formation and virulence factor production. At low cell density, these two signal receptors function in parallel to activate the key regulator LuxO, which is essential for virulence of this pathogen. At high cell density, binding of AIs to their respective receptors leads to deactivation of LuxO and repression of virulence factor production. However, mutants lacking CqsS and LuxQ maintain a normal LuxO activation level and remain virulent, suggesting that LuxO is activated by additional, unidentified signaling pathways. Here we show that two other histidine kinases, CqsR (formerly known as VC1831) and VpsS, act upstream in the central QS circuit of V. cholerae to activate LuxO. V. cholerae strains expressing any one of these four receptors are QS proficient and capable of colonizing animal hosts. In contrast, mutants lacking all four receptors are phenotypically identical to LuxO-defective mutants. Importantly, these four functionally redundant receptors act together to prevent premature induction of a QS response caused by signal perturbations. We suggest that the V. cholerae QS circuit is composed of quadruple sensory inputs and has evolved to be refractory to sporadic AI level perturbations."
},
{
"pmid": "11544353",
"title": "Quorum sensing in bacteria.",
"abstract": "Quorum sensing is the regulation of gene expression in response to fluctuations in cell-population density. Quorum sensing bacteria produce and release chemical signal molecules called autoinducers that increase in concentration as a function of cell density. The detection of a minimal threshold stimulatory concentration of an autoinducer leads to an alteration in gene expression. Gram-positive and Gram-negative bacteria use quorum sensing communication circuits to regulate a diverse array of physiological activities. These processes include symbiosis, virulence, competence, conjugation, antibiotic production, motility, sporulation, and biofilm formation. In general, Gram-negative bacteria use acylated homoserine lactones as autoinducers, and Gram-positive bacteria use processed oligo-peptides to communicate. Recent advances in the field indicate that cell-cell communication via autoinducers occurs both within and between bacterial species. Furthermore, there is mounting data suggesting that bacterial autoinducers elicit specific responses from host organisms. Although the nature of the chemical signals, the signal relay mechanisms, and the target genes controlled by bacterial quorum sensing systems differ, in every case the ability to communicate with one another allows bacteria to coordinate the gene expression, and therefore the behavior, of the entire community. Presumably, this process bestows upon bacteria some of the qualities of higher organisms. The evolution of quorum sensing systems in bacteria could, therefore, have been one of the early steps in the development of multicellularity."
},
{
"pmid": "16816200",
"title": "Modulation of the ComA-dependent quorum response in Bacillus subtilis by multiple Rap proteins and Phr peptides.",
"abstract": "In Bacillus subtilis, extracellular peptide signaling regulates several biological processes. Secreted Phr signaling peptides are imported into the cell and act intracellularly to antagonize the activity of regulators known as Rap proteins. B. subtilis encodes several Rap proteins and Phr peptides, and the processes regulated by many of these Rap proteins and Phr peptides are unknown. We used DNA microarrays to characterize the roles that several rap-phr signaling modules play in regulating gene expression. We found that rapK-phrK regulates the expression of a number of genes activated by the response regulator ComA. ComA activates expression of genes involved in competence development and the production of several secreted products. Two Phr peptides, PhrC and PhrF, were previously known to stimulate the activity of ComA. We assayed the roles that PhrC, PhrF, and PhrK play in regulating gene expression and found that these three peptides stimulate ComA-dependent gene expression to different levels and are all required for full expression of genes activated by ComA. The involvement of multiple Rap proteins and Phr peptides allows multiple physiological cues to be integrated into a regulatory network that modulates the timing and magnitude of the ComA response."
},
{
"pmid": "15466044",
"title": "Three parallel quorum-sensing systems regulate gene expression in Vibrio harveyi.",
"abstract": "In a process called quorum sensing, bacteria communicate using extracellular signal molecules termed autoinducers. Two parallel quorum-sensing systems have been identified in the marine bacterium Vibrio harveyi. System 1 consists of the LuxM-dependent autoinducer HAI-1 and the HAI-1 sensor, LuxN. System 2 consists of the LuxS-dependent autoinducer AI-2 and the AI-2 detector, LuxPQ. The related bacterium, Vibrio cholerae, a human pathogen, possesses System 2 (LuxS, AI-2, and LuxPQ) but does not have obvious homologues of V. harveyi System 1. Rather, System 1 of V. cholerae is made up of the CqsA-dependent autoinducer CAI-1 and a sensor called CqsS. Using a V. cholerae CAI-1 reporter strain we show that many other marine bacteria, including V. harveyi, produce CAI-1 activity. Genetic analysis of V. harveyi reveals cqsA and cqsS, and phenotypic analysis of V. harveyi cqsA and cqsS mutants shows that these functions comprise a third V. harveyi quorum-sensing system that acts in parallel to Systems 1 and 2. Together these communication systems act as a three-way coincidence detector in the regulation of a variety of genes, including those responsible for bioluminescence, type III secretion, and metalloprotease production."
},
{
"pmid": "28033323",
"title": "Transient Duplication-Dependent Divergence and Horizontal Transfer Underlie the Evolutionary Dynamics of Bacterial Cell-Cell Signaling.",
"abstract": "Evolutionary expansion of signaling pathway families often underlies the evolution of regulatory complexity. Expansion requires the acquisition of a novel homologous pathway and the diversification of pathway specificity. Acquisition can occur either vertically, by duplication, or through horizontal transfer, while divergence of specificity is thought to occur through a promiscuous protein intermediate. The way by which these mechanisms shape the evolution of rapidly diverging signaling families is unclear. Here, we examine this question using the highly diversified Rap-Phr cell-cell signaling system, which has undergone massive expansion in the genus Bacillus. To this end, genomic sequence analysis of >300 Bacilli genomes was combined with experimental analysis of the interaction of Rap receptors with Phr autoinducers and downstream targets. Rap-Phr expansion is shown to have occurred independently in multiple Bacillus lineages, with >80 different putative rap-phr alleles evolving in the Bacillius subtilis group alone. The specificity of many rap-phr alleles and the rapid gain and loss of Rap targets are experimentally demonstrated. Strikingly, both horizontal and vertical processes were shown to participate in this expansion, each with a distinct role. Horizontal gene transfer governs the acquisition of already diverged rap-phr alleles, while intralocus duplication and divergence of the phr gene create the promiscuous intermediate required for the divergence of Rap-Phr specificity. Our results suggest a novel role for transient gene duplication and divergence during evolutionary shifts in specificity."
},
{
"pmid": "26927849",
"title": "Social Evolution Selects for Redundancy in Bacterial Quorum Sensing.",
"abstract": "Quorum sensing is a process of chemical communication that bacteria use to monitor cell density and coordinate cooperative behaviors. Quorum sensing relies on extracellular signal molecules and cognate receptor pairs. While a single quorum-sensing system is sufficient to probe cell density, bacteria frequently use multiple quorum-sensing systems to regulate the same cooperative behaviors. The potential benefits of these redundant network structures are not clear. Here, we combine modeling and experimental analyses of the Bacillus subtilis and Vibrio harveyi quorum-sensing networks to show that accumulation of multiple quorum-sensing systems may be driven by a facultative cheating mechanism. We demonstrate that a strain that has acquired an additional quorum-sensing system can exploit its ancestor that possesses one fewer system, but nonetheless, resume full cooperation with its kin when it is fixed in the population. We identify the molecular network design criteria required for this advantage. Our results suggest that increased complexity in bacterial social signaling circuits can evolve without providing an adaptive advantage in a clonal population."
},
{
"pmid": "31710602",
"title": "The intragenus and interspecies quorum-sensing autoinducers exert distinct control over Vibrio cholerae biofilm formation and dispersal.",
"abstract": "Vibrio cholerae possesses multiple quorum-sensing (QS) systems that control virulence and biofilm formation among other traits. At low cell densities, when QS autoinducers are absent, V. cholerae forms biofilms. At high cell densities, when autoinducers have accumulated, biofilm formation is repressed, and dispersal occurs. Here, we focus on the roles of two well-characterized QS autoinducers that function in parallel. One autoinducer, called cholerae autoinducer-1 (CAI-1), is used to measure Vibrio abundance, and the other autoinducer, called autoinducer-2 (AI-2), is widely produced by different bacterial species and presumed to enable V. cholerae to assess the total bacterial cell density of the vicinal community. The two V. cholerae autoinducers funnel information into a shared signal relay pathway. This feature of the QS system architecture has made it difficult to understand how specific information can be extracted from each autoinducer, how the autoinducers might drive distinct output behaviors, and, in turn, how the bacteria use QS to distinguish kin from nonkin in bacterial communities. We develop a live-cell biofilm formation and dispersal assay that allows examination of the individual and combined roles of the two autoinducers in controlling V. cholerae behavior. We show that the QS system works as a coincidence detector in which both autoinducers must be present simultaneously for repression of biofilm formation to occur. Within that context, the CAI-1 QS pathway is activated when only a few V. cholerae cells are present, whereas the AI-2 pathway is activated only at much higher cell density. The consequence of this asymmetry is that exogenous sources of AI-2, but not CAI-1, contribute to satisfying the coincidence detector to repress biofilm formation and promote dispersal. We propose that V. cholerae uses CAI-1 to verify that some of its kin are present before committing to the high-cell-density QS mode, but it is, in fact, the broadly made autoinducer AI-2 that sets the pace of the V. cholerae QS program. This first report of unique roles for the different V. cholerae autoinducers suggests that detection of kin fosters a distinct outcome from detection of nonkin."
},
{
"pmid": "21807995",
"title": "Social conflict drives the evolutionary divergence of quorum sensing.",
"abstract": "In microbial \"quorum sensing\" (QS) communication systems, microbes produce and respond to a signaling molecule, enabling a cooperative response at high cell densities. Many species of bacteria show fast, intraspecific, evolutionary divergence of their QS pathway specificity--signaling molecules activate cognate receptors in the same strain but fail to activate, and sometimes inhibit, those of other strains. Despite many molecular studies, it has remained unclear how a signaling molecule and receptor can coevolve, what maintains diversity, and what drives the evolution of cross-inhibition. Here I use mathematical analysis to show that when QS controls the production of extracellular enzymes--\"public goods\"--diversification can readily evolve. Coevolution is positively selected by cycles of alternating \"cheating\" receptor mutations and \"cheating immunity\" signaling mutations. The maintenance of diversity and the evolution of cross-inhibition between strains are facilitated by facultative cheating between the competing strains. My results suggest a role for complex social strategies in the long-term evolution of QS systems. More generally, my model of QS divergence suggests a form of kin recognition where different kin types coexist in unstructured populations."
}
] |
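Several of the reference abstracts above describe the Vibrio cholerae quorum-sensing circuit as a coincidence detector: biofilm repression (and dispersal) is triggered only when both autoinducers, CAI-1 and AI-2, are present at the same time, with the CAI-1 pathway activated at low cell density and the AI-2 pathway only at much higher density. The short Python sketch below is merely an illustration of that AND-gate decision rule, assuming hypothetical threshold values; it is not a model taken from any of the cited papers.

```python
# Illustrative AND-gate ("coincidence detector") reading of the V. cholerae
# quorum-sensing logic summarised in the abstracts above: biofilm repression
# requires BOTH autoinducers to be above their activation thresholds.
# The threshold values are hypothetical and only encode the qualitative
# asymmetry described there (CAI-1 responds at low cell density, AI-2 at high).

CAI1_THRESHOLD = 1.0   # hypothetical units, low-cell-density kin signal
AI2_THRESHOLD = 10.0   # hypothetical units, high-cell-density community signal


def biofilm_repressed(cai1: float, ai2: float) -> bool:
    """Return True when the coincidence detector is satisfied, i.e. both
    autoinducer concentrations exceed their (assumed) thresholds."""
    return cai1 >= CAI1_THRESHOLD and ai2 >= AI2_THRESHOLD


if __name__ == "__main__":
    print(biofilm_repressed(cai1=2.0, ai2=3.0))    # False: kin present, little AI-2
    print(biofilm_repressed(cai1=2.0, ai2=15.0))   # True: both signals above threshold
    print(biofilm_repressed(cai1=0.2, ai2=50.0))   # False: exogenous AI-2 alone is not enough
```

A kinetic model would track autoinducer accumulation as a function of cell density; this sketch deliberately reduces the circuit to its qualitative decision rule.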
Frontiers in Psychology | null | PMC8891523 | 10.3389/fpsyg.2022.817787 | Connected Through Mediated Social Touch: “Better Than a Like on Facebook.” A Longitudinal Explorative Field Study Among Geographically Separated Romantic Couples | In recent years, there has been a significant increase in research on mediated communication via social touch. Previous studies indicated that mediated social touch (MST) can induce similar positive outcomes to interpersonal touch. However, studies investigating the user experience of MST technology predominantly involve brief experiments that are performed in well-controlled laboratory conditions. Hence, it is still unknown how MST affects the relationship and communication between physically separated partners in a romantic relationship, in a naturalistic setting and over a longer period of time. In a longitudinal explorative field study, the effects of MST on social connectedness and longing for touch among geographically separated romantic couples were investigated in a naturalistic setting. For 2 weeks, 17 couples used haptic bracelets, that were connected via the internet, to exchange mediated squeeze-like touch signals. Before and after this period, they reported their feelings of social connectedness and longing for touch through questionnaires. The results show that the use of haptic bracelets (1) enhanced social connectedness among geographically separated couples but (2) did not affect their longing for touch. Interviews conducted at the end of the study were analyzed following the thematic analysis method to generate prominent themes and patterns in using MST technology among participant couples. Two main themes were generated that captured (a) the way the bracelets fostered a positive one-to-one connection between partners and (b) the way in which participants worked around their frustrations with the bracelets. Detailed findings and limitations of this longitudinal field study are further discussed, and suggestions are made for future research. | Related WorkResearch on MST has culminated in the development of a wide range of prototype systems, such as Huggy Pajama (Teh et al., 2012), InTouch (Brave and Dahley, 1997), POKE (Park et al., 2013), Vibrobod (Dobson et al., 2001), and TaSST (Huisman et al., 2013) (for an extensive survey see Huisman, 2017). Previous research using these prototype systems shows mixed results in terms of replicating findings from unmediated social touch research (Ipakchian Askari et al., 2020b). Hence, it is currently not clear to what degree mediated touch can replicate the effects of unmediated social touch (Toet et al., 2013; van Erp and Toet, 2015). MST is typically not recognized as interpersonal touch (Ipakchian Askari et al., 2020a; Jewitt et al., 2020). It is also highly context dependent (Huisman, 2017; Ipakchian Askari et al., 2020a). Since MST can cause feelings of discomfort between strangers (Smith and MacLean, 2007), a closer (e.g., romantic) relationship may be preferred for this kind of tactile stimulation (Rantala et al., 2013; Suvilehto et al., 2015). Although currently available MST devices do not provide the emotional and contextual complexity of unmediated social touch, previous studies on MST still show some promising results. For instance, Bailenson et al. (2007) found that MST can communicate emotions to a certain degree, while others found that MST can induce increased feelings of intimacy and sympathy (Takahashi et al., 2011) and connectedness toward another person (van Erp and Toet, 2015). 
Also, a brief MST can induce prosocial behavior to the same degree as a brief unmediated touch (Haans and IJsselsteijn, 2009; Haans et al., 2014). | [
"21668111",
"31452194",
"29731417",
"33329093",
"27295638",
"26557429",
"11575511",
"27788077",
"33101154",
"28092577",
"32658811",
"26504228",
"34527270",
"11698114",
"24141714"
] | [
{
"pmid": "21668111",
"title": "Nonverbal channel use in communication of emotion: how may depend on why.",
"abstract": "This study investigated the hypothesis that different emotions are most effectively conveyed through specific, nonverbal channels of communication: body, face, and touch. Experiment 1 assessed the production of emotion displays. Participants generated nonverbal displays of 11 emotions, with and without channel restrictions. For both actual production and stated preferences, participants favored the body for embarrassment, guilt, pride, and shame; the face for anger, disgust, fear, happiness, and sadness; and touch for love and sympathy. When restricted to a single channel, participants were most confident about their communication when production was limited to the emotion's preferred channel. Experiment 2 examined the reception or identification of emotion displays. Participants viewed videos of emotions communicated in unrestricted and restricted conditions and identified the communicated emotions. Emotion identification in restricted conditions was most accurate when participants viewed emotions displayed via the emotion's preferred channel. This study provides converging evidence that some emotions are communicated predominantly through different nonverbal channels. Further analysis of these channel-emotion correspondences suggests that the social function of an emotion predicts its primary channel: The body channel promotes social-status emotions, the face channel supports survival emotions, and touch supports intimate emotions."
},
{
"pmid": "31452194",
"title": "The \"Longing for Interpersonal Touch Picture Questionnaire\": Development of a new measurement for touch perception.",
"abstract": "Touch is a crucial factor of physiological and psychological health in humans. A lack of touch in contrast is associated with adverse implications on mental health. A new \"Longing for Interpersonal Touch Picture Questionnaire (LITPQ)\" was developed and tested for its concurrent, predictive, discriminant and face validity as well as its relation to psychological distress. Six different types of touch were depicted and touch frequency and touch wish concerning different interaction partners assessed. A sample of 110 participants aged 18-56 years completed the LITPQ as well as an existing touch deprivation questionnaire and a questionnaire on mental health. Frequency and wish for touch were higher for close interaction partners than for strangers. For 72.7% of the participants, their touch wish exceeded the reported touch frequency. The LITPQ correlated moderately with the existing questionnaire for touch deprivation and was independent of relationship status or gender but positively correlated with depressiveness, anxiety and somatization. Measuring longing for touch is a very complex task considering the many aspects of subjective touch perception and confounds in the method of self-report of touch. In our view, the LITPQ provides promising insights into this matter."
},
{
"pmid": "29731417",
"title": "Social touch and human development.",
"abstract": "Social touch is a powerful force in human development, shaping social reward, attachment, cognitive, communication, and emotional regulation from infancy and throughout life. In this review, we consider the question of how social touch is defined from both bottom-up and top-down perspectives. In the former category, there is a clear role for the C-touch (CT) system, which constitutes a unique submodality that mediates affective touch and contrasts with discriminative touch. Top-down factors such as culture, personal relationships, setting, gender, and other contextual influences are also important in defining and interpreting social touch. The critical role of social touch throughout the lifespan is considered, with special attention to infancy and young childhood, a time during which social touch and its neural, behavioral, and physiological contingencies contribute to reinforcement-based learning and impact a variety of developmental trajectories. Finally, the role of social touch in an example of disordered development -autism spectrum disorder-is reviewed."
},
{
"pmid": "33329093",
"title": "Calming Effects of Touch in Human, Animal, and Robotic Interaction-Scientific State-of-the-Art and Technical Advances.",
"abstract": "Small everyday gestures such as a tap on the shoulder can affect the way humans feel and act. Touch can have a calming effect and alter the way stress is handled, thereby promoting mental and physical health. Due to current technical advances and the growing role of intelligent robots in households and healthcare, recent research also addressed the potential of robotic touch for stress reduction. In addition, touch by non-human agents such as animals or inanimate objects may have a calming effect. This conceptual article will review a selection of the most relevant studies reporting the physiological, hormonal, neural, and subjective effects of touch on stress, arousal, and negative affect. Robotic systems capable of non-social touch will be assessed together with control strategies and sensor technologies. Parallels and differences of human-to-human touch and human-to-non-human touch will be discussed. We propose that, under appropriate conditions, touch can act as (social) signal for safety, even when the interaction partner is an animal or a machine. We will also outline potential directions for future research and clinical relevance. Thereby, this review can provide a foundation for further investigations into the beneficial contribution of touch by different agents to regulate negative affect and arousal in humans."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "26557429",
"title": "Effects of mediated social touch on affective experiences and trust.",
"abstract": "This study investigated whether communication via mediated hand pressure during a remotely shared experience (watching an amusing video) can (1) enhance recovery from sadness, (2) enhance the affective quality of the experience, and (3) increase trust towards the communication partner. Thereto participants first watched a sad movie clip to elicit sadness, followed by a funny one to stimulate recovery from sadness. While watching the funny clip they signaled a hypothetical fellow participant every time they felt amused. In the experimental condition the participants responded by pressing a hand-held two-way mediated touch device (a Frebble), which also provided haptic feedback via simulated hand squeezes. In the control condition they responded by pressing a button and they received abstract visual feedback. Objective (heart rate, galvanic skin conductance, number and duration of joystick or Frebble presses) and subjective (questionnaires) data were collected to assess the emotional reactions of the participants. The subjective measurements confirmed that the sad movie successfully induced sadness while the funny movie indeed evoked more positive feelings. Although their ranking agreed with the subjective measurements, the physiological measurements confirmed this conclusion only for the funny movie. The results show that recovery from movie induced sadness, the affective experience of the amusing movie, and trust towards the communication partner did not differ between both experimental conditions. Hence, feedback via mediated hand touching did not enhance either of these factors compared to visual feedback. Further analysis of the data showed that participants scoring low on Extraversion (i.e., persons that are more introvert) or low on Touch Receptivity (i.e., persons who do not like to be touched by others) felt better understood by their communication partner when receiving mediated touch feedback instead of visual feedback, while the opposite was found for participants scoring high on these factors. The implications of these results for further research are discussed, and some suggestions for follow-up experiments are presented."
},
{
"pmid": "11575511",
"title": "Analyses of Digman's child-personality data: derivation of Big-Five factor scores from each of six samples.",
"abstract": "One of the world's richest collections of teacher descriptions of elementary-school children was obtained by John M. Digman from 1959 to 1967 in schools on two Hawaiian islands. In six phases of data collection, 88 teachers described 2,572 of their students, using one of five different sets of personality variables. The present report provides findings from new analyses of these important data, which have never before been analyzed in a comprehensive manner. When factors developed from carefully selected markers of the Big-Five factor structure were compared to those based on the total set of variables in each sample, the congruence between both types of factors was quite high. Attempts to extend the structure to 6 and 7 factors revealed no other broad factors beyond the Big Five in any of the 6 samples. These robust findings provide significant new evidence for the structure of teacher-based assessments of child personality attributes."
},
{
"pmid": "27788077",
"title": "The Virtual Midas Touch: Helping Behavior After a Mediated Social Touch.",
"abstract": "A brief touch on the upper arm increases people's altruistic behavior and willingness to comply with a request. In this paper, we investigate whether this Midas touch phenomenon would also occur under mediated conditions (i.e., touching via an arm strap equipped with electromechanical actuators). Helping behavior was more frequently endorsed in the touch, compared to the no-touch condition, but this difference was not found to be statistically significant. However, a meta-analytical comparison with published research demonstrated that the strength of the virtual Midas touch is of the same magnitude as that of the Midas touch in unmediated situations. The present experiment, thus, provides empirical evidence that touch-like qualities can be attributed to electromechanical stimulation. This is important for the field of mediated social touch of which the design rationale is based on the assumption that mediated touch by means of tactile feedback technologies is processed in ways similar to real physical contact."
},
{
"pmid": "33101154",
"title": "The Effect of COVID-19 on Loneliness in the Elderly. An Empirical Comparison of Pre-and Peri-Pandemic Loneliness in Community-Dwelling Elderly.",
"abstract": "Old-age loneliness is a global problem with many members of the scientific community suspecting increased loneliness in the elderly population during COVID-19 and the associated safety measures. Although hypothesized, a direct comparison of loneliness before and during the pandemic is hard to achieve without a survey of loneliness prior to the pandemic. This study provides a direct comparison of reported loneliness before and during the pandemic using 1:1 propensity score matching (PSM) on a pre- and a peri-pandemic sample of elderly (60+ years) individuals from Lower Austria, a county of Austria (Europe). Differences on a loneliness index computed from the short De Jong Gierveld scale were found to be significant, evidencing that loneliness in the elderly population had in fact risen slightly during the COVID-19 pandemic and its associated safety measures. Although the reported loneliness remained rather low, this result illustrated the effect of the \"new normal\" under COVID-19. As loneliness is a risk factor for physical and mental illness, this result is important in planning the future handling of the pandemic, as safety measures seem to have a negative impact on loneliness. This work confirms the anticipated increase in loneliness in the elderly population during COVID-19."
},
{
"pmid": "28092577",
"title": "Social Touch Technology: A Survey of Haptic Technology for Social Touch.",
"abstract": "This survey provides an overview of work on haptic technology for social touch. Social touch has been studied extensively in psychology and neuroscience. With the development of new technologies, it is now possible to engage in social touch at a distance or engage in social touch with artificial social agents. Social touch research has inspired research into technology mediated social touch, and this line of research has found effects similar to actual social touch. The importance of haptic stimulus qualities, multimodal cues, and contextual factors in technology mediated social touch is discussed. This survey is concluded by reflecting on the current state of research into social touch technology, and providing suggestions for future research and applications."
},
{
"pmid": "26504228",
"title": "Topography of social touching depends on emotional bonds between humans.",
"abstract": "Nonhuman primates use social touch for maintenance and reinforcement of social structures, yet the role of social touch in human bonding in different reproductive, affiliative, and kinship-based relationships remains unresolved. Here we reveal quantified, relationship-specific maps of bodily regions where social touch is allowed in a large cross-cultural dataset (N = 1,368 from Finland, France, Italy, Russia, and the United Kingdom). Participants were shown front and back silhouettes of human bodies with a word denoting one member of their social network. They were asked to color, on separate trials, the bodily regions where each individual in their social network would be allowed to touch them. Across all tested cultures, the total bodily area where touching was allowed was linearly dependent (mean r(2) = 0.54) on the emotional bond with the toucher, but independent of when that person was last encountered. Close acquaintances and family members were touched for more reasons than less familiar individuals. The bodily area others are allowed to touch thus represented, in a parametric fashion, the strength of the relationship-specific emotional bond. We propose that the spatial patterns of human social touch reflect an important mechanism supporting the maintenance of social bonds."
},
{
"pmid": "34527270",
"title": "Social touch deprivation during COVID-19: effects on psychological wellbeing and craving interpersonal touch.",
"abstract": "Social touch has positive effects on social affiliation and stress alleviation. However, its ubiquitous presence in human life does not allow the study of social touch deprivation 'in the wild'. Nevertheless, COVID-19-related restrictions such as social distancing allowed the systematic study of the degree to which social distancing affects tactile experiences and mental health. In this study, 1746 participants completed an online survey to examine intimate, friendly and professional touch experiences during COVID-19-related restrictions, their impact on mental health and the extent to which touch deprivation results in craving touch. We found that intimate touch deprivation during COVID-19-related restrictions is associated with higher anxiety and greater loneliness even though this type of touch is still the most experienced during the pandemic. Moreover, intimate touch is reported as the type of touch most craved during this period, thus being more prominent as the days practising social distancing increase. However, our results also show that the degree to which individuals crave touch during this period depends on individual differences in attachment style: the more anxiously attached, the more touch is craved; with the reverse pattern for avoidantly attached. These findings point to the important role of interpersonal and particularly intimate touch in times of distress and uncertainty."
},
{
"pmid": "11698114",
"title": "Social anxiety and response to touch: incongruence between self-evaluative and physiological reactions.",
"abstract": "Touch is an important form of social interaction, and one that can have powerful emotional consequences. Appropriate touch can be calming, while inappropriate touch can be anxiety provoking. To examine the impact of social touching, this study compared socially high-anxious (N=48) and low-anxious (N=47) women's attitudes concerning social touch, as well as their affective and physiological responses to a wrist touch by a male experimenter. Compared to low-anxious participants, high-anxious participants reported greater anxiety to a variety of social situations involving touch. Consistent with these reports, socially anxious participants reacted to the experimenter's touch with markedly greater increases in self-reported anxiety, self-consciousness, and embarrassment. Physiologically, low-anxious and high-anxious participants showed a distinct pattern of sympathetic-parasympathetic coactivation, as reflected by decreased heart rate and tidal volume, and increased respiratory sinus arrhythmia, skin conductance, systolic/diastolic blood pressure, stroke volume, and respiratory rate. Interestingly, physiological responses were comparable in low and high-anxious groups. These findings indicate that social anxiety is accompanied by heightened aversion towards social situations that involve touch, but this enhanced aversion and negative-emotion report is not reflected in differential physiological responding."
}
] |
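One of the reference abstracts above (PMID 33101154) compares pre- and peri-pandemic loneliness using 1:1 propensity score matching. The sketch below illustrates the general recipe behind such a comparison, not that study's implementation: the covariates and outcomes are synthetic, the group effect is assumed for demonstration, and greedy nearest-neighbour matching without replacement is only one of several possible matching strategies.

```python
# Generic 1:1 propensity-score matching sketch (synthetic data, illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic covariates (e.g. age, a health score) and group labels:
# group 1 = "peri-pandemic" sample, group 0 = "pre-pandemic" sample.
n = 400
X = np.column_stack([rng.normal(70, 6, n), rng.normal(0, 1, n)])
group = rng.integers(0, 2, n)
loneliness = rng.normal(1.0 + 0.3 * group, 1.0)   # outcome with an assumed group effect

# 1) Estimate propensity scores P(group == 1 | covariates).
ps = LogisticRegression().fit(X, group).predict_proba(X)[:, 1]

# 2) Greedy 1:1 nearest-neighbour matching on the propensity score, without replacement.
treated = np.flatnonzero(group == 1)
controls = list(np.flatnonzero(group == 0))
pairs = []
for t in treated:
    if not controls:
        break
    j = min(controls, key=lambda c: abs(ps[t] - ps[c]))
    pairs.append((t, j))
    controls.remove(j)

# 3) Compare the outcome across the matched samples.
t_idx = [t for t, _ in pairs]
c_idx = [c for _, c in pairs]
print("matched pairs:", len(pairs))
print("mean outcome, group 1:", loneliness[t_idx].mean())
print("mean outcome, group 0:", loneliness[c_idx].mean())
```

In practice one would also check covariate balance after matching and apply a paired test to the matched outcomes rather than comparing raw means.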
Frontiers in Robotics and AI | null | PMC8891697 | 10.3389/frobt.2022.733876 | Agree to Disagree: Subjective Fairness in Privacy-Restricted Decentralised Conflict Resolution | Fairness is commonly seen as a property of the global outcome of a system and assumes centralisation and complete knowledge. However, in real decentralised applications, agents only have partial observation capabilities. Under limited information, agents rely on communication to divulge some of their private (and unobservable) information to others. When an agent deliberates to resolve conflicts, limited knowledge may cause its perspective of a correct outcome to differ from the actual outcome of the conflict resolution. This is subjective unfairness. As human systems and societies are organised by rules and norms, hybrid human-agent and multi-agent environments of the future will require agents to resolve conflicts in a decentralised and rule-aware way. Prior work achieves such decentralised, rule-aware conflict resolution through cultures: explainable architectures that embed human regulations and norms via argumentation frameworks with verification mechanisms. However, this prior work requires agents to have full state knowledge of each other, whereas many distributed applications in practice admit partial observation capabilities, which may require agents to communicate and carefully opt to release information if privacy constraints apply. To enable decentralised, fairness-aware conflict resolution under privacy constraints, we have two contributions: 1) a novel interaction approach and 2) a formalism of the relationship between privacy and fairness. Our proposed interaction approach is an architecture for privacy-aware explainable conflict resolution where agents engage in a dialogue of hypotheses and facts. To measure the privacy-fairness relationship, we define subjective and objective fairness on both the local and global scope and formalise the impact of partial observability due to privacy in these different notions of fairness. We first study our proposed architecture and the privacy-fairness relationship in the abstract, testing different argumentation strategies on a large number of randomised cultures. We empirically demonstrate the trade-off between privacy, objective fairness, and subjective fairness and show that better strategies can mitigate the effects of privacy in distributed systems. In addition to this analysis across a broad set of randomised abstract cultures, we analyse a case study for a specific scenario: we instantiate our architecture in a multi-agent simulation of prioritised rule-aware collision avoidance with limited information disclosure. | 1.2 Related WorkStudies of privacy in multi-agent systems have gained recent popularity (Such et al., 2014; Prorok and Kumar, 2017; Torreno et al., 2017). More closely (Gao et al., 2016), also propose the use of argumentation in privacy-constrained environments, although applied to distributed constraint satisfaction problems. Their approach, however, treats privacy in an absolute way, while in our notion is softer, with information having costs, and we consider varying degrees of privacy restrictions.Contemporaneously, the burgeoning research on fairness in multi-agent systems focuses on objective global fairness, assuming complete knowledge about all agents (Bin-Obaid and Trafalis, 2018). 
Some works break the global assumption by applying fairness definitions to a neighborhood rather than an entire population (Emelianov et al., 2019) or by assuming that fairness solely depends on an individual (Nguyen and Rothe, 2016). The former studies objective fairness of a neighborhood, assuming full information of a subset of the population subset, whilst the latter assumes agents have no information outside of their own to make judgments about fairness. These works do not address fairness under partial observability, wherein agents have partial information on a subset of the population, which we call subjective local fairness.To study privacy and subjective fairness in distributed multi-agent environments, we look to previous work in explainable human-agent deconfliction. The architecture proposed in (Raymond et al., 2020) introduces a model for explainable conflict resolution in multi-agent norm-aware environments by converting rules into arguments in a culture (see Section 2.1). Disputes between agents are solved by a dialogue game, and the arguments uttered in the history of the exchange compose an explanation for the decision agreed upon by the agents.Notwithstanding its abstract nature, this architecture relies on two important assumptions: 1) that agents have complete information about themselves and other agents; and 2) that dialogues extend indefinitely until an agent is cornered out of arguments—thus being convinced to concede. In most real-life applications, however, those assumptions occur rather infrequently. Fully decentralised agents often rely on local observations and communication to compose partial representations of the world state, and indefinite dialogues are both practically and computationally restrictive. We build on the state of the art by extending this architecture to support gradual disclosure of information and privacy restrictions. | [
"17141137"
] | [
{
"pmid": "17141137",
"title": "Triage in medicine, part II: Underlying values and principles.",
"abstract": "Part I of this 2-article series reviewed the concept and history of triage and the settings in which triage is commonly practiced. We now examine the moral foundations of the practice of triage. We begin by recognizing the moral significance of triage decisions. We then note that triage systems tend to promote the values of human life, health, efficient use of resources, and fairness, and tend to disregard the values of autonomy, fidelity, and ownership of resources. We conclude with an analysis of three principles of distributive justice that have been proposed to guide triage decisions."
}
] |
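The record above describes conflict resolution through a dialogue in which agents gradually release private information, each disclosed argument carries a cost, and the arguments uttered during the exchange double as the explanation for the outcome. The Python sketch below is a loose illustration of that idea only: the agent names, arguments, privacy costs, attack relation, and the "cheapest affordable rebuttal" policy are all invented here and do not correspond to the cultures, dialogue game, or argumentation strategies evaluated in the paper.

```python
# Toy privacy-budgeted dialogue: agents alternate rebuttals until one side
# cannot, or may not, reply; the transcript serves as the explanation.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    budget: float                                    # how much private information it may release
    arguments: dict = field(default_factory=dict)    # argument -> privacy cost of uttering it
    spent: float = 0.0

    def rebuttal(self, target, attacks):
        """Cheapest affordable argument of ours that attacks `target`, or None."""
        options = [a for a, cost in self.arguments.items()
                   if target in attacks.get(a, set()) and self.spent + cost <= self.budget]
        if not options:
            return None
        best = min(options, key=lambda a: self.arguments[a])
        self.spent += self.arguments[best]
        return best


def dialogue(proponent, opponent, opening, attacks):
    """Alternate turns starting from `opening`; return (winner, transcript)."""
    transcript = [(proponent.name, opening)]          # the opening move is assumed free here
    current, speaker, listener = opening, opponent, proponent
    while True:
        reply = speaker.rebuttal(current, attacks)
        if reply is None:                             # speaker concedes
            return listener.name, transcript
        transcript.append((speaker.name, reply))
        current, speaker, listener = reply, listener, speaker


# Hypothetical attack relation: each key attacks every argument in its value set.
attacks = {"b": {"a"}, "c": {"b"}, "d": {"c"}}
p = Agent("vehicle_1", budget=3.0, arguments={"a": 0.0, "c": 2.0})
o = Agent("vehicle_2", budget=1.0, arguments={"b": 1.0, "d": 2.0})

winner, transcript = dialogue(p, o, opening="a", attacks=attacks)
print(winner)       # vehicle_1: the opponent cannot afford to utter "d"
print(transcript)   # the exchanged arguments explain the decision
```

Varying the budgets reproduces, in miniature, the trade-off discussed in the abstract: a tighter privacy budget can force an agent to concede a dispute it would have won under full disclosure, the kind of mismatch between perceived and actual outcomes discussed there.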
Frontiers in Psychiatry | null | PMC8892210 | 10.3389/fpsyt.2021.813460 | From Sound Perception to Automatic Detection of Schizophrenia: An EEG-Based Deep Learning Approach | Deep learning techniques have been applied to electroencephalogram (EEG) signals, with promising applications in the field of psychiatry. Schizophrenia is one of the most disabling neuropsychiatric disorders, often characterized by the presence of auditory hallucinations. Auditory processing impairments have been studied using EEG-derived event-related potentials and have been associated with clinical symptoms and cognitive dysfunction in schizophrenia. Due to consistent changes in the amplitude of ERP components, such as the auditory N100, some have been proposed as biomarkers of schizophrenia. In this paper, we examine altered patterns in electrical brain activity during auditory processing and their potential to discriminate schizophrenia and healthy subjects. Using deep convolutional neural networks, we propose an architecture to perform the classification based on multi-channels auditory-related EEG single-trials, recorded during a passive listening task. We analyzed the effect of the number of electrodes used, as well as the laterality and distribution of the electrical activity over the scalp. Results show that the proposed model is able to classify schizophrenia and healthy subjects with an average accuracy of 78% using only 5 midline channels (Fz, FCz, Cz, CPz, and Pz). The present study shows the potential of deep learning methods in the study of impaired auditory processing in schizophrenia with implications for diagnosis. The proposed design can provide a base model for future developments in schizophrenia research. | 1.1. Related WorkSZ classification on the basis of early auditory EEG-derived ERP components has already been attempted with classical machine learning models (20–22). Components such as the P300, MMN, or N100 were mainly elicited with auditory oddball and passive listening paradigms and used as input features in SZ recognition with random forest (RF), support vector machine (SVM), or linear discriminant analysis (LDA) classifiers. The results of these studies underscore the potential of auditory ERP components recurrently proposed as SZ biomarkers when their characteristics are used as features to discriminate patients from healthy subjects. A limitation of these approaches relates to the process underlying feature extraction and selection, which requires domain expertise. The few studies that probed the potential of deep learning in EEG-based classification of SZ (23–27) achieved the best performances with CNN-based models applied to resting-state EEG data, which is independent of cognitive or sensory processing. Despite their capacity to discriminate healthy from SZ subjects, these models do not inform about auditory processing, which is affected in SZ (4). Very recently, Aristizabal et al. (28) explored both machine and deep learning techniques to identify children at risk of SZ on the basis of EEG data collected during a passive auditory oddball task. In the classical machine learning approach, the mean amplitude was extracted in the 80–220 ms and 160–290 ms latency intervals, when ERP components indexing of sensory processing (N100 and P200, respectively) are expected to emerge. The mean values were extracted for 5 midline electrodes: Fz, FCz, Cz, CPz, and Pz. 
Using common classifiers, such as decision trees, k-nearest neighbors, and SVM, the discrimination between healthy and at-risk children based on those features was unsuccessful. As for the deep learning approach, a 2D-CNN-LSTM was proposed, composed of one 2D convolutional layer, followed by normalization and fully-connected layers. The information from the previous block was processed with a stack of two LSTM (long-short term memory) networks, whose output was transformed with sigmoid non-linearity for classification purposes. This model based on EEG single-trials achieved the best performance in at-risk children identification. For each trial, a spatio-temporal 2D signal was created using a 300 ms post-stimulus window focusing on the 5 midline electrodes. The machine learning attempt illustrates the difficulty in specifying stimulus-related signal features that allow a precise identification of SZ risk. The results of both approaches may reflect the developmental phase of the population under study, namely ongoing developmental brain maturation processes. Notwithstanding, this study demonstrates the potential of deep learning methods in subjects' discrimination as a function of psychosis risk.Evidence for altered auditory processing in SZ has fostered the investigation of the dynamics of electrical brain activity targeting the differentiation between patients and healthy subjects. Amplitude reduction of auditory evoked potentials such as the N100 have been consistently reported in the literature (14, 29). Those alterations have driven the use of machine learning methods for automatic SZ recognition. Beyond the time-consuming feature extraction, both machine learning models and ERP analysis exhibit a major limitation: the non-uniformity of the time windows and electrodes used for feature selection across studies. By contrast, deep learning methods profit from automatic pattern learning, with minimal human intervention. Although deep learning architectures based on EEG signals have been proposed for SZ classification, the learning of patterns from the electrical brain response to auditory stimuli is a scarcely investigated topic. A recent review provided a critical analysis of deep learning and classical machine learning methods to detect SZ based on EEG signals (30), highlighting the potentialities of these methods in clinical research. Notwithstanding, from this review it is also clear that more studies are necessary and that surpass the limitations of the existing ones. The current work intends to assess the potential of deep models to learn discriminatory EEG patterns in the early stages of auditory processing, which may inform about the significance of sensory changes to SZ diagnosis and prognosis. We followed good practices for the development and implementation of machine learning methods proposed in Barros et al. (30). | [
"32233682",
"26373471",
"27872259",
"26289573",
"10784469",
"25636178",
"25449710",
"18064038",
"14638286",
"29202656",
"33192632",
"32061454",
"18926573",
"21629768",
"29486863",
"28782865",
"28560151",
"27295638",
"32926393",
"32505420",
"32310808",
"21153832",
"33875158",
"9390837",
"22197447",
"19515106",
"26017442",
"33635820",
"23754836",
"32553846",
"15102499",
"20654646",
"34820480",
"32939066",
"18684772",
"19765689",
"18571375",
"30804830",
"19282472",
"33815168"
] | [
{
"pmid": "26373471",
"title": "Social cognition in schizophrenia.",
"abstract": "Individuals with schizophrenia exhibit impaired social cognition, which manifests as difficulties in identifying emotions, feeing connected to others, inferring people's thoughts and reacting emotionally to others. These social cognitive impairments interfere with social connections and are strong determinants of the degree of impaired daily functioning in such individuals. Here, we review recent findings from the fields of social cognition and social neuroscience and identify the social processes that are impaired in schizophrenia. We also consider empathy as an example of a complex social cognitive function that integrates several social processes and is impaired in schizophrenia. This information may guide interventions to improve social cognition in patients with this disorder."
},
{
"pmid": "27872259",
"title": "Hallucinations: A Systematic Review of Points of Similarity and Difference Across Diagnostic Classes.",
"abstract": "Hallucinations constitute one of the 5 symptom domains of psychotic disorders in DSM-5, suggesting diagnostic significance for that group of disorders. Although specific featural properties of hallucinations (negative voices, talking in the third person, and location in external space) are no longer highlighted in DSM, there is likely a residual assumption that hallucinations in schizophrenia can be identified based on these candidate features. We investigated whether certain featural properties of hallucinations are specifically indicative of schizophrenia by conducting a systematic review of studies showing direct comparisons of the featural and clinical characteristics of (auditory and visual) hallucinations among 2 or more population groups (one of which included schizophrenia). A total of 43 articles were reviewed, which included hallucinations in 4 major groups (nonclinical groups, drug- and alcohol-related conditions, medical and neurological conditions, and psychiatric disorders). The results showed that no single hallucination feature or characteristic uniquely indicated a diagnosis of schizophrenia, with the sole exception of an age of onset in late adolescence. Among the 21 features of hallucinations in schizophrenia considered here, 95% were shared with other psychiatric disorders, 85% with medical/neurological conditions, 66% with drugs and alcohol conditions, and 52% with the nonclinical groups. Additional differences rendered the nonclinical groups somewhat distinctive from clinical disorders. Overall, when considering hallucinations, it is inadvisable to give weight to the presence of any featural properties alone in making a schizophrenia diagnosis. It is more important to focus instead on the co-occurrence of other symptoms and the value of hallucinations as an indicator of vulnerability."
},
{
"pmid": "26289573",
"title": "Auditory dysfunction in schizophrenia: integrating clinical and basic features.",
"abstract": "Schizophrenia is a complex neuropsychiatric disorder that is associated with persistent psychosocial disability in affected individuals. Although studies of schizophrenia have traditionally focused on deficits in higher-order processes such as working memory and executive function, there is an increasing realization that, in this disorder, deficits can be found throughout the cortex and are manifest even at the level of early sensory processing. These deficits are highly amenable to translational investigation and represent potential novel targets for clinical intervention. Deficits, moreover, have been linked to specific structural abnormalities in post-mortem auditory cortex tissue from individuals with schizophrenia, providing unique insights into underlying pathophysiological mechanisms."
},
{
"pmid": "10784469",
"title": "Central auditory processing in patients with auditory hallucinations.",
"abstract": "OBJECTIVE\nData from a full assessment of auditory perception in patients with schizophrenia were used to investigate whether auditory hallucinations are associated with abnormality of central auditory processing.\n\n\nMETHOD\nThree groups of subjects participated in auditory assessments: 22 patients with psychosis and a recent history of auditory hallucinations, 16 patients with psychosis but no history of auditory hallucinations, and 22 normal subjects. Nine auditory assessments, including auditory brainstem response, monotic and dichotic speech perception tests, and nonspeech perceptual tests, were performed. Statistical analyses for group differences were performed using analysis of variance and Kruskal-Wallis tests. The results of individual patients with test scores in the severely abnormal range (more than three standard deviations from the mean for the normal subjects) were examined for patterns that suggested sites of dysfunction in the central auditory system.\n\n\nRESULTS\nThe results showed significant individual variability among the subjects in both patient groups. There were no group differences on tests that are sensitive to low brainstem function. Both patient groups performed poorly in tests that are sensitive to cortical or high brainstem function, and hallucinating patients differed from nonhallucinating patients in scores on tests of filtered speech perception and response bias patterns on dichotic speech tests. Six patients in the hallucinating group had scores in the severely abnormal range on more than one test.\n\n\nCONCLUSIONS\nHallucinations may be associated with auditory dysfunction in the right hemisphere or in the interhemispheric pathways. However, comparison of results for the patient groups suggests that the deficits seen in hallucinating patients may represent a greater degree of the same types of deficits seen in nonhallucinating patients."
},
{
"pmid": "25636178",
"title": "Forecasting psychosis by event-related potentials-systematic review and specific meta-analysis.",
"abstract": "BACKGROUND\nPrediction and prevention of psychosis have become major research topics. Clinical approaches warrant objective biological parameters to enhance validity in prediction of psychosis onset. In this regard, event-related potentials (ERPs) have been identified as promising tools for improving psychosis prediction.\n\n\nMETHODS\nHerein, the focus is on sensory gating, mismatch negativity (MMN) and P300, thereby discussing which parameters allow for a timely and valid detection of future converters to psychosis. In a first step, we systematically reviewed the studies that resulted from a search of the MEDLINE database. In a second step, we performed a meta-analysis of those investigations reporting transitions that statistically compared ERPs in converting versus nonconverting subjects.\n\n\nRESULTS\nSensory gating, MMN, and P300 have been demonstrated to be impaired in subjects clinically at risk of developing a psychotic disorder. In the meta-analysis, duration MMN achieved the highest effect size measures.\n\n\nCONCLUSIONS\nIn summary, MMN studies have produced the most convincing results until now, including independent replication of the predictive validity. However, a synopsis of the literature revealed a relative paucity of ERP studies addressing the psychosis risk state. Considering the high clinical relevance of valid psychosis prediction, future research should question for the most informative paradigms and should allow for meta-analytic evaluation with regard to specificity and sensitivity of the most appropriate parameters."
},
{
"pmid": "25449710",
"title": "Validation of mismatch negativity and P3a for use in multi-site studies of schizophrenia: characterization of demographic, clinical, cognitive, and functional correlates in COGS-2.",
"abstract": "Mismatch negativity (MMN) and P3a are auditory event-related potential (ERP) components that show robust deficits in schizophrenia (SZ) patients and exhibit qualities of endophenotypes, including substantial heritability, test-retest reliability, and trait-like stability. These measures also fulfill criteria for use as cognition and function-linked biomarkers in outcome studies, but have not yet been validated for use in large-scale multi-site clinical studies. This study tested the feasibility of adding MMN and P3a to the ongoing Consortium on the Genetics of Schizophrenia (COGS) study. The extent to which demographic, clinical, cognitive, and functional characteristics contribute to variability in MMN and P3a amplitudes was also examined. Participants (HCS n=824, SZ n=966) underwent testing at 5 geographically distributed COGS laboratories. Valid ERP recordings were obtained from 91% of HCS and 91% of SZ patients. Highly significant MMN (d=0.96) and P3a (d=0.93) amplitude reductions were observed in SZ patients, comparable in magnitude to those observed in single-lab studies with no appreciable differences across laboratories. Demographic characteristics accounted for 26% and 18% of the variance in MMN and P3a amplitudes, respectively. Significant relationships were observed among demographically-adjusted MMN and P3a measures and medication status as well as several clinical, cognitive, and functional characteristics of the SZ patients. This study demonstrates that MMN and P3a ERP biomarkers can be feasibly used in multi-site clinical studies. As with many clinical tests of brain function, demographic factors contribute to MMN and P3a amplitudes and should be carefully considered in future biomarker-informed clinical studies."
},
{
"pmid": "18064038",
"title": "Neurophysiological biomarkers for drug development in schizophrenia.",
"abstract": "Schizophrenia represents a pervasive deficit in brain function, leading to hallucinations and delusions, social withdrawal and a decline in cognitive performance. As the underlying genetic and neuronal abnormalities in schizophrenia are largely unknown, it is challenging to measure the severity of its symptoms objectively, or to design and evaluate psychotherapeutic interventions. Recent advances in neurophysiological techniques provide new opportunities to measure abnormal brain functions in patients with schizophrenia and to compare these with drug-induced alterations. Moreover, many of these neurophysiological processes are phylogenetically conserved and can be modelled in preclinical studies, offering unique opportunities for use as translational biomarkers in schizophrenia drug discovery."
},
{
"pmid": "14638286",
"title": "Interpreting abnormality: an EEG and MEG study of P50 and the auditory paired-stimulus paradigm.",
"abstract": "Interpretation of neurophysiological differences between control and patient groups on the basis of scalp-recorded event-related brain potentials (ERPs), although common and promising, is often complicated in the absence of information on the distinct neural generators contributing to the ERP, particularly information regarding individual differences in the generators. For example, while sensory gating differences frequently observed in patients with schizophrenia in the P50 paired-click gating paradigm are typically interpreted as reflecting group differences in generator source strength, differences in the latency and/or orientation of P50 generators may also account for observed group differences. The present study examined how variability in source strength, amplitude, or orientation affects the P50 component of the scalp-recorded ERP. In Experiment 1, simulations examined the effect of changes in source strength, orientation, or latency in superior temporal gyrus (STG) dipoles on P50 recorded at Cz. In Experiment 2, within- and between-subject variability in left and right M50 STG dipole source strength, latency, and orientation was examined in 19 subjects. Given the frequently reported differences in left and right STG anatomy and function, substantial inter-subject and inter-hemispheric variability in these parameters were expected, with important consequences for how P50 at Cz reflects brain activity from relevant generators. In Experiment 1, simulated P50 responses were computed from hypothetical left- and right-hemisphere STG generators, with latency, amplitude, and orientation of the generators varied systematically. In Experiment 2, electroencephalographic (EEG) and magnetoencephalographic (MEG) data were collected from 19 subjects. Generators were modeled from the MEG data to assess and illustrate the generator variability evaluated parametrically in Experiment 1. In Experiment 1, realistic amounts of variability in generator latency, amplitude, and orientation produced ERPs in which P50 scoring was compromised and interpretation complicated. In Experiment 2, significant within and between subject variability was observed in the left and right hemisphere STG M50 sources. Given the variability in M50 source strength, orientation, and amplitude observed here in nonpatient subjects, future studies should examine whether group differences in P50 gating ratios typically observed for patient vs. control groups are specific to a particular hemisphere, as well as whether the group differences are due to differences in dipole source strength, latency, orientation, or a combination of these parameters. Present analyses focused on P50/M50 merely as an example of the broader need to evaluate scalp phenomena in light of underlying generators. The development and widespread use of EEG/MEG source localization methods will greatly enhance the interpretation and value of EEG/MEG data."
},
{
"pmid": "29202656",
"title": "Clinical and Cognitive Significance of Auditory Sensory Processing Deficits in Schizophrenia.",
"abstract": "OBJECTIVE\nAlthough patients with schizophrenia exhibit impaired suppression of the P50 event-related brain potential in response to the second of two identical auditory stimuli during a paired-stimulus paradigm, uncertainty remains over whether this deficit in inhibitory gating of auditory sensory processes has relevance for patients' clinical symptoms or cognitive performance. The authors examined associations between P50 suppression deficits and several core features of schizophrenia to address this gap.\n\n\nMETHOD\nP50 was recorded from 52 patients with schizophrenia and 41 healthy comparison subjects during a standard auditory paired-stimulus task. Clinical symptoms were assessed with the Scale for the Assessment of Positive Symptoms and the Scale for the Assessment of Negative Symptoms. The MATRICS Consensus Cognitive Battery was utilized to measure cognitive performance in a subsample of 39 patients. Correlation and regression analyses were conducted to examine P50 suppression in relation to clinical symptom and cognitive performance measures.\n\n\nRESULTS\nSchizophrenia patients demonstrated a deficit in P50 suppression when compared with healthy subjects, replicating prior research. Within the patient sample, impaired P50 suppression covaried reliably with greater difficulties in attention, poorer working memory, and reduced processing speed.\n\n\nCONCLUSIONS\nImpaired suppression of auditory stimuli was associated with core pathological features of schizophrenia, increasing confidence that P50 inhibitory processing can inform the development of interventions that target cognitive impairments in this chronic and debilitating mental illness."
},
{
"pmid": "33192632",
"title": "P50, N100, and P200 Auditory Sensory Gating Deficits in Schizophrenia Patients.",
"abstract": "BACKGROUND\nSensory gating describes neurological processes of filtering out redundant or unnecessary stimuli during information processing, and sensory gating deficits may contribute to the symptoms of schizophrenia. Among the three components of auditory event-related potentials reflecting sensory gating, P50 implies pre-attentional filtering of sensory information and N100/P200 reflects attention triggering and allocation processes. Although diminished P50 gating has been extensively documented in patients with schizophrenia, previous studies on N100 were inconclusive, and P200 has been rarely examined. This study aimed to investigate whether patients with schizophrenia have P50, N100, and P200 gating deficits compared with control subjects.\n\n\nMETHODS\nControl subjects and clinically stable schizophrenia patients were recruited. The mid-latency auditory evoked responses, comprising P50, N100, and P200, were measured using the auditory-paired click paradigm without manipulation of attention. Sensory gating parameters included S1 amplitude, S2 amplitude, amplitude difference (S1-S2), and gating ratio (S2/S1). We also evaluated schizophrenia patients with PANSS to be correlated with sensory gating indices.\n\n\nRESULTS\nOne hundred four patients and 102 control subjects were examined. Compared to the control group, schizophrenia patients had significant sensory gating deficits in P50, N100, and P200, reflected by larger gating ratios and smaller amplitude differences. Further analysis revealed that the S2 amplitude of P50 was larger, while the S1 amplitude of N100/P200 was smaller, in schizophrenia patients than in the controls. We found no correlations between sensory gating indices and schizophrenia positive or negative symptom clusters. However, we found a negative correlation between the P200 S2 amplitude and Bell's emotional discomfort factor/Wallwork's depressed factor.\n\n\nCONCLUSION\nTill date, this study has the largest sample size to analyze P50, N100, and P200 collectively by adopting the passive auditory paired-click paradigm without distractors. With covariates controlled for possible confounds, such as age, education, smoking amount and retained pairs, we found that schizophrenia patients had significant sensory gating deficits in P50-N100-P200. The schizophrenia patients had demonstrated a unique pattern of sensory gating deficits, including repetition suppression deficits in P50 and stimulus registration deficits in N100/200. These results suggest that sensory gating is a pervasive cognitive abnormality in schizophrenia patients that is not limited to the pre-attentive phase of information processing. Since P200 exhibited a large effect size and did not require additional time during recruitment, future studies of P50-N100-P200 collectively are highly recommended."
},
{
"pmid": "32061454",
"title": "P50 inhibitory sensory gating in schizophrenia: analysis of recent studies.",
"abstract": "INTRODUCTION\nInhibitory sensory gating of the P50 cerebral evoked potential to paired auditory stimuli (S1, S2) is a widely used paradigm for the study of schizophrenia and related conditions. Its use to measure genetic, treatment, and developmental effects requires a metric with more stable properties than the simple ratio of the paired responses.\n\n\nMETHODS\nThis study assessed the ratio P50S2μV/P50S1μV and P50S2μV co-varied for P50S1μV in all 27 independent published studies that compared schizophrenia patients with healthy controls from 2000 to 2019. The largest study from each research group was selected. The Colorado research group's studies were excluded to eliminate bias from the first report of the phenomenon.\n\n\nRESULTS\nAcross the 27 studies encompassing 1179 schizophrenia patients and 1091 healthy controls, both P50S2μV co-varied for P50S1μV and P50S2μV/P50S1μV significantly separated the patients from the controls (both P < 0.0001). Effect size for P50S2μV co-varied for P50S1μV is d' = 1.23. The normal distribution of P50S2μV co-varied for P50S1μV detected influences of maternal inflammation and effects on behavior in a recent developmental study, an emerging use for the P50 inhibitory gating measure. P50S2μV/P50S1μV was not normally distributed. Results from two multi-site NIMH genetics collaborations also support the use of P50S2μV as a biomarker.\n\n\nCONCLUSION\nBoth methods detect an abnormality of cerebral inhibition in schizophrenia with high significance across multiple independent laboratories. The normal distribution of P50S2μV co-varied for P50S1μV makes it more suitable for studies of genetic, treatment, and other influences on the development and expression of inhibitory deficits in schizophrenia."
},
{
"pmid": "18926573",
"title": "Reduced auditory evoked potential component N100 in schizophrenia--a critical review.",
"abstract": "The role of a reduced N100 (or N1) component of the auditory event related potential as a potential trait marker of schizophrenia is critically discussed in this review. We suggest that the extent of the N100 amplitude reduction in schizophrenia depends on experimental and subject factors, as well as on clinical variables: N100 is more consistently reduced in studies using interstimulus intervals (ISIs) >1 s than in studies using shorter ISIs. An increase of the N100 amplitude by allocation of attention is often lacking in schizophrenia patients. A reduction of the N100 amplitude is nevertheless also observed when such an allocation is not required, proposing that both endogenous and exogenous constituents of the N100 are affected by schizophrenia. N100 is more consistently reduced in medicated than unmedicated patients, but a reduction of the N100 amplitude as a consequence of antipsychotic medication was shown in only two of seven studies. In line with that, the association between the N100 reduction and degree of psychopathology of patients appears to be weak overall. A reduced N100 amplitude is found in first degree relatives of schizophrenia patients, but the risk of developing schizophrenia is not reflected in the N100 amplitude reduction."
},
{
"pmid": "21629768",
"title": "The neurophysiology of auditory hallucinations - a historical and contemporary review.",
"abstract": "Electroencephalography and magnetoencephalography are two techniques that distinguish themselves from other neuroimaging methodologies through their ability to directly measure brain-related activity and their high temporal resolution. A large body of research has applied these techniques to study auditory hallucinations. Across a variety of approaches, the left superior temporal cortex is consistently reported to be involved in this symptom. Moreover, there is increasing evidence that a failure in corollary discharge, i.e., a neural signal originating in frontal speech areas that indicates to sensory areas that forthcoming thought is self-generated, may underlie the experience of auditory hallucinations."
},
{
"pmid": "29486863",
"title": "Machine Learning for Precision Psychiatry: Opportunities and Challenges.",
"abstract": "The nature of mental illness remains a conundrum. Traditional disease categories are increasingly suspected to misrepresent the causes underlying mental disturbance. Yet psychiatrists and investigators now have an unprecedented opportunity to benefit from complex patterns in brain, behavior, and genes using methods from machine learning (e.g., support vector machines, modern neural-network algorithms, cross-validation procedures). Combining these analysis techniques with a wealth of data from consortia and repositories has the potential to advance a biologically grounded redefinition of major psychiatric disorders. Increasing evidence suggests that data-derived subgroups of psychiatric patients can better predict treatment outcomes than DSM/ICD diagnoses can. In a new era of evidence-based psychiatry tailored to single patients, objectively measurable endophenotypes could allow for early disease detection, individualized treatment selection, and dosage adjustment to reduce the burden of disease. This primer aims to introduce clinicians and researchers to the opportunities and challenges in bringing machine intelligence into psychiatric practice."
},
{
"pmid": "28782865",
"title": "Deep learning with convolutional neural networks for EEG decoding and visualization.",
"abstract": "Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc."
},
{
"pmid": "28560151",
"title": "Auditory prediction errors as individual biomarkers of schizophrenia.",
"abstract": "Schizophrenia is a complex psychiatric disorder, typically diagnosed through symptomatic evidence collected through patient interview. We aim to develop an objective biologically-based computational tool which aids diagnosis and relies on accessible imaging technologies such as electroencephalography (EEG). To achieve this, we used machine learning techniques and a combination of paradigms designed to elicit prediction errors or Mismatch Negativity (MMN) responses. MMN, an EEG component elicited by unpredictable changes in sequences of auditory stimuli, has previously been shown to be reduced in people with schizophrenia and this is arguably one of the most reproducible neurophysiological markers of schizophrenia. EEG data were acquired from 21 patients with schizophrenia and 22 healthy controls whilst they listened to three auditory oddball paradigms comprising sequences of tones which deviated in 10% of trials from regularly occurring standard tones. Deviant tones shared the same properties as standard tones, except for one physical aspect: 1) duration - the deviant stimulus was twice the duration of the standard; 2) monaural gap - deviants had a silent interval omitted from the standard, or 3) inter-aural timing difference, which caused the deviant location to be perceived as 90° away from the standards. We used multivariate pattern analysis, a machine learning technique implemented in the Pattern Recognition for Neuroimaging Toolbox (PRoNTo) to classify images generated through statistical parametric mapping (SPM) of spatiotemporal EEG data, i.e. event-related potentials measured on the two-dimensional surface of the scalp over time. Using support vector machine (SVM) and Gaussian processes classifiers (GPC), we were able classify individual patients and controls with balanced accuracies of up to 80.48% (p-values = 0.0326, FDR corrected) and an ROC analysis yielding an AUC of 0.87. Crucially, a GP regression revealed that MMN predicted global assessment of functioning (GAF) scores (correlation = 0.73, R2 = 0.53, p = 0.0006)."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "32926393",
"title": "Transfer learning with deep convolutional neural network for automated detection of schizophrenia from EEG signals.",
"abstract": "Schizophrenia (SZ) is a severe disorder of the human brain which disturbs behavioral characteristics such as interruption in thinking, memory, perception, speech and other living activities. If the patient suffering from SZ is not diagnosed and treated in the early stages, damage to human behavioral abilities in its later stages could become more severe. Therefore, early discovery of SZ may help to cure or limit the effects. Electroencephalogram (EEG) is prominently used to study brain diseases such as SZ due to having high temporal resolution information, and being a noninvasive and inexpensive method. This paper introduces an automatic methodology based on transfer learning with deep convolutional neural networks (CNNs) for the diagnosis of SZ patients from healthy controls. First, EEG signals are converted into images by applying a time-frequency approach called continuous wavelet transform (CWT) method. Then, the images of EEG signals are applied to the four popular pre-trained CNNs: AlexNet, ResNet-18, VGG-19 and Inception-v3. The output of convolutional and pooling layers of these models are used as deep features and are fed into the support vector machine (SVM) classifier. We have tuned the parameters of SVM to classify SZ patients and healthy subjects. The efficiency of the proposed method is evaluated on EEG signals from 14 healthy subjects and 14 SZ patients. The experiments showed that the combination of frontal, central, parietal, and occipital regions applied to the ResNet-18-SVM achieved best results with accuracy, sensitivity and specificity of 98.60% ± 2.29, 99.65% ± 2.35 and 96.92% ± 2.25, respectively. Therefore, the proposed method as a diagnostic tool can help clinicians in detection of the SZ patients for early diagnosis and treatment."
},
{
"pmid": "32505420",
"title": "On the use of pairwise distance learning for brain signal classification with limited observations.",
"abstract": "The increasing access to brain signal data using electroencephalography creates new opportunities to study electrophysiological brain activity and perform ambulatory diagnoses of neurological disorders. This work proposes a pairwise distance learning approach for schizophrenia classification relying on the spectral properties of the signal. To be able to handle clinical trials with a limited number of observations (i.e. case and/or control individuals), we propose a Siamese neural network architecture to learn a discriminative feature space from pairwise combinations of observations per channel. In this way, the multivariate order of the signal is used as a form of data augmentation, further supporting the network generalization ability. Convolutional layers with parameters learned under a cosine contrastive loss are proposed to adequately explore spectral images derived from the brain signal. The proposed approach for schizophrenia diagnostic was tested on reference clinical trial data under resting-state protocol, achieving 0.95 ± 0.05 accuracy, 0.98 ± 0.02 sensitivity and 0.92 ± 0.07 specificity. Results show that the features extracted using the proposed neural network are remarkably superior than baselines to diagnose schizophrenia (+20pp in accuracy and sensitivity), suggesting the existence of non-trivial electrophysiological brain patterns able to capture discriminative neuroplasticity profiles among individuals. The code is available on Github: https://github.com/DCalhas/siamese_schizophrenia_eeg."
},
{
"pmid": "32310808",
"title": "Identification of Children at Risk of Schizophrenia via Deep Learning and EEG Responses.",
"abstract": "The prospective identification of children likely to develop schizophrenia is a vital tool to support early interventions that can mitigate the risk of progression to clinical psychosis. Electroencephalographic (EEG) patterns from brain activity and deep learning techniques are valuable resources in achieving this identification. We propose automated techniques that can process raw EEG waveforms to identify children who may have an increased risk of schizophrenia compared to typically developing children. We also analyse abnormal features that remain during developmental follow-up over a period of ∼ 4 years in children with a vulnerability to schizophrenia initially assessed when aged 9 to 12 years. EEG data from participants were captured during the recording of a passive auditory oddball paradigm. We undertake a holistic study to identify brain abnormalities, first by exploring traditional machine learning algorithms using classification methods applied to hand-engineered features (event-related potential components). Then, we compare the performance of these methods with end-to-end deep learning techniques applied to raw data. We demonstrate via average cross-validation performance measures that recurrent deep convolutional neural networks can outperform traditional machine learning methods for sequence modeling. We illustrate the intuitive salient information of the model with the location of the most relevant attributes of a post-stimulus window. This baseline identification system in the area of mental illness supports the evidence of developmental and disease effects in a pre-prodromal phase of psychosis. These results reinforce the benefits of deep learning to support psychiatric classification and neuroscientific research more broadly."
},
{
"pmid": "21153832",
"title": "The N1 auditory evoked potential component as an endophenotype for schizophrenia: high-density electrical mapping in clinically unaffected first-degree relatives, first-episode, and chronic schizophrenia patients.",
"abstract": "The N1 component of the auditory evoked potential (AEP) is a robust and easily recorded metric of auditory sensory-perceptual processing. In patients with schizophrenia, a diminution in the amplitude of this component is a near-ubiquitous finding. A pair of recent studies has also shown this N1 deficit in first-degree relatives of schizophrenia probands, suggesting that the deficit may be linked to the underlying genetic risk of the disease rather than to the disease state itself. However, in both these studies, a significant proportion of the relatives had other psychiatric conditions. As such, although the N1 deficit represents an intriguing candidate endophenotype for schizophrenia, it remains to be shown whether it is present in a group of clinically unaffected first-degree relatives. In addition to testing first-degree relatives, we also sought to replicate the N1 deficit in a group of first-episode patients and in a group of chronic schizophrenia probands. Subject groups consisted of 35 patients with schizophrenia, 30 unaffected first-degree relatives, 13 first-episode patients, and 22 healthy controls. Subjects sat in a dimly lit room and listened to a series of simple 1,000-Hz tones, indicating with a button press whenever they heard a deviant tone (1,500 Hz; 17% probability), while the AEP was recorded from 72 scalp electrodes. Both chronic and first-episode patients showed clear N1 amplitude decrements relative to healthy control subjects. Crucially, unaffected first-degree relatives also showed a clear N1 deficit. This study provides further support for the proposal that the auditory N1 deficit in schizophrenia is linked to the underlying genetic risk of developing this disorder. In light of recent studies, these results point to the N1 deficit as an endophenotypic marker for schizophrenia. The potential future utility of this metric as one element of a multivariate endophenotype is discussed."
},
{
"pmid": "33875158",
"title": "Advanced EEG-based learning approaches to predict schizophrenia: Promises and pitfalls.",
"abstract": "The complexity and heterogeneity of schizophrenia symptoms challenge an objective diagnosis, which is typically based on behavioral and clinical manifestations. Moreover, the boundaries of schizophrenia are not precisely demarcated from other nosologic categories, such as bipolar disorder. The early detection of schizophrenia can lead to a more effective treatment, improving patients' quality of life. Over the last decades, hundreds of studies aimed at specifying the neurobiological mechanisms that underpin clinical manifestations of schizophrenia, using techniques such as electroencephalography (EEG). Changes in event-related potentials of the EEG have been associated with sensory and cognitive deficits and proposed as biomarkers of schizophrenia. Besides contributing to a more effective diagnosis, biomarkers can be crucial to schizophrenia onset prediction and prognosis. However, any proposed biomarker requires substantial clinical research to prove its validity and cost-effectiveness. Fueled by developments in computational neuroscience, automatic classification of schizophrenia at different stages (prodromal, first episode, chronic) has been attempted, using brain imaging pattern recognition methods to capture differences in functional brain activity. Advanced learning techniques have been studied for this purpose, with promising results. This review provides an overview of recent machine learning-based methods for schizophrenia classification using EEG data, discussing their potentialities and limitations. This review is intended to serve as a starting point for future developments of effective EEG-based models that might predict the onset of schizophrenia, identify subjects at high-risk of psychosis conversion or differentiate schizophrenia from other disorders, promoting more effective early interventions."
},
{
"pmid": "9390837",
"title": "The N1 response and its applications.",
"abstract": "Some properties and applications of the N1-P2 complex (100-200 ms latency) are reviewed. N1-P2 is currently the auditory-evoked potential (AEP) of choice for estimating the pure-tone audiogram in certain subjects for whom a frequency-specific, non-behavioural measure is required. It is accurate in passively cooperative and alert older children and adults. Although generally underutilized, it is an excellent tool for assessment of functional hearing loss, and in medicolegal and industrial injury compensation claimants. Successful use of N1-P2 requires substantial tester training and skill, as well as carefully designed and efficient measurement protocols. N1-P2 reflects conscious detection of any discrete change in any subjective dimension of the auditory environment. In principle, it could be used to measure almost any threshold of discriminable change, such as in pitch, loudness, quality and source location. It is established as a physiologic correlate of phenomena such as the masking level difference. Thus, N1-P2 may have many applications as an 'objective' proxy for psychoacoustic measures that may be impractical in clinical subjects. Advances in dipole source localization and in auditory-evoked magnetic fields (AEMFs) have clarified the multiple, cortical origins of N1 and P2. These potentials are promising tools for the neurophysiologic characterization of many disorders of central auditory processing and of speech and language development. They also may be useful in direct 'functional imaging' of specific brain regions. A wide variety of potential research and clinical applications of N1 and P2, and considerable value as part of an integrated, goal-directed AEP/AEMF measurement scheme, have yet to be fully realized."
},
{
"pmid": "22197447",
"title": "The auditory P200 is both increased and reduced in schizophrenia? A meta-analytic dissociation of the effect for standard and target stimuli in the oddball task.",
"abstract": "OBJECTIVE\nConflicting reports of P200 amplitude and latency in schizophrenia have suggested that this component is increased, reduced or does not differ from healthy subjects. A systematic review and meta-analysis were undertaken to accurately describe P200 deficits in auditory oddball tasks in schizophrenia.\n\n\nMETHODS\nA systematic search identified 20 studies which were meta-analyzed. Effect size (ES) estimates were obtained: P200 amplitude and latency for target and standard tones at midline electrodes.\n\n\nRESULTS\nThe ES obtained for amplitude (Cz) for standard and target stimuli indicate significant effects in opposite directions: standard stimuli elicit smaller P200 in patients (d = -0.36; 95% CI [-0.26, -0.08]); target stimuli elicit larger P200 in patients (d = 0.48; 95% CI [0.16, 0.82]). A similar effect occurs for latency at Cz, which is shorter for standards (d = -0.32; 95% CI [-0.54, -0.10]) and longer for targets (d = 0.42; 95% CI [0.23, 0.62]). Meta-regression analyses revealed that samples with more males show larger ES for amplitude of target stimuli, while the amount of medication was negatively associated with the ES for the latency of standards.\n\n\nCONCLUSIONS\nThe results obtained suggest that claims of reduced or augmented P200 in schizophrenia based on the sole examination of standard or target stimuli fail to consider the stimulus effect.\n\n\nSIGNIFICANCE\nQuantification of effects for standard and target stimuli is a required first step to understand the nature of P200 deficits in schizophrenia."
},
{
"pmid": "19515106",
"title": "P50, N100, and P200 sensory gating: relationships with behavioral inhibition, attention, and working memory.",
"abstract": "P50, N100, and P200 auditory sensory gating could reflect mechanisms involved in protecting higher-order cognitive functions, suggesting relationships between sensory gating and cognition. This hypothesis was tested in 56 healthy adults who were administered the paired-click paradigm and two adaptations of the continuous performance test (Immediate/Delayed Memory Task, IMT/DMT). Stronger P50 gating correlated with fewer commission errors and prolonged reaction times on the DMT. Stronger N100 and P200 gating correlated with better discriminability on the DMT. Finally, prolonged P200 latency related to better discriminability on the IMT. These findings suggest that P50, N100, and P200 gating could be involved in protecting cognition by affecting response bias, behavioral inhibition, working memory, or attention."
},
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "33635820",
"title": "When Does Diversity Help Generalization in Classification Ensembles?",
"abstract": "Ensembles, as a widely used and effective technique in the machine learning community, succeed within a key element-\"diversity.\" The relationship between diversity and generalization, unfortunately, is not entirely understood and remains an open research issue. To reveal the effect of diversity on the generalization of classification ensembles, we investigate three issues on diversity, that is, the measurement of diversity, the relationship between the proposed diversity and the generalization error, and the utilization of this relationship for ensemble pruning. In the diversity measurement, we measure diversity by error decomposition inspired by regression ensembles, which decompose the error of classification ensembles into accuracy and diversity. Then, we formulate the relationship between the measured diversity and ensemble performance through the theorem of margin and generalization and observe that the generalization error is reduced effectively only when the measured diversity is increased in a few specific ranges, while in other ranges, larger diversity is less beneficial to increasing the generalization of an ensemble. Besides, we propose two pruning methods based on diversity management to utilize this relationship, which could increase diversity appropriately and shrink the size of the ensemble without much-decreasing performance. The empirical results validate the reasonableness of the proposed relationship between diversity and ensemble generalization error and the effectiveness of the proposed pruning methods."
},
{
"pmid": "23754836",
"title": "Did I do that? Abnormal predictive processes in schizophrenia when button pressing to deliver a tone.",
"abstract": "Motor actions are preceded by an efference copy of the motor command, resulting in a corollary discharge of the expected sensation in sensory cortex. These mechanisms allow animals to predict sensations, suppress responses to self-generated sensations, and thereby process sensations efficiently and economically. During talking, patients with schizophrenia show less evidence of pretalking activity and less suppression of the speech sound, consistent with dysfunction of efference copy and corollary discharge, respectively. We asked if patterns seen in talking would generalize to pressing a button to hear a tone, a paradigm translatable to less vocal animals. In 26 patients [23 schizophrenia, 3 schizoaffective (SZ)] and 22 healthy controls (HC), suppression of the N1 component of the auditory event-related potential was estimated by comparing N1 to tones delivered by button presses and N1 to those tones played back. The lateralized readiness potential (LRP) associated with the motor plan preceding presses to deliver tones was estimated by comparing right and left hemispheres' neural activity. The relationship between N1 suppression and LRP amplitude was assessed. LRP preceding button presses to deliver tones was larger in HC than SZ, as was N1 suppression. LRP amplitude and N1 suppression were correlated in both groups, suggesting stronger efference copies are associated with stronger corollary discharges. SZ have reduced N1 suppression, reflecting corollary discharge action, and smaller LRPs preceding button presses to deliver tones, reflecting the efference copy of the motor plan. Effects seen during vocalization largely extend to other motor acts more translatable to lab animals."
},
{
"pmid": "32553846",
"title": "Changes in motor preparation affect the sensory consequences of voice production in voice hearers.",
"abstract": "BACKGROUND\nAuditory verbal hallucinations (AVH) are a cardinal symptom of psychosis but are also present in 6-13% of the general population. Alterations in sensory feedback processing are a likely cause of AVH, indicative of changes in the forward model. However, it is unknown whether such alterations are related to anomalies in forming an efference copy during action preparation, selective for voices, and similar along the psychosis continuum. By directly comparing psychotic and nonclinical voice hearers (NCVH), the current study specifies whether and how AVH proneness modulates both the efference copy (Readiness Potential) and sensory feedback processing for voices and tones (N1, P2) with event-related brain potentials (ERPs).\n\n\nMETHODS\nControls with low AVH proneness (n = 15), NCVH (n = 16) and first-episode psychotic patients with AVH (n = 16) engaged in a button-press task with two types of stimuli: self-initiated and externally generated self-voices or tones during EEG recordings.\n\n\nRESULTS\nGroups differed in sensory feedback processing of expected and actual feedback: NCVH displayed an atypically enhanced N1 to self-initiated voices, while N1 suppression was reduced in psychotic patients. P2 suppression for voices and tones was strongest in NCVH, but absent for voices in patients. Motor activity preceding the button press was reduced in NCVH and patients, specifically for sensory feedback to self-voice in NCVH.\n\n\nCONCLUSIONS\nThese findings suggest that selective changes in sensory feedback to voice are core to AVH. These changes already show in preparatory motor activity, potentially reflecting changes in forming an efference copy. The results provide partial support for continuum models of psychosis."
},
{
"pmid": "15102499",
"title": "EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis.",
"abstract": "We have developed a toolbox and graphic user interface, EEGLAB, running under the crossplatform MATLAB environment (The Mathworks, Inc.) for processing collections of single-trial and/or averaged EEG data of any number of channels. Available functions include EEG data, channel and event information importing, data visualization (scrolling, scalp map and dipole model plotting, plus multi-trial ERP-image plots), preprocessing (including artifact rejection, filtering, epoch selection, and averaging), independent component analysis (ICA) and time/frequency decompositions including channel and component cross-coherence supported by bootstrap statistical methods based on data resampling. EEGLAB functions are organized into three layers. Top-layer functions allow users to interact with the data through the graphic interface without needing to use MATLAB syntax. Menu options allow users to tune the behavior of EEGLAB to available memory. Middle-layer functions allow users to customize data processing using command history and interactive 'pop' functions. Experienced MATLAB users can use EEGLAB data structures and stand-alone signal processing functions to write custom and/or batch analysis scripts. Extensive function help and tutorial information are included. A 'plug-in' facility allows easy incorporation of new EEG modules into the main menu. EEGLAB is freely available (http://www.sccn.ucsd.edu/eeglab/) under the GNU public license for noncommercial use and open source development, together with sample data, user tutorial and extensive documentation."
},
{
"pmid": "20654646",
"title": "FASTER: Fully Automated Statistical Thresholding for EEG artifact Rejection.",
"abstract": "Electroencephalogram (EEG) data are typically contaminated with artifacts (e.g., by eye movements). The effect of artifacts can be attenuated by deleting data with amplitudes over a certain value, for example. Independent component analysis (ICA) separates EEG data into neural activity and artifact; once identified, artifactual components can be deleted from the data. Often, artifact rejection algorithms require supervision (e.g., training using canonical artifacts). Many artifact rejection methods are time consuming when applied to high-density EEG data. We describe FASTER (Fully Automated Statistical Thresholding for EEG artifact Rejection). Parameters were estimated for various aspects of data (e.g., channel variance) in both the EEG time series and in the independent components of the EEG: outliers were detected and removed. FASTER was tested on both simulated EEG (n=47) and real EEG (n=47) data on 128-, 64-, and 32-scalp electrode arrays. FASTER was compared to supervised artifact detection by experts and to a variant of the Statistical Control for Dense Arrays of Sensors (SCADS) method. FASTER had >90% sensitivity and specificity for detection of contaminated channels, eye movement and EMG artifacts, linear trends and white noise. FASTER generally had >60% sensitivity and specificity for detection of contaminated epochs, vs. 0.15% for SCADS. FASTER also aggregates the ERP across subject datasets, and detects outlier datasets. The variance in the ERP baseline, a measure of noise, was significantly lower for FASTER than either the supervised or SCADS methods. ERP amplitude did not differ significantly between FASTER and the supervised approach."
},
{
"pmid": "34820480",
"title": "Clustering Analysis Methods for GNSS Observations: A Data-Driven Approach to Identifying California's Major Faults.",
"abstract": "We present a data-driven approach to clustering or grouping Global Navigation Satellite System (GNSS) stations according to observed velocities, displacements or other selected characteristics. Clustering GNSS stations provides useful scientific information, and is a necessary initial step in other analysis, such as detecting aseismic transient signals (Granat et al., 2013, https://doi.org/10.1785/0220130039). Desired features of the data can be selected for clustering, including some subset of displacement or velocity components, uncertainty estimates, station location, and other relevant information. Based on those selections, the clustering procedure autonomously groups the GNSS stations according to a selected clustering method. We have implemented this approach as a Python application, allowing us to draw upon the full range of open source clustering methods available in Python's scikit-learn package (Pedregosa et al., 2011, https://doi.org/10.5555/1953048.2078195). The application returns the stations labeled by group as a table and color coded KML file and is designed to work with the GNSS information available from GeoGateway (Donnellan et al., 2021, https://doi.org/10.1007/s12145-020-00561-7; Heflin et al., 2020, https://doi.org/10.1029/2019ea000644) but is easily extensible. We demonstrate the methodology on California and western Nevada. The results show partitions that follow faults or geologic boundaries, including for recent large earthquakes and post-seismic motion. The San Andreas fault system is most prominent, reflecting Pacific-North American plate boundary motion. Deformation reflected as class boundaries is distributed north and south of the central California creeping section. For most models a cluster boundary connects the southernmost San Andreas fault with the Eastern California Shear Zone (ECSZ) rather than continuing through the San Gorgonio Pass."
},
{
"pmid": "32939066",
"title": "Array programming with NumPy.",
"abstract": "Array programming provides a powerful, compact and expressive syntax for accessing, manipulating and operating on data in vectors, matrices and higher-dimensional arrays. NumPy is the primary array programming library for the Python language. It has an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, materials science, engineering, finance and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves1 and in the first imaging of a black hole2. Here we review how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data. NumPy is the foundation upon which the scientific Python ecosystem is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Owing to its central position in the ecosystem, NumPy increasingly acts as an interoperability layer between such array computation libraries and, together with its application programming interface (API), provides a flexible framework to support the next decade of scientific and industrial analysis."
},
{
"pmid": "18684772",
"title": "Event-related EEG time-frequency analysis: an overview of measures and an analysis of early gamma band phase locking in schizophrenia.",
"abstract": "An increasing number of schizophrenia studies have been examining electroencephalography (EEG) data using time-frequency analysis, documenting illness-related abnormalities in neuronal oscillations and their synchronization, particularly in the gamma band. In this article, we review common methods of spectral decomposition of EEG, time-frequency analyses, types of measures that separately quantify magnitude and phase information from the EEG, and the influence of parameter choices on the analysis results. We then compare the degree of phase locking (ie, phase-locking factor) of the gamma band (36-50 Hz) response evoked about 50 milliseconds following the presentation of standard tones in 22 healthy controls and 21 medicated patients with schizophrenia. These tones were presented as part of an auditory oddball task performed by subjects while EEG was recorded from their scalps. The results showed prominent gamma band phase locking at frontal electrodes between 20 and 60 milliseconds following tone onset in healthy controls that was significantly reduced in patients with schizophrenia (P = .03). The finding suggests that the early-evoked gamma band response to auditory stimuli is deficiently synchronized in schizophrenia. We discuss the results in terms of pathophysiological mechanisms compromising event-related gamma phase synchrony in schizophrenia and further attempt to reconcile this finding with prior studies that failed to find this effect."
},
{
"pmid": "19765689",
"title": "Reduced early auditory evoked gamma-band response in patients with schizophrenia.",
"abstract": "BACKGROUND\nThere is growing evidence for abnormalities of certain gamma-aminobutyric acid (GABA)-ergic interneurons and their interaction with glutamatergic pyramidal cells in schizophrenia. These interneurons are critically involved in generating neural activity in the gamma-band (30-100 Hz) of the electroencephalogram. One example of such gamma oscillations is the early auditory evoked gamma-band response (GBR). Although auditory processing is obviously disturbed in schizophrenia, there is no direct evidence providing a reduced early auditory evoked GBR so far. We addressed two questions: 1) Is the early auditory evoked GBR decreased regarding power and phase-locking in schizophrenic patients?; and 2) Is this possible decrease a result of reduced activity in the auditory cortex and/or the anterior cingulate cortex (ACC), which were identified as sources of the GBR previously?\n\n\nMETHODS\nWe investigated the early auditory evoked GBR and its sources in the ACC and the auditory cortex in 90 medicated patients with schizophrenia and in age-, gender-, and education-matched healthy control subjects with an auditory reaction task.\n\n\nRESULTS\nPatients with schizophrenia showed a significant reduction of power and phase-locking of the early auditory evoked GBR. This effect was due to a reduced activity in the auditory cortex and the ACC/medial frontal gyrus region (low-resolution brain electromagnetic tomography analysis).\n\n\nCONCLUSIONS\nGenerally, these findings are in line with earlier reports on the impaired ability of schizophrenic patients in generating gamma activity. In addition, this is the first study to demonstrate disturbance of gamma activity in auditory processing as assessed by the early auditory GBR power."
},
{
"pmid": "18571375",
"title": "An auditory processing abnormality specific to liability for schizophrenia.",
"abstract": "Abnormal brain activity during the processing of simple sounds is evident in individuals with increased genetic liability for schizophrenia; however, the diagnostic specificity of these abnormalities has yet to be fully examined. Because recent evidence suggests that schizophrenia and bipolar disorder may share aspects of genetic etiology the present study was conducted to determine whether individuals with heightened genetic liability for each disorder manifested distinct neural abnormalities during auditory processing. Utilizing a dichotic listening paradigm, we assessed target tone discrimination and electrophysiological responses in schizophrenia patients, first-degree biological relatives of schizophrenia patients, bipolar disorder patients, first-degree biological relatives of bipolar patients and nonpsychiatric control participants. Schizophrenia patients and relatives of schizophrenia patients demonstrated reductions in an early neural response (i.e. N1) suggestive of deficient sensory registration of auditory stimuli. Bipolar patients and relatives of bipolar patients demonstrated no such abnormality. Both schizophrenia and bipolar patients failed to significantly augment N1 amplitude with attention. Schizophrenia patients also failed to show sensitivity of longer-latency neural processes (N2) to stimulus frequency suggesting a disorder specific deficit in stimulus classification. Only schizophrenia patients exhibited reduced target tone discrimination accuracy. Reduced N1 responses reflective of early auditory processing abnormalities are suggestive of a marker of genetic liability for schizophrenia and may serve as an endophenotype for the disorder."
},
{
"pmid": "30804830",
"title": "The Importance of Sensory Processing in Mental Health: A Proposed Addition to the Research Domain Criteria (RDoC) and Suggestions for RDoC 2.0.",
"abstract": "The time is ripe to integrate burgeoning evidence of the important role of sensory and motor functioning in mental health within the National Institute of Mental Health's [NIMH] Research Domain Criteria [RDoC] framework (National Institute of Mental Health, n.d.a), a multi-dimensional method of characterizing mental functioning in health and disease across all neurobiological levels of analysis ranging from genetic to behavioral. As the importance of motor processing in psychopathology has been recognized (Bernard and Mittal, 2015; Garvey and Cuthbert, 2017; National Institute of Mental Health, 2019), here we focus on sensory processing. First, we review the current design of the RDoC matrix, noting sensory features missing despite their prevalence in multiple mental illnesses. We identify two missing classes of sensory symptoms that we widely define as (1) sensory processing, including sensory sensitivity and active sensing, and (2) domains of perceptual signaling, including interoception and proprioception, which are currently absent or underdeveloped in the perception construct of the cognitive systems domain. Then, we describe the neurobiological basis of these psychological constructs and examine why these sensory features are important for understanding psychopathology. Where appropriate, we examine links between sensory processing and the domains currently included in the RDoC matrix. Throughout, we emphasize how the addition of these sensory features to the RDoC matrix is important for understanding a range of mental health disorders. We conclude with the suggestion that a separate sensation and perception domain can enhance the current RDoC framework, while discussing what we see as important principles and promising directions for the future development and use of the RDoC."
},
{
"pmid": "19282472",
"title": "Reductions in the N1 and P2 auditory event-related potentials in first-hospitalized and chronic schizophrenia.",
"abstract": "The N1 auditory event-related potential (ERP) is reduced in chronic schizophrenia, as is the P2 to attended tones. N1 reduction may be endophenotypic for schizophrenia, being reduced in twins of schizophrenic patients and showing heritability. Results in family members, however, are equivocal, with abnormally small N1 (consistent with an endophenotype) and abnormally large N1 (inconsistent with an endophenotype) reported. P2 has been little studied in schizophrenia or family members. One crucial step in establishing endophenotypes is to rule out causal chronicity factors. We examined schizophrenia patients within 1 year of first hospitalization (most within 2 wk), chronically ill patients, and matched controls to examine N1 and P2 reductions and disease stage. Two active target detection oddball tasks were used, one with 97-dB tones against 70-dB white masking noise, the second with 97-dB tones without noise. Results from 8 samples are reported: first-hospitalized patients and matched controls and chronic patients and matched controls for the 2 tasks. N1 and P2 were measured from the standard stimuli. N1 and P2 were significantly reduced in chronic patients, as expected, and reduced in first-hospitalized patients. Because N1 and P2 are reduced even at the first hospitalization for schizophrenia, they may serve as viable electrophysiological endophenotypes for the disorder. However, deficit early in the disease is necessary but not sufficient to establish these ERPs as endophenotypes. Deficits must next be demonstrated in at least a subset of unaffected family members, a crucial criterion for an endophenotype."
},
{
"pmid": "33815168",
"title": "Abnormal Habituation of the Auditory Event-Related Potential P2 Component in Patients With Schizophrenia.",
"abstract": "Auditory event-related potentials (ERP) may serve as diagnostic tools for schizophrenia and inform on the susceptibility for this condition. Particularly, the examination of N1 and P2 components of the auditory ERP may shed light on the impairments of information processing streams in schizophrenia. However, the habituation properties (i.e., decreasing amplitude with the repeated presentation of an auditory stimulus) of these components remain poorly studied compared to other auditory ERPs. Therefore, the current study used a roving paradigm to assess the modulation and habituation of N1 and P2 to simple (pure tones) and complex sounds (human voices and bird songs) in 26 first-episode patients with schizophrenia and 27 healthy participants. To explore the habituation properties of these ERPs, we measured the decrease in amplitude over a train of seven repetitions of the same stimulus (either bird songs or human voices). We observed that, for human voices, N1 and P2 amplitudes decreased linearly from stimulus 1-7, in both groups. Regarding bird songs, only the P2 component showed a decreased amplitude with stimulus presentation, exclusively in the control group. This suggests that patients did not show a fading of neural responses to repeated bird songs, reflecting abnormal habituation to this stimulus. This could reflect the inability to inhibit irrelevant or redundant information at later stages of auditory processing. In turn schizophrenia patients appear to have a preserved auditory processing of human voices."
}
] |
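The reference entries above repeatedly define the paired-click sensory gating indices used in this literature: the gating ratio S2/S1 and the amplitude difference S1 - S2, computed from the peak amplitudes of the responses to the conditioning (S1) and test (S2) clicks. As a minimal illustrative sketch (the function name and the example amplitudes below are hypothetical and not taken from any of the cited studies), these indices might be computed as:

```python
import numpy as np

def gating_indices(s1_amp, s2_amp):
    """Paired-click sensory gating indices as defined in the abstracts above:
    gating ratio = S2/S1 and amplitude difference = S1 - S2, where s1_amp and
    s2_amp are peak amplitudes (e.g., in microvolts) of the responses to the
    conditioning and test clicks. Illustrative helper only."""
    s1 = np.asarray(s1_amp, dtype=float)
    s2 = np.asarray(s2_amp, dtype=float)
    # Leave the ratio undefined (NaN) where S1 is zero to avoid division by zero.
    ratio = np.divide(s2, s1, out=np.full_like(s2, np.nan), where=s1 != 0)
    difference = s1 - s2
    return ratio, difference

# Made-up example amplitudes (not data from any cited study):
ratios, diffs = gating_indices([3.2, 2.5, 4.1], [1.1, 2.4, 1.0])
```

Smaller ratios and larger differences indicate stronger gating. Ratio measures can also become unstable when S1 is small, which is consistent with the preference expressed in one of the abstracts above for analyzing S2 amplitude co-varied for S1 rather than the raw S2/S1 ratio.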
JMIR Serious Games | 35171103 | PMC8892275 | 10.2196/33459 | Teaching Students About Plagiarism Using a Serious Game (Plagi-Warfare): Design and Evaluation Study | Background: Educational games have been proven to support the teaching of various concepts across disciplines. Plagiarism is a major problem among undergraduate and postgraduate students at universities. Objective: In this paper, we propose a game called Plagi-Warfare that attempts to teach students about plagiarism. Methods: To do this at a level that is beyond quizzes, we proposed a game storyline and mechanics that allow the player (or student) to play as a mafia member or a detective. Players demonstrated their knowledge either by plagiarizing within the game as a mafia member or by catching plagiarists within the game as a detective. The game plays out in a 3D environment representing the major libraries of the University of Johannesburg, South Africa. In total, 30 students were selected to evaluate the game. Results: Evaluation of the game mechanics and storyline showed that the student gamers enjoyed the game and learned about plagiarism. Conclusions: In this paper, we presented a new educational game that teaches students about plagiarism by using a new crime story and an immersive 3D gaming environment representing the libraries of the University of Johannesburg. | Related Works and Gaps
In this section, we present related works, specifically serious games in education or educational games for teaching plagiarism and games used by libraries around the world to teach specific subjects or topics.
Existing Plagiarism Educational Games
Table 1 describes 4 educational games that are specifically designed to train/inform the player about plagiarism, and Table 2 lists existing library games worldwide.
Table 1. Existing plagiarism educational games (game name and similarity to Plagi-Warfare).
Cheats and Geeks [63]: Designed in the style of a dice board game, this game places the player in the role of a graduate student who has to publish a paper for a competition. The player must complete pop quizzes about plagiarism before being able to complete the challenges in the game.
Frenetic Filing [63]: A retro-style game in which the player needs to solve as many plagiarism cases as possible before a set time elapses. The game rewards the player with virtual sneakers and coffee that assist in speeding up the player's review of cases.
Murky Misconduct [63]: A crime detective game in which the player acts as a plagiarism investigator, tasked with tracking down potential perpetrators by comparing academic papers and finally prosecuting them. Players are educated on the complex ethical issues and consequences of plagiarism.
Goblin Threat [12]: A clicking game in which the player must find hidden goblins, which ask the player questions related to plagiarism. Game content covers how to cite sources, types of plagiarism and their repercussions, and the differences between plagiarism and paraphrasing.
Table 2. Existing library games around the world (game name, description, university/college library, and educational content).
Nightmare on Vine Street (University of Tennessee at Chattanooga, USA): The player is locked inside the library at night and must appease zombies with resources found in the library in order to escape [14,64]. Educational content: the game improves the information literacy of students.
Within Range (Carnegie Mellon University, USA): The player has to shelve books according to their call numbers before the allocated time is up [14,64,65]. Educational content: library staff learn how to shelve books.
Defense of Hidgeon (University of Michigan, USA): At the beginning of this game, students are instructed to start library research by navigating, searching for, and finding resources. During the game, students have to visit different libraries and search the web for answers [64,66]. Educational content: students learn how to conduct research using library resources.
Secret Agents in the Library (Lycoming College, USA): The player assumes the role of a secret agent in this online game. The objective of the game is to protect the library from an invader by answering questions as well as fetching resources from within the library [14]. Educational content: the game improves the information literacy of students, and students learn how to access the library website.
It's Alive (Lycoming College, USA): The player acts as a mad scientist trying to obtain body parts to build a creature; to obtain these body parts, the player has to answer a sequence of questions correctly [14,65]. Educational content: players are able to find out about biology research methods.
Get a Clue (Utah Valley University, USA): Students have to solve a crime by visiting assigned areas and eliminating suspects and library resources in the game [64]. Educational content: students are able to navigate around the university library.
LibraryCraft (Utah Valley University, USA): Students can visit various pages of the library website. Students need to find a book in the library catalogue and use the author's name as a password to show that they have completed the task [14,67]. Educational content: students find out about library resources and how to manage research.
Gaps
Although there have been attempts by different game developers and researchers to create educational games for teaching students what plagiarism is, the following issues are yet to be addressed:
- Story and gaming environment: For this, we proposed a 2-sided storyline that allows a student to play as a good guy or a bad guy, detecting plagiarism or plagiarizing and escaping being caught, respectively, within the game.
- Replayability: To the best of our knowledge, none of the existing games has the ability to present the player with new problems every time they play. We have written new algorithms for this task.
- First local solution: As far as we can tell, no South African university has an educational game for teaching plagiarism.
These issues are addressed in the design of Plagi-Warfare, discussed in the Methods section, covering the design aspects of Plagi-Warfare, such as the game flow, story development, and other design considerations. | [
"30259556",
"21194645"
] | [
{
"pmid": "21194645",
"title": "Educational approaches for discouraging plagiarism.",
"abstract": "Suggested approaches to reduce the occurrence of plagiarism in academia, particularly among trainees. These include (1) educating individuals as to the definition of plagiarism and its consequences through written guidelines, active discussions, and practice in identifying proper and improper citation practices; (2) distributing checklists that break the writing task into more manageable steps, (3) requiring the submission of an outline and then a first draft prior to the deadline for a paper; (4) making assignments relevant to individual interests; and (5) providing trainees with access to software programs that detect plagiarism."
}
] |
Frontiers in Artificial Intelligence | null | PMC8892386 | 10.3389/frai.2021.796756 | Distributional Measures of Semantic Abstraction | This article provides an in-depth study of distributional measures for distinguishing between degrees of semantic abstraction. Abstraction is considered a “central construct in cognitive science” (Barsalou, 2003) and a “process of information reduction that allows for efficient storage and retrieval of central knowledge” (Burgoon et al., 2013). Relying on the distributional hypothesis, computational studies have successfully exploited measures of contextual co-occurrence and neighbourhood density to distinguish between conceptual semantic categorisations. So far, these studies have modeled semantic abstraction across lexical-semantic tasks such as ambiguity; diachronic meaning changes; abstractness vs. concreteness; and hypernymy. Yet, the distributional approaches target different conceptual types of semantic relatedness, and as to our knowledge not much attention has been paid to apply, compare or analyse the computational abstraction measures across conceptual tasks. The current article suggests a novel perspective that exploits variants of distributional measures to investigate semantic abstraction in English in terms of the abstract–concrete dichotomy (e.g., glory–banana) and in terms of the generality–specificity distinction (e.g., animal–fish), in order to compare the strengths and weaknesses of the measures regarding categorisations of abstraction, and to determine and investigate conceptual differences.In a series of experiments we identify reliable distributional measures for both instantiations of lexical-semantic abstraction and reach a precision higher than 0.7, but the measures clearly differ for the abstract–concrete vs. abstract–specific distinctions and for nouns vs. verbs. Overall, we identify two groups of measures, (i) frequency and word entropy when distinguishing between more and less abstract words in terms of the generality–specificity distinction, and (ii) neighbourhood density variants (especially target–context diversity) when distinguishing between more and less abstract words in terms of the abstract–concrete dichotomy. We conclude that more general words are used more often and are less surprising than more specific words, and that abstract words establish themselves empirically in semantically more diverse contexts than concrete words. Finally, our experiments once more point out that distributional models of conceptual categorisations need to take word classes and ambiguity into account: results for nouns vs. verbs differ in many respects, and ambiguity hinders fine-tuning empirical observations. | 2. Related WorkIn the following, we introduce previous research perspectives and studies on the two types of semantic abstraction we focus on, i.e., abstraction in terms of the abstract–concrete dichotomy and in terms of the generality–specificity distinction. In this vein, section 2.1 looks into abstraction from a cognitive perspective, while section 2.2 provides an overview of computational models of abstraction. In section 2.3, we describe previous empirical investigations across the two types of abstraction. From a terminological perspective, we will use the word “concepts” when referring to mental representations, and “words” when referring to the corresponding linguistic surface forms humans are exposed to. Given the distributional nature of our studies, we will always refer to words as the targets of our analyses.2.1. 
Cognitive Perspectives on AbstractionBarsalou (2003) considers abstraction as a “central construct in cognitive science” regarding the organization of categories in the human memory. He attributes six different senses to abstraction: (i) abstracting a conceptual category from the settings it occurs in; (ii) generalising across category members; (iii) generalising through summary representations which are necessary for the behavioural generalisations in (ii); (iv) sparse schematic representations; (v) flexible interpretation; and (vi) abstractness in contrast to concreteness. Barsalou's classification illustrates that the term “semantic abstraction” as well as its featural and inferential implications for memory representations are vague in that different instantiations go along with different representations; he himself focuses on summary representations (iii). Burgoon et al. (2013) provide an extensive list and description of past definitions of abstraction across research fields and research studies, and state that, at the meta level, the term abstraction is referred to as “a process of information reduction that allows for efficient storage and retrieval of central knowledge (e.g., categorization).” For their own study, they define abstraction as “as a process of identifying a set of invariant central characteristics of a thing,” and in what follows they compare existing definitions of abstraction regarding their roots, developments, antecedents, consequences, and methods for studying.The distinction of the two abstraction types adopted in the current study comes from Spreen and Schulz (1966) indicating that the “definition of abstractness or concreteness in previous studies shows that at least two distinctly different interpretations can be made,” and pointing back to previous collections with judgements on generality by Gorman (1961) and judgements on concreteness as well as generality by Darley et al. (1959). Spreen and Schulz (1966) themselves collected ratings on both abstractness–concreteness and abstractness–specificity (among others) for 329 English nouns, and found a correlation of 0.626 between the ratings of the two abstraction variables. The two-fold distinction of abstraction outlined in the work by Spreen and Schulz (1966) is also included in the various instantiations of abstraction in Barsalou (2003) and Burgoon et al. (2013). In the following, we describe the lines of research involved in the representation and processing of abstract vs. concrete concepts and then those involved in general vs. specific concepts.2.1.1. Abstract vs. Concrete ConceptsThe most influential proposal about the processing, storing and comprehension of abstract concepts in contrast to concrete concepts can be traced back to Paivio (1971). He suggested the dual-route theory where a verbal system is primarily responsible for language aspects of linguistic units (such as words), while a non-verbal system, in particular imagery, is primarily responsible for sensory-motor aspects. Even though in the meantime, a range of alternative as well as complementary theories have been suggested, Paivio's theory offers an explanation why concrete concepts (which are supposedly accessed via both routes) are generally processed faster in lexical memory than abstract concepts (which are supposedly accessed only via the non-verbal system) across tasks and datasets, cf. Pecher et al. (2011) and Borghi et al. 
(2017) for comprehensive overviews.Further than the dual-route theory, cognitive scientists have investigated other dimensions of abstractness. Most notably, Schwanenflugel and Shoben (1983) suggested the context availability theory where they compared the processing of abstract and concrete words in context and demonstrated that in appropriate contexts neither reading times nor lexical decision times differ, thus emphasising the role of context in conditions of abstractness. In addition, a number of properties have been pointed out where abstract and concrete concepts differ. (i) There is a strong consensus and experimental confirmation that concrete concepts are more imaginable than the abstract ones, and that it takes longer to generate images for abstract than for concrete concepts (Paivio et al., 1968; Paivio, 1971; Paivio and Begg, 1971, i.a.). (ii) Abstract concepts are considered to be more emotionally valenced than concrete concepts (Kousta et al., 2011; Vigliocco et al., 2014; Pollock, 2018). (iii) Free associations to abstract concepts are assumed to differ from free associations to concrete concepts in terms of the number of types, but at the same time associations to concrete concepts have been found weaker and more symmetric than for abstract concepts (Crutch and Warrington, 2010; Hill et al., 2014). (iv) Based on a feature generation task, features of abstract concepts are less property- and more situation-related than features of concrete words (Wiemer-Hastings and Xu, 2005). (v) Accordingly, an appropriate embedding into situations has been identified as crucial for abstract vs. concrete meaning representations (Barsalou and Wiemer-Hastings, 2005; Hare et al., 2009; Pecher et al., 2011; Frassinelli and Lenci, 2012; Recchia and Jones, 2012).Hand in hand with defining and investigating hypotheses about dimensions of abstract and concrete concepts, a number of data collections have been created. To name just a prominent subset of the large number of existing resources, Spreen and Schulz (1966) collected ratings of concreteness and specificity (among others) for 329 English nouns (see above); Paivio et al. (1968) collected ratings for 925 English nouns on concreteness, imagery and meaningfulness; Coltheart (1981) put together the MRC Psycholinguistic Database, mostly comprising pre-existing information for almost 100,000 English words including concreteness, imageability, familiarity as well as frequency, semantic, syntactic, and phonological information; Warriner et al. (2013) extended the ANEW norms from Bradley and Lang (1999) with 1,034 English words to almost 14,000, capturing emotion-relevant norms of valence, arousal and dominance; a similar collection for 20,000 English words regarding the same variables but using best–worst scaling instead of ratings has been done by Mohammad (2018); Brysbaert et al. (2014) created the so far largest human-generated collection containing concreteness ratings for 40,000 English words. The work by Connell and Lynott differs slightly on the variable depth, by focusing on the individual perception modalities and interoception (Lynott and Connell, 2009, 2013; Lynott et al., 2020). 
While the vast amount of abstractness/concreteness datasets has been created for English, we also find collections for other languages, such as those for 2,654/1,000 nouns in German (Lahl et al., 2009; Kanske and Kotz, 2010, respectively); 16,109 Spanish words (Algarabel et al., 1988); 417 Italian words (Della Rosa et al., 2010); and 1,659 French words (Bonin et al., 2018). While traditional collections have been pen-and-paper-based, the collections from the last decade have moved toward crowd-sourcing platforms. As alternative to human-generated ratings, previous research suggested semi-automatic algorithms to create large-scale norms (Mandera et al., 2015; Recchia and Louwerse, 2015; Köper and Schulte im Walde, 2016; Köper and Schulte im Walde, 2017; Aedmaa et al., 2018; Rabinovich et al., 2018).2.1.2. General vs. Specific ConceptsDifferently to the above distinction of semantic abstraction in terms of degrees of concreteness as opposed to abstractness, where concepts may be judged more or less abstract in comparison to otherwise semantically unrelated concepts (e.g., banana–glory), semantic abstraction in terms of generality is typically established in contrast to a semantically related concept (e.g., animal–fish). The lexical-semantic relation of interest here is hypernymy, where the more general concept represents the hypernym of the more specific hyponym.An enormous body of work discusses hypernymy next to further semantic relations in the mental lexicon. For example, a seminal description of lexical relations can be found in Cruse (1986), who states that lexical relations “reflect the way infinitely and continuously varied experienced reality is apprehended and controlled through being categorised, subcategorised and graded along specific dimensions of variation.” Murphy (2003) focuses on the representation of semantic relations in the lexicon and discusses synonymy, antonymy, contrast, hyponymy and meronymy, across word classes. Most of her discussions concern linguistic vs. meta-linguistic representations of relations, reference of relations to words vs. concepts, and lexicon storage. The most extensive resource that systematically explores and compares types of lexical-semantic relations across word classes is established by the taxonomy of the Princeton WordNet, where hypernymy represents a key organisation principle of semantic memory (Fellbaum, 1990; Gross and Miller, 1990; Miller et al., 1990). Miller and Fellbaum (1991) provide a meta-level summary of relational structures and decisions. As basis for the WordNet organisation, they state that “the mental lexicon is organised by semantic relations. Since a semantic relation is a relation between meanings, and since meanings can be represented by synsets, it is natural to think of semantic relations as pointers between synsets.” The semantic relations in WordNet include the paradigmatic relations synonymy, hypernymy/hyponymy, antonymy, and meronymy. For nouns, WordNet implements a hierarchical organisation of synsets (i.e., sets of synonymous word meanings) relying on hypernymy relations. Verbs are considered the most complex and polysemous word class; they are organised on a verb-specific variant of hypernymy, i.e., troponymy:
v1 is to v2
in some manner, that operates on semantic fields instantiated through synsets. Troponymy itself is conditioned on entailment and temporal inclusion.2.2. Computational Models of AbstractionAcross both types of semantic abstraction, computational models have been suggested to automatically characterise or distinguish between more and less abstract words. They have been intertwined with cognitive perspectives to various degrees.2.2.1. Abstract vs. Concrete WordsA common idea in this research direction is the exploitation of corpus-based co-occurrence information to infer textual distributional characteristics of cognitive semantic variables, including abstractness as well as further variables such as emotion, imageability, familiarity, etc. These models are large-scale data approaches to explore the role of linguistic information and textual attributes when distinguishing between abstract and concrete words. A subset of these distributional approaches is strongly driven by a cognitive perspective, thus aiming to explain the organisation of human semantic memory and lexical processing effects by the contribution of linguistic attributes. Common techniques for organising the textual information are semantic vector spaces such as Latent Semantic Analysis (LSA) (Salton et al., 1975), the Hyperspace Analogue to Language (HAL) (Burgess, 1998), and more recent variants of standard Distributional Semantic Models (DSMs) (Baroni and Lenci, 2010; Turney and Pantel, 2010), in combination with measures of distributional similarity and clustering approaches (Glenberg and Robertson, 2000; Vigliocco et al., 2009; Bestgen and Vincze, 2012; Troche et al., 2014; Mandera et al., 2015; Recchia and Louwerse, 2015; Lenci et al., 2018). Finally, our own studies provide preliminary insights into co-occurrence characteristics of abstract and concrete words with respect to linguistic parameters such as window size, parts-of-speech and subcategorisation conditions (Frassinelli et al., 2017; Naumann et al., 2018; Frassinelli and Schulte im Walde, 2019). Overall, these studies agree on tendencies such that concrete words tend to have less diverse but more compact and more strongly associated distributional neighbours than abstract words.2.2.2. General vs. Specific WordsFrom a computational perspective, hypernymy—which we take as instantiation to represent degrees of generality vs. specificity—is central to solving a number of NLP tasks such as automatic taxonomy creation (Hearst, 1998; Cimiano et al., 2004; Snow et al., 2006; Navigli and Ponzetto, 2012) and textual entailment (Dagan et al., 2006; Clark et al., 2007). An enormous body of computational work has applied variants of lexico-syntactic patterns in order to distinguish hypernymy among word pairs from other lexical semantic relations (Hearst, 1992; Pantel and Pennacchiotti, 2006; Yap and Baldwin, 2009; Schulte im Walde and Köper, 2013; Roth and Schulte im Walde, 2014; Nguyen et al., 2017, i.a.). More closely related to the current study, Shwartz et al. (2017) provide an extensive overview and comparison of unsupervised distributional methods. 
They distinguish between families of distributional approaches, i.e., distributional similarity measures (assuming asymmetric distributional similarities for hypernyms and their hyponyms regarding their contexts, e.g., Santus et al., 2016), distributional inclusion measures (comparing asymmetric directional overlap of context words, e.g., Weeds and Weir, 2005; Kotlerman et al., 2010; Lenci and Benotto, 2012) and distributional informativeness measures (assuming different degrees of contextual informativeness, e.g., Rimell, 2014; Santus et al., 2014). Across modelling systems, most approaches model hypernymy between nouns; hypernymy between verbs has been addressed less extensively from an empirical perspective (Fellbaum, 1990, 1998a; Fellbaum and Chaffin, 1990).2.3. Empirical Models Across Types of AbstractionIn addition to interdisciplinary empirical research targeting concreteness or hypernymy that has been mentioned above, we find at least two empirical studies at the interface of cognitive and computational linguistics that brought together our two target types of abstraction beforehand, Theijssen et al. (2011) and Bolognesi et al. (2020). Similarly to the current work, Theijssen et al. (2011) used the observation in Spreen and Schulz (1966) defining abstraction in terms of concreteness and specificity as their starting point. They provide two empirical experimental setups to explore and distinguish between the abstraction types in actual system implementations, (1) based on existing annotations of noun senses in a corpus, and (2) based on human judgements on labelling nouns in English dative alternations. As resources they used the MRC database (Coltheart, 1981) and WordNet. Overall, they found cases where concreteness and specificity overlap and cases were the two types of abstraction diverge. Bolognesi et al. (2020) looked into the same two types of abstraction to correlate degrees of abstraction in the concreteness norms by Brysbaert et al. (2014) and in the WordNet hierarchy, and to investigate interactions between the four groups of more/less concrete × more/less specific English nouns from the two resources. Their studies illustrate that concreteness and specificity represent two distinct types of abstraction.Further computational approaches zoomed into statistical estimation of contextual diversity/neighbourhood density, in order to distinguish between degrees of semantic abstraction across types of abstraction. For example, McDonald and Shillcock (2001) applied the information-theoretic measure relative entropy to determine the degree of informativeness of words, where word-specific probability distributions over contexts were compared with distributions across corresponding sets of words. The contextual diversity measure by Adelman et al. (2006) is comparably more simple: they determined the number of documents in a corpus that contain a word. More recently, Danguecan and Buchanan (2016), Reilly and Desai (2017) and our own work in Naumann et al. (2018) explored variants of neighbourhood density measures for abstract and concrete words, i.e., the number of (different) context words and the distributional similarity between context words. Additional approaches to determine contextual diversity/neighbourhood density have arisen from other fields of research concerned with semantic abstraction, i.e., regarding ambiguity and diachronic meaning change (Sagi et al., 2009; Hoffman et al., 2013; Hoffman and Woollams, 2015). 
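To make the flavour of these corpus-based quantities more tangible, the following minimal Python sketch is purely illustrative and not drawn from any of the cited studies; the toy corpus, the window size and the function names are assumptions for demonstration only. It computes three simple measures for a target word: contextual diversity as a document count in the spirit of Adelman et al. (2006), neighbourhood density as the number of distinct context words observed within a symmetric window, and the entropy of the word's context distribution as one rough proxy for contextual informativeness.
```python
from collections import Counter, defaultdict
from math import log2

# Toy corpus: each entry is one tokenised "document" (illustrative data only).
corpus = [
    "the small fish swims in the cold river".split(),
    "glory and hope are abstract notions of life".split(),
    "the animal eats a banana near the river".split(),
]

WINDOW = 2  # symmetric context window; an arbitrary illustrative choice

doc_count = Counter()                  # word -> number of documents containing it
context_counts = defaultdict(Counter)  # word -> counts of its context words

for doc in corpus:
    for w in set(doc):
        doc_count[w] += 1
    for i, w in enumerate(doc):
        for j in range(max(0, i - WINDOW), min(len(doc), i + WINDOW + 1)):
            if j != i:
                context_counts[w][doc[j]] += 1

def contextual_diversity(word):
    """Number of documents the word occurs in (document-count style measure)."""
    return doc_count[word]

def neighbourhood_density(word):
    """Number of distinct context words seen within the window."""
    return len(context_counts[word])

def context_entropy(word):
    """Shannon entropy (bits) of the word's context distribution."""
    counts = context_counts[word]
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * log2(c / total) for c in counts.values())

for w in ("river", "banana", "glory"):
    print(w, contextual_diversity(w), neighbourhood_density(w), context_entropy(w))
```
In actual studies such counts would of course be computed over large corpora, with lemmatisation, part-of-speech filtering and tuned window sizes, as in the work cited above.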
Overall, these studies demonstrated that contextual density/diversity differs for more vs. less abstract words and across types of abstraction, even though the applications of the measures were rather diverse. | [
"16984300",
"12903648",
"22396137",
"32180060",
"29435912",
"28095000",
"22396136",
"24142837",
"26173209",
"20658386",
"27458422",
"13655305",
"21139171",
"17950260",
"13707272",
"19298961",
"23941240",
"23239067",
"25751041",
"21139165",
"30886898",
"21171803",
"19182119",
"29630777",
"19363198",
"23055172",
"30684227",
"25695623",
"11814216",
"1790654",
"5672258",
"28707214",
"23205008",
"24998307",
"28818790",
"28474609",
"24808876",
"27138012",
"23408565",
"23404613",
"21702791"
] | [
{
"pmid": "16984300",
"title": "Contextual diversity, not word frequency, determines word-naming and lexical decision times.",
"abstract": "Word frequency is an important predictor of word-naming and lexical decision times. It is, however, confounded with contextual diversity, the number of contexts in which a word has been seen. In a study using a normative, corpus-based measure of contextual diversity, word-frequency effects were eliminated when effects of contextual diversity were taken into account (but not vice versa) across three naming and three lexical decision data sets; the same pattern of results was obtained regardless of which of three corpora was used to derive the frequency and contextual-diversity values. The results are incompatible with existing models of visual word recognition, which attribute frequency effects directly to frequency, and are particularly problematic for accounts in which frequency effects reflect learning. We argue that the results reflect the importance of likely need in memory processes, and that the continuity between reading and memory suggests using principles from memory research to inform theories of reading."
},
{
"pmid": "12903648",
"title": "Abstraction in perceptual symbol systems.",
"abstract": "After reviewing six senses of abstraction, this article focuses on abstractions that take the form of summary representations. Three central properties of these abstractions are established: ( i ) type-token interpretation; (ii) structured representation; and (iii) dynamic realization. Traditional theories of representation handle interpretation and structure well but are not sufficiently dynamical. Conversely, connectionist theories are exquisitely dynamic but have problems with structure. Perceptual symbol systems offer an approach that implements all three properties naturally. Within this framework, a loose collection of property and relation simulators develops to represent abstractions. Type-token interpretation results from binding a property simulator to a region of a perceived or simulated category member. Structured representation results from binding a configuration of property and relation simulators to multiple regions in an integrated manner. Dynamic realization results from applying different subsets of property and relation simulators to category members on different occasions. From this standpoint, there are no permanent or complete abstractions of a category in memory. Instead, abstraction is the skill to construct temporary online interpretations of a category's members. Although an infinite number of abstractions are possible, attractors develop for habitual approaches to interpretation. This approach provides new ways of thinking about abstraction phenomena in categorization, inference, background knowledge and learning."
},
{
"pmid": "22396137",
"title": "Checking and bootstrapping lexical norms by means of word similarity indexes.",
"abstract": "In psychology, lexical norms related to the semantic properties of words, such as concreteness and valence, are important research resources. Collecting such norms by asking judges to rate the words is very time consuming, which strongly limits the number of words that compose them. In the present article, we present a technique for estimating lexical norms based on the latent semantic analysis of a corpus. The analyses conducted emphasize the technique's effectiveness for several semantic dimensions. In addition to the extension of norms, this technique can be used to check human ratings to identify words for which the rating is very different from the corpus-based estimate."
},
{
"pmid": "32180060",
"title": "On abstraction: decoupling conceptual concreteness and categorical specificity.",
"abstract": "Conceptual concreteness and categorical specificity are two continuous variables that allow distinguishing, for example, justice (low concreteness) from banana (high concreteness) and furniture (low specificity) from rocking chair (high specificity). The relation between these two variables is unclear, with some scholars suggesting that they might be highly correlated. In this study, we operationalize both variables and conduct a series of analyses on a sample of > 13,000 nouns, to investigate the relationship between them. Concreteness is operationalized by means of concreteness ratings, and specificity is operationalized as the relative position of the words in the WordNet taxonomy, which proxies this variable in the hypernym semantic relation. Findings from our studies show only a moderate correlation between concreteness and specificity. Moreover, the intersection of the two variables generates four groups of words that seem to denote qualitatively different types of concepts, which are, respectively, highly specific and highly concrete (typical concrete concepts denoting individual nouns), highly specific and highly abstract (among them many words denoting human-born creation and concepts within the social reality domains), highly generic and highly concrete (among which many mass nouns, or uncountable nouns), and highly generic and highly abstract (typical abstract concepts which are likely to be loaded with affective information, as suggested by previous literature). These results suggest that future studies should consider concreteness and specificity as two distinct dimensions of the general phenomenon called abstraction."
},
{
"pmid": "29435912",
"title": "Concreteness norms for 1,659 French words: Relationships with other psycholinguistic variables and word recognition times.",
"abstract": "Words that correspond to a potential sensory experience-concrete words-have long been found to possess a processing advantage over abstract words in various lexical tasks. We collected norms of concreteness for a set of 1,659 French words, together with other psycholinguistic norms that were not available for these words-context availability, emotional valence, and arousal-but which are important if we are to achieve a better understanding of the meaning of concreteness effects. We then investigated the relationships of concreteness with these newly collected variables, together with other psycholinguistic variables that were already available for this set of words (e.g., imageability, age of acquisition, and sensory experience ratings). Finally, thanks to the variety of psychological norms available for this set of words, we decided to test further the embodied account of concreteness effects in visual-word recognition, championed by Kousta, Vigliocco, Vinson, Andrews, and Del Campo (Journal of Experimental Psychology: General, 140, 14-34, 2011). Similarly, we investigated the influences of concreteness in three word recognition tasks-lexical decision, progressive demasking, and word naming-using a multiple regression approach, based on the reaction times available in Chronolex (Ferrand, Brysbaert, Keuleers, New, Bonin, Méot, Pallier, Frontiers in Psychology, 2; 306, 2011). The norms can be downloaded as supplementary material provided with this article."
},
{
"pmid": "28095000",
"title": "The challenge of abstract concepts.",
"abstract": "concepts (\"freedom\") differ from concrete ones (\"cat\"), as they do not have a bounded, identifiable, and clearly perceivable referent. The way in which abstract concepts are represented has recently become a topic of intense debate, especially because of the spread of the embodied approach to cognition. Within this framework concepts derive their meaning from the same perception, motor, and emotional systems that are involved in online interaction with the world. Most of the evidence in favor of this view, however, has been gathered with regard to concrete concepts. Given the relevance of abstract concepts for higher-order cognition, we argue that being able to explain how they are represented is a crucial challenge that any theory of cognition needs to address. The aim of this article is to offer a critical review of the latest theories on abstract concepts, focusing on embodied ones. Starting with theories that question the distinction between abstract and concrete concepts, we review theories claiming that abstract concepts are grounded in metaphors, in situations and introspection, and in emotion. We then introduce multiple representation theories, according to which abstract concepts evoke both sensorimotor and linguistic information. We argue that the most promising approach is given by multiple representation views that combine an embodied perspective with the recognition of the importance of linguistic and social experience. We conclude by discussing whether or not a single theoretical framework might be able to explain all different varieties of abstract concepts. (PsycINFO Database Record"
},
{
"pmid": "22396136",
"title": "Adding part-of-speech information to the SUBTLEX-US word frequencies.",
"abstract": "The SUBTLEX-US corpus has been parsed with the CLAWS tagger, so that researchers have information about the possible word classes (parts-of-speech, or PoSs) of the entries. Five new columns have been added to the SUBTLEX-US word frequency list: the dominant (most frequent) PoS for the entry, the frequency of the dominant PoS, the frequency of the dominant PoS relative to the entry's total frequency, all PoSs observed for the entry, and the respective frequencies of these PoSs. Because the current definition of lemma frequency does not seem to provide word recognition researchers with useful information (as illustrated by a comparison of the lemma frequencies and the word form frequencies from the Corpus of Contemporary American English), we have not provided a column with this variable. Instead, we hope that the full list of PoS frequencies will help researchers to collectively determine which combination of frequencies is the most informative."
},
{
"pmid": "24142837",
"title": "Concreteness ratings for 40 thousand generally known English word lemmas.",
"abstract": "Concreteness ratings are presented for 37,058 English words and 2,896 two-word expressions (such as zebra crossing and zoom in), obtained from over 4,000 participants by means of a norming study using Internet crowdsourcing for data collection. Although the instructions stressed that the assessment of word concreteness would be based on experiences involving all senses and motor responses, a comparison with the existing concreteness norms indicates that participants, as before, largely focused on visual and haptic experiences. The reported data set is a subset of a comprehensive list of English lemmas and contains all lemmas known by at least 85 % of the raters. It can be used in future research as a reference list of generally known English lemmas."
},
{
"pmid": "26173209",
"title": "There Are Many Ways to See the Forest for the Trees: A Tour Guide for Abstraction.",
"abstract": "Abstraction is a useful process for broadening mental horizons, integrating new experiences, and communicating information to others. Much attention has been directed at identifying the causes and consequences of abstraction across the subdisciplines of psychology. Despite this attention, an integrative review of the methods that are used for studying abstraction is missing from the literature. The current article aims to fill this gap in several ways. First, we highlight the different ways in which abstraction has been defined in the literature and then suggest an integrative definition. Second, we provide a tour of the different ways abstraction has been manipulated and measured over the years. Finally, we highlight considerations for researchers in choosing methods for their own research."
},
{
"pmid": "20658386",
"title": "The differential dependence of abstract and concrete words upon associative and similarity-based information: Complementary semantic interference and facilitation effects.",
"abstract": "We report mirror-image effects of interference and facilitation in the semantic processing of identical sets of abstract and concrete words in a patient F.B.I. with global aphasia following a large left middle cerebral artery stroke. Interference was elicited when the tasks involved comprehending the spoken form of each word, but facilitation was found when the patient read aloud the written forms of the same words. More importantly, irrespective of whether the dynamic effect was one of facilitation or interference, effects of semantic association were observed for abstract words, whilst effects primarily of semantic similarity were observed for concrete words. These results offer further neuropsychological evidence that the more abstract a word, the greater its dependence upon associative information and the smaller its dependence upon similarity-based information. The investigations also contribute to a converging body of evidence that suggests that this theory generalizes across different experimental paradigms, stimuli, and participants and also across different cognitive processes within individual patients. The data support a graded rather than binary or ungraded model of the relationships between concreteness, association, and similarity, and the basis for concrete words' greater dependence upon similarity-based information is discussed in terms of the development of taxonomic structures and categorical thought in young children."
},
{
"pmid": "27458422",
"title": "Semantic Neighborhood Effects for Abstract versus Concrete Words.",
"abstract": "Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words."
},
{
"pmid": "21139171",
"title": "Beyond the abstract-concrete dichotomy: mode of acquisition, concreteness, imageability, familiarity, age of acquisition, context availability, and abstractness norms for a set of 417 Italian words.",
"abstract": "The main objective of this study is to investigate the abstract-concrete dichotomy by introducing a new variable: the mode of acquisition (MoA) of a concept. MoA refers to the way in which concepts are acquired: through experience, through language, or through both. We asked 250 participants to rate 417 words on seven dimensions: age of acquisition, concreteness, familiarity, context availability, imageability, abstractness, and MoA. The data were analyzed by considering MoA ratings and their relationship with the other psycholinguistic variables. Distributions for concreteness, abstractness, and MoA ratings indicate that they are qualitatively different. A partial correlation analysis revealed that MoA is an independent predictor of concreteness or abstractness, and a hierarchical multiple regression analysis confirmed MoA as being a valid predictor of abstractness. Strong correlations with measures for the English translation equivalents in the MRC database confirmed the reliability of our norms. The full database of MoA ratings and other psycholinguistic variables may be downloaded from http://brm.psychonomic-journals.org/content/supplemental or www.abstract-project.eu."
},
{
"pmid": "17950260",
"title": "Immediate integration of novel meanings: N400 support for an embodied view of language comprehension.",
"abstract": "A substantial part of language understanding depends on our previous experiences, but part of it consists of the creation of new meanings. Such new meanings cannot be retrieved from memory but still have to be constructed. The goals of this article were: first, to explore the nature of new meaning creation, and second, to test abstract symbol theories against embodied theories of meaning. We presented context-setting sentences followed by a test sentence to which ERPs were recorded that described a novel sensible or novel senseless situation (e.g., \"The boys searched for branches/bushes [sensible/senseless] with which they went drumming...\"). Novel sensible contexts that were not associatively nor semantically related were matched to novel senseless contexts in terms of familiarity and semantic similarity by Latent Semantic Analysis (LSA). Abstract symbol theories like LSA cannot explain facilitation for novel sensible situations, whereas the embodied theory of Glenberg and Robertson [Glenberg, A.M., Robertson, D.A., 2000. Symbol grounding and meaning: A comparison of high-dimensional and embodied theories of meaning. Journal of Memory and Language, 43, 379-401.] in which meaning is grounded in perception and action can account for facilitation. Experiment 1 revealed an N400 effect in a sensibility judgment task. Experiment 2 demonstrated that this effect generalizes to a situation in which participants read for comprehension. Our findings support the following conclusions: First, participants can establish new meanings not stored in memory. Second, this is the first ERP study that shows that N400 is sensitive to new meanings and that these are created immediately - that is, in the same time frame as associative and semantic relations. Third, our N400 effects support embodied theories of meaning and challenge abstract symbol theories that can only discover meaningfulness by consulting stored symbolic knowledge."
},
{
"pmid": "19298961",
"title": "Activating event knowledge.",
"abstract": "An increasing number of results in sentence and discourse processing demonstrate that comprehension relies on rich pragmatic knowledge about real-world events, and that incoming words incrementally activate such knowledge. If so, then even outside of any larger context, nouns should activate knowledge of the generalized events that they denote or typically play a role in. We used short stimulus onset asynchrony priming to demonstrate that (1) event nouns prime people (sale-shopper) and objects (trip-luggage) commonly found at those events; (2) location nouns prime people/animals (hospital-doctor) and objects (barn-hay) commonly found at those locations; and (3) instrument nouns prime things on which those instruments are commonly used (key-door), but not the types of people who tend to use them (hose-gardener). The priming effects are not due to normative word association. On our account, facilitation results from event knowledge relating primes and targets. This has much in common with computational models like LSA or BEAGLE in which one word primes another if they frequently occur in similar contexts. LSA predicts priming for all six experiments, whereas BEAGLE correctly predicted that priming should not occur for the instrument-people relation but should occur for the other five. We conclude that event-based relations are encoded in semantic memory and computed as part of word meaning, and have a strong influence on language comprehension."
},
{
"pmid": "23941240",
"title": "A quantitative empirical analysis of the abstract/concrete distinction.",
"abstract": "This study presents original evidence that abstract and concrete concepts are organized and represented differently in the mind, based on analyses of thousands of concepts in publicly available data sets and computational resources. First, we show that abstract and concrete concepts have differing patterns of association with other concepts. Second, we test recent hypotheses that abstract concepts are organized according to association, whereas concrete concepts are organized according to (semantic) similarity. Third, we present evidence suggesting that concrete representations are more strongly feature-based than abstract concepts. We argue that degree of feature-based structure may fundamentally determine concreteness, and we discuss implications for cognitive and computational models of meaning."
},
{
"pmid": "23239067",
"title": "Semantic diversity: a measure of semantic ambiguity based on variability in the contextual usage of words.",
"abstract": "Semantic ambiguity is typically measured by summing the number of senses or dictionary definitions that a word has. Such measures are somewhat subjective and may not adequately capture the full extent of variation in word meaning, particularly for polysemous words that can be used in many different ways, with subtle shifts in meaning. Here, we describe an alternative, computationally derived measure of ambiguity based on the proposal that the meanings of words vary continuously as a function of their contexts. On this view, words that appear in a wide range of contexts on diverse topics are more variable in meaning than those that appear in a restricted set of similar contexts. To quantify this variation, we performed latent semantic analysis on a large text corpus to estimate the semantic similarities of different linguistic contexts. From these estimates, we calculated the degree to which the different contexts associated with a given word vary in their meanings. We term this quantity a word's semantic diversity (SemD). We suggest that this approach provides an objective way of quantifying the subtle, context-dependent variations in word meaning that are often present in language. We demonstrate that SemD is correlated with other measures of ambiguity and contextual variability, as well as with frequency and imageability. We also show that SemD is a strong predictor of performance in semantic judgments in healthy individuals and in patients with semantic deficits, accounting for unique variance beyond that of other predictors. SemD values for over 30,000 English words are provided as supplementary materials."
},
{
"pmid": "25751041",
"title": "Opposing effects of semantic diversity in lexical and semantic relatedness decisions.",
"abstract": "Semantic ambiguity has often been divided into 2 forms: homonymy, referring to words with 2 unrelated interpretations (e.g., bark), and polysemy, referring to words associated with a number of varying but semantically linked uses (e.g., twist). Typically, polysemous words are thought of as having a fixed number of discrete definitions, or \"senses,\" with each use of the word corresponding to one of its senses. In this study, we investigated an alternative conception of polysemy, based on the idea that polysemous variation in meaning is a continuous, graded phenomenon that occurs as a function of contextual variation in word usage. We quantified this contextual variation using semantic diversity (SemD), a corpus-based measure of the degree to which a particular word is used in a diverse set of linguistic contexts. In line with other approaches to polysemy, we found a reaction time (RT) advantage for high SemD words in lexical decision, which occurred for words of both high and low imageability. When participants made semantic relatedness decisions to word pairs, however, responses were slower to high SemD pairs, irrespective of whether these were related or unrelated. Again, this result emerged irrespective of the imageability of the word. The latter result diverges from previous findings using homonyms, in which ambiguity effects have only been found for related word pairs. We argue that participants were slower to respond to high SemD words because their high contextual variability resulted in noisy, underspecified semantic representations that were more difficult to compare with one another. We demonstrated this principle in a connectionist computational model that was trained to activate distributed semantic representations from orthographic inputs. Greater variability in the orthography-to-semantic mappings of high SemD words resulted in a lower degree of similarity for related pairs of this type. At the same time, the representations of high SemD unrelated pairs were less distinct from one another. In addition, the model demonstrated more rapid semantic activation for high SemD words, thought to underpin the processing advantage in lexical decision. These results support the view that polysemous variation in word meaning can be conceptualized in terms of graded variation in distributed semantic representations."
},
{
"pmid": "21139165",
"title": "Leipzig Affective Norms for German: a reliability study.",
"abstract": "To facilitate investigations of verbal emotional processing, we introduce the Leipzig Affective Norms for German (LANG), a list of 1,000 German nouns that have been rated for emotional valence, arousal, and concreteness. A critical factor regarding the quality of normative word data is their reliability. We therefore acquired ratings from a sample that was tested twice, with an interval of 2 years, to calculate test-retest reliability. Furthermore, we recruited a second sample to test reliability across independent samples. The results show (1) the typical quadratic relation of valence and arousal, replicating previous data, (2) very high test-retest reliability (>.95), and (3) high correlations between the two samples (>.85). Because the range of ratings was also very high, we provide a comprehensive set of words with reliable affective norms, which makes it possible to select highly controlled subsamples varying in emotional status. The database is available as a supplement for this article at http://brm.psychonomic-journals.org/content/supplemental."
},
{
"pmid": "30886898",
"title": "Effects of Budesonide Combined with Noninvasive Ventilation on PCT, sTREM-1, Chest Lung Compliance, Humoral Immune Function and Quality of Life in Patients with AECOPD Complicated with Type II Respiratory Failure.",
"abstract": "OBJECTIVE\nOur objective is to explore the effects of budesonide combined with noninvasive ventilation on procalcitonin (PCT), soluble myeloid cell triggering receptor-1 (sTREM-1), thoracic and lung compliance, humoral immune function, and quality of life in patients with acute exacerbation of chronic obstructive pulmonary disease (AECOPD) complicated with type II respiratory failure.\n\n\nMETHODS\nThere were 82 patients with AECOPD complicated with type II respiratory failure admitted into our hospital between March, 2016-September, 2017. They were selected and randomly divided into observation group (n=41) and control group (n=41). The patients in the control group received noninvasive mechanical ventilation and the patients in the observation group received budesonide based on the control group. The treatment courses were both 10 days.\n\n\nRESULTS\nThe total effective rate in the observation group (90.25%) was higher than the control group (65.85%) (P<0.05). The scores of cough, expectoration, and dyspnea were decreased after treatment (Observation group: t=18.7498, 23.2195, 26.0043, control group: t=19.9456, 11.6261, 14.2881, P<0.05); the scores of cough, expectoration, and dyspnea in the observation group were lower than the control group after treatment (t=11.6205, 17.4139, 11.6484, P<0.05). PaO2 was increased and PaCO2 was decreased in both groups after treatment (Observation group: t=24.1385, 20.7360, control group: t=11.6606, 9.2268, P<0.05); PaO2 was higher and PaCO2 was lower in the observation group than the control group after treatment (t=10.3209, 12.0115, P<0.05). Serum PCT and sTREM-1 in both groups were decreased after treatment (Observation group: t=16.2174, 12.6698, control group: t=7.2283, 6.1634, P<0.05); serum PCT and sTREM-1 in the observation group were lower than the control group after treatment (t=10.1017, 7.8227, P<0.05). The thoracic and lung compliance in both groups were increased after treatment (Observation group: t=30.5359, 17.8471, control group: t=21.2426, 13.0007, P<0.05); the thoracic and lung compliance in the observation group were higher than the control group after treatment (t=10.8079, 5.9464, P<0.05). IgA and IgG in both groups were increased after treatment (Observation group: t=9.5794, 25.3274, control group: t=5.5000, 4.7943, P<0.05), however IgM was not statistically different after treatment (Observation group: t=0.7845, control group: t=0.1767, P>0.05); IgA and IgG in the observation group were higher than the control group (t=4.9190, 4.7943, P<0.05), however IgM was not statistically different between two groups after treatment (t=0.6168, P>0.05). COPD assessment test (CAT) scores were decreased in both groups after treatment (Observation group: t=20.6781, control group: t=9.0235, P<0.05); CAT score in the observation group was lower than the control group after treatment (t=12.9515, P<0.05). Forced expiratory volume in one second (FEV1%) and forced expiratory volume in one second/ forced expiratory volume in one second (FEV1/FVC) were increased in both groups after treatment (Observation group: t=15.3684, 15.9404, control group: t=10.6640, 12.8979, P<0.05); FEV1% and FEV1/FVC in the observation group were higher than the control group (t=6.9528, 7.3527,P<0.05). The rates of complication were not statistically different between two groups (P>0.05).\n\n\nCONCLUSION\nBudesonide combined with noninvasive mechanical ventilation has good curative effects in treating AECOPE patients complicated with type II respiratory failure. 
It can decrease serum PCT and sTREM-1, increase thoracic lung compliance, and improve the humoral immune function and life quality."
},
{
"pmid": "21171803",
"title": "The representation of abstract words: why emotion matters.",
"abstract": "Although much is known about the representation and processing of concrete concepts, knowledge of what abstract semantics might be is severely limited. In this article we first address the adequacy of the 2 dominant accounts (dual coding theory and the context availability model) put forward in order to explain representation and processing differences between concrete and abstract words. We find that neither proposal can account for experimental findings and that this is, at least partly, because abstract words are considered to be unrelated to experiential information in both of these accounts. We then address a particular type of experiential information, emotional content, and demonstrate that it plays a crucial role in the processing and representation of abstract concepts: Statistically, abstract words are more emotionally valenced than are concrete words, and this accounts for a residual latency advantage for abstract words, when variables such as imageability (a construct derived from dual coding theory) and rated context availability are held constant. We conclude with a discussion of our novel hypothesis for embodied abstract semantics."
},
{
"pmid": "19182119",
"title": "Using the World-Wide Web to obtain large-scale word norms: 190,212 ratings on a set of 2,654 German nouns.",
"abstract": "This article presents a new database of 2,654 German nouns rated by a sample of 3,907 subjects on three psycholinguistic attributes: concreteness, valence, and arousal. As a new means of data collection in the field of psycholinguistic research, all ratings were obtained via the Internet, using a tailored Web application. Analysis of the obtained word norms showed good agreement with two existing norm sets. A cluster analysis revealed a plausible set of four classes of nouns: abstract concepts, aversive events, pleasant activities, and physical objects. In an additional application example, we demonstrate the usefulness of the database for creating parallel word lists whose elements match as closely as possible. The complete database is available for free from ftp://ftp.uni-duesseldorf.de/pub/psycho/lahl/WWN. Moreover, the Web application used for data collection is inherently capable of collecting word norms in any language and is going to be released for public use as well."
},
{
"pmid": "29630777",
"title": "The Emotions of Abstract Words: A Distributional Semantic Analysis.",
"abstract": "Recent psycholinguistic and neuroscientific research has emphasized the crucial role of emotions for abstract words, which would be grounded by affective experience, instead of a sensorimotor one. The hypothesis of affective embodiment has been proposed as an alternative to the idea that abstract words are linguistically coded and that linguistic processing plays a key role in their acquisition and processing. In this paper, we use distributional semantic models to explore the complex interplay between linguistic and affective information in the representation of abstract words. Distributional analyses on Italian norming data show that abstract words have more affective content and tend to co-occur with contexts with higher emotive values, according to affective statistical indices estimated in terms of distributional similarity with a restricted number of seed words strongly associated with a set of basic emotions. Therefore, the strong affective content of abstract words might just be an indirect byproduct of co-occurrence statistics. This is consistent with a version of representational pluralism in which concepts that are fully embodied either at the sensorimotor or at the affective level live side-by-side with concepts only indirectly embodied via their linguistic associations with other embodied words."
},
{
"pmid": "19363198",
"title": "Modality exclusivity norms for 423 object properties.",
"abstract": "Recent work has shown that people routinely use perceptual information during language comprehension and conceptual processing, from single-word recognition to modality-switching costs in property verification. In investigating such links between perceptual and conceptual representations, the use of modality-specific stimuli plays a central role. To aid researchers working in this area, we provide a set of norms for 423 adjectives, each describing an object property, with mean ratings of how strongly that property is experienced through each of five perceptual modalities (visual, haptic, auditory, olfactory, and gustatory). The data set also contains estimates of modality exclusivity--that is, a measure of the extent to which a particular property may be considered unimodal (i.e., perceived through one sense alone). Although there already exists a number of sets of word and object norms, we provide the first set to categorize words describing object properties along the dimensions of the five perceptual modalities. We hope that the norms will be of use to researchers working at the interface between linguistic, conceptual, and perceptual systems. The modality exclusivity norms may be downloaded as supplemental materials for this article from brm.psychonomic-journals.org/content/supplemental."
},
{
"pmid": "23055172",
"title": "Modality exclusivity norms for 400 nouns: the relationship between perceptual experience and surface word form.",
"abstract": "We present modality exclusivity norms for 400 randomly selected noun concepts, for which participants provided perceptual strength ratings across five sensory modalities (i.e., hearing, taste, touch, smell, and vision). A comparison with previous norms showed that noun concepts are more multimodal than adjective concepts, as nouns tend to subsume multiple adjectival property concepts (e.g., perceptual experience of the concept baby involves auditory, haptic, olfactory, and visual properties, and hence leads to multimodal perceptual strength). To show the value of these norms, we then used them to test a prediction of the sound symbolism hypothesis: Analysis revealed a systematic relationship between strength of perceptual experience in the referent concept and surface word form, such that distinctive perceptual experience tends to attract distinctive lexical labels. In other words, modality-specific norms of perceptual strength are useful for exploring not just the nature of grounded concepts, but also the nature of form-meaning relationships. These norms will be of benefit to those interested in the representational nature of concepts, the roles of perceptual information in word processing and in grounded cognition more generally, and the relationship between form and meaning in language development and evolution."
},
{
"pmid": "30684227",
"title": "Recovering the variance of d' from hit and false alarm statistics.",
"abstract": "Sometimes the reports of primary studies that are potentially analyzable within the signal detection theory framework do not report sample statistics for its main indexes, especially the sample variance of d'. We describe a procedure for estimating the variance of d' from other sample statistics (specifically, the mean and variance of the observed rates of hit and false alarm). The procedure acknowledges that individuals can be heterogeneous in their sensitivity and/or decision criteria, and it does not adopt unjustifiable or needlessly complex assumptions. In two simulation studies reported here, we show that the procedure produces certain biases, but, when used in meta-analysis, it produces very reasonable results. Specifically, the weighted estimate of the mean sensitivity is very accurate, and the coverage of the confidence interval is very close to the nominal confidence level. We applied the procedure to 20 experimental groups or conditions from seven articles (employing recognition memory or attention tasks) that reported statistics for both the hit and false alarm rates, as well as for d'. In most of these studies the assumption of homogeneity was untenable. The variances estimated by our method, based on the hit and false alarm rates, approximate reasonably to the variances in d' reported in those articles. The method is useful for estimating unreported variances of d', so that the associated studies can be retained for meta-analyses."
},
{
"pmid": "25695623",
"title": "How useful are corpus-based methods for extrapolating psycholinguistic variables?",
"abstract": "Subjective ratings for age of acquisition, concreteness, affective valence, and many other variables are an important element of psycholinguistic research. However, even for well-studied languages, ratings usually cover just a small part of the vocabulary. A possible solution involves using corpora to build a semantic similarity space and to apply machine learning techniques to extrapolate existing ratings to previously unrated words. We conduct a systematic comparison of two extrapolation techniques: k-nearest neighbours, and random forest, in combination with semantic spaces built using latent semantic analysis, topic model, a hyperspace analogue to language (HAL)-like model, and a skip-gram model. A variant of the k-nearest neighbours method used with skip-gram word vectors gives the most accurate predictions but the random forest method has an advantage of being able to easily incorporate additional predictors. We evaluate the usefulness of the methods by exploring how much of the human performance in a lexical decision task can be explained by extrapolated ratings for age of acquisition and how precisely we can assign words to discrete categories based on extrapolated ratings. We find that at least some of the extrapolation methods may introduce artefacts to the data and produce results that could lead to different conclusions that would be reached based on the human ratings. From a practical point of view, the usefulness of ratings extrapolated with the described methods may be limited."
},
{
"pmid": "11814216",
"title": "Rethinking the word frequency effect: the neglected role of distributional information in lexical processing.",
"abstract": "Attempts to quantify lexical variation have produced a large number of theoretical and empirical constructs, such as Word Frequency, Concreteness, and Ambiguity, which have been claimed to predict between-word differences in lexical processing behavior. Models of word recognition that have been developed to account for the effects of these variables have typically lacked adequate semantic representations, and have dealt with words as if they exist in isolation from their environment. We present a new dimension of lexical variation that is addressed to this concern. Contextual Distinctiveness (CD), a corpus-derived summary measure of the frequency distribution of the contexts in which a word occurs, is naturally compatible with contextual theories of semantic representation and meaning. Experiment 1 demonstrates that CD is a significantly better predictor of lexical decision latencies than occurrence frequency, suggesting that CD is the more psychologically relevant variable. We additionally explore the relationship between CD and six subjectively-defined measures: Concreteness, Context Availability, Number of Contexts, Ambiguity, Age of Acquisition and Familiarity and find CD to be reliably related to Ambiguity only. We argue for the priority of immediate context in determining the representation and processing of language."
},
{
"pmid": "1790654",
"title": "Semantic networks of English.",
"abstract": "Principles of lexical semantics developed in the course of building an on-line lexical database are discussed. The approach is relational rather than componential. The fundamental semantic relation is synonymy, which is required in order to define the lexicalized concepts that words can be used to express. Other semantic relations between these concepts are then described. No single set of semantic relations or organizational structure is adequate for the entire lexicon: nouns, adjectives, and verbs each have their own semantic relations and their own organization determined by the role they must play in the construction of linguistic messages."
},
{
"pmid": "28707214",
"title": "Statistical and methodological problems with concreteness and other semantic variables: A list memory experiment case study.",
"abstract": "The purpose of this article is to highlight problems with a range of semantic psycholinguistic variables (concreteness, imageability, individual modality norms, and emotional valence) and to provide a way of avoiding these problems. Focusing on concreteness, I show that for a large class of words in the Brysbaert, Warriner, and Kuperman (Behavior Research Methods 46: 904-911, 2013) concreteness norms, the mean concreteness values do not reflect the judgments that actual participants made. This problem applies to nearly every word in the middle of the concreteness scale. Using list memory experiments as a case study, I show that many of the \"abstract\" stimuli in concreteness experiments are not unequivocally abstract. Instead, they are simply those words about which participants tend to disagree. I report three replications of list memory experiments in which the contrast between concrete and abstract stimuli was maximized, so that the mean concreteness values were accurate reflections of participants' judgments. The first two experiments did not produce a concreteness effect. After I introduced an additional control, the third experiment did produce a concreteness effect. The article closes with a discussion of the implications of these results, as well as a consideration of variables other than concreteness. The sensorimotor experience variables (imageability and individual modality norms) show the same distribution as concreteness. The distribution of emotional valence scores is healthier, but variability in ratings takes on a special significance for this measure because of how the scale is constructed. I recommend that researchers using these variables keep the standard deviations of the ratings of their stimuli as low as possible."
},
{
"pmid": "23205008",
"title": "The semantic richness of abstract concepts.",
"abstract": "We contrasted the predictive power of three measures of semantic richness-number of features (NFs), contextual dispersion (CD), and a novel measure of number of semantic neighbors (NSN)-for a large set of concrete and abstract concepts on lexical decision and naming tasks. NSN (but not NF) facilitated processing for abstract concepts, while NF (but not NSN) facilitated processing for the most concrete concepts, consistent with claims that linguistic information is more relevant for abstract concepts in early processing. Additionally, converging evidence from two datasets suggests that when NSN and CD are controlled for, the features that most facilitate processing are those associated with a concept's physical characteristics and real-world contexts. These results suggest that rich linguistic contexts (many semantic neighbors) facilitate early activation of abstract concepts, whereas concrete concepts benefit more from rich physical contexts (many associated objects and locations)."
},
{
"pmid": "24998307",
"title": "Reproducing affective norms with lexical co-occurrence statistics: Predicting valence, arousal, and dominance.",
"abstract": "Human ratings of valence, arousal, and dominance are frequently used to study the cognitive mechanisms of emotional attention, word recognition, and numerous other phenomena in which emotions are hypothesized to play an important role. Collecting such norms from human raters is expensive and time consuming. As a result, affective norms are available for only a small number of English words, are not available for proper nouns in English, and are sparse in other languages. This paper investigated whether affective ratings can be predicted from length, contextual diversity, co-occurrences with words of known valence, and orthographic similarity to words of known valence, providing an algorithm for estimating affective ratings for larger and different datasets. Our bootstrapped ratings achieved correlations with human ratings on valence, arousal, and dominance that are on par with previously reported correlations across gender, age, education and language boundaries. We release these bootstrapped norms for 23,495 English words."
},
{
"pmid": "28818790",
"title": "Effects of semantic neighborhood density in abstract and concrete words.",
"abstract": "Concrete and abstract words are thought to differ along several psycholinguistic variables, such as frequency and emotional content. Here, we consider another variable, semantic neighborhood density, which has received much less attention, likely because semantic neighborhoods of abstract words are difficult to measure. Using a corpus-based method that creates representations of words that emphasize featural information, the current investigation explores the relationship between neighborhood density and concreteness in a large set of English nouns. Two important observations emerge. First, semantic neighborhood density is higher for concrete than for abstract words, even when other variables are accounted for, especially for smaller neighborhood sizes. Second, the effects of semantic neighborhood density on behavior are different for concrete and abstract words. Lexical decision reaction times are fastest for words with sparse neighborhoods; however, this effect is stronger for concrete words than for abstract words. These results suggest that semantic neighborhood density plays a role in the cognitive and psycholinguistic differences between concrete and abstract words, and should be taken into account in studies involving lexical semantics. Furthermore, the pattern of results with the current feature-based neighborhood measure is very different from that with associatively defined neighborhoods, suggesting that these two methods should be treated as separate measures rather than two interchangeable measures of semantic neighborhoods."
},
{
"pmid": "28474609",
"title": "Rules to be adopted for publishing a scientific paper.",
"abstract": "The main question to ask himself when preparing to write an article is \"why publish a scientific paper?\" First of all to publish an own article qualifies his author - or authors - as \"scientist\". Because the surgery is a mixture of art and knowledge, which coexist and interreact mutually increasing each other, scientific publications are the world where ideas are shared. Secondly, to an academic career is essential to be Author of scientific publications; but also for those who follow an hospital career or simply exercise the surgical profession in other contexts it represents the opportunity to communicate their experience and give a personal contribution to the knowledge of the art. The commitment of the academic world in particular must also stimulate new generations to pursue not only technical skills but at the same time updating their knowledge, and its members must also take on the role of researchers. The dissemination of ideas in the scientific community is a milestone for progress, because if they are not shared their concrete value is fleeting, and professional surgical activity value is itself transient and ephemeral, while the written documentation very often goes beyond the time, but certainly beyond space, stably transmitting ideas: \"scripta manent\". To write a \"paper\" - as a scientific publication is conventionally and internationally named - requires compliance with specific rules, which make it suitable to diffusion and well used by the readers. These appropriate rules are stated in the similar although variable \"Guidelines for the Authors\" set by the editors of most scientific journals - as also of Annali Italiani di Chirurgia - on the common purpose of making clear, comprehensive and concise the exposure of the study that is the motivation of the publication. The printed papers - as well the more recent on-line publications in digital format - use a very different language from that spoken in conferences and in verbal communications. Exemplary is the form of presentation used in the best \"papers\" of British tradition, where every effort is aimed at the clarity and brevity, for definite, consequential and well understood conclusions. Beyond any residual national pride, in the scientific world that has been globalized well earlier than other sectors of civil life, it is not a surrendering to a foreign tradition to conform oneself to the British model when writing a paper - and not simply adopting the English language - with the certainty to better achieve the brevity, clarity and concreteness of an exhaustive communication. \"Be brief and you'll be good\" - this is a suggestion always of great value to overcome the congestion and convulsions of our times. Furthermore in following the rules suggested by the \"Guidelines for Authors\" in writing a paper gives the Author the adjunctive advantage of a preventive and autonomous checking the validity and interest of the article as for premises, objectivity and reality of conclusions, and therefore vehicle of at least of one clear element of knowledge and progress, although possibly and despite of a \"niche\" argument. A paper is much more effective as more focused on a well-defined theme and as such more easily understood and its conclusions more easily assimilated. 
Therefore in the formal preparation of a paper, the critical sense must develop itself and grow, added to the vocation of following and attain the curiosity of knowledge typical of surgery, but following the typical procedure of the medical profession in the approaching march to the diagnosis, and then to the identification of the correct therapeutic indications which must take into account individual patient characteristics. As the very technical skill in performing the therapy chosen and agreed with the patient, completion of the professional duties, must not leave aside a constant exercise of criticism and self-criticism at every stage of the profession, similarly this same critical sense is also necessary in the preparation of a scientific paper to transmit a concrete, valid and original scientific contribution. It is useful to keep in mind that, as any student knows and does perhaps unconsciously, those who read an article do not follow the order in which it is printed, but having considered the catchy title go directly to review the conclusions. If still interested, they go on to the Abstract or to Summary, and only at this point if intrigued they read the Introduction, and then the material and method of study with their results, and finally the discussion, and to the end they read anew the Conclusions. However, in the formal drafting of a paper one has to remember this very likely sequence in reading an article and adapt accordingly: first put into focus the conclusions, which in fact are the reason of the publication. Therefore this must lead us to give up writing an article such as a report or a conference, and to conform attentively to the Anglo-Saxon model of presentation of a paper. A last point concerns the language of the presentation of a paper. Using own native language, in our case Italian, it is easy to the precise in language and is facilitated the communication of concepts, but that restricts the communication to own linguistics environment, whereas all scientific knowledge and the surgical one in particular, in accordance as stated by the motto of the International Society of Surgery \"La Science n'a pas de Patrie\", needs no linguistic boundaries. We should not feel therefore humiliated and colonized in adopting the English language in publishing our papers, because it has become the language of science globally adopted, indeed we must consider positively this choice for formal and substantial reasons. Formally the use of English forces ourselves to conform to a language traditionally pragmatic, schematic and synthetic typical of the Anglo-Saxon world, so renouncing to the usual subordinate phrases of Italian language, that may result contrary to its ingenuous purpose, making instead less clear and more foggy the concepts. This achieves in the meantime the advantage of a better and more schematic final clarity. We must take in mind from the very beginning the final concept you want to express with every sentence, and take it in the highest account. From a substantial point of view the adoption of English opens at the best the entire scientific world to all cultural contributions, no longer limited to the national linguistic areas, that now, in the globalized world of knowledge remains provincial even though vector of undisputed professional value and experience of Italian surgery. 
Any possible inadequacies of the used English language - that should be carefully avoided in terms of syntactic and orthographic rules, with the eventual help of a native language fellow - can anyways be accepted within certain limits as the price of a globalization of the diffusing knowledge, become even more evident by the introduction of digital editions. A special case is given by the publication of experiences derived from individual case reports. Clearly it is evident the impulse to disclose one's own individual experience, or because of its rare occurrence, or on the enthusiastic wave of a diagnosis successfully completed or because of the own satisfaction in choosing and performing an effective treatment successfully achieved thank to a surgical technique exceptional or of particularly difficulty. One can, however, make the mistake of aiming to publish \"a case report\" simply to show off one's skills and personal professional value. It is a short-sighted goal that gives the author an ephemeral satisfaction, but it will almost inevitably penalizated in the judgment of colleagues who read it. For psychological reasons it is difficult for someone to cheer the professional success of a not related fellow, and therefore it is advisable to refrain from this type of publication, which is a waste of time not very profitable, both to the one's reputation and for the likely rejection by the most accredited scientific journals. The publication of a case report must follow the same rules set for a \"genuine article\", with the difference that in the introduction has to be immediately highlighted the particularity of the experience, possibly framing it in common knowledge. The presentation of the clinical and strategic aspects is the result of a careful reflection on the surgical experience lived, because its exposure has to be very different from an extemporaneous oral presentation, which is by nature open to a free immediate confrontation in oral discussions that follow."
},
{
"pmid": "24808876",
"title": "Clustering, hierarchical organization, and the topography of abstract and concrete nouns.",
"abstract": "The empirical study of language has historically relied heavily upon concrete word stimuli. By definition, concrete words evoke salient perceptual associations that fit well within feature-based, sensorimotor models of word meaning. In contrast, many theorists argue that abstract words are \"disembodied\" in that their meaning is mediated through language. We investigated word meaning as distributed in multidimensional space using hierarchical cluster analysis. Participants (N = 365) rated target words (n = 400 English nouns) across 12 cognitive dimensions (e.g., polarity, ease of teaching, emotional valence). Factor reduction revealed three latent factors, corresponding roughly to perceptual salience, affective association, and magnitude. We plotted the original 400 words for the three latent factors. Abstract and concrete words showed overlap in their topography but also differentiated themselves in semantic space. This topographic approach to word meaning offers a unique perspective to word concreteness."
},
{
"pmid": "27138012",
"title": "The principals of meaning: Extracting semantic dimensions from co-occurrence models of semantics.",
"abstract": "Notable progress has been made recently on computational models of semantics using vector representations for word meaning (Mikolov, Chen, Corrado, & Dean, 2013; Mikolov, Sutskever, Chen, Corrado, & Dean, 2013). As representations of meaning, recent models presumably hone in on plausible organizational principles for meaning. We performed an analysis on the organization of the skip-gram model's semantic space. Consistent with human performance (Osgood, Suci, & Tannenbaum, 1957), the skip-gram model primarily relies on affective distinctions to organize meaning. We showed that the skip-gram model accounts for unique variance in behavioral measures of lexical access above and beyond that accounted for by affective and lexical measures. We also raised the possibility that word frequency predicts behavioral measures of lexical access due to the fact that word use is organized by semantics. Deconstruction of the semantic representations in semantic models has the potential to reveal organizing principles of human semantics."
},
{
"pmid": "23408565",
"title": "The neural representation of abstract words: the role of emotion.",
"abstract": "It is generally assumed that abstract concepts are linguistically coded, in line with imaging evidence of greater engagement of the left perisylvian language network for abstract than concrete words (Binder JR, Desai RH, Graves WW, Conant LL. 2009. Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex. 19:2767-2796; Wang J, Conder JA, Blitzer DN, Shinkareva SV. 2010. Neural representation of abstract and concrete concepts: A meta-analysis of neuroimaging studies. Hum Brain Map. 31:1459-1468). Recent behavioral work, which used tighter matching of items than previous studies, however, suggests that abstract concepts also entail affective processing to a greater extent than concrete concepts (Kousta S-T, Vigliocco G, Vinson DP, Andrews M, Del Campo E. The representation of abstract words: Why emotion matters. J Exp Psychol Gen. 140:14-34). Here we report a functional magnetic resonance imaging experiment that shows greater engagement of the rostral anterior cingulate cortex, an area associated with emotion processing (e.g., Etkin A, Egner T, Peraza DM, Kandel ER, Hirsch J. 2006. Resolving emotional conflict: A role for the rostral anterior cingulate cortex in modulating activity in the amygdala. Neuron. 52:871), in abstract processing. For abstract words, activation in this area was modulated by the hedonic valence (degree of positive or negative affective association) of our items. A correlation analysis of more than 1,400 English words further showed that abstract words, in general, receive higher ratings for affective associations (both valence and arousal) than concrete words, supporting the view that engagement of emotional processing is generally required for processing abstract words. We argue that these results support embodiment views of semantic representation, according to which, whereas concrete concepts are grounded in our sensory-motor experience, affective experience is crucial in the grounding of abstract concepts."
},
{
"pmid": "23404613",
"title": "Norms of valence, arousal, and dominance for 13,915 English lemmas.",
"abstract": "Information about the affective meanings of words is used by researchers working on emotions and moods, word recognition and memory, and text-based sentiment analysis. Three components of emotions are traditionally distinguished: valence (the pleasantness of a stimulus), arousal (the intensity of emotion provoked by a stimulus), and dominance (the degree of control exerted by a stimulus). Thus far, nearly all research has been based on the ANEW norms collected by Bradley and Lang (1999) for 1,034 words. We extended that database to nearly 14,000 English lemmas, providing researchers with a much richer source of information, including gender, age, and educational differences in emotion norms. As an example of the new possibilities, we included stimuli from nearly all of the category norms (e.g., types of diseases, occupations, and taboo words) collected by Van Overschelde, Rawson, and Dunlosky (Journal of Memory and Language 50:289-335, 2004), making it possible to include affect in studies of semantic memory."
},
{
"pmid": "21702791",
"title": "Content differences for abstract and concrete concepts.",
"abstract": "Concept properties are an integral part of theories of conceptual representation and processing. To date, little is known about conceptual properties of abstract concepts, such as idea. This experiment systematically compared the content of 18 abstract and 18 concrete concepts, using a feature generation task. Thirty-one participants listed characteristics of the concepts (i.e., item properties) or their relevant context (i.e., context properties). Abstract concepts had significantly fewer intrinsic item properties and more properties expressing subjective experiences than concrete concepts. Situation components generated for abstract and concrete concepts differed in kind, but not in number. Abstract concepts were predominantly related to social aspects of situations. Properties were significantly less specific for abstract than for concrete concepts. Thus, abstractness emerged as a function of several, both qualitative and quantitative, factors."
}
] |
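Several of the reference entries above (notably the corpus-based extrapolation study and the affective-norm bootstrapping work) describe predicting human ratings such as concreteness or valence for unrated words from distributional word vectors, for example with a k-nearest-neighbours scheme over a semantic space. The sketch below illustrates that general idea only; the toy vectors, the example ratings, and the function name are illustrative assumptions, not the pipeline of any cited study.

```python
import numpy as np

def knn_extrapolate(target_vec, vectors, ratings, k=3):
    """Predict a rating for an unrated word as the cosine-similarity-weighted
    mean of the ratings of its k most similar rated neighbours (illustration only)."""
    sims = []
    for word, value in ratings.items():
        v = vectors[word]
        cos = float(np.dot(target_vec, v) /
                    (np.linalg.norm(target_vec) * np.linalg.norm(v)))
        sims.append((cos, value))
    sims.sort(reverse=True)                              # most similar rated words first
    top = sims[:k]
    weights = np.array([max(c, 0.0) for c, _ in top])    # ignore negative similarities
    values = np.array([r for _, r in top])
    if weights.sum() == 0:
        return float(values.mean())
    return float(np.average(values, weights=weights))

# Toy 3-d "embeddings" and concreteness ratings (assumed values, for the demo only).
vectors = {
    "stone":   np.array([0.90, 0.10, 0.00]),
    "table":   np.array([0.80, 0.20, 0.10]),
    "justice": np.array([0.10, 0.90, 0.30]),
    "idea":    np.array([0.05, 0.85, 0.40]),
    "hammer":  np.array([0.85, 0.15, 0.05]),  # pretend this word lacks a human rating
}
ratings = {"stone": 4.9, "table": 4.8, "justice": 1.5, "idea": 1.4}  # 1 = abstract, 5 = concrete

print(round(knn_extrapolate(vectors["hammer"], vectors, ratings, k=2), 2))
```

With k = 2, the unrated word "hammer" inherits a high concreteness estimate (about 4.85) from its nearest rated neighbours "stone" and "table", which is the basic mechanism the extrapolation papers evaluate at much larger scale.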
Scientific Reports | 35241746 | PMC8894420 | 10.1038/s41598-022-07527-3 | A pavement distresses identification method optimized for YOLOv5s | Automatic detection and recognition of pavement distresses is the key to timely repair of pavement. Repairing the pavement distresses in time can prevent the destruction of road structure and the occurrence of traffic accidents. However, some other factors, such as a single object category, shading and occlusion, make detection of pavement distresses very challenging. In order to solve these problems, we use the improved YOLOv5 model to detect various pavement distresses. We optimize the YOLOv5 model and introduce attention mechanism to enhance the robustness of the model. The improved model is more suitable for deployment in embedded devices. The optimized model is transplanted to the self-built intelligent mobile platform. Experimental results show that the improved network model proposed in this paper can effectively identify pavement distresses on the self-built intelligent mobile platform and datasets. The precision, recall and mAP are 95.5%, 94.3% and 95%. Compared with YOLOv5s and YOLOv4 models, the mAP of the improved YOLOv5s model is increased by 4.3% and 25.8%. This method can provide technical reference for pavement distresses detection robot. | Related workArtificial intelligence, as the theoretical basis of deep learning, has been updated by researchers in recent years. Liu et al.13 proposed A long-term memory-based model for greenhouse climate prediction. He used long short-term memory (LSTM) model to capture the dependence between historical climate data. The unreliability of label data has been studied across borders. Liu, Qi et al.14,15 proposed a framework for tag noise filtering and missing Tag Supplement (LNFS). They take location tags in location-based social networks (LBSN) as an example to implement our framework. In addition, They propose an attention-based bidirectional gated recurrent unit (GRU) model for point-of-interest (POI) category Prediction (ABG_poic). They combine the attention Mechanism with Bidirectional GRU to Focus on history Check-in records, which can improve the interpretability of the model.Deep learning is gradually applied to the task of pavement distresses detection. Yusof et al.16 used deep convolutional neural networks for crack detection and classification of asphalt pavement images. In their study, the input to their network framework required clear and high-quality pictures with a relatively single category of predictions. This does not match the complexity of actual pavement distress. Xianglong et al.17 studied the recognition of road cracks based on VGG deep convolutional neural network, and the types of cracks include transverse, longitudinal and mesh. This has led to a certain increase in the variety of road diseases, but the VGG network has the disadvantage of a large number of network parameters and slow working speed, which cannot be ported to embedded devices in practical applications. In order to solve the problem of a single type of pavement disease, the number of parameters is large and the model cannot be well ported to the embedded device. V Mandal et al.18 proposed a deep learning framework-based pavement hazard study. He used the YOLOv5s framework for classification detection and expanded the detection sample, but the detection precision was low. 
Based on this, we need to further expand the data samples and restructure the network model to improve the efficiency of detecting pavement distress. Therefore, we adopt a one-stage algorithm. One-stage detectors combine high detection accuracy with excellent detection efficiency, and YOLO is the representative one-stage family, which has been continuously updated from YOLOv2 through YOLOv5. The YOLOv519 is the latest model in the YOLO20 family. The network has high detection precision and fast inference speed; the fastest detection speed can reach 140 frames per second. On the other hand, the weight file of the YOLOv5s object detection model is small, nearly 90% smaller than that of YOLOv421, which indicates that the YOLOv5 model is suitable for deployment on embedded devices for real-time detection. The advantages of YOLOv5s are therefore high detection precision, light weight and fast detection speed. There are four YOLOv5 architectures, named YOLOv5s, YOLOv5m, YOLOv5l and YOLOv5x; the main difference between them is the depth and width of the models. As there are five categories of objects to be identified in this study, the recognition model has high requirements on real-time performance and light weight. Therefore, we optimize and improve the model to increase both its accuracy and its efficiency; a minimal inference sketch with the stock YOLOv5s model is given after this record for illustration. | [
"21869365",
"17588345",
"34741057",
"29395652",
"26353135",
"31034408"
] | [
{
"pmid": "21869365",
"title": "A computational approach to edge detection.",
"abstract": "This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge."
},
{
"pmid": "17588345",
"title": "Support vector machine for SAR/QSAR of phenethyl-amines.",
"abstract": "AIM\nTo discriminate 32 phenethyl-amines between antagonists and agonists, and predict the activities of these compounds.\n\n\nMETHODS\nThe support vector machine (SVM) is employed to investigate the structure-activity relationship (SAR)/quantitative structure-activity relationship (QSAR) of phenethyl-amines based on molecular descriptors.\n\n\nRESULTS\nBy using the leave-one-out cross-validation (LOOCV) test, 1 optimal SAR and 2 optimal QSAR models for agonists and antagonists were attained. The accuracy of prediction for the classification of phenethyl-amines by using the LOOCV test is 91.67%, and the accuracy of prediction for the classification of phenethyl-amines by using the independent test is 100%; the results are better than those of the Fisher, the artificial neural network (ANN), and the K-nearest neighbor models for this real world data. The RMSE (root mean square error) of antagonists' QSAR model is 0.5881, and the RMSE of agonists' QSAR model is 0.4779, which are better than those of the multiple linear regression, partial least squares, and ANN models for this real world data.\n\n\nCONCLUSION\nThe SVM can be used to investigate the SAR and QSAR of phenethylamines and could be a promising tool in the field of SAR/QSAR research."
},
{
"pmid": "34741057",
"title": "Real-time detection of particleboard surface defects based on improved YOLOV5 target detection.",
"abstract": "Particleboard surface defect detection technology is of great significance to the automation of particleboard detection, but the current detection technology has disadvantages such as low accuracy and poor real-time performance. Therefore, this paper proposes an improved lightweight detection method of You Only Live Once v5 (YOLOv5), namely PB-YOLOv5 (Particle Board-YOLOv5). Firstly, the gamma-ray transform method and the image difference method are combined to deal with the uneven illumination of the acquired images, so that the uneven illumination is well corrected. Secondly, Ghost Bottleneck lightweight deep convolution module is added to Backbone module and Neck module of YOLOv5 detection algorithm to reduce model volume. Thirdly, the SELayer module of attention mechanism is added into Backbone module. Finally, replace Conv in Neck module with depthwise convolution (DWConv) to compress network parameters. The experimental results show that the PB-YOLOv5 model proposed in this paper can accurately identify five types of defects on the particleboard surface: Bigshavings, SandLeakage, GlueSpot, Soft and OliPollution, and meet the real-time requirements. Specifically, recall, F1 score, [email protected], [email protected]:.95 values of pB-Yolov5s model were 91.22%, 94.5%, 92.1%, 92.8% and 67.8%, respectively. The results of Soft defects were 92.8%, 97.9%, 95.3%, 99.0% and 81.7%, respectively. The detection of single image time of the model is only 0.031 s, and the weight size of the model is only 5.4 MB. Compared with the original YOLOv5s, YOLOv4, YOLOv3 and Faster RCNN, the PB-Yolov5s model has the fastest Detection of single image time. The Detection of single image time was accelerated by 34.0%, 55.1%, 64.4% and 87.9%, and the weight size of the model is compressed by 62.5%, 97.7%, 97.8% and 98.9%, respectively. The mAP value increased by 2.3%, 4.69%, 7.98% and 13.05%, respectively. The results show that the PB-YOLOV5 model proposed in this paper can realize the rapid and accurate detection of particleboard surface defects, and fully meet the requirements of lightweight embedded model."
},
{
"pmid": "29395652",
"title": "Sigmoid-weighted linear units for neural network function approximation in reinforcement learning.",
"abstract": "In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10 × 10 board, using TD(λ) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa(λ) agent with SiLU and dSiLU hidden units."
},
{
"pmid": "26353135",
"title": "Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.",
"abstract": "Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 × 224) input image. This requirement is \"artificial\" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, \"spatial pyramid pooling\", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 × faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition."
},
{
"pmid": "31034408",
"title": "Squeeze-and-Excitation Networks.",
"abstract": "The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the \"Squeeze-and-Excitation\" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251 percent, surpassing the winning entry of 2016 by a relative improvement of ∼ 25 percent. Models and code are available at https://github.com/hujie-frank/SENet."
}
] |
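The related-work passage in the record above motivates YOLOv5s as a light, fast detector for embedded, real-time pavement-distress detection. As a point of reference only, the stock pretrained YOLOv5s model can be loaded and run through PyTorch Hub as sketched below; this is plain ultralytics/yolov5 inference on COCO-pretrained weights, not the improved attention-augmented network the article proposes, and the image filename is a placeholder.

```python
import torch

# Load the stock, pretrained YOLOv5s checkpoint from PyTorch Hub (ultralytics/yolov5).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.25  # confidence threshold applied when reporting detections

# Placeholder filename; on the article's mobile platform this would be a camera frame.
results = model("pavement_frame.jpg")

# One row per detection: xmin, ymin, xmax, ymax, confidence, class, name.
# The pretrained weights cover the 80 COCO classes, so recognising the five
# pavement-distress categories would require retraining on a distress dataset.
print(results.pandas().xyxy[0][["name", "confidence"]])
```

Swapping in weights fine-tuned on a pavement-distress dataset, for instance via torch.hub.load("ultralytics/yolov5", "custom", path="best.pt") where best.pt is a hypothetical checkpoint, would be the closer analogue of the model the article actually deploys on its mobile platform.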
Frontiers in Psychology | null | PMC8895321 | 10.3389/fpsyg.2022.767389 | Establish a Digital Real-Time Learning System With Push Notifications | This study proposes a push notification system that combines digital real-time learning, roll-call, and feedback collection functions. With the gradually flourishing online real-time learning systems, this research further builds roll-call and feedback functions for students to enhance concentration and provide opinions. Additionally, the lecturers can do a roll call irregularly and randomly through the push notification function, avoiding students logging in but away from the keyboard. Lecturers can also send questions to a specific student or invite all students to answer; the replies can show students' learning performance. The system will store each notification in a database and analyse messages automatically to record roll calls. Moreover, the system can record the time and intervals of student feedback, enabling lecturers to check students' attention and learning conditions. Currently, most online digital systems depend on the lecturer to be responsible for the entire system; taking a roll call and asking questions will consume the lecturer's teaching time and strength. The system developed in this article can do roll calls and feedback by push notifications, reducing lecturers' workload. Furthermore, the roll-call and automatic record functions can save the time of paperwork after a course. | 2. Related WorksLiterature (Chen et al., 2018) primarily proposes a new model to explore users' inter-actions and the willingness to continue the discovery in Massive Open Online Courses (MOOCs). Literature (Hsu et al., 2018) further surveys a comparison about users' willingness to learn between MOOCs and traditional digital learning platforms. The research in Literature (Chang et al., 2019) mentions that virtualized MOOCs tend to cause distraction; therefore, using electroencephalography analysis to find out that students' learning performance in MOOCs is significantly superior to that of traditional PowerPoint-material teaching. The research in Literature (Chen and Hsiao-FenTseng, 2012) points out that it requires considering teachers' teaching methods on the platform and the preparation of teaching materials in digital learning. While maintaining students' concentration is critical in digital learning, the study in Literature (Chen, 2012) introduces a system to capture students' facial expressions by the camera, judging whether they are still paying attention to the class. Additionally, Literature (Hwang et al., 2021) reveals that most research has compared the learning performance between traditional textbook teaching and the teaching using electronic book readers in the classroom. In that article, the study investigates the use of writable electronic book readers in the English class in an elementary school and examines the learning results and achievements of the sharing mechanism with in-class and out-of-classroom notes; the findings have shown that a sharing electronic book reader with notes can benefit students' learning results. Meanwhile, Literature (Hwang et al., 2021) also mentions that the article number regarding MOOC research has been increasing in recent years after it becomes a mainstream method since 2011, and most research focuses on resolving the issues in self-learning. Literature (Alzahrani and Meccawy, 2021) further explores why MOOCs becomes popular among adult learners. 
The study discovers that many professionals utilize MOOCs for their career development because plentiful interdisciplinary courses are available. Literature (Wu and Li, 2020) points out that Electronic Word of Mouth (eWOM) is an important information source for learners in MOOCs; nonetheless, little research has discussed this factor. The new problem of learning resource mention identification in MOOC forums is introduced in An et al. (2019), i.e., identifying resource mentions in discussions and classifying them into predefined resource types. The study in Literature (Rohan et al., 2021) proposes and evaluates a theoretical model, an expectation-confirmation model extended with gamification and additional motivation constructs, to identify the factors that influence learners to continue participating in MOOCs. Literature (Ikhsan et al., 2019) offers an investigation model to understand students' online learning performance and satisfaction. Since online learning is considered a useful learning tool, Literature (Tzu-Chi, 2020) introduces many strategies that can be implemented in online-learning environments to reinforce students' performance and guide students; these strategies are particularly useful for participant observation and self-regulated learning. Literature (Murad et al., 2020) aims to confirm the preparation and conditions of the school, teachers, and students during the learning process while maintaining teaching quality, user satisfaction, and both teachers' and students' learning performance. Moreover, Literature (Wang et al., 2021) reveals that the advance of MOOCs has given rise to another disruptive learning method called blended learning. Existing research has shown that blended learning is not only widely accepted by university students but also well-liked. Compared to traditional teaching approaches, blended learning has transformed the relationship between teaching and learning; hence, the assessment criteria should be amended accordingly. Literature (Saliah-Hassane et al., 2019) defines the storage, retrieval, and access methods of online laboratories for intelligent interactive students. The discussion in Literature (Tao et al., 2014) indicates that most existing research on adaptive online learning systems focuses on students' pre- and post-learning performance but neglects the suitable teaching styles and the selected multimedia. Literature (Xing et al., 2021) also proposes a new predictive modeling approach to enhance model portability across different classes of one course as time goes on. Finally, Literature (Zel et al., 2021) states how facial recognition technology can create a fascinating learning experience for different online participants. | [
"27295638",
"27295638",
"27295638",
"27295638",
"22321703",
"27295638",
"27295638",
"27295638",
"27295638",
"27295638",
"16501260",
"25291732",
"27295638",
"27295638",
"27295638",
"27295638",
"27295638"
] | [
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "22321703",
"title": "Factors that influence acceptance of web-based e-learning systems for the in-service education of junior high school teachers in Taiwan.",
"abstract": "Web-based e-learning is not restricted by time or place and can provide teachers with a learning environment that is flexible and convenient, enabling them to efficiently learn, quickly develop their professional expertise, and advance professionally. Many research reports on web-based e-learning have neglected the role of the teacher's perspective in the acceptance of using web-based e-learning systems for in-service education. We distributed questionnaires to 402 junior high school teachers in central Taiwan. This study used the Technology Acceptance Model (TAM) as our theoretical foundation and employed the Structure Equation Model (SEM) to examine factors that influenced intentions to use in-service training conducted through web-based e-learning. The results showed that motivation to use and Internet self-efficacy were significantly positively associated with behavioral intentions regarding the use of web-based e-learning for in-service training through the factors of perceived usefulness and perceived ease of use. The factor of computer anxiety had a significantly negative effect on behavioral intentions toward web-based e-learning in-service training through the factor of perceived ease of use. Perceived usefulness and motivation to use were the primary reasons for the acceptance by junior high school teachers of web-based e-learning systems for in-service training."
},
{
"pmid": "16501260",
"title": "The impact of E-learning in medical education.",
"abstract": "The authors provide an introduction to e-learning and its role in medical education by outlining key terms, the components of e-learning, the evidence for its effectiveness, faculty development needs for implementation, evaluation strategies for e-learning and its technology, and how e-learning might be considered evidence of academic scholarship. E-learning is the use of Internet technologies to enhance knowledge and performance. E-learning technologies offer learners control over content, learning sequence, pace of learning, time, and often media, allowing them to tailor their experiences to meet their personal learning objectives. In diverse medical education contexts, e-learning appears to be at least as effective as traditional instructor-led methods such as lectures. Students do not see e-learning as replacing traditional instructor-led training but as a complement to it, forming part of a blended-learning strategy. A developing infrastructure to support e-learning within medical education includes repositories, or digital libraries, to manage access to e-learning materials, consensus on technical standardization, and methods for peer review of these resources. E-learning presents numerous research opportunities for faculty, along with continuing challenges for documenting scholarship. Innovations in e-learning technologies point toward a revolution in education, allowing learning to be individualized (adaptive learning), enhancing learners' interactions with others (collaborative learning), and transforming the role of the teacher. The integration of e-learning into medical education can catalyze the shift toward applying adult learning theory, where educators will no longer serve mainly as the distributors of content, but will become more involved as facilitators of learning and assessors of competency."
},
{
"pmid": "25291732",
"title": "Stochastic learning via optimizing the variational inequalities.",
"abstract": "A wide variety of learning problems can be posed in the framework of convex optimization. Many efficient algorithms have been developed based on solving the induced optimization problems. However, there exists a gap between the theoretically unbeatable convergence rate and the practically efficient learning speed. In this paper, we use the variational inequality (VI) convergence to describe the learning speed. To this end, we avoid the hard concept of regret in online learning and directly discuss the stochastic learning algorithms. We first cast the regularized learning problem as a VI. Then, we present a stochastic version of alternating direction method of multipliers (ADMMs) to solve the induced VI. We define a new VI-criterion to measure the convergence of stochastic algorithms. While the rate of convergence for any iterative algorithms to solve nonsmooth convex optimization problems cannot be better than O(1/√t), the proposed stochastic ADMM (SADMM) is proved to have an O(1/t) VI-convergence rate for the l1-regularized hinge loss problems without strong convexity and smoothness. The derived VI-convergence results also support the viewpoint that the standard online analysis is too loose to analyze the stochastic setting properly. The experiments demonstrate that SADMM has almost the same performance as the state-of-the-art stochastic learning algorithms but its O(1/t) VI-convergence rate is capable of tightly characterizing the real learning speed."
    }
] |
Frontiers in Neurorobotics | null | PMC8896344 | 10.3389/fnbot.2022.840240 | A Framework for Composite Layup Skill Learning and Generalizing Through Teleoperation | In this article, an impedance control-based framework for human-robot composite layup skill transfer was developed, and a human-in-the-loop mechanism was investigated to achieve human-robot skill transfer. Although there is prior work on human-robot skill transfer, it remains difficult to transfer manipulation skills to robots through teleoperation efficiently and intuitively. In this article, we developed an impedance-based control architecture for telemanipulation in task space to enable human-robot skill transfer through teleoperation. This framework not only achieves human-robot skill transfer but also provides a solution for human-robot collaboration through teleoperation. The variable impedance control system enables compliant interaction between the robot and the environment and smooth transitions between different stages. Dynamic movement primitives (DMPs)-based learning from demonstration (LfD) is employed to model the human manipulation skills, and the learned skill can be generalized to different tasks and environments, such as components with different shapes and orientations. The performance of the proposed approach is evaluated on a 7-DoF Franka Panda robot through robot-assisted composite layup on components with different shapes and orientations. | 2. Related Work
2.1. Teleoperation for Human-Robot Skill Transfer
In terms of LfD algorithms, there are generally two types: offline and online learning. Learning from demonstration through teleoperation can provide solutions for both. In Peternel et al. (2016), the authors proposed a human-in-the-loop paradigm to teleoperate and demonstrate a complex task to a robot in real time; however, this work did not consider compliant manipulation skills. Online LfD has several advantages over offline learning from demonstration. First, online LfD can form the skill model gradually during the demonstrations. Second, the transition between teleoperated, semi-autonomous, and autonomous modes is straightforward. Third, online learning provides real-time feedback on the performance of the model, much like iterative learning control, so the demonstrator can assess the learned skill as it is built. In addition, the learned skill can execute the task online and directly at the sensorimotor level. Finally, because the skill is encoded at the end-effector, it is straightforward to transfer among different robot platforms. In Latifee et al. (2020), an incremental learning-from-demonstration method was proposed, based on kinaesthetic demonstration, to update the currently learned skill model. However, kinaesthetic teaching lacks immersion, especially for contact-rich tasks involving tactile sensing, and it is hard to demonstrate impedance skills simultaneously. Human-in-the-loop demonstration also requires a mechanism for allocating and adapting control between the human demonstration and the robot's autonomous execution. In Rigter et al. (2020), the authors integrated shared autonomy, LfD, and reinforcement learning (RL), which reduced human effort in teleoperation and demonstration time. The controller can switch between autonomous and teleoperation modes, enabling the controller to be learned online.
The human-in-the-loop paradigm provides a solution for imitation learning to exploit human intervention, which can train the policy iteratively online (Mandlekar et al., 2020). Shared control is an approach that enables robots and human operators to collaborate efficiently. In addition, shared control integrated with LfD can further increase the autonomy of the robotic system, enabling efficient task execution (Abi-Farraj et al., 2017). A human-in-the-loop approach and learning from demonstration were used to transfer part assembly skills from humans to robots (Peternel et al., 2015). An approach that combines the operator's input with a learned model online was developed for remotely operated vehicles (ROVs) to reduce human effort and teleoperation time. In addition, intelligent control methods were employed in teleoperation to improve trajectory tracking accuracy, which helps ensure the stability of the human-robot skill transfer system (Yang et al., 2019, 2021a). For a comprehensive review of human-robot skill transfer, see Si et al. (2021c).
2.2. Motion Primitives
In recent years, many researchers in motor control and neurobiology have tried to explain how biological systems execute complex motion in versatile and creative ways. The theory of motion primitives was proposed to answer this question: humans can generate smooth, complex trajectories by composing multiple motion primitives. DMPs are an effective model for encoding motion primitives for robots. Therefore, generating smooth, complex trajectories from a library of DMPs has gained attention in the robot skill learning community, and several approaches have been developed to merge DMP sequences. In Saveriano et al. (2019), the authors proposed a method to merge motion primitives in Cartesian space, including both position and orientation; the convergence of the merging strategy was proved theoretically, and an experimental evaluation was performed as well. Additionally, motion primitive theory and a knowledge-based framework were integrated and applied to surgery, where they can be generalized to different tasks and environments (Ginesi et al., 2019, 2020). Furthermore, sequences of DMPs were employed to encode cooperative manipulation for a mobile dual-arm robot (Zhao et al., 2018). The authors proposed building a library of motion primitives through LfD, and the library, covering both translation and orientation, can be generalized to different tasks and novel situations (Pastor et al., 2009; Manschitz et al., 2014). Also, a novel movement primitive representation, named Mixture of attractors, was proposed to encode complex object-relative movements (Manschitz et al., 2018). In our previous study, we proposed a method to merge motion primitives based on execution error and real-time feedback to improve generalization capability and robustness (Si et al., 2021b). | [
"23148415",
"27295638",
"27295638",
"27295638",
"27295638",
"27295638",
"34350212",
"27295638",
"32095878",
"27295638",
"27295638",
"27295638",
"25855820",
"30047914",
"32857705",
"27295638",
"33022599",
"27295638",
"27295638",
"30047914",
"27295638"
] | [
{
"pmid": "23148415",
"title": "Dynamical movement primitives: learning attractor models for motor behaviors.",
"abstract": "Nonlinear dynamical systems have been used in many disciplines to model complex behaviors, including biological motor control, robotics, perception, economics, traffic prediction, and neuroscience. While often the unexpected emergent behavior of nonlinear systems is the focus of investigations, it is of equal importance to create goal-directed behavior (e.g., stable locomotion from a system of coupled oscillators under perceptual guidance). Modeling goal-directed behavior with nonlinear systems is, however, rather difficult due to the parameter sensitivity of these systems, their complex phase transitions in response to subtle parameter changes, and the difficulty of analyzing and predicting their long-term behavior; intuition and time-consuming parameter tuning play a major role. This letter presents and reviews dynamical movement primitives, a line of research for modeling attractor behaviors of autonomous nonlinear dynamical systems with the help of statistical learning techniques. The essence of our approach is to start with a simple dynamical system, such as a set of linear differential equations, and transform those into a weakly nonlinear system with prescribed attractor dynamics by means of a learnable autonomous forcing term. Both point attractors and limit cycle attractors of almost arbitrary complexity can be generated. We explain the design principle of our approach and evaluate its properties in several example applications in motor control and robotics."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "34350212",
"title": "Robotic Manipulation and Capture in Space: A Survey.",
"abstract": "Space exploration and exploitation depend on the development of on-orbit robotic capabilities for tasks such as servicing of satellites, removing of orbital debris, or construction and maintenance of orbital assets. Manipulation and capture of objects on-orbit are key enablers for these capabilities. This survey addresses fundamental aspects of manipulation and capture, such as the dynamics of space manipulator systems (SMS), i.e., satellites equipped with manipulators, the contact dynamics between manipulator grippers/payloads and targets, and the methods for identifying properties of SMSs and their targets. Also, it presents recent work of sensing pose and system states, of motion planning for capturing a target, and of feedback control methods for SMS during motion or interaction tasks. Finally, the paper reviews major ground testing testbeds for capture operations, and several notable missions and technologies developed for capture of targets on-orbit."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "32095878",
"title": "Real-time sensory-motor integration of hippocampal place cell replay and prefrontal sequence learning in simulated and physical rat robots for novel path optimization.",
"abstract": "An open problem in the cognitive dimensions of navigation concerns how previous exploratory experience is reorganized in order to allow the creation of novel efficient navigation trajectories. This behavior is revealed in the \"traveling salesrat problem\" (TSP) when rats discover the shortest path linking baited food wells after a few exploratory traversals. We have recently published a model of navigation sequence learning, where sharp wave ripple replay of hippocampal place cells transmit \"snippets\" of the recent trajectories that the animal has explored to the prefrontal cortex (PFC) (Cazin et al. in PLoS Comput Biol 15:e1006624, 2019). PFC is modeled as a recurrent reservoir network that is able to assemble these snippets into the efficient sequence (trajectory of spatial locations coded by place cell activation). The model of hippocampal replay generates a distribution of snippets as a function of their proximity to a reward, thus implementing a form of spatial credit assignment that solves the TSP task. The integrative PFC reservoir reconstructs the efficient TSP sequence based on exposure to this distribution of snippets that favors paths that are most proximal to rewards. While this demonstrates the theoretical feasibility of the PFC-HIPP interaction, the integration of such a dynamic system into a real-time sensory-motor system remains a challenge. In the current research, we test the hypothesis that the PFC reservoir model can operate in a real-time sensory-motor loop. Thus, the main goal of the paper is to validate the model in simulated and real robot scenarios. Place cell activation encoding the current position of the simulated and physical rat robot feeds the PFC reservoir which generates the successor place cell activation that represents the next step in the reproduced sequence in the readout. This is input to the robot, which advances to the coded location and then generates de novo the current place cell activation. This allows demonstration of the crucial role of embodiment. If the spatial code readout from PFC is played back directly into PFC, error can accumulate, and the system can diverge from desired trajectories. This required a spatial filter to decode the PFC code to a location and then recode a new place cell code for that location. In the robot, the place cell vector output of PFC is used to physically displace the robot and then generate a new place cell coded input to the PFC, replacing part of the software recoding procedure that was required otherwise. We demonstrate how this integrated sensory-motor system can learn simple navigation sequences and then, importantly, how it can synthesize novel efficient sequences based on prior experience, as previously demonstrated (Cazin et al. 2019). This contributes to the understanding of hippocampal replay in novel navigation sequence formation and the important role of embodiment."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "25855820",
"title": "Erratum: Borderud SP, Li Y, Burkhalter JE, Sheffer CE and Ostroff JS. Electronic cigarette use among patients with cancer: Characteristics of electronic cigarette users and their smoking cessation outcomes. Cancer. doi: 10.1002/ cncr.28811.",
"abstract": "The authors discovered some errors regarding reference group labels in Table 2. The corrected table is attached. The authors regret these errors."
},
{
"pmid": "30047914",
"title": "Robot Learning System Based on Adaptive Neural Control and Dynamic Movement Primitives.",
"abstract": "This paper proposes an enhanced robot skill learning system considering both motion generation and trajectory tracking. During robot learning demonstrations, dynamic movement primitives (DMPs) are used to model robotic motion. Each DMP consists of a set of dynamic systems that enhances the stability of the generated motion toward the goal. A Gaussian mixture model and Gaussian mixture regression are integrated to improve the learning performance of the DMP, such that more features of the skill can be extracted from multiple demonstrations. The motion generated from the learned model can be scaled in space and time. Besides, a neural-network-based controller is designed for the robot to track the trajectories generated from the motion model. In this controller, a radial basis function neural network is used to compensate for the effect caused by the dynamic environments. The experiments have been performed using a Baxter robot and the results have confirmed the validity of the proposed methods."
},
{
"pmid": "32857705",
"title": "Neural Control of Robot Manipulators With Trajectory Tracking Constraints and Input Saturation.",
"abstract": "This article presents a control scheme for the robot manipulator's trajectory tracking task considering output error constraints and control input saturation. We provide an alternative way to remove the feasibility condition that most BLF-based controllers should meet and design a control scheme on the premise that constraint violation possibly happens due to the control input saturation. A bounded barrier Lyapunov function is proposed and adopted to handle the output error constraints. Besides, to suppress the input saturation effect, an auxiliary system is designed and emerged into the control scheme. Moreover, a simplified RBFNN structure is adopted to approximate the lumped uncertainties. Simulation and experimental results demonstrate the effectiveness of the proposed control scheme."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "33022599",
"title": "Combating COVID-19-The role of robotics in managing public health and infectious diseases.",
"abstract": "COVID-19 may drive sustained research in robotics to address risks of infectious diseases."
}
] |
Scientific Reports | 35246576 | PMC8897401 | 10.1038/s41598-022-07571-z | An online cursive handwritten medical words recognition system for busy doctors in developing countries for ensuring efficient healthcare service delivery | Doctors in developing countries are too busy to write digital prescriptions. Ninety-seven percent of Bangladeshi doctors write handwritten prescriptions, the majority of which lack legibility. Prescriptions are harder to read as they contain multiple languages. This paper proposes a machine learning approach to recognize doctors' handwriting to create digital prescriptions. A 'Handwritten Medical Term Corpus' dataset is developed containing 17,431 samples of 480 medical terms. In order to improve the recognition efficiency, this paper introduces a data augmentation technique to widen the variety and increase the sample size. A sequence of line data is extracted from the augmented images of 1,591,100 samples and fed to a Bidirectional Long Short-Term Memory (LSTM) network. Data augmentation includes pattern Rotating, Shifting, and Stretching (RSS). Eight different combinations are applied to evaluate the strength of the proposed method. The result shows 93.0% average accuracy (max: 94.5%, min: 92.1%) using Bidirectional LSTM and RSS data augmentation. This accuracy is 19.6% higher than the recognition result with no data expansion. The proposed handwriting recognition technology can be installed in a smartpen for busy doctors, which will recognize the writing and digitize it in real time. It is expected that the smartpen will contribute to reducing medical errors, saving medical costs and ensuring healthy living in developing countries. | Related work: Over the last few decades, a multitude of deep learning approaches have been proposed for efficient handwriting recognition using several handwritten datasets of different languages. This section discusses similar research works in the following four sectors. Doctors' handwriting dataset: Few online datasets are available to design a doctors' handwriting recognition system. Dibyajyoti et al.15 introduced the HP_DocPres dataset with 11,340 samples of handwritten and printed words collected from various medical prescriptions. This dataset is prepared to differentiate between handwritten and printed texts. However, the words are not labeled, so they cannot be used to recognize the words written by doctors. Another doctors' handwriting dataset was introduced by Farjado et al.16. This dataset contains 1800 images of 12 medicine names collected from 50 doctors from clinics and hospitals of Metro Manila, Quezon City, and Taytay, Rizal in the Philippines. However, this dataset is not suitable for recognizing doctors' handwriting in Bangladeshi prescriptions because of the limited number of medical terms it contains and because the region where the data were collected differs from our study region. Although doctors' handwriting datasets are scarce, there are many handwriting datasets available for both English and Bangla. The IAM dataset by the University of Bern17 is one of the most popular datasets, with the largest handwriting collection in English. This dataset contains 13,353 images of handwritten lines of text created by 657 writers. A similar dataset in Bangla is the Bangla handwriting recognition dataset by Bappaditya et al.18, which contains 79,000 handwritten Bangla word samples written by 77 different writers.
The BanglaLekha-Isolated19 and ISI20 datasets come with a vast number of handwriting samples of individual Bangla characters and numerals. Another popular dataset is CMATERdb121, which has 100 handwritten Bangla pages and 50 combined handwritten English and Bangla pages with ground-truth annotations. However, these datasets do not contain doctors' handwriting or any medical terms, hence models trained on them might perform poorly in recognizing doctors' handwriting. Offline handwritten character recognition (using image data as input): Automatic conversion of handwritten texts into images for recognition using a Convolutional Neural Network (CNN) is called Offline Character Recognition22. Shahariar et al.23 proposed a lightweight CNN model for Bangla handwriting recognition. The model has 13 convolutional layers with 2 sub-layers. The sub-layers are joined together to pass through a max-pooling layer with one 0.25 weighted dropout layer. This model has attained 98%, 96.8% and 96.4% accuracy on the BanglaLekha, CMATERdb, and ISI datasets, respectively. A modified LeNet-5 CNN model by Yuan et al.24 obtained an accuracy of 93.7% for uppercase and 90.2% for lowercase in the recognition of English-language characters. Yang et al.25 presented a path-signature feature method using a deep CNN for identifying Chinese character writers. The method was 99.52% accurate with DropStroke data augmentation. Online handwritten character recognition (using time-series data as input): Online character recognition considers a time-ordered sequence captured by the movements of a specialized pen. Online systems are more efficient and achieve higher recognition rates than offline systems22. Recurrent Neural Networks (RNNs) have recently been widely used in the area of handwriting recognition owing to their better recognition performance. RNNs work with sequences of coordinate data, which contain more information than static images14. Bappaditya et al.18 used a bidirectional LSTM on a dataset of 65,620 handwritten Bangla words and obtained 79% accuracy. Zhang et al.14 proposed a conditional RNN-based generative model combining LSTM and Gated Recurrent Units (GRU). The model is built for recognizing Chinese handwritten characters and has achieved 98.15% recognition accuracy. Farjado et al.16 used a Convolutional RNN (CRNN), containing 13 convolutional layers followed by 3 bidirectional LSTM layers, for recognizing doctors' cursive handwriting and attained 72% accuracy. However, Achkar et al.26 reported obtaining 95% accuracy using a similar CRNN model on a different dataset for recognizing handwritten medical prescriptions. Handwriting recognition with data augmentation: In our previous work, the SRP (Stroke Rotation and Parallel-shift) data augmentation technique was applied to expand the doctors' cursive handwriting dataset. However, the minimum accuracy of that system was only 68.0%27. For recognizing Bangla handwriting characters, Shahariar et al.23 applied three data augmentation methods on 10% of the dataset: shifting height and width, rotating images by 10 degrees, and zooming in on the images. Another data augmentation method named 'DropStroke' was used for Chinese character recognition. Chinese characters are very complex as they have many strokes. Thus, the DropStroke method randomly excludes several strokes and generates new handwritten characters using the combination of the remaining strokes14,25. Hayashi et al.13 used a data augmentation technique based on probability distributions for handwriting recognition.
This method calculates a probability distribution from features related to the structure of the character. Then, it generates strokes based on the distribution and forms a multitude of new characters. Ethics approval: All the authors mentioned in the manuscript have agreed to authorship, read and approved the manuscript, and given consent for submission and subsequent publication of the manuscript. Consent to participate: Written informed consent was obtained from all subjects prior to collecting their handwritten samples in these studies. Consent for publication: Written informed consent was obtained from all subjects prior to collecting their handwritten samples in these studies. | [
"30598921",
"21331202",
"33030442",
"28436845",
"28409178",
"19147874"
] | [
{
"pmid": "30598921",
"title": "India achieves WHO recommended doctor population ratio: A call for paradigm shift in public health discourse!",
"abstract": "The Indian medical education system has been able to pull through a major turnaround and has been successfully able to double the numbers of MBBS graduate (modern medicine training) positions during recent decades. With more than 479 medical schools, India has reached the capacity of an annual intake of 67,218 MBBS students at medical colleges regulated by the Medical Council of India. Additionally, India produces medical graduates in the \"traditional Indian system of medicine,\" regulated through Central Council for Indian Medicine. Considering the number of registered medical practitioners of both modern medicine (MBBS) and traditional medicine (AYUSH), India has already achieved the World Health Organization recommended doctor to population ratio of 1:1,000 the \"Golden Finishing Line\" in the year 2018 by most conservative estimates. It is indeed a matter of jubilation and celebration! Now, the time has come to critically analyze the whole premise of doctor-population ratio and its value. Public health experts and policy makers now need to move forward from the fixation and excuse of scarcity of doctors. There is an urgent need to focus on augmenting the fiscal capacity as well as development of infrastructure both in public and private health sectors toward addressing pressing healthcare needs of the growing population. It is also an opportunity to call for change in the public health discourse in India in the background of aspirations of attaining sustainable development goals by 2030."
},
{
"pmid": "21331202",
"title": "Prescription drug labeling medication errors: a big deal for pharmacists.",
"abstract": "Today, in the health care profession, all types of medication errors including missed dose, wrong dosage forms, wrong time interval, wrong route, etc., are a big deal for better patient care. Today, problems related to medications are common in the healthcare profession, and are responsible for significant morbidity, mortality, and cost. Several recent studies have demonstrated that patients frequently have difficulty in reading and understanding medication labels. According to the Institute of Medicine report, \"Preventing Medication Errors\", cited poor labeling as a central cause for medication errors in the USA. Evidence suggests that specific content and format of prescription drug labels facilitate communication with and comprehension by patients. Efforts to improve the labels should be guided by such evidence, although an additional study assessing the influence of label design on medication-taking behavior and health outcomes is needed. Several policy options exist to require minimal standards to optimize medical therapy, particularly in light of the new Medicare prescription drug benefit."
},
{
"pmid": "33030442",
"title": "Blood Uric Acid Prediction With Machine Learning: Model Development and Performance Comparison.",
"abstract": "BACKGROUND\nUric acid is associated with noncommunicable diseases such as cardiovascular diseases, chronic kidney disease, coronary artery disease, stroke, diabetes, metabolic syndrome, vascular dementia, and hypertension. Therefore, uric acid is considered to be a risk factor for the development of noncommunicable diseases. Most studies on uric acid have been performed in developed countries, and the application of machine-learning approaches in uric acid prediction in developing countries is rare. Different machine-learning algorithms will work differently on different types of data in various diseases; therefore, a different investigation is needed for different types of data to identify the most accurate algorithms. Specifically, no study has yet focused on the urban corporate population in Bangladesh, despite the high risk of developing noncommunicable diseases for this population.\n\n\nOBJECTIVE\nThe aim of this study was to develop a model for predicting blood uric acid values based on basic health checkup test results, dietary information, and sociodemographic characteristics using machine-learning algorithms. The prediction of health checkup test measurements can be very helpful to reduce health management costs.\n\n\nMETHODS\nVarious machine-learning approaches were used in this study because clinical input data are not completely independent and exhibit complex interactions. Conventional statistical models have limitations to consider these complex interactions, whereas machine learning can consider all possible interactions among input data. We used boosted decision tree regression, decision forest regression, Bayesian linear regression, and linear regression to predict personalized blood uric acid based on basic health checkup test results, dietary information, and sociodemographic characteristics. We evaluated the performance of these five widely used machine-learning models using data collected from 271 employees in the Grameen Bank complex of Dhaka, Bangladesh.\n\n\nRESULTS\nThe mean uric acid level was 6.63 mg/dL, indicating a borderline result for the majority of the sample (normal range <7.0 mg/dL). Therefore, these individuals should be monitoring their uric acid regularly. The boosted decision tree regression model showed the best performance among the models tested based on the root mean squared error of 0.03, which is also better than that of any previously reported model.\n\n\nCONCLUSIONS\nA uric acid prediction model was developed based on personal characteristics, dietary information, and some basic health checkup measurements. This model will be useful for improving awareness among high-risk individuals and populations, which can help to save medical costs. A future study could include additional features (eg, work stress, daily physical activity, alcohol intake, eating red meat) in improving prediction."
},
{
"pmid": "28436845",
"title": "Drawing and Recognizing Chinese Characters with Recurrent Neural Network.",
"abstract": "Recent deep learning based approaches have achieved great success on handwriting recognition. Chinese characters are among the most widely adopted writing systems in the world. Previous research has mainly focused on recognizing handwritten Chinese characters. However, recognition is only one aspect for understanding a language, another challenging and interesting task is to teach a machine to automatically write (pictographic) Chinese characters. In this paper, we propose a framework by using the recurrent neural network (RNN) as both a discriminative model for recognizing Chinese characters and a generative model for drawing (generating) Chinese characters. To recognize Chinese characters, previous methods usually adopt the convolutional neural network (CNN) models which require transforming the online handwriting trajectory into image-like representations. Instead, our RNN based approach is an end-to-end system which directly deals with the sequential structure and does not require any domain-specific knowledge. With the RNN system (combining an LSTM and GRU), state-of-the-art performance can be achieved on the ICDAR-2013 competition database. Furthermore, under the RNN framework, a conditional generative model with character embedding is proposed for automatically drawing recognizable Chinese characters. The generated characters (in vector format) are human-readable and also can be recognized by the discriminative RNN model with high accuracy. Experimental results verify the effectiveness of using RNNs as both generative and discriminative models for the tasks of drawing and recognizing Chinese characters."
},
{
"pmid": "28409178",
"title": "BanglaLekha-Isolated: A multi-purpose comprehensive dataset of Handwritten Bangla Isolated characters.",
"abstract": "BanglaLekha-Isolated, a Bangla handwritten isolated character dataset is presented in this article. This dataset contains 84 different characters comprising of 50 Bangla basic characters, 10 Bangla numerals and 24 selected compound characters. 2000 handwriting samples for each of the 84 characters were collected, digitized and pre-processed. After discarding mistakes and scribbles, 1,66,105 handwritten character images were included in the final dataset. The dataset also includes labels indicating the age and the gender of the subjects from whom the samples were collected. This dataset could be used not only for optical handwriting recognition research but also to explore the influence of gender and age on handwriting. The dataset is publicly available at https://data.mendeley.com/datasets/hf6sf8zrkc/2."
},
{
"pmid": "19147874",
"title": "Handwritten numeral databases of Indian scripts and multistage recognition of mixed numerals.",
"abstract": "This article primarily concerns the problem of isolated handwritten numeral recognition of major Indian scripts. The principal contributions presented here are (a) pioneering development of two databases for handwritten numerals of two most popular Indian scripts, (b) a multistage cascaded recognition scheme using wavelet based multiresolution representations and multilayer perceptron classifiers and (c) application of (b) for the recognition of mixed handwritten numerals of three Indian scripts Devanagari, Bangla and English. The present databases include respectively 22,556 and 23,392 handwritten isolated numeral samples of Devanagari and Bangla collected from real-life situations and these can be made available free of cost to researchers of other academic Institutions. In the proposed scheme, a numeral is subjected to three multilayer perceptron classifiers corresponding to three coarse-to-fine resolution levels in a cascaded manner. If rejection occurred even at the highest resolution, another multilayer perceptron is used as the final attempt to recognize the input numeral by combining the outputs of three classifiers of the previous stages. This scheme has been extended to the situation when the script of a document is not known a priori or the numerals written on a document belong to different scripts. Handwritten numerals in mixed scripts are frequently found in Indian postal mails and table-form documents."
}
] |
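The Scientific Reports row above describes extracting sequences of pen-coordinate line data, expanding them with Rotating, Shifting, and Stretching (RSS) augmentation, and feeding them to a bidirectional LSTM. As a rough illustration of that kind of pipeline, the Python sketch below applies a random rotation, shift, and stretch to one stroke sequence and passes it through a small bidirectional LSTM classifier. This is a minimal sketch only: the names and settings (rss_augment, BiLSTMWordClassifier, hidden_size=128, and the 480-class output matching the corpus size) are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch (not the authors' code): RSS-style augmentation of online
# stroke data and a bidirectional LSTM classifier over the augmented sequences.
import numpy as np
import torch
import torch.nn as nn

def rss_augment(strokes, max_angle=10.0, max_shift=0.05, max_stretch=0.1, rng=None):
    """Apply a random Rotation, Shift and Stretch to a (T, 2) array of pen coordinates."""
    rng = rng or np.random.default_rng()
    theta = np.deg2rad(rng.uniform(-max_angle, max_angle))
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    shift = rng.uniform(-max_shift, max_shift, size=(1, 2))
    stretch = 1.0 + rng.uniform(-max_stretch, max_stretch, size=(1, 2))
    return (strokes @ rot.T) * stretch + shift

class BiLSTMWordClassifier(nn.Module):
    """Bidirectional LSTM over (x, y) point sequences, predicting one of the medical terms."""
    def __init__(self, input_size=2, hidden_size=128, num_layers=2, num_classes=480):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, x):                 # x: (batch, T, 2)
        out, _ = self.lstm(x)             # (batch, T, 2 * hidden_size)
        return self.fc(out[:, -1, :])     # pool by taking the last time step

# Toy usage: one augmented random "stroke" passed through the model.
sample = np.cumsum(np.random.randn(150, 2).astype(np.float32) * 0.01, axis=0)
augmented = torch.tensor(rss_augment(sample), dtype=torch.float32).unsqueeze(0)
logits = BiLSTMWordClassifier()(augmented)
print(logits.shape)  # torch.Size([1, 480])
```

Pooling the bidirectional output at the last time step is only one simple choice; averaging over all time steps would be an equally reasonable alternative in such a sketch.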
Plant Phenomics | null | PMC8897744 | 10.34133/2022/9795275 | How Useful Is Image-Based Active Learning for Plant Organ Segmentation? | Training deep learning models typically requires a huge amount of labeled data, which is expensive to acquire, especially in dense prediction tasks such as semantic segmentation. Moreover, plant phenotyping datasets pose additional challenges of heavy occlusion and varied lighting conditions, which make annotations more time-consuming to obtain. Active learning helps in reducing the annotation cost by selecting samples for labeling which are most informative to the model, thus improving model performance with fewer annotations. Active learning for semantic segmentation has been well studied on datasets such as PASCAL VOC and Cityscapes. However, its effectiveness on plant datasets has not received much attention. To bridge this gap, we empirically study and benchmark the effectiveness of four uncertainty-based active learning strategies on three natural plant organ segmentation datasets. We also study their behaviour in response to variations in training configurations in terms of augmentations used, the scale of training images, active learning batch sizes, and train-validation set splits. | 2. Related Work. 2.1. Semantic Segmentation in Plant Phenotyping: Prior to deep learning, plant researchers relied on traditional image processing techniques such as edge detection, thresholding, graph partitioning, and clustering [24] to obtain segmentation maps of plant organs [25–27]. With the rapid growth and success of deep neural networks, intelligent model-based automatic segmentation of plant organs has now become an unavoidable prerequisite for measuring more complex phenotypic traits. Aich and Stavness [9] perform leaf counting by training two separate models: a segmentation model to generate leaf segmentation maps and a regression model that takes the segmented maps as input to perform counting. Choudhury et al. [14] introduced an algorithm that uses plant segmentation masks to compute stem angle, a potential measure for plants' susceptibility to lodging. Ma et al. [15] achieved robust disease segmentation in greenhouse vegetable foliar disease symptom images. They proposed a decision tree-based two-step coarse-to-fine segmentation method. Shi et al. [12] proposed a multiview approach that maps 2D segmentation maps to 3D point clouds on a multiview tomato seedling dataset to increase prediction accuracy. We refer the readers to Li et al. [28] for a more comprehensive review of the applications of semantic segmentation in plant phenotyping. 2.2. Active Learning for Plant Phenotyping: The key hypothesis of AL is that if the learning algorithm is allowed to choose the data from which it learns, it will perform better with less training. AL techniques have long been used for reducing annotation effort [17, 29–33]. However, only a handful of works have been published that apply AL to plant phenotyping tasks. In the context of robotic plant phenotyping, Kumar et al. [34] proposed a Gaussian process-based AL algorithm to enable an autonomous system to collect the most informative samples in order to accurately learn the distribution of phenotypes in the field. Grimm et al. [35] proposed a model-free approach to plant species classification with the help of AL. More recently, Nagasubramanian et al. [19] comprehensively studied the usefulness of AL in plant phenotyping on two classification datasets and showed that AL techniques outperform random sampling and indeed reasonably reduce labeling costs. For the object detection task, Chandra et al. [20] achieved superior model performance compared to random sampling while saving over 50% of annotation time on sorghum-head and wheat-panicle detection datasets by exploiting weak supervision for obtaining informative samples. | [
"31338116",
"30832283",
"30459822",
"29209408",
"32161624",
"34778804"
] | [
{
"pmid": "31338116",
"title": "Automatic estimation of heading date of paddy rice using deep learning.",
"abstract": "BACKGROUND\nAccurate estimation of heading date of paddy rice greatly helps the breeders to understand the adaptability of different crop varieties in a given location. The heading date also plays a vital role in determining grain yield for research experiments. Visual examination of the crop is laborious and time consuming. Therefore, quick and precise estimation of heading date of paddy rice is highly essential.\n\n\nRESULTS\nIn this work, we propose a simple pipeline to detect regions containing flowering panicles from ground level RGB images of paddy rice. Given a fixed region size for an image, the number of regions containing flowering panicles is directly proportional to the number of flowering panicles present. Consequently, we use the flowering panicle region counts to estimate the heading date of the crop. The method is based on image classification using Convolutional Neural Networks. We evaluated the performance of our algorithm on five time series image sequences of three different varieties of rice crops. When compared to the previous work on this dataset, the accuracy and general versatility of the method has been improved and heading date has been estimated with a mean absolute error of less than 1 day.\n\n\nCONCLUSION\nAn efficient heading date estimation method has been described for rice crops using time series RGB images of crop under natural field conditions. This study demonstrated that our method can reliably be used as a replacement of manual observation to detect the heading date of rice crops."
},
{
"pmid": "30832283",
"title": "CropDeep: The Crop Vision Dataset for Deep-Learning-Based Classification and Detection in Precision Agriculture.",
"abstract": "Intelligence has been considered as the major challenge in promoting economic potential and production efficiency of precision agriculture. In order to apply advanced deep-learning technology to complete various agricultural tasks in online and offline ways, a large number of crop vision datasets with domain-specific annotation are urgently needed. To encourage further progress in challenging realistic agricultural conditions, we present the CropDeep species classification and detection dataset, consisting of 31,147 images with over 49,000 annotated instances from 31 different classes. In contrast to existing vision datasets, images were collected with different cameras and equipment in greenhouses, captured in a wide variety of situations. It features visually similar species and periodic changes with more representative annotations, which have supported a stronger benchmark for deep-learning-based classification and detection. To further verify the application prospect, we provide extensive baseline experiments using state-of-the-art deep-learning classification and detection models. Results show that current deep-learning-based methods achieve well performance in classification accuracy over 99%. While current deep-learning methods achieve only 92% detection accuracy, illustrating the difficulty of the dataset and improvement room of state-of-the-art deep-learning models when applied to crops production and management. Specifically, we suggest that the YOLOv3 network has good potential application in agricultural detection tasks."
},
{
"pmid": "30459822",
"title": "Detection and analysis of wheat spikes using Convolutional Neural Networks.",
"abstract": "BACKGROUND\nField phenotyping by remote sensing has received increased interest in recent years with the possibility of achieving high-throughput analysis of crop fields. Along with the various technological developments, the application of machine learning methods for image analysis has enhanced the potential for quantitative assessment of a multitude of crop traits. For wheat breeding purposes, assessing the production of wheat spikes, as the grain-bearing organ, is a useful proxy measure of grain production. Thus, being able to detect and characterize spikes from images of wheat fields is an essential component in a wheat breeding pipeline for the selection of high yielding varieties.\n\n\nRESULTS\nWe have applied a deep learning approach to accurately detect, count and analyze wheat spikes for yield estimation. We have tested the approach on a set of images of wheat field trial comprising 10 varieties subjected to three fertilizer treatments. The images have been captured over one season, using high definition RGB cameras mounted on a land-based imaging platform, and viewing the wheat plots from an oblique angle. A subset of in-field images has been accurately labeled by manually annotating all the spike regions. This annotated dataset, called SPIKE, is then used to train four region-based Convolutional Neural Networks (R-CNN) which take, as input, images of wheat plots, and accurately detect and count spike regions in each plot. The CNNs also output the spike density and a classification probability for each plot. Using the same R-CNN architecture, four different models were generated based on four different datasets of training and testing images captured at various growth stages. Despite the challenging field imaging conditions, e.g., variable illumination conditions, high spike occlusion, and complex background, the four R-CNN models achieve an average detection accuracy ranging from 88 to across different sets of test images. The most robust R-CNN model, which achieved the highest accuracy, is then selected to study the variation in spike production over 10 wheat varieties and three treatments. The SPIKE dataset and the trained CNN are the main contributions of this paper.\n\n\nCONCLUSION\nWith the availability of good training datasets such us the SPIKE dataset proposed in this article, deep learning techniques can achieve high accuracy in detecting and counting spikes from complex wheat field images. The proposed robust R-CNN model, which has been trained on spike images captured during different growth stages, is optimized for application to a wider variety of field scenarios. It accurately quantifies the differences in yield produced by the 10 varieties we have studied, and their respective responses to fertilizer treatment. We have also observed that the other R-CNN models exhibit more specialized performances. The data set and the R-CNN model, which we make publicly available, have the potential to greatly benefit plant breeders by facilitating the high throughput selection of high yielding varieties."
},
{
"pmid": "29209408",
"title": "Panicle-SEG: a robust image segmentation method for rice panicles in the field based on deep learning and superpixel optimization.",
"abstract": "BACKGROUND\nRice panicle phenotyping is important in rice breeding, and rice panicle segmentation is the first and key step for image-based panicle phenotyping. Because of the challenge of illumination differentials, panicle shape deformations, rice accession variations, different reproductive stages and the field's complex background, rice panicle segmentation in the field is a very large challenge.\n\n\nRESULTS\nIn this paper, we propose a rice panicle segmentation algorithm called Panicle-SEG, which is based on simple linear iterative clustering superpixel regions generation, convolutional neural network classification and entropy rate superpixel optimization. To build the Panicle-SEG-CNN model and test the segmentation effects, 684 training images and 48 testing images were randomly selected, respectively. Six indicators, including Qseg, Sr, SSIM, Precision, Recall and F-measure, are employed to evaluate the segmentation effects, and the average segmentation results for the 48 testing samples are 0.626, 0.730, 0.891, 0.821, 0.730, and 76.73%, respectively. Compared with other segmentation approaches, including HSeg, i2 hysteresis thresholding and jointSeg, the proposed Panicle-SEG algorithm has better performance on segmentation accuracy. Meanwhile, the executing speed is also improved when combined with multithreading and CUDA parallel acceleration. Moreover, Panicle-SEG was demonstrated to be a robust segmentation algorithm, which can be expanded for different rice accessions, different field environments, different camera angles, different reproductive stages, and indoor rice images. The testing dataset and segmentation software are available online.\n\n\nCONCLUSIONS\nIn conclusion, the results demonstrate that Panicle-SEG is a robust method for panicle segmentation, and it creates a new opportunity for nondestructive yield estimation."
},
{
"pmid": "32161624",
"title": "Active learning with point supervision for cost-effective panicle detection in cereal crops.",
"abstract": "BACKGROUND\nPanicle density of cereal crops such as wheat and sorghum is one of the main components for plant breeders and agronomists in understanding the yield of their crops. To phenotype the panicle density effectively, researchers agree there is a significant need for computer vision-based object detection techniques. Especially in recent times, research in deep learning-based object detection shows promising results in various agricultural studies. However, training such systems usually requires a lot of bounding-box labeled data. Since crops vary by both environmental and genetic conditions, acquisition of huge amount of labeled image datasets for each crop is expensive and time-consuming. Thus, to catalyze the widespread usage of automatic object detection for crop phenotyping, a cost-effective method to develop such automated systems is essential.\n\n\nRESULTS\nWe propose a point supervision based active learning approach for panicle detection in cereal crops. In our approach, the model constantly interacts with a human annotator by iteratively querying the labels for only the most informative images, as opposed to all images in a dataset. Our query method is specifically designed for cereal crops which usually tend to have panicles with low variance in appearance. Our method reduces labeling costs by intelligently leveraging low-cost weak labels (object centers) for picking the most informative images for which strong labels (bounding boxes) are required. We show promising results on two publicly available cereal crop datasets-Sorghum and Wheat. On Sorghum, 6 variants of our proposed method outperform the best baseline method with more than 55% savings in labeling time. Similarly, on Wheat, 3 variants of our proposed methods outperform the best baseline method with more than 50% of savings in labeling time.\n\n\nCONCLUSION\nWe proposed a cost effective method to train reliable panicle detectors for cereal crops. A low cost panicle detection method for cereal crops is highly beneficial to both breeders and agronomists. Plant breeders can obtain quick crop yield estimates to make important crop management decisions. Similarly, obtaining real time visual crop analysis is valuable for researchers to analyze the crop's response to various experimental conditions."
},
{
"pmid": "34778804",
"title": "Global Wheat Head Detection 2021: An Improved Dataset for Benchmarking Wheat Head Detection Methods.",
"abstract": "The Global Wheat Head Detection (GWHD) dataset was created in 2020 and has assembled 193,634 labelled wheat heads from 4700 RGB images acquired from various acquisition platforms and 7 countries/institutions. With an associated competition hosted in Kaggle, GWHD_2020 has successfully attracted attention from both the computer vision and agricultural science communities. From this first experience, a few avenues for improvements have been identified regarding data size, head diversity, and label reliability. To address these issues, the 2020 dataset has been reexamined, relabeled, and complemented by adding 1722 images from 5 additional countries, allowing for 81,553 additional wheat heads. We now release in 2021 a new version of the Global Wheat Head Detection dataset, which is bigger, more diverse, and less noisy than the GWHD_2020 version."
}
] |
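The Plant Phenomics row above benchmarks uncertainty-based active learning strategies for plant organ segmentation, in which the model repeatedly picks the unlabeled images it is least certain about and requests their annotations. The Python sketch below shows one common acquisition function of this family, mean per-pixel predictive entropy, used to rank an unlabeled pool. It is an assumed illustration (the names mean_entropy and select_batch and the toy one-layer stand-in "model" are mine), not the exact strategies or code evaluated in that paper.

```python
# Illustrative sketch (assumed names, not the paper's released code): ranking
# unlabeled images for annotation by mean per-pixel predictive entropy, a common
# uncertainty-based acquisition function for semantic segmentation.
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_entropy(model, image):
    """Average per-pixel entropy of the softmax output for one (C, H, W) image tensor."""
    logits = model(image.unsqueeze(0))                          # (1, num_classes, H, W)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)    # (1, H, W)
    return entropy.mean().item()

def select_batch(model, unlabelled_pool, batch_size=10):
    """Pick the indices of the images the current model is most uncertain about."""
    scores = [(mean_entropy(model, img), idx) for idx, img in enumerate(unlabelled_pool)]
    scores.sort(reverse=True)                                   # highest uncertainty first
    return [idx for _, idx in scores[:batch_size]]

# Toy usage with a stand-in "segmentation model" (a 1x1 convolution over RGB input).
toy_model = torch.nn.Conv2d(3, 5, kernel_size=1)
pool = [torch.rand(3, 64, 64) for _ in range(20)]
print(select_batch(toy_model, pool, batch_size=4))
```

In a full active learning loop, the selected images would be annotated, moved into the labeled set, and the segmentation network retrained before the next selection round.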
Frontiers in Robotics and AI | null | PMC8898942 | 10.3389/frobt.2022.717193 | Affect-Driven Learning of Robot Behaviour for Collaborative Human-Robot Interactions | Collaborative interactions require social robots to share the users' perspective on the interactions and adapt to the dynamics of their affective behaviour. Yet, current approaches for affective behaviour generation in robots focus on instantaneous perception to generate a one-to-one mapping between observed human expressions and static robot actions. In this paper, we propose a novel framework for affect-driven behaviour generation in social robots. The framework consists of (i) a hybrid neural model for evaluating facial expressions and speech of the users, forming intrinsic affective representations in the robot, (ii) an Affective Core that employs self-organising neural models to embed behavioural traits like patience and emotional actuation that modulate the robot's affective appraisal, and (iii) a Reinforcement Learning model that uses the robot's appraisal to learn interaction behaviour. We investigate the effect of modelling different affective core dispositions on the affective appraisal and use this affective appraisal as the motivation to generate robot behaviours. For evaluation, we conduct a user study (n = 31) where the NICO robot acts as a proposer in the Ultimatum Game. The effect of the robot's affective core on its negotiation strategy is witnessed by participants, who rank a patient robot with high emotional actuation higher on persistence, while an impatient robot with low emotional actuation is rated higher on its generosity and altruistic behaviour. | 2 Related Work. The affective impact of one's interactions with others plays an important role in human cognition (Jeon, 2017). The core affect in an individual forms a neurophysiological state (Russell, 2003) resulting from the interplay between the valence of an experience and the emotional arousal it invokes. This influences how people perceive situations and regulates their responses. Understanding the evolution of human affective behaviour enables us to emulate such characteristics in social robots. It allows robots to ground intrinsic models of affect to improve their interaction capabilities. In this section, we present a brief overview of multi-modal affect perception (Section 2.1), the representation of affect as an intrinsic attribute (Section 2.2), and behaviour synthesis (Section 2.3) in social robots, discussing different existing frameworks that use affective appraisal as the basis for modelling robot behaviour in HRI scenarios. 2.1 Multi-Modal Affect Perception: Humans interact with each other using different verbal and non-verbal cues such as facial expressions, body gestures and speech. For social robots, analysing user behaviour across multiple modalities improves their perception capabilities. Furthermore, in case of masked perception or conflicts, this additional information may enable the robot to resolve conflicts (Parisi et al., 2017a). Although various outward signals (Gunes et al., 2011) can be observed to model affect perception in agents, facial expressions and human speech are the predominantly used modalities to evaluate human affective behaviour in HRI settings (Spezialetti et al., 2020). Facial expressions can be categorised into several categories (Ekman and Friesen, 1971) or represented on a dimensional scale (Yannakakis et al., 2017).
Traditionally, computational models have used shape-based, spectral or histogram-based analysis for affect perception (see (Zeng et al., 2009; Sariyanidi et al., 2015; Corneanu et al., 2016) for a detailed analysis), but more recently, deep learning has enhanced the performance of Facial Expression Recognition (FER) models by reducing the dependency on the choice of features and instead learning directly from the data (Kollias and Zafeiriou, 2018; Li and Deng, 2020). Although these models work well in clean and noise-free environments, spontaneous emotion recognition in less controlled settings is still a challenge (Sariyanidi et al., 2015). Thus, the focus has now shifted towards developing techniques that are able to recognise facial expressions in real-world conditions (Kossaifi et al., 2017; Zafeiriou et al., 2017), robust to movements of the observed person, noisy environments and occlusions (Zen et al., 2016). Analysing human speech, either by processing spoken words to extract the sentiment behind them or by understanding speech intonations, offers another potent way of evaluating human affective expression during interactions. While spoken words convey meaning, paralinguistic cues enhance a conversation by highlighting the affective motivations behind these spoken words (Gunes and Pantic, 2010). Although spoken words provide information about the context and intent (Whissell, 1989) in an interaction, it is difficult to deduce the affective state of an individual using only spoken words (Furnas et al., 1987). Extracting spectral and prosodic representations can help better analyse affective undertones in speech. Different studies on Speech Emotion Recognition (SER) (see (Schuller, 2018) for an overview) make use of representations such as Mel-Frequency Cepstral Coefficients (MFCC) or features like pitch and energy to evaluate affective expression. More recently, (deep) learning has been employed to extract relevant features directly from the raw audio signals (Keren and Schuller, 2016; Tzirakis et al., 2017). Most of the current approaches (Poria et al., 2017; Tzirakis et al., 2017; Spezialetti et al., 2020) combine face and auditory modalities to determine the affective state expressed by an individual. This combination can either be achieved using weighted averaging or majority voting (Schels et al., 2013) from individual modalities (Busso et al., 2004) or feature-based sensor fusion (Kahou et al., 2016; Tzirakis et al., 2017) and deep learning (Poria et al., 2017). 2.2 Representing Affect in Social Agents: For long-term adaptation, it is important that robots not only recognise users' affective expressions but also model continually evolving intrinsic representations (Paiva et al., 2014) that track human affective behaviour. Kirby et al. (2010) explore slow-evolving affect models such as moods and attitudes that consider the personal history and the environment to estimate an affective state for the robot in response to the users. Barros et al. (2020) also propose the formation of an intrinsic mood that uses an affective memory (Barros and Wermter, 2017) of the individual as an influence over the perception model. The WASABI model (Becker-Asano and Wachsmuth, 2009) represents the intrinsic state of the robot on a PAD scale that adapts based on the agent's interactions with the user. In the SAIBA framework (Le et al., 2011), the agent's intrinsic state is modelled using mark-up languages that model intent in the robot and use it to generate corresponding agent behaviour.
The (DE)SIRE framework (Lim et al., 2012) represents this intrinsic affect as a vector in a 4-d space for the robot, which is then mapped to corresponding expressions across different modalities. Although all these approaches are able to provide the necessary basis for modelling affective and behavioural dispositions in agents, they require careful initialisation, across n-dimensional vector-spaces, to result in the desired effect. It would be beneficial if these intrinsic representations could be learnt dynamically by the agent as a result of its interactions. 2.3 Behaviour Synthesis in Social Agents: Recent works on behaviour learning in social agents investigate the role of affect as a motivation for agents to interact with their environment. Such strategies may include affective modulation of the computation of the reward function, where explicit feedback from the user is shown to speed up learning (Broekens, 2007). Alternatively, affective appraisal can be viewed as an inherent quality of the robot, motivating it to interact with its environment (Han et al., 2013; Moerland et al., 2018). Affect is modelled as an evaluation of physiological changes (changing battery level or motor temperatures) that occur in the robot, with its behaviour influenced by homeostatic drives that lead towards a stable internal state (Konidaris and Barto, 2006). Other approaches examine different cues, such as novelty and the relevance of an action to the task, to appraise the robot's performance (Sequeira et al., 2011). In the case of value-based approaches (Jacobs et al., 2014), the state-space of the robot is mapped onto different affective states and the value of any state represents the affective experience of the robot in that state. Reward-based approaches, on the other hand, consider temporal changes in the reward, or the reward itself, as the basis of the robot experiencing different affective states (Ahn and Picard, 2005). We propose using the robot's affective perception, modulated by its past experiences with the users as well as specific affective dispositions forming its affective core, to govern its responses towards the users during interactions. Using the robots' mood as their subjective evaluation of an interaction, we aim to endow robots with adaptive interaction capabilities that learn task-specific behaviour policies by developing a shared perception of the interaction as well as the users' expectations from the robots. | [
"23317825",
"26761193",
"17272713",
"26502437",
"29768426",
"12416693",
"29017140",
"12529060",
"26357337",
"5447967",
"19029545"
] | [
{
"pmid": "23317825",
"title": "Role of affect in decision making.",
"abstract": "Emotion plays a major role in influencing our everyday cognitive and behavioral functions, including decision making. We introduce different ways in which emotions are characterized in terms of the way they influence or elicited by decision making. This chapter discusses different theories that have been proposed to explain the role of emotions in judgment and decision making. We also discuss incidental emotional influences, both long-duration influences like mood and short-duration influences by emotional context present prior to or during decision making. We present and discuss results from a study with emotional pictures presented prior to decision making and how that influences both decision processes and postdecision experience as a function of uncertainty. We conclude with a summary of the work on emotions and decision making in the context of decision-making theories and our work on incidental emotions."
},
{
"pmid": "26761193",
"title": "Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications.",
"abstract": "Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state of the art methods accordingly. We also present the important datasets and the bench-marking of most influential methods. We conclude with a general discussion about trends, important questions and future lines of research."
},
{
"pmid": "17272713",
"title": "Dimensions of mind perception.",
"abstract": "Participants compared the mental capacities of various human and nonhuman characters via online surveys. Factor analysis revealed two dimensions of mind perception, Experience (for example, capacity for hunger) and Agency (for example, capacity for self-control). The dimensions predicted different moral judgments but were both related to valuing of mind."
},
{
"pmid": "26502437",
"title": "Robotic Emotional Expression Generation Based on Mood Transition and Personality Model.",
"abstract": "This paper presents a method of mood transition design of a robot for autonomous emotional interaction with humans. A 2-D emotional model is proposed to combine robot emotion, mood, and personality in order to generate emotional expressions. In this design, the robot personality is programmed by adjusting the factors of the five factor model proposed by psychologists. From Big Five personality traits, the influence factors of robot mood transition are determined. Furthermore, a method to fuse basic robotic emotional behaviors is proposed in order to manifest robotic emotional states via continuous facial expressions. An artificial face on a screen is a way to provide a robot with a humanlike appearance, which might be useful for human-robot interaction. An artificial face simulator has been implemented to show the effectiveness of the proposed methods. Questionnaire surveys have been carried out to evaluate the effectiveness of the proposed method by observing robotic responses to a user's emotional expressions. Preliminary experimental results on a robotic head show that the proposed mood state transition scheme appropriately responds to a user's emotional changes in a continuous manner."
},
{
"pmid": "29768426",
"title": "The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English.",
"abstract": "The RAVDESS is a validated multimodal database of emotional speech and song. The database is gender balanced consisting of 24 professional actors, vocalizing lexically-matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions, and song contains calm, happy, sad, angry, and fearful emotions. Each expression is produced at two levels of emotional intensity, with an additional neutral expression. All conditions are available in face-and-voice, face-only, and voice-only formats. The set of 7356 recordings were each rated 10 times on emotional validity, intensity, and genuineness. Ratings were provided by 247 individuals who were characteristic of untrained research participants from North America. A further set of 72 participants provided test-retest data. High levels of emotional validity and test-retest intrarater reliability were reported. Corrected accuracy and composite \"goodness\" measures are presented to assist researchers in the selection of stimuli. All recordings are made freely available under a Creative Commons license and can be downloaded at https://doi.org/10.5281/zenodo.1188976."
},
{
"pmid": "12416693",
"title": "A self-organising network that grows when required.",
"abstract": "The ability to grow extra nodes is a potentially useful facility for a self-organising neural network. A network that can add nodes into its map space can approximate the input space more accurately, and often more parsimoniously, than a network with predefined structure and size, such as the Self-Organising Map. In addition, a growing network can deal with dynamic input distributions. Most of the growing networks that have been proposed in the literature add new nodes to support the node that has accumulated the highest error during previous iterations or to support topological structures. This usually means that new nodes are added only when the number of iterations is an integer multiple of some pre-defined constant, A. This paper suggests a way in which the learning algorithm can add nodes whenever the network in its current state does not sufficiently match the input. In this way the network grows very quickly when new data is presented, but stops growing once the network has matched the data. This is particularly important when we consider dynamic data sets, where the distribution of inputs can change to a new regime after some time. We also demonstrate the preservation of neighbourhood relations in the data by the network. The new network is compared to an existing growing network, the Growing Neural Gas (GNG), on a artificial dataset, showing how the network deals with a change in input distribution after some time. Finally, the new network is applied to several novelty detection tasks and is compared with both the GNG and an unsupervised form of the Reduced Coulomb Energy network on a robotic inspection task and with a Support Vector Machine on two benchmark novelty detection tasks."
},
{
"pmid": "29017140",
"title": "Lifelong learning of human actions with deep neural network self-organization.",
"abstract": "Lifelong learning is fundamental in autonomous robotics for the acquisition and fine-tuning of knowledge through experience. However, conventional deep neural models for action recognition from videos do not account for lifelong learning but rather learn a batch of training data with a predefined number of action classes and samples. Thus, there is the need to develop learning systems with the ability to incrementally process available perceptual cues and to adapt their responses over time. We propose a self-organizing neural architecture for incrementally learning to classify human actions from video sequences. The architecture comprises growing self-organizing networks equipped with recurrent neurons for processing time-varying patterns. We use a set of hierarchically arranged recurrent networks for the unsupervised learning of action representations with increasingly large spatiotemporal receptive fields. Lifelong learning is achieved in terms of prediction-driven neural dynamics in which the growth and the adaptation of the recurrent networks are driven by their capability to reconstruct temporally ordered input sequences. Experimental results on a classification task using two action benchmark datasets show that our model is competitive with state-of-the-art methods for batch learning also when a significant number of sample labels are missing or corrupted during training sessions. Additional experiments show the ability of our model to adapt to non-stationary input avoiding catastrophic interference."
},
{
"pmid": "12529060",
"title": "Core affect and the psychological construction of emotion.",
"abstract": "At the heart of emotion, mood, and any other emotionally charged event are states experienced as simply feeling good or bad, energized or enervated. These states--called core affect--influence reflexes, perception, cognition, and behavior and are influenced by many causes internal and external, but people have no direct access to these causal connections. Core affect can therefore be experienced as free-floating (mood) or can be attributed to some cause (and thereby begin an emotional episode). These basic processes spawn a broad framework that includes perception of the core-affect-altering properties of stimuli, motives, empathy, emotional meta-experience, and affect versus emotion regulation; it accounts for prototypical emotional episodes, such as fear and anger, as core affect attributed to something plus various nonemotional processes."
},
{
"pmid": "26357337",
"title": "Automatic Analysis of Facial Affect: A Survey of Registration, Representation, and Recognition.",
"abstract": "Automatic affect analysis has attracted great interest in various contexts including the recognition of action units and basic or non-basic emotions. In spite of major efforts, there are several open questions on what the important cues to interpret facial expressions are and how to encode them. In this paper, we review the progress across a range of affect recognition applications to shed light on these fundamental questions. We analyse the state-of-the-art solutions by decomposing their pipelines into fundamental components, namely face registration, representation, dimensionality reduction and recognition. We discuss the role of these components and highlight the models and new trends that are followed in their design. Moreover, we provide a comprehensive analysis of facial representations by uncovering their advantages and limitations; we elaborate on the type of information they encode and discuss how they deal with the key challenges of illumination variations, registration errors, head-pose variations, occlusions, and identity bias. This survey allows us to identify open issues and to define future directions for designing real-world affect recognition systems."
},
{
"pmid": "19029545",
"title": "A survey of affect recognition methods: audio, visual, and spontaneous expressions.",
"abstract": "Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. However, the existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions despite the fact that deliberate behaviour differs in visual appearance, audio profile, and timing from spontaneously occurring behaviour. To address this problem, efforts to develop algorithms that can process naturally occurring human affective behaviour have recently emerged. Moreover, an increasing number of efforts are reported toward multimodal fusion for human affect analysis including audiovisual fusion, linguistic and paralinguistic fusion, and multi-cue visual fusion based on facial expressions, head movements, and body gestures. This paper introduces and surveys these recent advances. We first discuss human emotion perception from a psychological perspective. Next we examine available approaches to solving the problem of machine understanding of human affective behavior, and discuss important issues like the collection and availability of training and test data. We finally outline some of the scientific and engineering challenges to advancing human affect sensing technology."
}
] |
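The "grows when required" entry above describes a self-organising map that inserts a new node whenever its best-matching node does not sufficiently match the current input, instead of growing on a fixed iteration schedule. The Python/NumPy fragment below is a minimal sketch of that insertion test only; the threshold names and values, and the placement of the new node midway between the input and the winner, follow the common GWR formulation but are illustrative assumptions rather than the paper's exact parameters.

```python
import numpy as np

def gwr_insertion_step(x, weights, habituation, activity_threshold=0.35, habituation_threshold=0.1):
    """One GWR-style step: decide whether the map must grow to match input x.

    weights: (n_nodes, dim) array of node prototypes
    habituation: (n_nodes,) array in (0, 1]; low values mean a well-trained node
    thresholds: illustrative values, not taken from the paper
    """
    dists = np.linalg.norm(weights - x, axis=1)
    best = int(np.argmin(dists))              # best-matching node
    activity = np.exp(-dists[best])           # close match -> activity near 1

    # Grow only if the winner matches poorly AND has already been trained enough
    if activity < activity_threshold and habituation[best] < habituation_threshold:
        new_node = 0.5 * (weights[best] + x)  # place new node between input and winner
        weights = np.vstack([weights, new_node])
        habituation = np.append(habituation, 1.0)
        return weights, habituation, True     # network grew
    return weights, habituation, False        # adapt existing nodes instead
```

In a full implementation the non-growing branch would also move the winner and its neighbours toward x and decay their habituation counters; that adaptation step is omitted here.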
Frontiers in Artificial Intelligence | null | PMC8899709 | 10.3389/frai.2022.759255 | Single Shot Corrective CNN for Anatomically Correct 3D Hand Pose Estimation | Hand pose estimation in 3D from depth images is a highly complex task. Current state-of-the-art 3D hand pose estimators focus only on the accuracy of the model as measured by how closely it matches the ground truth hand pose but overlook the resulting hand pose's anatomical correctness. In this paper, we present the Single Shot Corrective CNN (SSC-CNN) to tackle the problem of enforcing anatomical correctness at the architecture level. In contrast to previous works which use post-facto pose filters, SSC-CNN predicts the hand pose that conforms to the human hand's biomechanical bounds and rules in a single forward pass. The model was trained and tested on the HANDS2017 and MSRA datasets. Experiments show that our proposed model shows comparable accuracy to the state-of-the-art models as measured by the ground truth pose. However, the previous methods have high anatomical errors, whereas our model is free from such errors. Experiments show that our proposed model shows zero anatomical errors along with comparable accuracy to the state-of-the-art models as measured by the ground truth pose. The previous methods have high anatomical errors, whereas our model is free from such errors. Surprisingly even the ground truth provided in the existing datasets suffers from anatomical errors, and therefore Anatomical Error Free (AEF) versions of the datasets, namely AEF-HANDS2017 and AEF-MSRA, were created. | 2. Related WorksThis section discusses hand pose estimation methods that use deep learning algorithms and hand pose estimators with biomechanics-related features such as anatomical bounds.2.1. Pose Estimation With Deep LearningHand pose estimation using deep learning algorithms can be classified into discriminative and model-based methods. The former category directly regresses the joint locations of the hand using deep networks such as CNNs (Ge et al., 2017; Guo et al., 2017; Simon et al., 2017; Malik et al., 2018a; Moon et al., 2018; Rad et al., 2018; Cai et al., 2019; Chen et al., 2019; Poier et al., 2019; Xiong et al., 2019). The latter category abstracts a model of the human hand and fits the model with minimum error (such as the mean distance between ground truth and predicted hand pose joints) on the input data (Vollmer et al., 1999; Taylor et al., 2016; Oberweger and Lepetit, 2017; Ge et al., 2018b; Malik et al., 2018b). Directly regressing the joint locations achieves high accuracy poses but suffers from issues such as the hand's structural properties. Works such as Li and Lee (2019) and Xiong et al. (2019) used cost functions taking only the joint locations of the hands into account and no structural properties of the hand. Moon et al. (2018) proposed the V2V Posenet, which converts the 2D depth image into a 3D voxelized grid and then predicts the joint positions of the hand. The cost function of the V2V algorithm used the joint locations alone for training and did not consider biomechanical constraints such as the joint angles.2.2. 
Pose Estimation With Biomechanical ConstraintsBiomechanical constraints are well studied in earlier works to enable anatomically correct hand poses using structural limits of the hands (Ryf and Weymann, 1995; Cobos et al., 2008; Chen Chen et al., 2013; Melax et al., 2013; Sridhar et al., 2013; Xu and Cheng, 2013; Tompson et al., 2014; Poier et al., 2015; Dibra et al., 2017; Aristidou, 2018; Wan et al., 2019; Spurr et al., 2020). Some works such as Cai et al. (2019) used refinement models to adjust the poses with limits and rules. However, most of these works (Ryf and Weymann, 1995; Cobos et al., 2008; Chen Chen et al., 2013; Melax et al., 2013; Sridhar et al., 2013; Xu and Cheng, 2013; Tompson et al., 2014; Aristidou, 2018; Li et al., 2021) apply the rules and bounds after estimating the pose of the hand using post-processing methods such as inverse kinematics and bound penalization. Recent works used biomechanical constraints for hand pose estimation using 2D images in the neural network's cost function to penalize the joints. Malik et al. (2018b) incorporated structural properties of the hand such as the finger lengths and inter-finger joint distances to provide an accurate estimation of the hand pose. The drawback of this method is that the joints' angles are not considered for estimating the pose. Hence the resulting hand pose can still output a pose in which the joint angles can exceed the human joint bounds. Works such as Sun et al. (2017) and Zhou et al. (2017) successfully implemented bone length-based constraints on human pose estimation but only on the whole body and not for the intricate parts of the hand such as finger length constraints. The model designed by Spurr et al. (2020) achieved better accuracy when tested on 2D datasets; however, the model was weakly supervised, and bound constraints were soft. Hence there are poses where the joint angles exceed the anatomical bounds. Li et al. (2021) used a model-based iterative approach by first applying the PoseNet (Choi et al., 2020) and then computing the motion parameters. The drawback of this approach is that it depends on the PoseNet for recovering the primary joint positions and fails to operate if PoseNet fails to predict the pose. Moreover, the resulting search space of the earlier networks still includes implausible hand poses as these models only rely on the training dataset to learn the kinematic rules. We encoded the biomechanical rules as a closed-form expression that does not require any form of training. SSC-CNN's search space is hence much smaller than the aforementioned models. In our approach, the hand joint locations and their respective angles are predicted, and the bounds were implicitly applied to the model such that the joint angle always lies between them. Also, as pointed out in Section 5, many datasets themselves are not free from anatomical errors due to errors during annotation, and hence learning kinematic structures based on the dataset alone might lead to absorbing those errors into our model. To the best of our knowledge, our work is the first to propose incorporating anatomical constraints implicitly in the neural architecture. | [
"28953744",
"30423837",
"15746001",
"13249858",
"15140494"
] | [
{
"pmid": "30423837",
"title": "3DAirSig: A Framework for Enabling In-Air Signatures Using a Multi-Modal Depth Sensor.",
"abstract": "In-air signature is a new modality which is essential for user authentication and access control in noncontact mode and has been actively studied in recent years. However, it has been treated as a conventional online signature, which is essentially a 2D spatial representation. Notably, this modality bears a lot more potential due to an important hidden depth feature. Existing methods for in-air signature verification neither capture this unique depth feature explicitly nor fully explore its potential in verification. Moreover, these methods are based on heuristic approaches for fingertip or hand palm center detection, which are not feasible in practice. Inspired by the great progress in deep-learning-based hand pose estimation, we propose a real-time in-air signature acquisition method which estimates hand joint positions in 3D using a single depth image. The predicted 3D position of fingertip is recorded for each frame. We present four different implementations of a verification module, which are based on the extracted depth and spatial features. An ablation study was performed to explore the impact of the depth feature in particular. For matching, we employed the most commonly used multidimensional dynamic time warping (MD-DTW) algorithm. We created a new database which contains 600 signatures recorded from 15 different subjects. Extensive evaluations were performed on our database. Our method, called 3DAirSig, achieved an equal error rate (EER) of 0 . 46 %. Experiments showed that depth itself is an important feature, which is sufficient for in-air signature verification. The dataset will be publicly available (https://goo.gl/yFdfdL)."
},
{
"pmid": "15746001",
"title": "Functional anatomy of biological motion perception in posterior temporal cortex: an FMRI study of eye, mouth and hand movements.",
"abstract": "Passive viewing of biological motion engages extensive regions of the posterior temporal-occipital cortex in humans, particularly within and nearby the superior temporal sulcus (STS). Relatively little is known about the functional specificity of this area. Some recent studies have emphasized the perceived intentionality of the motion as a potential organizing principle, while others have suggested the existence of a somatotopy based upon the limb perceived in motion. Here we conducted an event-related functional magnetic resonance imaging experiment to compare activity elicited by movement of the eyes, mouth or hand. Each motion evoked robust activation in the right posterior temporal-occipital cortex. While there was substantial overlap of the activation maps in this region, the spatial distribution of hemodynamic response amplitudes differentiated the movements. Mouth movements elicited activity along the mid-posterior STS while eye movements elicited activity in more superior and posterior portions of the right posterior STS region. Hand movements activated more inferior and posterior portions of the STS region within the posterior continuing branch of the STS. Hand-evoked activity also extended into the inferior temporal, middle occipital and lingual gyri. This topography may, in part, reflect the role of particular body motions in different functional activities."
},
{
"pmid": "15140494",
"title": "Clinical indicators of normal thumb length in adults.",
"abstract": "PURPOSE\nThis study was undertaken to obtain clinical indicators of normal relative thumb length in adults.\n\n\nMETHODS\nFifty two normal hands in 26 volunteers were analyzed. There were 10 women and 16 men. The average age was 34 years (range, 23-47 years). Eighteen of the volunteers (36 hands) were Chinese and 8 were Caucasian (16 hands). All the subjects were healthy with no history of trauma or disease affecting the hand. The relative distal extent of the tip of the thumb was measured against 2 parameters: the length of the proximal phalanx of the index finger and the distance between the proximal digital crease and proximal interphalangeal crease of the index finger. The obtained values were designated as the thumb-proximal phalanx (TPP) index and the thumb-digital crease (TDC) index, respectively.\n\n\nRESULTS\nThe TPP index was 0.69 (standard deviation = 0.09) and the TDC index was 0.41 (standard deviation = 0.15). There were no statistically significant differences between the right and left hands nor were there any between male and female hands. It was also noticed that when the thumb was adducted the thenar crease and thumb interphalangeal crease came into contact with one another in 90% of the hands. Consequently, an arc traced along the thenar crease could be extended smoothly into the interphalangeal crease of the thumb. This was termed the thenar arc. Positive or negative variances of the thenar arc correlated statistically with variations of the TPP and TDC indexes.\n\n\nCONCLUSIONS\nThis study provides 3 simple clinical indicators of normal thumb length: the TPP index, the TDC index, and the thenar arc. Statistical analysis of the TPP and TDC indexes showed that the values are independent of gender, race, or laterality of the hands examined. These 3 indicators may help the clinician determine normal relative thumb length when reconstructing the thumb in adults."
}
] |
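The related-work text in the preceding entry turns on enforcing joint-angle bounds inside the network rather than with a post-facto pose filter. The sketch below shows one generic way to build a regression head whose outputs cannot leave per-joint [lower, upper] ranges; the bound values, layer size, and class name are placeholders, and this illustrates the general idea rather than the SSC-CNN architecture itself.

```python
import torch
import torch.nn as nn

class BoundedAngleHead(nn.Module):
    """Regression head whose outputs always lie inside per-joint angle bounds."""

    def __init__(self, in_features, lower, upper):
        super().__init__()
        self.fc = nn.Linear(in_features, len(lower))
        # Register bounds as buffers so they move with the model between CPU and GPU
        self.register_buffer("lower", torch.as_tensor(lower, dtype=torch.float32))
        self.register_buffer("upper", torch.as_tensor(upper, dtype=torch.float32))

    def forward(self, features):
        # Sigmoid squashes to (0, 1); rescaling maps into [lower, upper],
        # so no predicted angle can violate the given anatomical range.
        unit = torch.sigmoid(self.fc(features))
        return self.lower + unit * (self.upper - self.lower)

# Example with made-up flexion bounds (degrees) for three hypothetical joints
head = BoundedAngleHead(in_features=128, lower=[0.0, 0.0, -15.0], upper=[90.0, 110.0, 15.0])
angles = head(torch.randn(4, 128))   # shape (batch, 3), guaranteed in-bounds
```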
Scientific Reports | null | PMC8901645 | 10.1038/s41598-022-07545-1 | Pretrained transformer framework on pediatric claims data for population specific tasks | The adoption of electronic health records (EHR) has become universal during the past decade, which has afforded in-depth data-based research. By learning from the large amount of healthcare data, various data-driven models have been built to predict future events for different medical tasks, such as auto diagnosis and heart-attack prediction. Although EHR is abundant, the population that satisfies specific criteria for learning population-specific tasks is scarce, making it challenging to train data-hungry deep learning models. This study presents the Claim Pre-Training (Claim-PT) framework, a generic pre-training model that first trains on the entire pediatric claims dataset, followed by a discriminative fine-tuning on each population-specific task. The semantic meaning of medical events can be captured in the pre-training stage, and the effective knowledge transfer is completed through the task-aware fine-tuning stage. The fine-tuning process requires minimal parameter modification without changing the model architecture, which mitigates the data scarcity issue and helps train the deep learning model adequately on small patient cohorts. We conducted experiments on a real-world pediatric dataset with more than one million patient records. Experimental results on two downstream tasks demonstrated the effectiveness of our method: our general task-agnostic pre-training framework outperformed tailored task-specific models, achieving more than 10% higher in model performance as compared to baselines. In addition, our framework showed a potential to transfer learned knowledge from one institution to another, which may pave the way for future healthcare model pre-training across institutions. | Related workIn this section, we first review the related research for transfer learning using claims data. Specifically, we focus on deep learning approaches. Next, we present several medical tasks using claims data and the corresponding predictive models.Transfer learning using claims dataTransfer learning is an approach where deep learning models are first trained on a large (unlabeled) dataset to learn generalized parameter initialization and perform similar tasks on another dataset. Several state-of-the-art results in the NLP and CV domain are based on transfer learning solutions24,25.Recently, researchers applied transfer learning techniques to the medical domain. Transfer learning enables deep learning models to capture comprehensive contextual semantics, which can benefit the downstream predictive tasks. For example, MED-BERT26 pre-trained contextualized medical code embeddings on large-scale claims data and illustrate that the pre-trained embeddings can improve model performance on the downstream tasks. Med2vec, proposed by Choi et al.27 is a skip-gram-based model that can capture the co-occurrence information between medical visits. Med2vec is able to learn semantic meaningful and interpretable medical code embedding, which can benefit the predictive tasks and provide clinical interpretation. 
BioBERT21 is a pre-trained biomedical language model trained on biomedical text instead of claims data; it aims to adapt the language model to biomedical corpora.These studies demonstrate the effectiveness of the pre-train and fine-tune framework with respect to boosting model performance on the downstream predictive tasks, especially when the data size is limited. However, none of the previous research focuses on pediatric claims data. We want to explore whether the pre-train and fine-tune paradigm on pediatric claims can benefit downstream predictive tasks, specifically with a population-specific patient cohort.Predictive models using claims dataThere has been active research in modeling the longitudinal claims data for various predictive tasks. Generally, these studies can be divided into two groups: works that focus on predicting a specific future medical event, such as suicide risk prediction, asthma exacerbation prediction; and works that focus on a broader range of medical events, such as auto diagnosis and chronic disease progression modeling.Various deep learning models have been proposed to model claims data for a specific future medical event prediction. Su et al.28 proposed a logistic regression model with carefully selected features to predict the suicide risk among children. Xiang et al.29 predict the risk of asthma exacerbations and explore the potential risk factors involved in the progression of asthma via a time-sensitive attentive neural network. Zeng et al.30 developed a multi-view framework to predict the future medical expenses for better care delivery and care management. Choi et al.31 proposed RETAIN to estimate the future heart failure rate with explainable risk factors. For general-purpose disease progression models, Zeng et al.3 proposed a hierarchical transformer-based deep learning model to forecast future medical events. Ma et al.32 leverage medical domain knowledge to model the sequential medical codes for the next visit medical code prediction.One of the main challenges in developing these models is the size of the dataset. The datasets used in previous studies usually contain over a hundred thousand patients, which is large enough to train most deep learning networks. However, for many population-specific predictive tasks or institutions without a large data corpus, training a complex deep learning model from scratch is not feasible and therefore requires transfer learning or alternative techniques. | [
"27521897",
"31934645",
"32442152",
"32989459",
"20067612",
"25014970",
"31913322",
"31501885",
"33398041",
"32066695",
"23303463",
"29323930"
] | [
{
"pmid": "27521897",
"title": "Using recurrent neural network models for early detection of heart failure onset.",
"abstract": "Objective\nWe explored whether use of deep learning to model temporal relations among events in electronic health records (EHRs) would improve model performance in predicting initial diagnosis of heart failure (HF) compared to conventional methods that ignore temporality.\n\n\nMaterials and Methods\nData were from a health system's EHR on 3884 incident HF cases and 28 903 controls, identified as primary care patients, between May 16, 2000, and May 23, 2013. Recurrent neural network (RNN) models using gated recurrent units (GRUs) were adapted to detect relations among time-stamped events (eg, disease diagnosis, medication orders, procedure orders, etc.) with a 12- to 18-month observation window of cases and controls. Model performance metrics were compared to regularized logistic regression, neural network, support vector machine, and K-nearest neighbor classifier approaches.\n\n\nResults\nUsing a 12-month observation window, the area under the curve (AUC) for the RNN model was 0.777, compared to AUCs for logistic regression (0.747), multilayer perceptron (MLP) with 1 hidden layer (0.765), support vector machine (SVM) (0.743), and K-nearest neighbor (KNN) (0.730). When using an 18-month observation window, the AUC for the RNN model increased to 0.883 and was significantly higher than the 0.834 AUC for the best of the baseline methods (MLP).\n\n\nConclusion\nDeep learning models adapted to leverage temporal relations appear to improve performance of models for detection of incident heart failure with a short observation window of 12-18 months."
},
{
"pmid": "31934645",
"title": "Blockchain vehicles for efficient Medical Record management.",
"abstract": "The lack of interoperability in Britain's medical records systems precludes the realisation of benefits generated by increased spending elsewhere in healthcare. Growing concerns regarding the security of online medical data following breaches, and regarding regulations governing data ownership, mandate strict parameters in the development of efficient methods to administrate medical records. Furthermore, consideration must be placed on the rise of connected devices, which vastly increase the amount of data that can be collected in order to improve a patient's long-term health outcomes. Increasing numbers of healthcare systems are developing Blockchain-based systems to manage medical data. A Blockchain is a decentralised, continuously growing online ledger of records, validated by members of the network. Traditionally used to manage cryptocurrency records, distributed ledger technology can be applied to various aspects of healthcare. In this manuscript, we focus on how Electronic Medical Records in particular can be managed by Blockchain, and how the introduction of this novel technology can create a more efficient and interoperable infrastructure to manage records that leads to improved healthcare outcomes, while maintaining patient data ownership and without compromising privacy or security of sensitive data."
},
{
"pmid": "32442152",
"title": "Testing Suicide Risk Prediction Algorithms Using Phone Measurements With Patients in Acute Mental Health Settings: Feasibility Study.",
"abstract": "BACKGROUND\nDigital phenotyping and machine learning are currently being used to augment or even replace traditional analytic procedures in many domains, including health care. Given the heavy reliance on smartphones and mobile devices around the world, this readily available source of data is an important and highly underutilized source that has the potential to improve mental health risk prediction and prevention and advance mental health globally.\n\n\nOBJECTIVE\nThis study aimed to apply machine learning in an acute mental health setting for suicide risk prediction. This study uses a nascent approach, adding to existing knowledge by using data collected through a smartphone in place of clinical data, which have typically been collected from health care records.\n\n\nMETHODS\nWe created a smartphone app called Strength Within Me, which was linked to Fitbit, Apple Health kit, and Facebook, to collect salient clinical information such as sleep behavior and mood, step frequency and count, and engagement patterns with the phone from a cohort of inpatients with acute mental health (n=66). In addition, clinical research interviews were used to assess mood, sleep, and suicide risk. Multiple machine learning algorithms were tested to determine the best fit.\n\n\nRESULTS\nK-nearest neighbors (KNN; k=2) with uniform weighting and the Euclidean distance metric emerged as the most promising algorithm, with 68% mean accuracy (averaged over 10,000 simulations of splitting the training and testing data via 10-fold cross-validation) and an average area under the curve of 0.65. We applied a combined 5×2 F test to test the model performance of KNN against the baseline classifier that guesses training majority, random forest, support vector machine and logistic regression, and achieved F statistics of 10.7 (P=.009) and 17.6 (P=.003) for training majority and random forest, respectively, rejecting the null of performance being the same. Therefore, we have taken the first steps in prototyping a system that could continuously and accurately assess the risk of suicide via mobile devices.\n\n\nCONCLUSIONS\nPredicting for suicidality is an underaddressed area of research to which this paper makes a useful contribution. This is part of the first generation of studies to suggest that it is feasible to utilize smartphone-generated user input and passive sensor data to generate a risk algorithm among inpatients at suicide risk. The model reveals fair concordance between phone-derived and research-generated clinical data, and with iterative development, it has the potential for accurate discriminant risk prediction. However, although full automation and independence of clinical judgment or input would be a worthy development for those individuals who are less likely to access specialist mental health services, and for providing a timely response in a crisis situation, the ethical and legal implications of such advances in the field of psychiatry need to be acknowledged."
},
{
"pmid": "32989459",
"title": "Generating sequential electronic health records using dual adversarial autoencoder.",
"abstract": "OBJECTIVE\nRecent studies on electronic health records (EHRs) started to learn deep generative models and synthesize a huge amount of realistic records, in order to address significant privacy issues surrounding the EHR. However, most of them only focus on structured records about patients' independent visits, rather than on chronological clinical records. In this article, we aim to learn and synthesize realistic sequences of EHRs based on the generative autoencoder.\n\n\nMATERIALS AND METHODS\nWe propose a dual adversarial autoencoder (DAAE), which learns set-valued sequences of medical entities, by combining a recurrent autoencoder with 2 generative adversarial networks (GANs). DAAE improves the mode coverage and quality of generated sequences by adversarially learning both the continuous latent distribution and the discrete data distribution. Using the MIMIC-III (Medical Information Mart for Intensive Care-III) and UT Physicians clinical databases, we evaluated the performances of DAAE in terms of predictive modeling, plausibility, and privacy preservation.\n\n\nRESULTS\nOur generated sequences of EHRs showed the comparable performances to real data for a predictive modeling task, and achieved the best score in plausibility evaluation conducted by medical experts among all baseline models. In addition, differentially private optimization of our model enables to generate synthetic sequences without increasing the privacy leakage of patients' data.\n\n\nCONCLUSIONS\nDAAE can effectively synthesize sequential EHRs by addressing its main challenges: the synthetic records should be realistic enough not to be distinguished from the real records, and they should cover all the training patients to reproduce the performance of specific downstream tasks."
},
{
"pmid": "20067612",
"title": "A framework for enhancing spatial and temporal granularity in report-based health surveillance systems.",
"abstract": "BACKGROUND\nCurrent public concern over the spread of infectious diseases has underscored the importance of health surveillance systems for the speedy detection of disease outbreaks. Several international report-based monitoring systems have been developed, including GPHIN, Argus, HealthMap, and BioCaster. A vital feature of these report-based systems is the geo-temporal encoding of outbreak-related textual data. Until now, automated systems have tended to use an ad-hoc strategy for processing geo-temporal information, normally involving the detection of locations that match pre-determined criteria, and the use of document publication dates as a proxy for disease event dates. Although these strategies appear to be effective enough for reporting events at the country and province levels, they may be less effective at discovering geo-temporal information at more detailed levels of granularity. In order to improve the capabilities of current Web-based health surveillance systems, we introduce the design for a novel scheme called spatiotemporal zoning.\n\n\nMETHOD\nThe proposed scheme classifies news articles into zones according to the spatiotemporal characteristics of their content. In order to study the reliability of the annotation scheme, we analyzed the inter-annotator agreements on a group of human annotators for over 1000 reported events. Qualitative and quantitative evaluation is made on the results including the kappa and percentage agreement.\n\n\nRESULTS\nThe reliability evaluation of our scheme yielded very promising inter-annotator agreement, more than a 0.9 kappa and a 0.9 percentage agreement for event type annotation and temporal attributes annotation, respectively, with a slight degradation for the spatial attribute. However, for events indicating an outbreak situation, the annotators usually had inter-annotator agreements with the lowest granularity location.\n\n\nCONCLUSIONS\nWe developed and evaluated a novel spatiotemporal zoning annotation scheme. The results of the scheme evaluation indicate that our annotated corpus and the proposed annotation scheme are reliable and could be effectively used for developing an automatic system. Given the current advances in natural language processing techniques, including the availability of language resources and tools, we believe that a reliable automatic spatiotemporal zoning system can be achieved. In the next stage of this work, we plan to develop an automatic zoning system and evaluate its usability within an operational health surveillance system."
},
{
"pmid": "25014970",
"title": "Transfer learning for visual categorization: a survey.",
"abstract": "Regular machine learning and data mining techniques study the training data for future inferences under a major assumption that the future data are within the same feature space or have the same distribution as the training data. However, due to the limited availability of human labeled training data, training data that stay in the same feature space or have the same distribution as the future data cannot be guaranteed to be sufficient enough to avoid the over-fitting problem. In real-world applications, apart from data in the target domain, related data in a different domain can also be included to expand the availability of our prior knowledge about the target future data. Transfer learning addresses such cross-domain learning problems by extracting useful information from data in a related domain and transferring them for being used in target tasks. In recent years, with transfer learning being applied to visual categorization, some typical problems, e.g., view divergence in action recognition tasks and concept drifting in image classification tasks, can be efficiently solved. In this paper, we survey state-of-the-art transfer learning algorithms in visual categorization applications, such as object recognition, image classification, and human action recognition."
},
{
"pmid": "31913322",
"title": "Re-epithelialization and immune cell behaviour in an ex vivo human skin model.",
"abstract": "A large body of literature is available on wound healing in humans. Nonetheless, a standardized ex vivo wound model without disruption of the dermal compartment has not been put forward with compelling justification. Here, we present a novel wound model based on application of negative pressure and its effects for epidermal regeneration and immune cell behaviour. Importantly, the basement membrane remained intact after blister roof removal and keratinocytes were absent in the wounded area. Upon six days of culture, the wound was covered with one to three-cell thick K14+Ki67+ keratinocyte layers, indicating that proliferation and migration were involved in wound closure. After eight to twelve days, a multi-layered epidermis was formed expressing epidermal differentiation markers (K10, filaggrin, DSG-1, CDSN). Investigations about immune cell-specific manners revealed more T cells in the blister roof epidermis compared to normal epidermis. We identified several cell populations in blister roof epidermis and suction blister fluid that are absent in normal epidermis which correlated with their decrease in the dermis, indicating a dermal efflux upon negative pressure. Together, our model recapitulates the main features of epithelial wound regeneration, and can be applied for testing wound healing therapies and investigating underlying mechanisms."
},
{
"pmid": "31501885",
"title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining.",
"abstract": "MOTIVATION\nBiomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora.\n\n\nRESULTS\nWe introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts.\n\n\nAVAILABILITY AND IMPLEMENTATION\nWe make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert."
},
{
"pmid": "33398041",
"title": "Digital oximetry biomarkers for assessing respiratory function: standards of measurement, physiological interpretation, and clinical use.",
"abstract": "Pulse oximetry is routinely used to non-invasively monitor oxygen saturation levels. A low oxygen level in the blood means low oxygen in the tissues, which can ultimately lead to organ failure. Yet, contrary to heart rate variability measures, a field which has seen the development of stable standards and advanced toolboxes and software, no such standards and open tools exist for continuous oxygen saturation time series variability analysis. The primary objective of this research was to identify, implement and validate key digital oximetry biomarkers (OBMs) for the purpose of creating a standard and associated reference toolbox for continuous oximetry time series analysis. We review the sleep medicine literature to identify clinically relevant OBMs. We implement these biomarkers and demonstrate their clinical value within the context of obstructive sleep apnea (OSA) diagnosis on a total of n = 3806 individual polysomnography recordings totaling 26,686 h of continuous data. A total of 44 digital oximetry biomarkers were implemented. Reference ranges for each biomarker are provided for individuals with mild, moderate, and severe OSA and for non-OSA recordings. Linear regression analysis between biomarkers and the apnea hypopnea index (AHI) showed a high correlation, which reached [Formula: see text]. The resulting python OBM toolbox, denoted \"pobm\", was contributed to the open software PhysioZoo ( physiozoo.org ). Studying the variability of the continuous oxygen saturation time series using pbom may provide information on the underlying physiological control systems and enhance our understanding of the manifestations and etiology of diseases, with emphasis on respiratory diseases."
},
{
"pmid": "32066695",
"title": "Correction: Differential transcriptional response following glucocorticoid activation in cultured blood immune cells: a novel approach to PTSD biomarker development.",
"abstract": "This Article was originally published without the correct Supplemental Table file (Table S1 was missing). In total, there are seven Supplemental Tables, and six were in the original submission. Furthermore, Fig. 1 was misplaced in the main text; it was embedded in the manuscript file even before the results section. Both issues have now been fixed in the HTML and PDF versions of this Article."
},
{
"pmid": "23303463",
"title": "Prevalence, correlates, and treatment of lifetime suicidal behavior among adolescents: results from the National Comorbidity Survey Replication Adolescent Supplement.",
"abstract": "CONTEXT\nAlthough suicide is the third leading cause of death among US adolescents, little is known about the prevalence, correlates, or treatment of its immediate precursors, adolescent suicidal behaviors (ie, suicide ideation, plans, and attempts).\n\n\nOBJECTIVES\nTo estimate the lifetime prevalence of suicidal behaviors among US adolescents and the associations of retrospectively reported, temporally primary DSM-IV disorders with the subsequent onset of suicidal behaviors.\n\n\nDESIGN\nDual-frame national sample of adolescents from the National Comorbidity Survey Replication Adolescent Supplement.\n\n\nSETTING\nFace-to-face household interviews with adolescents and questionnaires for parents.\n\n\nPARTICIPANTS\nA total of 6483 adolescents 13 to 18 years of age and their parents.\n\n\nMAIN OUTCOME MEASURES\nLifetime suicide ideation, plans, and attempts.\n\n\nRESULTS\nThe estimated lifetime prevalences of suicide ideation, plans, and attempts among the respondents are 12.1%, 4.0%, and 4.1%, respectively. The vast majority of adolescents with these behaviors meet lifetime criteria for at least one DSM-IV mental disorder assessed in the survey. Most temporally primary (based on retrospective age-of-onset reports) fear/anger, distress, disruptive behavior, and substance disorders significantly predict elevated odds of subsequent suicidal behaviors in bivariate models. The most consistently significant associations of these disorders are with suicide ideation, although a number of disorders are also predictors of plans and both planned and unplanned attempts among ideators. Most suicidal adolescents (>80%) receive some form of mental health treatment. In most cases (>55%), treatment starts prior to onset of suicidal behaviors but fails to prevent these behaviors from occurring.\n\n\nCONCLUSIONS\nSuicidal behaviors are common among US adolescents, with rates that approach those of adults. The vast majority of youth with suicidal behaviors have preexisting mental disorders. The disorders most powerfully predicting ideation, though, are different from those most powerfully predicting conditional transitions from ideation to plans and attempts. These differences suggest that distinct prediction and prevention strategies are needed for ideation, plans among ideators, planned attempts, and unplanned attempts."
},
{
"pmid": "29323930",
"title": "The Economic Burden of Asthma in the United States, 2008-2013.",
"abstract": "RATIONALE\nAsthma is a chronic disease that affects quality of life, productivity at work and school, and healthcare use; and it can result in death. Measuring the current economic burden of asthma provides important information on the impact of asthma on society. This information can be used to make informed decisions about allocation of limited public health resources.\n\n\nOBJECTIVES\nIn this paper, we provide a comprehensive approach to estimating the current prevalence, medical costs, cost of absenteeism (missed work and school days), and mortality attributable to asthma from a national perspective. In addition, we estimate the association of the incremental medical cost of asthma with several important factors, including race/ethnicity, education, poverty, and insurance status.\n\n\nMETHODS\nThe primary source of data was the 2008-2013 household component of the Medical Expenditure Panel Survey. We defined treated asthma as the presence of at least one medical or pharmaceutical encounter or claim associated with asthma. For the main analysis, we applied two-part regression models to estimate asthma-related annual per-person incremental medical costs and negative binomial models to estimate absenteeism associated with asthma.\n\n\nRESULTS\nOf 213,994 people in the pooled sample, 10,237 persons had treated asthma (prevalence, 4.8%). The annual per-person incremental medical cost of asthma was $3,266 (in 2015 U.S. dollars), of which $1,830 was attributable to prescription medication, $640 to office visits, $529 to hospitalizations, $176 to hospital-based outpatient visits, and $105 to emergency room visits. For certain groups, the per-person incremental medical cost of asthma differed from that of the population average, namely $2,145 for uninsured persons and $3,581 for those living below the poverty line. During 2008-2013, asthma was responsible for $3 billion in losses due to missed work and school days, $29 billion due to asthma-related mortality, and $50.3 billion in medical costs. All combined, the total cost of asthma in the United States based on the pooled sample amounted to $81.9 billion in 2013.\n\n\nCONCLUSIONS\nAsthma places a significant economic burden on the United States, with a total cost of asthma, including costs incurred by absenteeism and mortality, of $81.9 billion in 2013."
}
] |
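The preceding entry's related work contrasts training task-specific models from scratch with pre-training a claims-sequence encoder on a large corpus and then fine-tuning it on a small population-specific cohort with minimal parameter changes. The fragment below sketches that fine-tuning pattern with a generic Transformer encoder; the vocabulary size, layer sizes, pooling choice, and the decision to freeze the encoder are illustrative assumptions, not the Claim-PT configuration.

```python
import torch
import torch.nn as nn

class CodeSequenceEncoder(nn.Module):
    """Encodes a sequence of medical-code IDs into a single visit-history vector."""

    def __init__(self, vocab_size, d_model=128, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, code_ids):
        h = self.encoder(self.embed(code_ids))   # (batch, seq, d_model)
        return h.mean(dim=1)                     # simple pooling over the sequence

def build_finetune_model(pretrained_encoder, n_classes, freeze_encoder=True):
    """Attach a small task head; optionally freeze the pretrained parameters."""
    if freeze_encoder:
        for p in pretrained_encoder.parameters():
            p.requires_grad = False
    return nn.Sequential(pretrained_encoder, nn.Linear(128, n_classes))

# Pre-training on the full claims corpus would train `encoder` first (not shown);
# fine-tuning on the small cohort then only updates the new classification head.
encoder = CodeSequenceEncoder(vocab_size=20000)
clf = build_finetune_model(encoder, n_classes=2)
```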
Scientific Reports | null | PMC8901655 | 10.1038/s41598-022-07956-0 | HELP-DKT: an interpretable cognitive model of how students learn programming based on deep knowledge tracing | Student cognitive models are playing an essential role in intelligent online tutoring for programming courses. These models capture students’ learning interactions and store them in the form of a set of binary responses, thereby failing to utilize rich educational information in the learning process. Moreover, the recent development of these models has been focused on improving the prediction performance and tended to adopt deep neural networks in building the end-to-end prediction frameworks. Although this approach can provide an improved prediction performance, it may also cause difficulties in interpreting the student’s learning status, which is crucial for providing personalized educational feedback. To address this problem, this paper provides an interpretable cognitive model named HELP-DKT, which can infer how students learn programming based on deep knowledge tracing. HELP-DKT has two major advantages. First, it implements a feature-rich input layer, where the raw codes of students are encoded to vector representations, and the error classifications as concept indicators are incorporated. Second, it can infer meaningful estimation of student abilities while reliably predicting future performance. The experiments confirm that HELP-DKT can achieve good prediction performance and present reasonable interpretability of student skills improvement. In practice, HELP-DKT can personalize the learning experience of novice learners. | Related workStudent cognitive modelIn an intelligent tutoring system for programming courses, a student cognitive model has been often needed to describe students’ cognitive states during their studying. Early research efforts in this field highlighted observable gaps between students’ understanding of core programming concepts and their capability of applying these concepts to the construction of simple programs6. Therefore, modeling the learning process of novice students in programming courses involves describing the temporal development of multiple latent cognitive skills.Prior research efforts have mostly adopted Bayesian knowledge tracing (BKT) models, item response theory (IRT) based models or some other user behavior analysis models to build student models. Papers7,8 are based on gated recurrent unit (GRU) model while papers9,10 are focus on solving the link prediction task. These work propose good prediction models. However, a limitation with these work is that they do not fully leverage the students’ historical attempt dataset.The Bayesian knowledge tracing (BKT)2 provides an effective way to model temporal development of cognitive skills using the Bayesian inference with a hidden Markov model. However, the conventional BKT model-based approach11 is not suitable for programming courses because it does not support a multi-dimensional skill model and requires additional algorithms to create a Q-matrix.Some of the related studies adopted the IRT extensions for student’s skills modeling in programming courses. Yudelson et al.12 used a variant additive factors model (AFM) to infer students’ knowledge states when solving Java programming exercises. Rivers et al.13 analyzed the students’ Python programming data by fitting learning curves using the AFM to identify which programming concepts were the most challenging for students to master the Python programming. 
The advantage of the mentioned AFM-based methods over the BKT-based methods is their capability to tackle scenarios of multi-dimensional skills. However, both mentioned methods regard students’ programming trajectories as sequences of binary responses while ignoring rich features embedded in different versions of students’ codes during the submission attempts.Our previous work14 aimed to address the above-mentioned issue and adopted the conjunctive factor model (CFM)15 to establish a better cognitive relationship based on students’ learning data. The core concept of the CFM is a boolean Q-matrix, which is a pre-required matrix for describing the relationship between items and skills. The limitation of the CFM is that it does not treat multiple skills in one item differently, which might lead to inaccurate skill assessment. The CFM was extended to the personalized factor model(PFM) by using programming error classification as a cognitive skill representation. By introducing this modification, the predictive performance of the CFM for learning to program has been significantly improved. Both CFM and PFM are shallow model, and their main limitation is that they cannot handle large datasets.Recently, a number of deep neural network-based KT models have been proposed. The Deep-IRT16 is an extended DKT model, which has been inspired by Bayesian deep learning. The Deep-IRT can achieve better prediction performance than shallow structured models, but it lacks personalized descriptions of students in the input layer due to fixed, binary Q-matrix designed by experts. In online program teaching, Wang et al.4 used a recurrent neural network (RNN) and focused on students’ sequences of submissions within a single programming exercise to predict future performance. The main shortcoming of the DKT model is poor interpretability caused by the black-box nature of a deep neural network. Also, it does not specify the probabilistic relationship between latent skills and student codes in the form of a Q-matrix, which makes it hard for instructors to understand the analysis results of the DKT.Program vector embeddingsMethods for vectorizing programs have many similarities with the representation learning methods, such as the vector embedding technique presented in5. In the program analysis domain, Piech et al.17 introduced a neural network method, which encoded programs as a linear mapping from an embedded precondition space to an embedded postcondition space. Peng et al.18 proposed a novel “coding criterion” to build vector representations of nodes in ASTs, which have provided great progress in program analysis. BigCode19 is a tool that can learn AST representations of given source codes with the help of the Skip-gram model20.The above-mentioned methods have achieved good results, which has enlightened us to make the best use of vector embeddings that include rich information. This approach offers the possibility of using program codes as the input of deep learning models, especially student cognitive models.Automated program repairIn online programming education, many tools have been adopted to repair student error codes automatically. These tools are collectively referred to as automated program repair (APR) tools. For instance, Qlose21 is an approach used to repair students’ programming attempts in the education field automatically. This approach is based on different program distances. 
The AutoGrader22 is a tool that aims to find a series of minimal corrections for incorrect programs based on the program synthesis analysis. This tool requires course teachers to provide basic materials, such as a list of potential corrections based on known expression rewrite rules and a series of possible solutions for a certain problem. Gulwani et al.23 proposed a novel APR technique for introductory programming assignments. The authors used the existing correct students’ solutions to fix the new incorrect attempts. A limitation of this solution is that it cannot provide educational feedback to students and instructors.The above-presented tools aim at fixing the wrong codes or getting the right repair results, but they neither examine the error types of students in detail nor try to integrate the outputs with the student cognitive model. However, these error types contain rich information that reflects the student’s weakness, which is very useful in the intelligent tutoring field. | [] | [] |
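The entry above surveys knowledge-tracing models in which a recurrent network consumes a student's sequence of (skill, correct/incorrect) interactions and outputs per-skill probabilities of answering correctly at the next step. The sketch below is a minimal classic-DKT baseline in that spirit; the one-hot input encoding and LSTM size are textbook defaults used for illustration, not the HELP-DKT architecture, which additionally feeds code vectors and error-class indicators.

```python
import torch
import torch.nn as nn

class SimpleDKT(nn.Module):
    """Classic DKT: LSTM over (skill, correctness) one-hots -> per-skill mastery probabilities."""

    def __init__(self, n_skills, hidden_size=64):
        super().__init__()
        # Input dimension 2*n_skills: one block for correct, one for incorrect attempts
        self.lstm = nn.LSTM(2 * n_skills, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, n_skills)
        self.n_skills = n_skills

    def encode(self, skill_ids, correct):
        # skill_ids, correct: (batch, seq) integer tensors; correct is 0/1
        idx = skill_ids + correct * self.n_skills
        return nn.functional.one_hot(idx, 2 * self.n_skills).float()

    def forward(self, skill_ids, correct):
        h, _ = self.lstm(self.encode(skill_ids, correct))
        return torch.sigmoid(self.out(h))   # (batch, seq, n_skills) next-step probabilities

model = SimpleDKT(n_skills=10)
probs = model(torch.randint(0, 10, (2, 5)), torch.randint(0, 2, (2, 5)))
```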
Scientific Reports | 35256665 | PMC8901930 | 10.1038/s41598-022-07615-4 | A semantic web technology index | Semantic web (SW) technology has been widely applied to many domains such as medicine, health care, finance, geology. At present, researchers mainly rely on their experience and preferences to develop and evaluate the work of SW technology. Although the general architecture (e.g., Tim Berners-Lee's Semantic Web Layer Cake) of SW technology was proposed many years ago and has been well-known, it still lacks a concrete guideline for standardizing the development of SW technology. In this paper, we propose an SW technology index to standardize the development for ensuring that the work of SW technology is designed well and to quantitatively evaluate the quality of the work in SW technology. This index consists of 10 criteria that quantify the quality as a score of
0-10. We address each criterion in detail for a clear explanation from three aspects: (1) what is the criterion? (2) why do we consider this criterion and (3) how do the current studies meet this criterion? Finally, we present the validation of this index by providing some examples of how to apply the index to the validation cases. We conclude that the index is a useful standard to guide and evaluate the work in SW technology. | Related workThere are a lot of studies in SW technology over the last decades. Many approaches are proposed to design the work of SW technology with a series of criteria from software engineering6. Although many studies investigate how to assess these works, they focus on the evaluation of SW technology. By contrast, it is more meaningful how to guide the work in SW technology to be designed well with a good generality. However, there is a lack of methods, metrics, and tools for improving the development and ensuring that the work of SW technology is designed well. This lack of consistent standards is an obstacle to improving the quality of SW technology development and maintenance7.There are many articles that presented the survey of evaluation methods in SW technology. On the Semantic Web, ontology is a key technology that can describe concepts, relationships between entities, and categories of things. Brank et al.8 reviewed the state-of-the-art SW technology evaluations that assess a given ontology from the point of view of a particular criterion of application, typically in order to determine which of several ontologies would be suitable for a specific purpose. These papers9,10 presented the surveys on the current evaluation approaches and proposed their evaluation methods for the work of SW technology. Yu et al.11 presented a remarkable study in SW technology for evaluation including current methodologies, criteria and measures, and specifically seek to evaluate ontologies based on categories found in Wikipedia. Similarly, Hlomani and Stacey12 analyzed the state-of-the-art approaches to ontology evaluation, the metrics and measures. Verma13 presented a comprehensive analysis of the approaches, perspectives or dimensions, metrics and other related aspects of ontology evaluation. These articles focus on the evaluation methods for the work of SW (ontology) technology.Specifically, there are many approaches that were proposed to evaluate the work of SW technology. Dellschaft and Staab14 presented a taxonomic measure framework for gold standard based evaluation of ontologies, which overcomes the problem that evaluations of concept hierarchies fail to meet basic criteria. Brank et al.15 proposed an ontology evaluation approach based on comparing an ontology to a gold standard ontology, assuming that both ontologies are constructed over the same set of instances. Aruna et al.16 proposed an evaluation framework for properties of ontologies and technologies for developing and deploying ontologies. A famous online tool, Oops! (ontology pitfall scanner!) was proposed for ontology evaluation in17. Raad and Cruz18 addressed how to find an efficient ontology evaluation method based on the existing ontology evaluation techniques, and presented their advantages and drawbacks. Similarly, Gao et al.19 proposed an efficient sampling and evaluation framework, which aims to provide quality accuracy evaluation with strong statistical guarantee while minimizing human efforts.
Without exception, these studies focus on evaluating existing work in SW technology. By contrast, we consider criteria that guide the work of SW technology to be designed well and with good generality. Finally, some studies have provided guidelines with criteria for SW technology, but these guidelines still focus on evaluation methods. Gangemi et al.20 presented a guideline that follows several questions for ontology evaluation. In6, a framework is proposed to guide the choice of suitable criteria for various ontology evaluation levels and evaluation methods. Bandeira et al.21 provided a guideline with three main steps for ontology evaluation: ontology type verification, question verification, and quality verification. Sabou and Fernandez22 provided methodological guidelines for evaluating stand-alone ontologies as well as ontology networks. Although these studies provide guidelines, they address how to evaluate the work of SW technology rather than how to design it well. Compared with evaluation, however, it is more meaningful to guide the work of SW technology with criteria so that it is designed well and with good generality. | [
"28679487",
"11396337",
"28545611",
"28127051",
"30626917",
"32591513",
"31867102",
"28127051",
"34450877"
] | [
{
"pmid": "28679487",
"title": "Issues Associated With the Use of Semantic Web Technology in Knowledge Acquisition for Clinical Decision Support Systems: Systematic Review of the Literature.",
"abstract": "BACKGROUND\nKnowledge-based clinical decision support system (KB-CDSS) can be used to help practitioners make diagnostic decisions. KB-CDSS may use clinical knowledge obtained from a wide variety of sources to make decisions. However, knowledge acquisition is one of the well-known bottlenecks in KB-CDSSs, partly because of the enormous growth in health-related knowledge available and the difficulty in assessing the quality of this knowledge as well as identifying the \"best\" knowledge to use. This bottleneck not only means that lower-quality knowledge is being used, but also that KB-CDSSs are difficult to develop for areas where expert knowledge may be limited or unavailable. Recent methods have been developed by utilizing Semantic Web (SW) technologies in order to automatically discover relevant knowledge from knowledge sources.\n\n\nOBJECTIVE\nThe two main objectives of this study were to (1) identify and categorize knowledge acquisition issues that have been addressed through using SW technologies and (2) highlight the role of SW for acquiring knowledge used in the KB-CDSS.\n\n\nMETHODS\nWe conducted a systematic review of the recent work related to knowledge acquisition MeM for clinical decision support systems published in scientific journals. In this regard, we used the keyword search technique to extract relevant papers.\n\n\nRESULTS\nThe retrieved papers were categorized based on two main issues: (1) format and data heterogeneity and (2) lack of semantic analysis. Most existing approaches will be discussed under these categories. A total of 27 papers were reviewed in this study.\n\n\nCONCLUSIONS\nThe potential for using SW technology in KB-CDSS has only been considered to a minor extent so far despite its promise. This review identifies some questions and issues regarding use of SW technology for extracting relevant knowledge for a KB-CDSS."
},
{
"pmid": "28545611",
"title": "Knowledge graph for TCM health preservation: Design, construction, and applications.",
"abstract": "Traditional Chinese Medicine (TCM) is one of the important non-material cultural heritages of the Chinese nation. It is an important development strategy of Chinese medicine to collect, analyzes, and manages the knowledge assets of TCM health care. As a novel and massive knowledge management technology, knowledge graph provides an ideal technical means to solve the problem of \"Knowledge Island\" in the field of traditional Chinese medicine. In this study, we construct a large-scale knowledge graph, which integrates terms, documents, databases and other knowledge resources. This knowledge graph can facilitate various knowledge services such as knowledge visualization, knowledge retrieval, and knowledge recommendation, and helps the sharing, interpretation, and utilization of TCM health care knowledge."
},
{
"pmid": "28127051",
"title": "Ror2 signaling regulates Golgi structure and transport through IFT20 for tumor invasiveness.",
"abstract": "Signaling through the Ror2 receptor tyrosine kinase promotes invadopodia formation for tumor invasion. Here, we identify intraflagellar transport 20 (IFT20) as a new target of this signaling in tumors that lack primary cilia, and find that IFT20 mediates the ability of Ror2 signaling to induce the invasiveness of these tumors. We also find that IFT20 regulates the nucleation of Golgi-derived microtubules by affecting the GM130-AKAP450 complex, which promotes Golgi ribbon formation in achieving polarized secretion for cell migration and invasion. Furthermore, IFT20 promotes the efficiency of transport through the Golgi complex. These findings shed new insights into how Ror2 signaling promotes tumor invasiveness, and also advance the understanding of how Golgi structure and transport can be regulated."
},
{
"pmid": "30626917",
"title": "Occurrence of the potent mutagens 2- nitrobenzanthrone and 3-nitrobenzanthrone in fine airborne particles.",
"abstract": "Polycyclic aromatic compounds (PACs) are known due to their mutagenic activity. Among them, 2-nitrobenzanthrone (2-NBA) and 3-nitrobenzanthrone (3-NBA) are considered as two of the most potent mutagens found in atmospheric particles. In the present study 2-NBA, 3-NBA and selected PAHs and Nitro-PAHs were determined in fine particle samples (PM 2.5) collected in a bus station and an outdoor site. The fuel used by buses was a diesel-biodiesel (96:4) blend and light-duty vehicles run with any ethanol-to-gasoline proportion. The concentrations of 2-NBA and 3-NBA were, on average, under 14.8 µg g-1 and 4.39 µg g-1, respectively. In order to access the main sources and formation routes of these compounds, we performed ternary correlations and multivariate statistical analyses. The main sources for the studied compounds in the bus station were diesel/biodiesel exhaust followed by floor resuspension. In the coastal site, vehicular emission, photochemical formation and wood combustion were the main sources for 2-NBA and 3-NBA as well as the other PACs. Incremental lifetime cancer risk (ILCR) were calculated for both places, which presented low values, showing low cancer risk incidence although the ILCR values for the bus station were around 2.5 times higher than the ILCR from the coastal site."
},
{
"pmid": "32591513",
"title": "Building a PubMed knowledge graph.",
"abstract": "PubMed® is an essential resource for the medical domain, but useful concepts are either difficult to extract or are ambiguous, which has significantly hindered knowledge discovery. To address this issue, we constructed a PubMed knowledge graph (PKG) by extracting bio-entities from 29 million PubMed abstracts, disambiguating author names, integrating funding data through the National Institutes of Health (NIH) ExPORTER, collecting affiliation history and educational background of authors from ORCID®, and identifying fine-grained affiliation data from MapAffil. Through the integration of these credible multi-source data, we could create connections among the bio-entities, authors, articles, affiliations, and funding. Data validation revealed that the BioBERT deep learning method of bio-entity extraction significantly outperformed the state-of-the-art models based on the F1 score (by 0.51%), with the author name disambiguation (AND) achieving an F1 score of 98.09%. PKG can trigger broader innovations, not only enabling us to measure scholarly impact, knowledge usage, and knowledge transfer, but also assisting us in profiling authors and organizations based on their connections with bio-entities."
},
{
"pmid": "28127051",
"title": "Ror2 signaling regulates Golgi structure and transport through IFT20 for tumor invasiveness.",
"abstract": "Signaling through the Ror2 receptor tyrosine kinase promotes invadopodia formation for tumor invasion. Here, we identify intraflagellar transport 20 (IFT20) as a new target of this signaling in tumors that lack primary cilia, and find that IFT20 mediates the ability of Ror2 signaling to induce the invasiveness of these tumors. We also find that IFT20 regulates the nucleation of Golgi-derived microtubules by affecting the GM130-AKAP450 complex, which promotes Golgi ribbon formation in achieving polarized secretion for cell migration and invasion. Furthermore, IFT20 promotes the efficiency of transport through the Golgi complex. These findings shed new insights into how Ror2 signaling promotes tumor invasiveness, and also advance the understanding of how Golgi structure and transport can be regulated."
},
{
"pmid": "34450877",
"title": "An Indoor Navigation Methodology for Mobile Devices by Integrating Augmented Reality and Semantic Web.",
"abstract": "Indoor navigation systems incorporating augmented reality allow users to locate places within buildings and acquire more knowledge about their environment. However, although diverse works have been introduced with varied technologies, infrastructure, and functionalities, a standardization of the procedures for elaborating these systems has not been reached. Moreover, while systems usually handle contextual information of places in proprietary formats, a platform-independent model is desirable, which would encourage its access, updating, and management. This paper proposes a methodology for developing indoor navigation systems based on the integration of Augmented Reality and Semantic Web technologies to present navigation instructions and contextual information about the environment. It comprises four modules to define a spatial model, data management (supported by an ontology), positioning and navigation, and content visualization. A mobile application system was developed for testing the proposal in academic environments, modeling the structure, routes, and places of two buildings from independent institutions. The experiments cover distinct navigation tasks by participants in both scenarios, recording data such as navigation time, position tracking, system functionality, feedback (answering a survey), and a navigation comparison when the system is not used. The results demonstrate the system's feasibility, where the participants show a positive interest in its functionalities."
}
] |
Scientific Reports | null | PMC8903312 | 10.1038/s41598-022-07954-2 | A deep learning-driven low-power, accurate, and portable platform for rapid detection of COVID-19 using reverse-transcription loop-mediated isothermal amplification | This paper presents a deep learning-driven portable, accurate, low-cost, and easy-to-use device to perform Reverse-Transcription Loop-Mediated Isothermal Amplification (RT-LAMP) to facilitate rapid detection of COVID-19. The 3D-printed device—powered using only a 5 Volt AC-DC adapter—can perform 16 simultaneous RT-LAMP reactions and can be used multiple times. Moreover, the experimental protocol is devised to obviate the need for separate, expensive equipment for RNA extraction in addition to eliminating sample evaporation. The entire process from sample preparation to the qualitative assessment of the LAMP amplification takes only 45 min (10 min for pre-heating and 35 min for RT-LAMP reactions). The completion of the amplification reaction yields a fuchsia color for the negative samples and either a yellow or orange color for the positive samples, based on a pH indicator dye. The device is coupled with a novel deep learning system that automatically analyzes the amplification results and pays attention to the pH indicator dye to screen the COVID-19 subjects. The proposed device has been rigorously tested on 250 RT-LAMP clinical samples, where it achieved an overall specificity and sensitivity of 0.9666 and 0.9722, respectively, with a recall of 0.9892 for Ct < 30. Also, the proposed system can be widely used as an accurate, sensitive, rapid, and portable tool to detect COVID-19 in settings where access to a lab is difficult, or the results are urgently required. | Related worksMedical practitioners currently employ multiple methods to diagnose COVID-19. One of the most popular and accurate methods is Reverse Transcription quantitative Polymerase Chain Reaction (RT-qPCR)6, which the WHO and the CDC have declared the gold standard for the detection of SARS-CoV-27,8. However, the method has its limitations. For instance, it requires specialized, bulky equipment and a highly skilled workforce. Moreover, it requires robust control and optimization of the heating/cooling modules operating at different temperatures; any inconsistencies in the heating/cooling temperatures or transition times during the PCR cycles could result in nonspecific or even no amplification9,10. Furthermore, the detection of the amplification poses additional challenges: it requires specialized equipment—such as electrophoresis or fluorescence-based instruments—thus adding to the complexity, cost, and overall process time of the system. These challenges make it difficult to move the PCR technique outside a specialized lab and use it in a point-of-care (POC) setting. On the other hand, POC-based devices may assist in screening large populations outside of laboratories. These devices can minimize the number of unnecessary visits to labs and hospitals, reducing not just the burden on healthcare workers but also the risk of virus spread. Furthermore, rapid POC-based molecular tests would allow governments to conduct more diagnostic tests in parallel; thus, more asymptomatic patients are likely to be detected, allowing for more efficient control of the pandemic. Other detection methods—called serology tests—detect antibodies or antigens associated with SARS-CoV-2.
The serology tests are rapid, easy to use, cheaper, less complicated, and allow POC operation11,12. However, the antibody tests do not confirm an active state of infection in a patient, as they rely on the antibodies that the patient's immune system produces in response to SARS-CoV-212,13. Moreover, these tests suffer from low accuracy, low sensitivity, and a high number of false-positive/negative results13. Therefore, there is a high demand for alternative, accurate POC-based detection methods. In this respect, a class of techniques called "isothermal amplification" appears to be a promising alternative. In contrast to the PCR method, the isothermal techniques require only a single temperature to carry out the nucleic acid amplification. Furthermore, the temperature requirement in these techniques typically ranges from 37 °C to 65 °C, much lower than that used in the PCR denaturation step (~ 90 °C–95 °C). This greatly simplifies the heating requirements of isothermal amplification-based systems, as there is no longer any need for thermocyclers. In addition, it makes this technique easy to employ in portable devices that can be used in POC settings. Examples include helicase-dependent amplification (HDA)14, strand-displacement amplification (SDA)15, nucleic acid sequence-based amplification (NASBA)16, rolling circle amplification (RCA)17, and loop-mediated isothermal amplification (LAMP)18. Among the isothermal amplification-based diagnostic tests, LAMP has emerged as an attractive amplification method for POC applications because of its simplicity, high tolerance against inhibitors, and ability to amplify minimally processed or even raw samples19–21. LAMP recognizes six to eight regions of the DNA and utilizes four to six primers, a strand-displacing DNA polymerase, and an additional reverse transcriptase in the case of RNA amplification (i.e., RT-LAMP). The result is a highly specific, exponential amplification of the target nucleic acid in 20–60 min22. This extensive synthesis facilitates the detection of the amplicon via a variety of techniques, which include agarose-gel electrophoresis23, real-time fluorescence detection using an intercalating DNA dye24, turbidity25, a metal-sensitive indicator dye26, or a pH-sensitive indicator dye in minimally buffered or non-buffered solutions27,28. Moreover, the latter requires no specialized detection equipment, since direct visual evaluation is possible. Among these detection techniques, pH-sensitive dyes are the most favorable and convenient for allowing LAMP to be used in a POC setting. Their underlying principle is that a successful amplification produces hydrogen ions as a byproduct, which turns the initially alkaline solution acidic and reduces the pH value (by ≥ 2 pH units). This drop in the pH value is detected by a change in the color of a pH-sensitive dye that is added with the other reagents27,29.
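To make the colorimetric read-out concrete, the short sketch below maps the mean hue of a cropped reaction-tube image to a qualitative call (fuchsia for negative, yellow or orange for positive). It is only an illustrative sketch: the hue thresholds, the averaging of a pre-cropped tube region, and the file name are assumptions made for the example and are not taken from the cited works, which rely on visual inspection or, as in the present system, a learned classifier.

```python
# Illustrative sketch only: call an RT-LAMP reaction from the hue of the pH
# indicator dye. The thresholds below are assumed, not calibrated values.
import colorsys

import numpy as np
from PIL import Image

def classify_reaction(image_path: str) -> str:
    """Return 'positive' (yellow/orange), 'negative' (fuchsia), or 'indeterminate'."""
    rgb = np.asarray(Image.open(image_path).convert("RGB"), dtype=float) / 255.0
    r, g, b = rgb.reshape(-1, 3).mean(axis=0)   # mean color of the cropped tube region
    hue, _, _ = colorsys.rgb_to_hsv(r, g, b)    # hue in [0, 1)
    if hue < 0.17:        # roughly 0-60 degrees: orange/yellow -> acidified -> amplified
        return "positive"
    if hue > 0.75:        # roughly 270-360 degrees: fuchsia/magenta -> alkaline -> not amplified
        return "negative"
    return "indeterminate"

# Example (hypothetical file): classify_reaction("tube_crop.png")
```

Fixed thresholds such as these are fragile under varying illumination and for intermediate colors, which is one reason the device described here pairs the colorimetric chemistry with a deep learning model rather than a hand-tuned rule.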
A detailed comparison of the molecular and non-molecular techniques for diagnosing COVID-19 is provided in Table 1.

Table 1. The comparison of selective molecular and non-molecular techniques for the detection of COVID-19. Columns: molecular tests (RT-PCR, RT-LAMP), antibody tests (ELISA, IgG/IgM lateral flow assay), and antigen test.
1. What it detects. RT-PCR: viral RNA. RT-LAMP: viral RNA. ELISA: antibodies. IgG/IgM lateral flow assay: antibodies. Antigen test: viral antigens (specific proteins on the surface of the virus).
2. Sample taken from. RT-PCR: nasopharyngeal swab, sputum, saliva, or stool. RT-LAMP: same as RT-PCR. ELISA: blood. IgG/IgM lateral flow assay: human serum, plasma, or whole blood. Antigen test: nasal or throat swab.
3. Performed in. RT-PCR: lab. RT-LAMP: lab or point-of-care. ELISA: lab. IgG/IgM lateral flow assay: point-of-care. Antigen test: lab.
4. Time required. RT-PCR: 3–4 h. RT-LAMP: variable (35 min–3 h). ELISA: 1–3 h. IgG/IgM lateral flow assay: 10–20 min. Antigen test: 15 min.
5. Specificity. RT-PCR: high. RT-LAMP: high. ELISA: high (after at least 14 days of active infection). IgG/IgM lateral flow assay: high (after at least 14 days of active infection). Antigen test: moderate.
6. Sensitivity. RT-PCR: high. RT-LAMP: high. ELISA: high (after at least 14 days of active infection). IgG/IgM lateral flow assay: high (after at least 14 days of active infection). Antigen test: moderate.
7. What it tells. RT-PCR: active coronavirus infection. RT-LAMP: active coronavirus infection. ELISA: past coronavirus infection. IgG/IgM lateral flow assay: past coronavirus infection. Antigen test: active coronavirus infection.
8. Pros. RT-PCR: commonly used; the gold standard. RT-LAMP: rapid; results can be detected by the naked eye. ELISA: simple and cheap. IgG/IgM lateral flow assay: simple, cheap, and fast; visual inspection possible. Antigen test: positive results are usually highly accurate.
9. Cons. RT-PCR: requires bulky, expensive, and specialized equipment to analyze the results; the time needed to complete the test is high; trained personnel are required. RT-LAMP: primer design can be complex, with more chances of primer-to-primer interaction; qualitative test (challenging to quantify the results, i.e., the level of viral infection). ELISA and IgG/IgM lateral flow assay: not well established; it can take from days to several weeks for enough antibodies to develop to be detectable; do not show an active coronavirus infection; need PCR validation. Antigen test: a higher chance of missing an active infection (less sensitive than molecular tests); negative results may need to be confirmed via a molecular test.
10. Cost. RT-PCR: high. RT-LAMP: moderate. ELISA: low. IgG/IgM lateral flow assay: low. Antigen test: low.

Due to these positive attributes and potential benefits, there is a growing interest in using the LAMP technique in several fields and in POC diagnostics of various pathogens, especially SARS-CoV-220,22,27,30–32. A CMOS-based RT-LAMP POC platform was reported to amplify and detect the nucleocapsid (N) gene of SARS-CoV-233. The platform is integrated with a smartphone for data visualization and presents the results within 20 min after RNA extraction. An additively manufactured portable POC platform detected the presence of SARS-CoV-2 in 30 min without requiring the RNA extraction step20. The samples were first thermally lysed and then mixed with RT-LAMP reagents in a serpentine microfluidic cartridge. During the amplification step, the mixture is heated at 65 °C, and the fluorescence emission during the process is recorded using a smartphone camera integrated with the cartridge. In addition, a tablet PC-based POC colorimetric platform to detect COVID-19 was also introduced34. The device could perform 8 tests simultaneously and yield qualitative results in ~ 30 min. In November 2020, the US Food and Drug Administration (FDA) issued an Emergency Use Authorization (EUA) to an RT-LAMP-based POC device, 'Lucira'35, for the qualitative detection of COVID-19. The single-use device can be used outside a lab setting by individuals 14 years old or older, using self-collected nasopharyngeal swab samples.
Owing to their ease of operation and affordability, more devices of a similar nature are expected to be introduced in the near future. The subject of utilizing deep learning for early detection and prediction of COVID-19 has also attracted considerable attention since the pandemic outbreak. Researchers across the globe have developed numerous classification algorithms to enable fast and reliable detection of the infection36. Efforts are underway to use deep-learning-based detection and prediction in conjunction with existing diagnostic tools to produce more accurate and time-efficient results. This is expected to assist clinicians and healthcare professionals in making more appropriate and timely data-driven decisions. Furthermore, automated diagnostics is more likely to eliminate human-related errors in the analyses, thus benefiting both patients and healthcare systems. Most deep-learning studies have investigated chest information in healthy and COVID-19-positive subjects37. These chest manifestations are obtained either through Computed Tomography (C.T.)38–40, X-rays41,42, or fused C.T. and X-ray imagery43,44. The work of Wang et al.40 is notable here, as they used a DenseNet-20145-driven encoder-decoder network to extract chest lesions from C.T. imagery. The extracted lesions are then utilized to provide a lesion-aware COVID-19 diagnostic and prognostic analysis. The authors tested their framework on a total of 5,372 C.T. scans, where it achieved area-under-the-curve (AUC) scores of 0.86 and 0.88 for distinguishing COVID-19 from viral and other pneumonia, respectively. Similarly, Xu et al.41 developed a custom multi-class deep classification model that takes patches of the candidate C.T. scan to screen for the presence of COVID-19 and other chest abnormalities, such as the Influenza-A Viral Pneumonia (IVAP) and Irrelevant to Infection (ITI) groups. The framework achieved an accuracy rate of 0.8670 when tested on a dataset containing 11,871 C.T. image patches (of which 2634 patches show COVID-19 symptoms, 2661 patches contain IVAP pathologies, and 6576 belong to the ITI group). It should also be noted that although the processing time for C.T. scans is short, they require expensive and complicated equipment. On the other hand, X-ray imagery (particularly CXRs) costs less and has lower memory requirements. Considering this aspect, Chowdhury et al.42 tuned pre-trained models (such as ResNet-10146, DenseNet-20145, MobileNetv247, etc.) to screen healthy, COVID-19 pneumonia, and viral pneumonia cases from 3487 CXRs and achieved the best accuracy of 0.9970 with DenseNet-20145. Similarly, Narin et al.44 utilized pre-trained models to screen healthy and infected (COVID-19 pneumonia, bacterial pneumonia, and viral pneumonia) patients. Their model achieved overall accuracies of 0.9610, 0.9950, and 0.9970 on three custom datasets consisting of 3141, 1843, and 3113 CXRs, respectively. Islam et al.48 utilized a CNN-coupled LSTM model to detect COVID-19 manifestations from CXR imagery. They validated their framework on a custom dataset containing 4575 scans and achieved AUC and accuracy ratings of 0.9990 and 0.9940, respectively. Saha et al.49 proposed EMCNet, a deep feature extractor-based ensemble of different classification models, to diagnose COVID-19 via CXR imagery. EMCNet was tested on 460 CXRs, where it achieved accuracy, sensitivity, and precision ratings of 0.9891, 0.9782, and 1.0000, respectively.
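For reference, the sketch below outlines the kind of transfer-learning pipeline used in several of the CXR studies summarized above: a DenseNet-201 backbone pretrained on ImageNet is given a new three-class head (normal, COVID-19 pneumonia, viral pneumonia) and fine-tuned. The dataset path, class count, and hyperparameters are placeholders and do not reproduce the exact configurations of the cited works.

```python
# Illustrative transfer-learning sketch (assumed paths and hyperparameters).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # ImageNet stats
])
train_ds = datasets.ImageFolder("cxr_dataset/train", transform=tfms)  # hypothetical folder layout
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)

# Pretrained backbone with the 1000-way ImageNet head replaced by a 3-class head.
model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 3)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                      # short schedule, for illustration only
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Swapping in ResNet-101 or MobileNetV2 changes only the backbone and its classifier head; the fine-tuning loop stays the same, which is why comparisons across pretrained models are common in the works cited above.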
Islam et al.50 combined CNN backbones, such as ResNets46 and DenseNets45, with recurrent neural networks to recognize healthy, COVID-19 pneumonia, and non-COVID-19 pneumonia pathologies from CXR imagery. They tested their framework on a dataset consisting of 1,388 CXRs, where it achieved the best accuracy of 0.9986 by coupling VGG-1951 with an RNN. Moreover, Islam et al.52 presented a review of the different modalities that are commonly used in conjunction with deep learning systems for screening and grading COVID-19 manifestations. Asraf et al.53 presented an overview of the application of deep learning schemes to control the spread of COVID-19. Rahman et al.54 discussed four different applications of machine learning approaches to combat COVID-19 and its related challenges. Azmat Ullah et al.55 presented a review of scalable telehealth services that support patients suffering from COVID-19. Islam et al.56 discussed wearable monitoring devices (based on respiration rate, heart rate, temperature, and oxygen saturation levels) and respiratory support systems that are frequently used to assist COVID-19-positive subjects. In another work, Islam et al.57 presented an overview of breathing aid devices, such as ventilators and continuous positive airway pressure devices, that aid in rehabilitating COVID-19 subjects. In addition, Batista et al.58 utilized 235 RT-PCR samples to screen for COVID-19 via different machine learning models, such as Support Vector Machines (SVM), Random Forests (RF), Artificial Neural Networks (ANN), Logistic Regression (LR), and Gradient Boosting Trees (GBT). They achieved the best AUC score of 0.847 using SVM and RF. Jiang et al.59 used predictive analysis based on acute respiratory distress syndrome, alanine aminotransferase, elevated hemoglobin, and myalgias, and achieved an accuracy of 0.800 in screening RT-PCR samples of 53 subjects as healthy or COVID-19 positive. Finally, Rahman et al.60 developed a custom lightweight CNN model to detect face mask violations in smart city networks through closed-circuit television (CCTV) imagery. They tested their model on a local dataset containing 308 scans, where it achieved an accuracy of 0.9870 in recognizing persons with and without face masks. Looking into the literature, we can observe that many researchers have worked on screening COVID-19 via deep learning. The majority of these methods rely on finding clinical manifestations in CXRs. However, the assessment of COVID-19 from CXRs is vulnerable to noise and other vendor artifacts61. Furthermore, the clinical biomarkers within CXRs for diagnosing COVID-19 and non-COVID-19 pneumonia are highly correlated, which can affect the performance of the deep learning system62. It should be noted that COVID-19 screening through C.T. imagery or fused C.T. and CXR imagery is reliable40; nevertheless, incorporating C.T. imagery for rapid COVID-19 analysis is costly and cannot be performed in remote clinics and hospitals. To overcome these limitations, we present a cost-effective device that can simultaneously acquire Reverse-Transcription Loop-Mediated Isothermal Amplification (RT-LAMP) reactions and utilize a multi-resolution deep classification model to accurately screen those reactions as healthy or COVID-19 positive. A detailed discussion of the novel contributions of the proposed system is presented in the subsequent section. | [
"32651579",
"32939084",
"32580937",
"15247927",
"1579461",
"11876473",
"24643375",
"10871386",
"32868442",
"33439160",
"22987649",
"26554941",
"15163526",
"18451795",
"25652028",
"33165486",
"32900935",
"33649735",
"32568676",
"32767103",
"32444412",
"32837749",
"32835084",
"33363252",
"34976571",
"34010791",
"33574070",
"15015033",
"32635743",
"33425651",
"32396075"
] | [
{
"pmid": "32651579",
"title": "Extrapulmonary manifestations of COVID-19.",
"abstract": "Although COVID-19 is most well known for causing substantial respiratory pathology, it can also result in several extrapulmonary manifestations. These conditions include thrombotic complications, myocardial dysfunction and arrhythmia, acute coronary syndromes, acute kidney injury, gastrointestinal symptoms, hepatocellular injury, hyperglycemia and ketosis, neurologic illnesses, ocular symptoms, and dermatologic complications. Given that ACE2, the entry receptor for the causative coronavirus SARS-CoV-2, is expressed in multiple extrapulmonary tissues, direct viral tissue damage is a plausible mechanism of injury. In addition, endothelial damage and thromboinflammation, dysregulation of immune responses, and maladaptation of ACE2-related pathways might all contribute to these extrapulmonary manifestations of COVID-19. Here we review the extrapulmonary organ-specific pathophysiology, presentations and management considerations for patients with COVID-19 to aid clinicians and scientists in recognizing and monitoring the spectrum of manifestations, and in developing research priorities and therapeutic strategies for all organ systems involved."
},
{
"pmid": "15247927",
"title": "Helicase-dependent isothermal DNA amplification.",
"abstract": "Polymerase chain reaction is the most widely used method for in vitro DNA amplification. However, it requires thermocycling to separate two DNA strands. In vivo, DNA is replicated by DNA polymerases with various accessory proteins, including a DNA helicase that acts to separate duplex DNA. We have devised a new in vitro isothermal DNA amplification method by mimicking this in vivo mechanism. Helicase-dependent amplification (HDA) utilizes a DNA helicase to generate single-stranded templates for primer hybridization and subsequent primer extension by a DNA polymerase. HDA does not require thermocycling. In addition, it offers several advantages over other isothermal DNA amplification methods by having a simple reaction scheme and being a true isothermal reaction that can be performed at one temperature for the entire process. These properties offer a great potential for the development of simple portable DNA diagnostic devices to be used in the field and at the point-of-care."
},
{
"pmid": "1579461",
"title": "Strand displacement amplification--an isothermal, in vitro DNA amplification technique.",
"abstract": "Strand Displacement Amplification (SDA) is an isothermal, in vitro nucleic acid amplification technique based upon the ability of HincII to nick the unmodified strand of a hemiphosphorothioate form of its recognition site, and the ability of exonuclease deficient klenow (exo- klenow) to extend the 3'-end at the nick and displace the downstream DNA strand. Exponential amplification results from coupling sense and antisense reactions in which strands displaced from a sense reaction serve as target for an antisense reaction and vice versa. In the original design (G. T. Walker, M. C. Little, J. G. Nadeau and D. D. Shank (1992) Proc. Natl. Acad. Sci 89, 392-396), the target DNA sample is first cleaved with a restriction enzyme(s) in order to generate a double-stranded target fragment with defined 5'- and 3'-ends that can then undergo SDA. Although effective, target generation by restriction enzyme cleavage presents a number of practical limitations. We report a new target generation scheme that eliminates the requirement for restriction enzyme cleavage of the target sample prior to amplification. The method exploits the strand displacement activity of exo- klenow to generate target DNA copies with defined 5'- and 3'-ends. The new target generation process occurs at a single temperature (after initial heat denaturation of the double-stranded DNA). The target copies generated by this process are then amplified directly by SDA. The new protocol improves overall amplification efficiency. Amplification efficiency is also enhanced by improved reaction conditions that reduce nonspecific binding of SDA primers. Greater than 10(7)-fold amplification of a genomic sequence from Mycobacterium tuberculosis is achieved in 2 hours at 37 degrees C even in the presence of as much as 10 micrograms of human DNA per 50 microL reaction. The new target generation scheme can also be applied to techniques separate from SDA as a means of conveniently producing double-stranded fragments with 5'- and 3'-sequences modified as desired."
},
{
"pmid": "11876473",
"title": "Characteristics and applications of nucleic acid sequence-based amplification (NASBA).",
"abstract": "Nucleic acid sequence-based amplification (NASBA) is a sensitive, isothermal, transcription-based amplification system specifically designed for the detection of RNA targets. In some NASBA systems, DNA is also amplified though very inefficiently and only in the absence of the corresponding RNA target or in case of an excess (>1,000-fold) of target DNA over RNA. As NASBA is primer-dependent and amplicon detection is based on probe binding, primer and probe design rules are included. An overview of various target nucleic acids that have been amplified successfully using NASBA is presented. For the isolation of nucleic acids prior to NASBA, the \"Boom\" method, based on the denaturing properties of guanidine isothiocyanate and binding of nucleic acid to silica particles, is preferred. Currently, electro-chemiluminescence (ECL) is recommended for the detection of the amplicon at the end of amplification. In the near future, molecular beacons will be introduced enabling \"real-time detection,\" i.e., amplicon detection during amplification. Quantitative HIV-1 NASBA and detection of up to 48 samples can then be performed in only 90 min."
},
{
"pmid": "24643375",
"title": "Rolling circle amplification: a versatile tool for chemical biology, materials science and medicine.",
"abstract": "Rolling circle amplification (RCA) is an isothermal enzymatic process where a short DNA or RNA primer is amplified to form a long single stranded DNA or RNA using a circular DNA template and special DNA or RNA polymerases. The RCA product is a concatemer containing tens to hundreds of tandem repeats that are complementary to the circular template. The power, simplicity, and versatility of the DNA amplification technique have made it an attractive tool for biomedical research and nanobiotechnology. Traditionally, RCA has been used to develop sensitive diagnostic methods for a variety of targets including nucleic acids (DNA, RNA), small molecules, proteins, and cells. RCA has also attracted significant attention in the field of nanotechnology and nanobiotechnology. The RCA-produced long, single-stranded DNA with repeating units has been used as template for the periodic assembly of nanospecies. Moreover, since RCA products can be tailor-designed by manipulating the circular template, RCA has been employed to generate complex DNA nanostructures such as DNA origami, nanotubes, nanoribbons and DNA based metamaterials. These functional RCA based nanotechnologies have been utilized for biodetection, drug delivery, designing bioelectronic circuits and bioseparation. In this review, we introduce the fundamental engineering principles used to design RCA nanotechnologies, discuss recently developed RCA-based diagnostics and bioanalytical tools, and summarize the use of RCA to construct multivalent molecular scaffolds and nanostructures for applications in biology, diagnostics and therapeutics."
},
{
"pmid": "10871386",
"title": "Loop-mediated isothermal amplification of DNA.",
"abstract": "We have developed a novel method, termed loop-mediated isothermal amplification (LAMP), that amplifies DNA with high specificity, efficiency and rapidity under isothermal conditions. This method employs a DNA polymerase and a set of four specially designed primers that recognize a total of six distinct sequences on the target DNA. An inner primer containing sequences of the sense and antisense strands of the target DNA initiates LAMP. The following strand displacement DNA synthesis primed by an outer primer releases a single-stranded DNA. This serves as template for DNA synthesis primed by the second inner and outer primers that hybridize to the other end of the target, which produces a stem-loop DNA structure. In subsequent LAMP cycling one inner primer hybridizes to the loop on the product and initiates displacement DNA synthesis, yielding the original stem-loop DNA and a new stem-loop DNA with a stem twice as long. The cycling reaction continues with accumulation of 10(9) copies of target in less than an hour. The final products are stem-loop DNAs with several inverted repeats of the target and cauliflower-like structures with multiple loops formed by annealing between alternately inverted repeats of the target in the same strand. Because LAMP recognizes the target by six distinct sequences initially and by four distinct sequences afterwards, it is expected to amplify the target sequence with high selectivity."
},
{
"pmid": "32868442",
"title": "Rapid isothermal amplification and portable detection system for SARS-CoV-2.",
"abstract": "The COVID-19 pandemic provides an urgent example where a gap exists between availability of state-of-the-art diagnostics and current needs. As assay protocols and primer sequences become widely known, many laboratories perform diagnostic tests using methods such as RT-PCR or reverse transcription loop mediated isothermal amplification (RT-LAMP). Here, we report an RT-LAMP isothermal assay for the detection of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus and demonstrate the assay on clinical samples using a simple and accessible point-of-care (POC) instrument. We characterized the assay by dipping swabs into synthetic nasal fluid spiked with the virus, moving the swab to viral transport medium (VTM), and sampling a volume of the VTM to perform the RT-LAMP assay without an RNA extraction kit. The assay has a limit of detection (LOD) of 50 RNA copies per μL in the VTM solution within 30 min. We further demonstrate our assay by detecting SARS-CoV-2 viruses from 20 clinical samples. Finally, we demonstrate a portable and real-time POC device to detect SARS-CoV-2 from VTM samples using an additively manufactured three-dimensional cartridge and a smartphone-based reader. The POC system was tested using 10 clinical samples, and was able to detect SARS-CoV-2 from these clinical samples by distinguishing positive samples from negative samples after 30 min. The POC tests are in complete agreement with RT-PCR controls. This work demonstrates an alternative pathway for SARS-CoV-2 diagnostics that does not require conventional laboratory infrastructure, in settings where diagnosis is required at the point of sample collection."
},
{
"pmid": "33439160",
"title": "Rapid molecular diagnostics of COVID-19 by RT-LAMP in a centrifugal polystyrene-toner based microdevice with end-point visual detection.",
"abstract": "Infection caused by the new coronavirus (SARS-CoV-2) has become a serious worldwide public health problem, and one of the most important strategies for its control is mass testing. Loop-mediated isothermal amplification (LAMP) has emerged as an important alternative to simplify the diagnostics of infectious diseases. In addition, an advantage of LAMP is that it allows for easy reading of the final result through visual detection. However, this step must be performed with caution to avoid contamination and false-positive results. LAMP performed on microfluidic platforms can minimize false-positive results, in addition to having potential for point-of-care applications. Here, we describe a polystyrene-toner (PS-T) centrifugal microfluidic device manually controlled by a fidget spinner for molecular diagnosis of COVID-19 by RT-LAMP, with integrated and automated colorimetric detection. The amplification was carried out in a microchamber with 5 μL capacity, and the reaction was thermally controlled with a thermoblock at 72 °C for 10 min. At the end of the incubation time, the detection of amplified RT-LAMP fragments was performed directly on the chip by automated visual detection. Our results demonstrate that it is possible to detect COVID-19 in reactions initiated with approximately 10-3 copies of SARS-CoV-2 RNA. Clinical samples were tested using our RT-LAMP protocol as well as by conventional RT-qPCR, demonstrating comparable performance to the CDC SARS-CoV-2 RT-qPCR assay. The methodology described in this study represents a simple, rapid, and accurate method for rapid molecular diagnostics of COVID-19 in a disposable microdevice, ideal for point-of-care testing (POCT) systems."
},
{
"pmid": "22987649",
"title": "The development of a loop-mediated isothermal amplification method (LAMP) for Echinococcus granulosus [corrected] coprodetection.",
"abstract": "We have previously developed a polymerase chain reaction (PCR) assay for detection of Echinococcus granulosus infection, which proved very sensitive and specific for identification of infected dogs. We have now developed a loop-mediated isothermal amplification (LAMP) assay, which amplifies the same genomic repeated sequences of E. granulosus for coprodetection. This assay enabled detection of a single egg in fecal samples and showed high species specificity for E. granulosus with no cross-amplification of DNA from closely related helminths, including Echinococcus multilocularis. Because the method does not require thermocycling for DNA amplification, or electrophoresis for amplicon detection, it can potentially be used for premortem identification of E. granulosus-infected dogs to enable large-scale surveys in endemic countries where highly specialized equipment to undertake PCR analysis is rare."
},
{
"pmid": "26554941",
"title": "Selection of fluorescent DNA dyes for real-time LAMP with portable and simple optics.",
"abstract": "Loop-mediated isothermal amplification (LAMP) is increasingly used for point-of-care nucleic acid based diagnostics. LAMP can be monitored in real-time by measuring the increase in fluorescence of DNA binding dyes. However, there is little information comparing the effect of various fluorescent dyes on signal to noise ratio (SNR) or threshold time (Tt). This information is critical for implementation with field deployable diagnostic tools that require small, low power consumption, robust, and inexpensive optical components with reagent saving low volume reactions. In this study, SNR and Tt during real-time LAMP was evaluated with eleven fluorescent dyes. Of all dyes tested, SYTO-82, SYTO-84, and SYTOX Orange resulted in the shortest Tt, and SYTO-81 had the widest range of working concentrations. The optimized protocol detected 10 genome copies of Mycobacterium tuberculosis in less than 10 min, 10 copies of Giardia intestinalis in ~20 min, and 10 copies of Staphylococcus aureus or Salmonella enterica in less than 15 min. Results demonstrate that reaction efficiency depends on both dye type and concentration and the selected polymerase. The optimized protocol was evaluated in the Gene-Z™ device, a hand-held battery operated platform characterized via simple and low cost optics, and a multiple assay microfluidic chip with micron volume reaction wells. Compared to the more conventional intercalating dye (SYBR Green), reliable amplification was only observed in the Gene-Z™ when using higher concentrations of SYTO-81."
},
{
"pmid": "15163526",
"title": "Real-time turbidimetry of LAMP reaction for quantifying template DNA.",
"abstract": "Loop-mediated isothermal amplification (LAMP) is a nucleic acid amplification method that allows the synthesis of large amounts of DNA in a short period of time with high specificity. As the LAMP reaction progresses, the reaction by-product pyrophosphate ions bind to magnesium ions and form a white precipitate of magnesium pyrophosphate. We designed an apparatus capable of measuring the turbidity of multiple samples simultaneously while maintaining constant temperature to conduct real-time measurements of the changes in the turbidity of LAMP reactions. The time (Tt) required for the turbidity of the LAMP reaction solution to exceed a given value was dependent on the quantity of the initial template DNA. That is, a graph with the plot of Tt versus the log of the amount of initial template DNA was linear from 2 x 10(3) copies (0.01 pg/tube) to 2 x 10(9) copies (100 ng/tube) of template DNA. These results indicate that real-time turbidity measurements of the LAMP reaction permit the quantitative analysis of minute amounts of nucleic acids present in a sample, with a high precision over a wide range, using a simple apparatus reported in this study."
},
{
"pmid": "18451795",
"title": "Loop-mediated isothermal amplification (LAMP) of gene sequences and simple visual detection of products.",
"abstract": "As the human genome is decoded and its involvement in diseases is being revealed through postgenome research, increased adoption of genetic testing is expected. Critical to such testing methods is the ease of implementation and comprehensible presentation of amplification results. Loop-mediated isothermal amplification (LAMP) is a simple, rapid, specific and cost-effective nucleic acid amplification method when compared to PCR, nucleic acid sequence-based amplification, self-sustained sequence replication and strand displacement amplification. This protocol details an improved simple visual detection system for the results of the LAMP reaction. In LAMP, a large amount of DNA is synthesized, yielding a large pyrophosphate ion by-product. Pyrophosphate ion combines with divalent metallic ion to form an insoluble salt. Adding manganous ion and calcein, a fluorescent metal indicator, to the reaction solution allows a visualization of substantial alteration of the fluorescence during the one-step amplification reaction, which takes 30-60 min. As the signal recognition is highly sensitive, this system enables visual discrimination of results without costly specialized equipment. This detection method should be helpful in basic research on medicine and pharmacy, environmental hygiene, point-of-care testing and more."
},
{
"pmid": "25652028",
"title": "Visual detection of isothermal nucleic acid amplification using pH-sensitive dyes.",
"abstract": "Nucleic acid amplification is the basis for many molecular diagnostic assays. In these cases, the amplification product must be detected and analyzed, typically requiring extended workflow time, sophisticated equipment, or both. Here we present a novel method of amplification detection that harnesses the pH change resulting from amplification reactions performed with minimal buffering capacity. In loop-mediated isothermal amplification (LAMP) reactions, we achieved rapid (<30 min) and sensitive (<10 copies) visual detection using pH-sensitive dyes. Additionally, the detection can be performed in real time, enabling high-throughput or quantitative applications. We also demonstrate this visual detection for another isothermal amplification method (strand-displacement amplification), PCR, and reverse transcription LAMP (RT-LAMP) detection of RNA. The colorimetric detection of amplification presented here represents a generally applicable approach for visual detection of nucleic acid amplification, enabling molecular diagnostic tests to be analyzed immediately without the need for specialized and expensive instrumentation."
},
{
"pmid": "33165486",
"title": "Colorimetric reverse transcription loop-mediated isothermal amplification (RT-LAMP) as a visual diagnostic platform for the detection of the emerging coronavirus SARS-CoV-2.",
"abstract": "COVID-19, caused by the infection of SARS-CoV-2, has emerged as a rapidly spreading infection. The disease has now reached the level of a global pandemic and as a result a more rapid and simple detection method is imperative to curb the spread of the virus. We aimed to develop a visual diagnostic platform for SARS-CoV-2 based on colorimetric RT-LAMP with levels of sensitivity and specificity comparable to that of commercial qRT-PCR assays. In this work, the primers were designed to target a conserved region of the RNA-dependent RNA polymerase gene (RdRp). The assay was characterized for its sensitivity and specificity, and validated with clinical specimens collected in Thailand. The developed colorimetric RT-LAMP assay could amplify the target gene and enabled visual interpretation in 60 min at 65 °C. No cross-reactivity with six other common human respiratory viruses (influenza A virus subtypes H1 and H3, influenza B virus, respiratory syncytial virus types A and B, and human metapneumovirus) and five other human coronaviruses (MERS-CoV, HKU-1, OC43, 229E and NL63) was observed. The limit of detection was 25 copies per reaction when evaluated with contrived specimens. However, the detection rate at this concentration fell to 95.8% when the incubation time was reduced from 60 to 30 min. The diagnostic performance of the developed RT-LAMP assay was evaluated in 2120 clinical specimens and compared with the commercial qRT-PCR. The results revealed high sensitivity and specificity of 95.74% and 99.95%, respectively. The overall accuracy of the RT-LAMP assay was determined to be 99.86%. In summary, our results indicate that the developed colorimetric RT-LAMP provides a simple, sensitive and reliable approach for the detection of SARS-CoV-2 in clinical samples, implying its beneficial use as a diagnostic platform for COVID-19 screening."
},
{
"pmid": "32900935",
"title": "SARS-CoV-2 detection using isothermal amplification and a rapid, inexpensive protocol for sample inactivation and purification.",
"abstract": "The current severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic has had an enormous impact on society worldwide, threatening the lives and livelihoods of many. The effects will continue to grow and worsen if economies begin to open without the proper precautions, including expanded diagnostic capabilities. To address this need for increased testing, we have developed a sensitive reverse-transcription loop-mediated isothermal amplification (RT-LAMP) assay compatible with current reagents, which utilizes a colorimetric readout in as little as 30 min. A rapid inactivation protocol capable of inactivating virions, as well as endogenous nucleases, was optimized to increase sensitivity and sample stability. This protocol, combined with the RT-LAMP assay, has a sensitivity of at least 50 viral RNA copies per microliter in a sample. To further increase the sensitivity, a purification protocol compatible with this inactivation method was developed. The inactivation and purification protocol, combined with the RT-LAMP assay, brings the sensitivity to at least 1 viral RNA copy per microliter in a sample. This simple inactivation and purification pipeline is inexpensive and compatible with other downstream RNA detection platforms and uses readily available reagents. It should increase the availability of SARS-CoV-2 testing as well as expand the settings in which this testing can be performed."
},
{
"pmid": "33649735",
"title": "Handheld Point-of-Care System for Rapid Detection of SARS-CoV-2 Extracted RNA in under 20 min.",
"abstract": "The COVID-19 pandemic is a global health emergency characterized by the high rate of transmission and ongoing increase of cases globally. Rapid point-of-care (PoC) diagnostics to detect the causative virus, SARS-CoV-2, are urgently needed to identify and isolate patients, contain its spread and guide clinical management. In this work, we report the development of a rapid PoC diagnostic test (<20 min) based on reverse transcriptase loop-mediated isothermal amplification (RT-LAMP) and semiconductor technology for the detection of SARS-CoV-2 from extracted RNA samples. The developed LAMP assay was tested on a real-time benchtop instrument (RT-qLAMP) showing a lower limit of detection of 10 RNA copies per reaction. It was validated against extracted RNA from 183 clinical samples including 127 positive samples (screened by the CDC RT-qPCR assay). Results showed 91% sensitivity and 100% specificity when compared to RT-qPCR and average positive detection times of 15.45 ± 4.43 min. For validating the incorporation of the RT-LAMP assay onto our PoC platform (RT-eLAMP), a subset of samples was tested (n = 52), showing average detection times of 12.68 ± 2.56 min for positive samples (n = 34), demonstrating a comparable performance to a benchtop commercial instrument. Paired with a smartphone for results visualization and geolocalization, this portable diagnostic platform with secure cloud connectivity will enable real-time case identification and epidemiological surveillance."
},
{
"pmid": "32568676",
"title": "Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks.",
"abstract": "Fast diagnostic methods can control and prevent the spread of pandemic diseases like coronavirus disease 2019 (COVID-19) and assist physicians to better manage patients in high workload conditions. Although a laboratory test is the current routine diagnostic tool, it is time-consuming, imposing a high cost and requiring a well-equipped laboratory for analysis. Computed tomography (CT) has thus far become a fast method to diagnose patients with COVID-19. However, the performance of radiologists in diagnosis of COVID-19 was moderate. Accordingly, additional investigations are needed to improve the performance in diagnosing COVID-19. In this study is suggested a rapid and valid method for COVID-19 diagnosis using an artificial intelligence technique based. 1020 CT slices from 108 patients with laboratory proven COVID-19 (the COVID-19 group) and 86 patients with other atypical and viral pneumonia diseases (the non-COVID-19 group) were included. Ten well-known convolutional neural networks were used to distinguish infection of COVID-19 from non-COVID-19 groups: AlexNet, VGG-16, VGG-19, SqueezeNet, GoogleNet, MobileNet-V2, ResNet-18, ResNet-50, ResNet-101, and Xception. Among all networks, the best performance was achieved by ResNet-101 and Xception. ResNet-101 could distinguish COVID-19 from non-COVID-19 cases with an AUC of 0.994 (sensitivity, 100%; specificity, 99.02%; accuracy, 99.51%). Xception achieved an AUC of 0.994 (sensitivity, 98.04%; specificity, 100%; accuracy, 99.02%). However, the performance of the radiologist was moderate with an AUC of 0.873 (sensitivity, 89.21%; specificity, 83.33%; accuracy, 86.27%). ResNet-101 can be considered as a high sensitivity model to characterize and diagnose COVID-19 infections, and can be used as an adjuvant tool in radiology departments."
},
{
"pmid": "32767103",
"title": "A decade of radiomics research: are images really data or just patterns in the noise?",
"abstract": "• Although radiomics is potentially a promising approach to analyze medical image data, many pitfalls need to be considered to avoid a reproducibility crisis.• There is a translation gap in radiomics research, with many studies being published but so far little to no translation into clinical practice.• Going forward, more studies with higher levels of evidence are needed, ideally also focusing on prospective studies with relevant clinical impact."
},
{
"pmid": "32444412",
"title": "A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis.",
"abstract": "Coronavirus disease 2019 (COVID-19) has spread globally, and medical resources become insufficient in many regions. Fast diagnosis of COVID-19 and finding high-risk patients with worse prognosis for early prevention and medical resource optimisation is important. Here, we proposed a fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis by routinely used computed tomography.We retrospectively collected 5372 patients with computed tomography images from seven cities or provinces. Firstly, 4106 patients with computed tomography images were used to pre-train the deep learning system, making it learn lung features. Following this, 1266 patients (924 with COVID-19 (471 had follow-up for >5 days) and 342 with other pneumonia) from six cities or provinces were enrolled to train and externally validate the performance of the deep learning system.In the four external validation sets, the deep learning system achieved good performance in identifying COVID-19 from other pneumonia (AUC 0.87 and 0.88, respectively) and viral pneumonia (AUC 0.86). Moreover, the deep learning system succeeded to stratify patients into high- and low-risk groups whose hospital-stay time had significant difference (p=0.013 and p=0.014, respectively). Without human assistance, the deep learning system automatically focused on abnormal areas that showed consistent characteristics with reported radiological findings.Deep learning provides a convenient tool for fast screening of COVID-19 and identifying potential high-risk patients, which may be helpful for medical resource optimisation and early prevention before patients show severe symptoms."
},
{
"pmid": "32837749",
"title": "A Deep Learning System to Screen Novel Coronavirus Disease 2019 Pneumonia.",
"abstract": "The real-time reverse transcription-polymerase chain reaction (RT-PCR) detection of viral RNA from sputum or nasopharyngeal swab had a relatively low positive rate in the early stage of coronavirus disease 2019 (COVID-19). Meanwhile, the manifestations of COVID-19 as seen through computed tomography (CT) imaging show individual characteristics that differ from those of other types of viral pneumonia such as influenza-A viral pneumonia (IAVP). This study aimed to establish an early screening model to distinguish COVID-19 from IAVP and healthy cases through pulmonary CT images using deep learning techniques. A total of 618 CT samples were collected: 219 samples from 110 patients with COVID-19 (mean age 50 years; 63 (57.3%) male patients); 224 samples from 224 patients with IAVP (mean age 61 years; 156 (69.6%) male patients); and 175 samples from 175 healthy cases (mean age 39 years; 97 (55.4%) male patients). All CT samples were contributed from three COVID-19-designated hospitals in Zhejiang Province, China. First, the candidate infection regions were segmented out from the pulmonary CT image set using a 3D deep learning model. These separated images were then categorized into the COVID-19, IAVP, and irrelevant to infection (ITI) groups, together with the corresponding confidence scores, using a location-attention classification model. Finally, the infection type and overall confidence score for each CT case were calculated using the Noisy-OR Bayesian function. The experimental result of the benchmark dataset showed that the overall accuracy rate was 86.7% in terms of all the CT cases taken together. The deep learning models established in this study were effective for the early screening of COVID-19 patients and were demonstrated to be a promising supplementary diagnostic method for frontline clinical doctors."
},
{
"pmid": "32835084",
"title": "A combined deep CNN-LSTM network for the detection of novel coronavirus (COVID-19) using X-ray images.",
"abstract": "Nowadays, automatic disease detection has become a crucial issue in medical science due to rapid population growth. An automatic disease detection framework assists doctors in the diagnosis of disease and provides exact, consistent, and fast results and reduces the death rate. Coronavirus (COVID-19) has become one of the most severe and acute diseases in recent times and has spread globally. Therefore, an automated detection system, as the fastest diagnostic option, should be implemented to impede COVID-19 from spreading. This paper aims to introduce a deep learning technique based on the combination of a convolutional neural network (CNN) and long short-term memory (LSTM) to diagnose COVID-19 automatically from X-ray images. In this system, CNN is used for deep feature extraction and LSTM is used for detection using the extracted feature. A collection of 4575 X-ray images, including 1525 images of COVID-19, were used as a dataset in this system. The experimental results show that our proposed system achieved an accuracy of 99.4%, AUC of 99.9%, specificity of 99.2%, sensitivity of 99.3%, and F1-score of 98.9%. The system achieved desired results on the currently available dataset, which can be further improved when more COVID-19 images become available. The proposed system can help doctors to diagnose and treat COVID-19 patients easily."
},
{
"pmid": "33363252",
"title": "EMCNet: Automated COVID-19 diagnosis from X-ray images using convolutional neural network and ensemble of machine learning classifiers.",
"abstract": "Recently, coronavirus disease (COVID-19) has caused a serious effect on the healthcare system and the overall global economy. Doctors, researchers, and experts are focusing on alternative ways for the rapid detection of COVID-19, such as the development of automatic COVID-19 detection systems. In this paper, an automated detection scheme named EMCNet was proposed to identify COVID-19 patients by evaluating chest X-ray images. A convolutional neural network was developed focusing on the simplicity of the model to extract deep and high-level features from X-ray images of patients infected with COVID-19. With the extracted features, binary machine learning classifiers (random forest, support vector machine, decision tree, and AdaBoost) were developed for the detection of COVID-19. Finally, these classifiers' outputs were combined to develop an ensemble of classifiers, which ensures better results for the dataset of various sizes and resolutions. In comparison with other recent deep learning-based systems, EMCNet showed better performance with 98.91% accuracy, 100% precision, 97.82% recall, and 98.89% F1-score. The system could maintain its great importance on the automatic detection of COVID-19 through instant detection and low false negative rate."
},
{
"pmid": "34976571",
"title": "A Review on Deep Learning Techniques for the Diagnosis of Novel Coronavirus (COVID-19).",
"abstract": "Novel coronavirus (COVID-19) outbreak, has raised a calamitous situation all over the world and has become one of the most acute and severe ailments in the past hundred years. The prevalence rate of COVID-19 is rapidly rising every day throughout the globe. Although no vaccines for this pandemic have been discovered yet, deep learning techniques proved themselves to be a powerful tool in the arsenal used by clinicians for the automatic diagnosis of COVID-19. This paper aims to overview the recently developed systems based on deep learning techniques using different medical imaging modalities like Computer Tomography (CT) and X-ray. This review specifically discusses the systems developed for COVID-19 diagnosis using deep learning techniques and provides insights on well-known data sets used to train these networks. It also highlights the data partitioning techniques and various performance measures developed by researchers in this field. A taxonomy is drawn to categorize the recent works for proper insight. Finally, we conclude by addressing the challenges associated with the use of deep learning methods for COVID-19 detection and probable future trends in this research area. The aim of this paper is to facilitate experts (medical or otherwise) and technicians in understanding the ways deep learning techniques are used in this regard and how they can be potentially further utilized to combat the outbreak of COVID-19."
},
{
"pmid": "34010791",
"title": "An incremental learning approach to automatically recognize pulmonary diseases from the multi-vendor chest radiographs.",
"abstract": "The human respiratory network is a vital system that provides oxygen supply and nourishment to the whole body. Pulmonary diseases can cause severe respiratory problems, leading to sudden death if not treated timely. Many researchers have utilized deep learning systems (in both transfer learning and fine-tuning modes) to diagnose pulmonary disorders using chest X-rays (CXRs). However, such systems require exhaustive training efforts on large-scale (and well-annotated) data to effectively diagnose chest abnormalities (at the inference stage). Furthermore, procuring such large-scale data (in a clinical setting) is often infeasible and impractical, especially for rare diseases. With the recent advances in incremental learning, researchers have periodically tuned deep neural networks to learn different classification tasks with few training examples. Although, such systems can resist catastrophic forgetting, they treat the knowledge representations (which the network learns periodically) independently of each other, and this limits their classification performance. Also, to the best of our knowledge, there is no incremental learning-driven image diagnostic framework (to date) that is specifically designed to screen pulmonary disorders from the CXRs. To address this, we present a novel framework that can learn to screen different chest abnormalities incrementally (via few-shot training). In addition to this, the proposed framework is penalized through an incremental learning loss function that infers Bayesian theory to recognize structural and semantic inter-dependencies between incrementally learned knowledge representations to diagnose the pulmonary diseases effectively (at the inference stage), regardless of the scanner specifications. We tested the proposed framework on five public CXR datasets containing different chest abnormalities, where it achieved an accuracy of 0.8405 and the F1 score of 0.8303, outperforming various state-of-the-art incremental learning schemes. It also achieved a highly competitive performance compared to the conventional fine-tuning (transfer learning) approaches while significantly reducing the training and computational requirements."
},
{
"pmid": "33574070",
"title": "Chest radiography or computed tomography for COVID-19 pneumonia? Comparative study in a simulated triage setting.",
"abstract": "INTRODUCTION\nFor the management of patients referred to respiratory triage during the early stages of the severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2) pandemic, either chest radiography or computed tomography (CT) were used as first-line diagnostic tools. The aim of this study was to compare the impact on the triage, diagnosis and prognosis of patients with suspected COVID-19 when clinical decisions are derived from reconstructed chest radiography or from CT.\n\n\nMETHODS\nWe reconstructed chest radiographs from high-resolution CT (HRCT) scans. Five clinical observers independently reviewed clinical charts of 300 subjects with suspected COVID-19 pneumonia, integrated with either a reconstructed chest radiography or HRCT report in two consecutive blinded and randomised sessions: clinical decisions were recorded for each session. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and prognostic value were compared between reconstructed chest radiography and HRCT. The best radiological integration was also examined to develop an optimised respiratory triage algorithm.\n\n\nRESULTS\nInterobserver agreement was fair (Kendall's W=0.365, p<0.001) by the reconstructed chest radiography-based protocol and good (Kendall's W=0.654, p<0.001) by the CT-based protocol. NPV assisted by reconstructed chest radiography (31.4%) was lower than that of HRCT (77.9%). In case of indeterminate or typical radiological appearance for COVID-19 pneumonia, extent of disease on reconstructed chest radiography or HRCT were the only two imaging variables that were similarly linked to mortality by adjusted multivariable models CONCLUSIONS: The present findings suggest that clinical triage is safely assisted by chest radiography. An integrated algorithm using first-line chest radiography and contingent use of HRCT can help optimise management and prognostication of COVID-19."
},
{
"pmid": "15015033",
"title": "False-positive results and contamination in nucleic acid amplification assays: suggestions for a prevent and destroy strategy.",
"abstract": "Contamination of samples with DNA is still a major problem in microbiology laboratories, despite the wide acceptance of PCR and other amplification techniques for the detection of frequently low amounts of target DNA. This review focuses on the implications of contamination in the diagnosis and research of infectious diseases, possible sources of contaminants, strategies for prevention and destruction, and quality control. Contamination of samples in diagnostic PCR can have far-reaching consequences for patients, as illustrated by several examples in this review. Furthermore, it appears that the (sometimes very unexpected) sources of contaminants are diverse (including water, reagents, disposables, sample carry over, and amplicon), and contaminants can also be introduced by unrelated activities in neighboring laboratories. Therefore, lack of communication between researchers using the same laboratory space can be considered a risk factor. Only a very limited number of multicenter quality control studies have been published so far, but these showed false-positive rates of 9-57%. The overall conclusion is that although nucleic acid amplification assays are basically useful both in research and in the clinic, their accuracy depends on awareness of risk factors and the proper use of procedures for the prevention of nucleic acid contamination. The discussion of prevention and destruction strategies included in this review may serve as a guide to help improve laboratory practices and reduce the number of false-positive amplification results."
},
{
"pmid": "32635743",
"title": "Enhancing colorimetric loop-mediated isothermal amplification speed and sensitivity with guanidine chloride.",
"abstract": "Loop-mediated isothermal amplification (LAMP) is a versatile technique for detection of target DNA and RNA, enabling rapid molecular diagnostic assays with minimal equipment. The global SARS-CoV-2 pandemic has presented an urgent need for new and better diagnostic methods, with colorimetric LAMP utilized in numerous studies for SARS-CoV-2 detection. However, the sensitivity of colorimetric LAMP in early reports has been below that of the standard RT-qPCR tests, and we sought to improve performance. Here we report the use of guanidine hydrochloride and combined primer sets to increase speed and sensitivity in colorimetric LAMP, bringing this simple method up to the standards of sophisticated techniques and enabling accurate, high-throughput diagnostics."
},
{
"pmid": "33425651",
"title": "Analysis and best parameters selection for person recognition based on gait model using CNN algorithm and image augmentation.",
"abstract": "Person Recognition based on Gait Model (PRGM) and motion features is are indeed a challenging and novel task due to their usages and to the critical issues of human pose variation, human body occlusion, camera view variation, etc. In this project, a deep convolution neural network (CNN) was modified and adapted for person recognition with Image Augmentation (IA) technique depending on gait features. Adaptation aims to get best values for CNN parameters to get best CNN model. In Addition to the CNN parameters Adaptation, the design of CNN model itself was adapted to get best model structure; Adaptation in the design was affected the type, the number of layers in CNN and normalization between them. After choosing best parameters and best design, Image augmentation was used to increase the size of train dataset with many copies of the image to boost the number of different images that will be used to train Deep learning algorithms. The tests were achieved using known dataset (Market dataset). The dataset contains sequential pictures of people in different gait status. The image in CNN model as matrix is extracted to many images or matrices by the convolution, so dataset size may be bigger by hundred times to make the problem a big data issue. In this project, results show that adaptation has improved the accuracy of person recognition using gait model comparing to model without adaptation. In addition, dataset contains images of person carrying things. IA technique improved the model to be robust to some variations such as image dimensions (quality and resolution), rotations and carried things by persons. Results for 200 persons recognition, validation accuracy was about 82% without IA and 96.23 with IA. For 800 persons recognition, validation accuracy was 93.62% without IA."
},
{
"pmid": "32396075",
"title": "Deep Learning COVID-19 Features on CXR Using Limited Training Data Sets.",
"abstract": "Under the global pandemic of COVID-19, the use of artificial intelligence to analyze chest X-ray (CXR) image for COVID-19 diagnosis and patient triage is becoming important. Unfortunately, due to the emergent nature of the COVID-19 pandemic, a systematic collection of CXR data set for deep neural network training is difficult. To address this problem, here we propose a patch-based convolutional neural network approach with a relatively small number of trainable parameters for COVID-19 diagnosis. The proposed method is inspired by our statistical analysis of the potential imaging biomarkers of the CXR radiographs. Experimental results show that our method achieves state-of-the-art performance and provides clinically interpretable saliency maps, which are useful for COVID-19 diagnosis and patient triage."
}
] |
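Several of the abstracts above describe the same basic recipe for COVID-19 image classification: take an ImageNet-pretrained convolutional network, replace its final layer, and fine-tune it on labelled CT or X-ray slices (e.g., the ResNet-101/Xception comparison). The following PyTorch sketch illustrates that general recipe only; the directory layout, image size, and hyperparameters are illustrative assumptions and are not taken from any of the cited papers.

    # Minimal transfer-learning sketch: fine-tune a pretrained ResNet-101 to
    # separate COVID-19 from non-COVID CT slices (binary classification).
    # Assumes a hypothetical layout data/train/<class_name>/*.png with two classes.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Standard ImageNet preprocessing; greyscale slices are converted to
    # 3-channel RGB by ImageFolder's default loader.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    train_set = datasets.ImageFolder("data/train", transform=preprocess)  # sub-folders: covid/, non_covid/ (assumed)
    train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

    # Load ImageNet weights and swap the classification head for a 2-class output.
    model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, 2)
    model = model.to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    model.train()
    for epoch in range(5):  # a few epochs only, for illustration
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: last batch loss {loss.item():.4f}")

Swapping ResNet-101 for any of the other backbones named in the abstract (VGG, Xception, MobileNet, etc.) only changes the model construction line; the fine-tuning loop stays the same.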
Frontiers in Oncology | null | PMC8904144 | 10.3389/fonc.2022.821594 | Deep Learning-Based Classification of Cancer Cell in Leptomeningeal Metastasis on Cytomorphologic Features of Cerebrospinal Fluid | Background: It is a critical challenge to diagnose leptomeningeal metastasis (LM), given its technical difficulty and the lack of typical symptoms. The existing gold standard for diagnosing LM is positive cerebrospinal fluid (CSF) cytology, which requires a significant amount of time to classify cells under a microscope. Objective: This study aims to establish a deep learning model to classify cancer cells in CSF, thus helping doctors achieve an accurate and fast diagnosis of LM at an early stage. Method: The cerebrospinal fluid laboratory of Xijing Hospital provided 53,255 cells from 90 LM patients for this research. We used two deep convolutional neural network (CNN) models to classify cells in the CSF. A five-way cell classification model (CNN1) covers lymphocytes, monocytes, neutrophils, erythrocytes, and cancer cells. A four-way cancer cell classification model (CNN2) covers lung cancer cells, gastric cancer cells, breast cancer cells, and pancreatic cancer cells. Here, the CNN models were constructed with Resnet-inception-V2. We evaluated the performance of the proposed models on two external datasets and compared them with the results from 42 doctors of various levels of experience in human-machine tests. Furthermore, we developed computer-aided diagnosis (CAD) software to rapidly generate cytology diagnosis reports. Results: With respect to the validation set, the mean average precision (mAP) of CNN1 is over 95% and that of CNN2 is close to 80%. Hence, the proposed deep learning model effectively classifies cells in CSF to facilitate the screening of cancer cells. In the human-machine tests, the accuracy of CNN1 is similar to the results from experts, with higher accuracy than that of doctors at other levels. Moreover, the overall accuracy of CNN2 is 10% higher than that of experts, with a time consumption of only one-third that of an expert. Using the CAD software saves 90% of cytologists' working time. Conclusion: A deep learning method has been developed to effectively assist LM diagnosis with high accuracy and low time consumption. Thanks to labeled data and step-by-step training, our proposed method can successfully classify cancer cells in the CSF to assist early LM diagnosis. In addition, this unique research can predict the primary cancer source of LM, relying on cytomorphologic features without immunohistochemistry. Our results show that deep learning can be widely used on medical images to classify cerebrospinal fluid cells. For complex cancer classification tasks, the accuracy of the proposed method is significantly higher than that of specialist doctors, and its performance is better than that of junior doctors and interns. The application of CNNs and CAD software may ultimately aid in expediting the diagnosis and overcoming the shortage of experienced cytologists, thereby facilitating earlier treatment and improving the prognosis of LM. | Related Work (Feature Extraction and Classification): Morphological features of white blood cells (WBCs) based on traditional ML algorithms have played a crucial role in the accuracy of WBC classification in recent research (18). A support vector machine (SVM) classifier achieved 84% accuracy on 140 digital blood smear images across five types of WBCs (19).
Moreover, bi-spectral invariant features combined with the SVM and a classification tree were used for the classification of 10 types of WBCs on three datasets (the Cellavision database, ALL-IDB, and the Wadsworth center) and ultimately reached an average accuracy of 96.13% (20). Razzak and Naz (2017) extracted the features for the ELM classifier with a CNN and achieved an accuracy of 98.68% in WBC classification (21). Deep-Learning-Based Algorithm: DL, which uses multiple processing layers to learn internal data representations from training datasets, has been widely used in WBC classification, in contrast to traditional machine learning methods. Tiwari et al. (2018) applied data augmentation to expand the training set. They augmented the cell images from 400 to 3,000 and achieved an average precision of 88% with double convolution layer neural networks (DCLNN) (22). Convolutional neural networks (CNNs) achieved the best performance, with an accuracy of 96.6%, in the task of two-type WBC classification on the ALL-IDB dataset (23). On the BCCD dataset, the implementation of a recurrent neural network (RNN) in the CNN reached an accuracy of 90.79% for the task of four-type WBC classification (24). A CNN obtained an accuracy of 96.63% in five-type WBC classification (25). The dataset proposed by Khouani et al. (2020) contains 145 labeled cells (87 images), including 49 normal cells, 24 dystrophic cells, and 72 other cells. Their study obtained a precision of 92.19% with Resnet 50. It is the smallest dataset in recent research on WBC classification (26). A timely proposed CNN and RNN merging model with canonical correlation analysis achieved an excellent performance of 95.89% in classifying four types of WBCs on public data from Shenggan/BCCD and kaggle.com/paultimothymooney/blood-cells/data (27). TWO-DCNN obtained the highest precision of 95.7%, with the largest area under the receiver operating characteristic (ROC) curve (AUC) of 0.98 on low-resolution datasets (28). | [
"12690644",
"21220726",
"29311619",
"31913322",
"30626917",
"31913322",
"31481919"
] | [
{
"pmid": "12690644",
"title": "Leptomeningeal metastases.",
"abstract": "LM is an increasingly common neurologic complication of cancer with variable clinical manifestations. Although there are no curative treatments, currently available therapies can preserve neurologic function and potentially improve quality of life. Further research into the mechanisms of leptomeningeal metastasis will elucidate molecular and cellular pathways that may allow identification of potential targets to interrupt this process early or to prevent this complication. Animal models are needed to further define the pathophysiology of LM and to provide an experimental system to test novel treatments [242-245]. There is an urgent need to develop new drug-based or radiation-based treatments for patients with LM. Randomized clinical trials are the appropriate study design to determine the efficacy of new treatments for LM. However, surrogate markers for response must be developed to facilitate the identification of effective regimens. Survival is not the optimal end point for such studies as most patients who develop this complication already have advanced, incurable cancer. Prevention of or delay in neurologic progression is one objective that has been utilized in recent randomized trials in patients with LM, and this end point deserves further attention. Although the development of LM represents a poor prognostic marker in patients with cancer it is important for physicians to recognize the symptoms and signs of the disease and establish the diagnosis as early in the disease course as possible. This may provide an opportunity for effective intervention that can improve quality of life, prevent further neurologic deterioration and, for a subset of patients, improve survival."
},
{
"pmid": "29311619",
"title": "In situ immune response and mechanisms of cell damage in central nervous system of fatal cases microcephaly by Zika virus.",
"abstract": "Zika virus (ZIKV) has recently caused a pandemic disease, and many cases of ZIKV infection in pregnant women resulted in abortion, stillbirth, deaths and congenital defects including microcephaly, which now has been proposed as ZIKV congenital syndrome. This study aimed to investigate the in situ immune response profile and mechanisms of neuronal cell damage in fatal Zika microcephaly cases. Brain tissue samples were collected from 15 cases, including 10 microcephalic ZIKV-positive neonates with fatal outcome and five neonatal control flavivirus-negative neonates that died due to other causes, but with preserved central nervous system (CNS) architecture. In microcephaly cases, the histopathological features of the tissue samples were characterized in three CNS areas (meninges, perivascular space, and parenchyma). The changes found were mainly calcification, necrosis, neuronophagy, gliosis, microglial nodules, and inflammatory infiltration of mononuclear cells. The in situ immune response against ZIKV in the CNS of newborns is complex. Despite the predominant expression of Th2 cytokines, other cytokines such as Th1, Th17, Treg, Th9, and Th22 are involved to a lesser extent, but are still likely to participate in the immunopathogenic mechanisms of neural disease in fatal cases of microcephaly caused by ZIKV."
},
{
"pmid": "31913322",
"title": "Re-epithelialization and immune cell behaviour in an ex vivo human skin model.",
"abstract": "A large body of literature is available on wound healing in humans. Nonetheless, a standardized ex vivo wound model without disruption of the dermal compartment has not been put forward with compelling justification. Here, we present a novel wound model based on application of negative pressure and its effects for epidermal regeneration and immune cell behaviour. Importantly, the basement membrane remained intact after blister roof removal and keratinocytes were absent in the wounded area. Upon six days of culture, the wound was covered with one to three-cell thick K14+Ki67+ keratinocyte layers, indicating that proliferation and migration were involved in wound closure. After eight to twelve days, a multi-layered epidermis was formed expressing epidermal differentiation markers (K10, filaggrin, DSG-1, CDSN). Investigations about immune cell-specific manners revealed more T cells in the blister roof epidermis compared to normal epidermis. We identified several cell populations in blister roof epidermis and suction blister fluid that are absent in normal epidermis which correlated with their decrease in the dermis, indicating a dermal efflux upon negative pressure. Together, our model recapitulates the main features of epithelial wound regeneration, and can be applied for testing wound healing therapies and investigating underlying mechanisms."
},
{
"pmid": "30626917",
"title": "Occurrence of the potent mutagens 2- nitrobenzanthrone and 3-nitrobenzanthrone in fine airborne particles.",
"abstract": "Polycyclic aromatic compounds (PACs) are known due to their mutagenic activity. Among them, 2-nitrobenzanthrone (2-NBA) and 3-nitrobenzanthrone (3-NBA) are considered as two of the most potent mutagens found in atmospheric particles. In the present study 2-NBA, 3-NBA and selected PAHs and Nitro-PAHs were determined in fine particle samples (PM 2.5) collected in a bus station and an outdoor site. The fuel used by buses was a diesel-biodiesel (96:4) blend and light-duty vehicles run with any ethanol-to-gasoline proportion. The concentrations of 2-NBA and 3-NBA were, on average, under 14.8 µg g-1 and 4.39 µg g-1, respectively. In order to access the main sources and formation routes of these compounds, we performed ternary correlations and multivariate statistical analyses. The main sources for the studied compounds in the bus station were diesel/biodiesel exhaust followed by floor resuspension. In the coastal site, vehicular emission, photochemical formation and wood combustion were the main sources for 2-NBA and 3-NBA as well as the other PACs. Incremental lifetime cancer risk (ILCR) were calculated for both places, which presented low values, showing low cancer risk incidence although the ILCR values for the bus station were around 2.5 times higher than the ILCR from the coastal site."
},
{
"pmid": "31913322",
"title": "Re-epithelialization and immune cell behaviour in an ex vivo human skin model.",
"abstract": "A large body of literature is available on wound healing in humans. Nonetheless, a standardized ex vivo wound model without disruption of the dermal compartment has not been put forward with compelling justification. Here, we present a novel wound model based on application of negative pressure and its effects for epidermal regeneration and immune cell behaviour. Importantly, the basement membrane remained intact after blister roof removal and keratinocytes were absent in the wounded area. Upon six days of culture, the wound was covered with one to three-cell thick K14+Ki67+ keratinocyte layers, indicating that proliferation and migration were involved in wound closure. After eight to twelve days, a multi-layered epidermis was formed expressing epidermal differentiation markers (K10, filaggrin, DSG-1, CDSN). Investigations about immune cell-specific manners revealed more T cells in the blister roof epidermis compared to normal epidermis. We identified several cell populations in blister roof epidermis and suction blister fluid that are absent in normal epidermis which correlated with their decrease in the dermis, indicating a dermal efflux upon negative pressure. Together, our model recapitulates the main features of epithelial wound regeneration, and can be applied for testing wound healing therapies and investigating underlying mechanisms."
},
{
"pmid": "31481919",
"title": "Leptomeningeal Metastasis: The Role of Cerebrospinal Fluid Diagnostics.",
"abstract": "Background: Metastatic spread into the cerebrospinal fluid (CSF) represents a severe complication of malignant disease with poor prognosis. Although early diagnosis is crucial, broad spectrums of clinical manifestations, and pitfalls of magnetic resonance imaging (MRI) and CSF diagnostics can be challenging. Data are limited how CSF parameters and MRI findings relate to each other in patients with leptomeningeal metastasis. Methods: Patients with malignant cells in CSF cytology examination diagnosed between 1998 and 2016 at the Department of Neurology in the Hannover Medical School were included in this study. Clinical records, MRI findings and CSF parameters were retrospectively analyzed. Results: One hundred thirteen patients with leptomeningeal metastasis were identified. Seventy-six patients (67%) suffered from a solid malignancy while a hematological malignancy was found in 37 patients (33%). Cerebral signs and symptoms were most frequently found (78% in solid vs. 49% in hematological malignancies) followed by cranial nerve impairment (26% in solid vs. 46% in hematological malignancies) and spinal symptoms (26% in solid vs. 27% in hematological malignancies). In patients with malignant cells in CSF MRI detected signs of leptomeningeal metastasis in 62% of patients with solid and in only 33% of patients with hematological malignancies. Investigations of standard CSF parameters revealed a normal CSF cell count in 21% of patients with solid malignancies and in 8% of patients with hematological malignancies. Blood-CSF-barrier dysfunction was found in most patients (80% in solid vs. 92% in hematological malignancies). Elevated CSF lactate levels occurred in 68% of patients in solid and in 48% of patients with hematological malignancies. A high number of patients (30% in solid vs. 26% in hematological malignancies) exhibited oligoclonal bands in CSF. Significant correlations between the presence of leptomeningeal enhancement demonstrated by MRI and CSF parameters (cell count, lactate levels, and CSF/Serum albumin quotient) were not found in both malignancy groups. Conclusion: CSF examination is helpful to detect leptomeningeal metastasis since the diagnosis can be challenging especially when MRI is negative. CSF cytological investigation is mandatory whenever leptomeningeal metastasis is suspected, even when CSF cell count is normal."
}
] |
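The related-work passage of the row above repeatedly describes a two-stage pattern for cell classification: a CNN acts as a deep feature extractor and a classical classifier (SVM, random forest, or an ensemble of such classifiers) is trained on the extracted features. The sketch below illustrates that general pattern with a truncated pretrained ResNet-18 and scikit-learn classifiers; the random tensors, class count, and classifier choices are placeholder assumptions rather than details of any cited study.

    # Two-stage cell classification sketch: deep features + classical classifiers.
    import numpy as np
    import torch
    import torch.nn as nn
    from torchvision import models
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier

    # Frozen feature extractor: ResNet-18 without its final fully connected layer (512-dim output).
    backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()

    def extract_features(images: torch.Tensor) -> np.ndarray:
        # images: (N, 3, 224, 224) tensor of preprocessed cell crops
        with torch.no_grad():
            feats = extractor(images).flatten(1)  # (N, 512)
        return feats.numpy()

    # Random tensors stand in for preprocessed cell images and their labels
    # (5 classes, e.g. lymphocyte / monocyte / neutrophil / erythrocyte / cancer cell).
    images = torch.randn(40, 3, 224, 224)
    labels = np.arange(40) % 5
    features = extract_features(images)

    # Classical classifiers trained on the deep features, combined by soft voting.
    ensemble = VotingClassifier(
        estimators=[("svm", SVC(probability=True)),
                    ("rf", RandomForestClassifier(n_estimators=100))],
        voting="soft",
    )
    ensemble.fit(features, labels)
    print("training accuracy:", ensemble.score(features, labels))

Replacing the voting ensemble with a single SVM reproduces the simpler feature-plus-SVM pipelines mentioned in the same passage; the feature extractor itself could equally be a network fine-tuned on cell images rather than ImageNet weights.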
Scientific Reports | null | PMC8904452 | 10.1038/s41598-022-07754-8 | Personalized wearable electrodermal sensing-based human skin hydration level detection for sports, health and wellbeing | Personalized hydration level monitoring plays a vital role in the sports, health, wellbeing and safety of a person while performing a particular set of activities. Clinical staff must be mindful of numerous physiological symptoms that identify the optimum hydration specific to the person, event and environment. Hence, it becomes extremely critical to monitor the hydration levels in a human body to avoid potential complications and fatalities. Hydration tracking solutions available in the literature are either inefficient and invasive or require clinical trials. An efficient hydration monitoring system that can regularly and non-invasively track the hydration level is therefore needed. To this aim, this paper proposes a machine learning (ML) and deep learning (DL) enabled hydration tracking system, which can accurately estimate the hydration level in human skin using the galvanic skin response (GSR) of the human body. For this study, data are collected in three different hydration states, namely hydrated, mild dehydration (8 hours of dehydration) and extreme mild dehydration (16 hours of dehydration), and in three different body postures: sitting, standing and walking. Eight different ML algorithms and four different DL algorithms are trained on the collected GSR data. Their accuracies are compared, and a hybrid (ML+DL) model is proposed to increase the estimation accuracy. The hybrid Bi-LSTM algorithm achieves an accuracy of 97.83%. | Related work: Dehydration is an efficient predictor of morbidity and mortality in patients9,10. The authors in10 assessed the complications of dehydration in stroke patients after they were discharged from the hospital. It was found that dehydrated patients were more likely to become dependent on others than hydrated stroke patients. Similarly, over-hydration has been assessed in the literature for its link to many fatal diseases such as congestive heart failure and pulmonary edema11–13, confusion14,15, seizure, high blood pressure, and even death16,17. Due to this, there has been increased interest in estimating hydration levels in the body in recent years. Indeed, an early detection of the dehydration level is important to avoid serious complications. To this aim, it is desirable to have a system for frequent identification of hydration levels. However, most of the methods proposed in the literature rely either on manual entry of water intake through a mobile application or on some common signs and parameters, such as poor skin turgor, dry mucous membrane, urine colour, dry axilla, tachycardia, urine specific gravity, low systolic blood pressure, blood urea nitrogen to creatinine ratio, TBW, saliva flow rate, saliva osmolality, PO, and BIA4,18–20. The limitation of some of the aforementioned methods is the requirement of a clinical setting for data collection. In addition, most of the methods are either invasive or require biochemical analysis of body liquids. Hence, a wearable non-invasive hydration monitoring system that can timely identify the body's hydration level with sufficient accuracy is very much needed. Although BIA is a non-invasive method to estimate TBW, it is considered a complex solution requiring special equipment, which may not be suitable for continuous monitoring. In addition, TBW is a by-product of the FM and FFM measurements in BIA.
In order to fill this gap, some other non-invasive methods for measuring the body’s hydration level have been proposed in the literature, using body temperature21, skin impedance22 and tracking of the activity and water consumption of the user23. Taking inspiration from these non-invasive techniques, we propose a non-invasive method to estimate the hydration level relying on the galvanic skin response (GSR), or skin resistance level (SRL), of the human body. Extending the work presented in24, we further expand the dataset, covering more states to include shorter and longer durations of fasting, i.e., fasting of 8 hours and of 16 hours. Further, a body posture of walking is added. In addition, we propose a hybrid algorithm combining different ML and DL methods to give better accuracy in the identification of the hydration level in the body. Figure 1: Skin conductance in three different states of hydration levels, ’Hydrated’, ’Mildly Dehydrated’ and ’Extremely Dehydrated’, and three body postures: sitting, standing and walking. | [
"26290295",
"29990109",
"23739778",
"26316508",
"22156691",
"15373958",
"3781976",
"7446703",
"20027192",
"19131355",
"20975548",
"25444573",
"16028571"
] | [
{
"pmid": "26290295",
"title": "Acute and chronic effects of hydration status on health.",
"abstract": "Maintenance of fluid and electrolyte balance is essential to healthy living as dehydration and fluid overload are associated with morbidity and mortality. This review presents the current evidence for the impact of hydration status on health. The Web of Science, MEDLINE, PubMed, and Google Scholar databases were searched using relevant terms. Randomized controlled trials and large cohort studies published during the 20 years preceding February 2014 were selected. Older articles were included if the topic was not covered by more recent work. Studies show an association between hydration status and disease. However, in many cases, there is insufficient or inconsistent evidence to draw firm conclusions. Dehydration has been linked with urological, gastrointestinal, circulatory, and neurological disorders. Fluid overload has been linked with cardiopulmonary disorders, hyponatremia, edema, gastrointestinal dysfunction, and postoperative complications. There is a growing body of evidence that links states of fluid imbalance and disease. However, in some cases, the evidence is largely associative and lacks consistency, and the number of randomized trials is limited."
},
{
"pmid": "29990109",
"title": "Engineering Approaches to Assessing Hydration Status.",
"abstract": "Dehydration is a common condition characterized by a decrease in total body water. Acute dehydration can cause physical and cognitive impairment, heat stroke and exhaustion, and, if severe and uncorrected, even death. The health effects of chronic mild dehydration are less well studied with urolithiasis (kidney stones) the only condition consistently associated with it. Aside from infants and those with particular medical conditions, athletes, military personnel, manual workers, and older adults are at particular risk of dehydration due to their physical activity, environmental exposure, and/or challenges in maintaining fluid homeostasis. This review describes the different approaches that have been explored for hydration assessment in adults. These include clinical indicators perceived by the patient or detected by a practitioner and routine laboratory analyses of blood and urine. These techniques have variable accuracy and practicality outside of controlled environments, creating a need for simple, portable, and rapid hydration monitoring devices. We review the wide array of devices proposed for hydration assessment based on optical, electromagnetic, chemical, and acoustical properties of tissue and bodily fluids. However, none of these approaches has yet emerged as a reliable indicator in diverse populations across various settings, motivating efforts to develop new methods of hydration assessment."
},
{
"pmid": "23739778",
"title": "Epidermal impedance sensing sheets for precision hydration assessment and spatial mapping.",
"abstract": "This paper presents a class of hydration monitor that uses ultrathin, stretchable sheets with arrays of embedded impedance sensors for precise measurement and spatially multiplexed mapping. The devices contain miniaturized capacitive electrodes arranged in a matrix format, capable of integration with skin in a conformal, intimate manner due to the overall skin-like physical properties. These \"epidermal\" systems noninvasively quantify regional variations in skin hydration, at uniform or variable skin depths. Experimental results demonstrate that the devices possess excellent uniformity, with favorable precision and accuracy. Theoretical models capture the underlying physics of the measurement and enable quantitative interpretation of the experimental results. These devices are appealing for applications ranging from skin care and dermatology, to cosmetology and health/wellness monitoring, with the additional potential for combined use with other classes of sensors for comprehensive, quantitative physiological assessment via the skin."
},
{
"pmid": "26316508",
"title": "Hydration and outcome in older patients admitted to hospital (The HOOP prospective cohort study).",
"abstract": "BACKGROUND\nOlder adults are susceptible to dehydration due to age-related pathophysiological changes. We aimed to investigate the prevalence of hyperosmolar dehydration (HD) in hospitalised older adults, aged ≥65 years, admitted as an emergency and to assess the impact on short-term and long-term outcome.\n\n\nMETHODS\nThis prospective cohort study was performed on older adult participants who were admitted acutely to a large U.K. teaching hospital. Data collected included the Charlson comorbidity index (CCI), national early warning score (NEWS), Canadian Study of Health and Aging (CSHA) clinical frailty scale and Nutrition Risk Screening Tool (NRS) 2002. Admission bloods were used to measure serum osmolality. HD was defined as serum osmolality >300 mOsmol/kg. Participants who were still in hospital 48 h after admission were reviewed, and the same measurements were repeated.\n\n\nRESULTS\nA total of 200 participants were recruited at admission to hospital, 37% of whom were dehydrated. Of those dehydrated, 62% were still dehydrated when reviewed at 48 h after admission. Overall, 7% of the participants died in hospital, 79% of whom were dehydrated at admission (P = 0.001). Cox regression analysis adjusted for age, gender, CCI, NEWS, CSHA and NRS demonstrated that participants dehydrated at admission were 6 times more likely to die in hospital than those euhydrated, hazards ratio (HR) 6.04 (1.64-22.25); P = 0.007.\n\n\nCONCLUSIONS\nHD is common in hospitalised older adults and is associated with poor outcome. Coordinated efforts are necessary to develop comprehensive hydration assessment tools to implement and monitor a real change in culture and attitude towards hydration in hospitalised older adults."
},
{
"pmid": "22156691",
"title": "Dehydration in hospital-admitted stroke patients: detection, frequency, and association.",
"abstract": "BACKGROUND AND PURPOSE\nWe aimed to determine the frequency of dehydration, risk factors, and associations with outcomes at hospital discharge after stroke.\n\n\nMETHODS\nWe linked clinical data from stroke patients in 2 prospective hospital registers with routine blood urea and creatinine results. Dehydration was defined by a blood urea-to-creatinine ratio >80.\n\n\nRESULTS\nOf 2591 patients registered, 1606 (62%) were dehydrated at some point during their admission. Independent risk factors for dehydration included older age, female gender, total anterior circulation syndrome, and prescribed diuretics (all P<0.001). Patients with dehydration were significantly more likely be dead or dependent at hospital discharge than those without (χ(2)=170.5; degrees of freedom=2; P<0.0001).\n\n\nCONCLUSIONS\nDehydration is common and associated with poor outcomes. Further work is required to establish if these associations are causal and if preventing or treating dehydration improves outcomes."
},
{
"pmid": "15373958",
"title": "Fluid, electrolytes and nutrition: physiological and clinical aspects.",
"abstract": "Fluid and electrolyte balance is often poorly understood and inappropriate prescribing can cause increased post-operative morbidity and mortality. The efficiency of the physiological response to a salt or water deficit, developed through evolution, contrasts with the relatively inefficient mechanism for dealing with salt excess. Saline has a Na+:Cl- of 1:1 and can produce hyperchloraemic acidosis, renal vasoconstriction and reduced glomerular filtration rate. In contrast, the more physiological Hartmann's solution with a Na+:Cl- of 1.18:1 does not cause hyperchloraemia and Na excretion following infusion is more rapid. Salt and water overload causes not only peripheral and pulmonary oedema, but may also produce splanchnic oedema, resulting in ileus or acute intestinal failure. This overload may sometimes be an inevitable consequence of resuscitation, yet it may take 3 weeks to excrete this excess. It is important to avoid unnecessary additional overload by not prescribing excessive maintenance fluids after the need for resuscitation has passed. Most patients require 2-2.5 litres water and 60-100 mmol Na/d for maintenance in order to prevent a positive fluid balance. This requirement must not be confused with those for resuscitation of the hypovolaemic patient in whom the main aim of fluid therapy is repletion of the intravascular volume. Fluid and electrolyte balance is a vital component of the metabolic care of surgical and critically-ill patients, with important consequences for gastrointestinal function and hence nutrition. It is also of importance when prescribing artificial nutrition and should be given the same careful consideration as other nutritional and pharmacological needs."
},
{
"pmid": "3781976",
"title": "Effect of systemic venous pressure elevation on lymph flow and lung edema formation.",
"abstract": "Pulmonary lymph drains into the thoracic duct and then into the systemic venous circulation. Since systemic venous pressure (SVP) must be overcome before pulmonary lymph can flow, variations in SVP may affect lymph flow rate and therefore the rate of fluid accumulation within the lung. The importance of this issue is evident when one considers the variety of clinical interventions that increase SVP and promote pulmonary edema formation, such as volume infusion, positive-pressure ventilation, and various vasoactive drug therapies. We recorded pulmonary arterial pressure (PAP), left atrial pressure (LAP), and SVP in chronic unanesthetized sheep. Occlusion balloons were placed in the left atrium and superior vena cava to control their respective pressures. The superior vena caval occluder was placed above the azygos vein so that bronchial venous pressure would not be elevated when the balloon was inflated. Three-hour experiments were carried out at various LAP levels with and without SVP being elevated to 20 mmHg. The amount of fluid present in the lung was determined by the wet-to-dry weight ratio method. At control LAP levels, no significant difference in lung fluid accumulation could be shown between animals with control and elevated SVP levels. When LAP was elevated above control a significantly greater amount of pulmonary fluid accumulated in animals with elevated SVP levels compared with those with control SVP levels. We conclude that significant excess pulmonary edema formation will occur when SVP is elevated at pulmonary microvascular pressures not normally associated with rapid fluid accumulation."
},
{
"pmid": "7446703",
"title": "Anatomic pathway of fluid leakage in fluid-overload pulmonary edema in mice.",
"abstract": "Mice were given an intravenous injection of isotonic saline containing horseradish peroxidase (HRP) as an ultrastructural marker in an attempt to determine the site of fluid leakage from the vascular space to the air space in the lung. The localization of HRP was studied by ultrastructural histochemistry. When injected in a small volume of saline (0.1 ml), HRP was confined in the vascular space. When the volume of saline was increased to 1.0 ml, the reaction product of HRP was found first in the intercellular junctions of the arterial endothelium and then through the arterial wall. The reaction product was traced from the arterial wall to the peribronchiolar tissue, bronchiolar wall, and the intercellular space of the bronchiolar epithelium. HRP was seen in direct contact with the air space in the bronchiole. It is suggested that in fluid-overload pulmonary edema, fluid leaks through the arterial wall to the peribronchiolar tissue and then into the intercellular space of the bronchiolar epithelium. Alveolar is probably a result of the backflow of fluid from the bronchiole."
},
{
"pmid": "20027192",
"title": "Fluid balance and acute kidney injury.",
"abstract": "Intravenous fluids are widely administered to patients who have, or are at risk of, acute kidney injury (AKI). However, deleterious consequences of overzealous fluid therapy are increasingly being recognized. Salt and water overload can predispose to organ dysfunction, impaired wound healing and nosocomial infection, particularly in patients with AKI, in whom fluid challenges are frequent and excretion is impaired. In this Review article, we discuss how interstitial edema can further delay renal recovery and why conservative fluid strategies are now being advocated. Applying these strategies in critical illness is challenging. Although volume resuscitation is needed to restore cardiac output, it often leads to tissue edema, thereby contributing to ongoing organ dysfunction. Conservative strategies of fluid management mandate a switch towards neutral balance and then negative balance once hemodynamic stabilization is achieved. In patients with AKI, this strategy might require renal replacement therapy to be given earlier than when more-liberal fluid management is used. However, hypovolemia and renal hypoperfusion can occur in patients with AKI if excessive fluid removal is pursued with diuretics or extracorporeal therapy. Thus, accurate assessment of fluid status and careful definition of targets are needed at all stages to improve clinical outcomes. A conservative strategy of fluid management was recently tested and found to be effective in a large, randomized, controlled trial in patients with acute lung injury. Similar randomized, controlled studies in patients with AKI now seem justified."
},
{
"pmid": "19131355",
"title": "The mortality risk of overhydration in haemodialysis patients.",
"abstract": "BACKGROUND\nWhile cardiovascular events remain the primary form of mortality in haemodialysis (HD) patients, few centres are aware of the impact of the hydration status (HS). The aim of this study was to investigate how the magnitude of the prevailing overhydration influences long-term survival.\n\n\nMETHODS\nWe measured the hydration status in 269 prevalent HD patients (28% diabetics, dialysis vintage = 41.2 +/- 70 months) in three European centres with a body composition monitor (BCM) that enables quantitative assessment of hydration status and body composition. The survival of these patients was ascertained after a follow-up period of 3.5 years. The cut off threshold for the definition of hyperhydration was set to 15% relative to the extracellular water (ECW), which represents an excess of ECW of approximately 2.5 l. Cox-proportional hazard models were used to compare survival according to the baseline hydration status for a set of demographic data, comorbid conditions and other predictors.\n\n\nRESULTS\nThe median hydration state (HS) before the HD treatment (DeltaHSpre) for all patients was 8.6 +/- 8.9%. The unadjusted gross annual mortality of all patients was 8.5%. The hyperhydrated subgroup (n = 58) presented DeltaHSpre = 19.9 +/- 5.3% and a gross mortality of 14.7%. The Cox adjusted hazard ratios (HRs) revealed that age (HRage = 1.05, 1/year; P < 0.001), systolic blood pressure (BPsys) (HRBPsys = 0.986 1/mmHg; P = 0.014), diabetes (HRDia = 2.766; P < 0.001), peripheral vascular disease (PVD) (HRPVD = 1.68; P = 0.045) and relative hydration status (DeltaHSpre) (HRDeltaHSpre = 2.102 P = 0.003) were the only significant predictors of mortality in our patient population.\n\n\nCONCLUSION\nThe results of our study indicate that the hydration state is an important and independent predictor of mortality in chronic HD patients secondary only to the presence of diabetes. We believe that it is essential to measure the hydration status objectively and quantitatively in order to obtain a more clearly defined assessment of the prognosis of haemodialysis patients."
},
{
"pmid": "20975548",
"title": "Fluid resuscitation in septic shock: a positive fluid balance and elevated central venous pressure are associated with increased mortality.",
"abstract": "OBJECTIVE\nTo determine whether central venous pressure and fluid balance after resuscitation for septic shock are associated with mortality.\n\n\nDESIGN\nWe conducted a retrospective review of the use of intravenous fluids during the first 4 days of care.\n\n\nSETTING\nMulticenter randomized controlled trial.\n\n\nPATIENTS\nThe Vasopressin in Septic Shock Trial (VASST) study enrolled 778 patients who had septic shock and who were receiving a minimum of 5 μg of norepinephrine per minute.\n\n\nINTERVENTIONS\nNone.\n\n\nMEASUREMENTS AND MAIN RESULTS\nBased on net fluid balance, we determined whether one's fluid balance quartile was correlated with 28-day mortality. We also analyzed whether fluid balance was predictive of central venous pressure and furthermore whether a guideline-recommended central venous pressure of 8-12 mm Hg yielded a mortality advantage. At enrollment, which occurred on average 12 hrs after presentation, the average fluid balance was +4.2 L. By day 4, the cumulative average fluid balance was +11 L. After correcting for age and Acute Physiology and Chronic Health Evaluation II score, a more positive fluid balance at both at 12 hrs and day 4 correlated significantly with increased mortality. Central venous pressure was correlated with fluid balance at 12 hrs, whereas on days 1-4, there was no significant correlation. At 12 hrs, patients with central venous pressure <8 mm Hg had the lowest mortality rate followed by those with central venous pressure 8-12 mm Hg. The highest mortality rate was observed in those with central venous pressure >12 mm Hg. Contrary to the overall effect, patients whose central venous pressure was <8 mm Hg had improved survival with a more positive fluid balance.\n\n\nCONCLUSIONS\nA more positive fluid balance both early in resuscitation and cumulatively over 4 days is associated with an increased risk of mortality in septic shock. Central venous pressure may be used to gauge fluid balance ≤ 12 hrs into septic shock but becomes an unreliable marker of fluid balance thereafter. Optimal survival in the VASST study occurred with a positive fluid balance of approximately 3 L at 12 hrs."
},
{
"pmid": "25444573",
"title": "Is this elderly patient dehydrated? Diagnostic accuracy of hydration assessment using physical signs, urine, and saliva markers.",
"abstract": "OBJECTIVES\nDehydration in older adults contributes to increased morbidity and mortality during hospitalization. As such, early diagnosis of dehydration may improve patient outcome and reduce the burden on healthcare. This prospective study investigated the diagnostic accuracy of routinely used physical signs, and noninvasive markers of hydration in urine and saliva.\n\n\nDESIGN\nProspective diagnostic accuracy study.\n\n\nSETTING\nHospital acute medical care unit and emergency department.\n\n\nPARTICIPANTS\nOne hundred thirty older adults [59 males, 71 females, mean (standard deviation) age = 78 (9) years].\n\n\nMEASUREMENTS\nParticipants with any primary diagnosis underwent a hydration assessment within 30 minutes of admittance to hospital. Hydration assessment comprised 7 physical signs of dehydration [tachycardia (>100 bpm), low systolic blood pressure (<100 mm Hg), dry mucous membrane, dry axilla, poor skin turgor, sunken eyes, and long capillary refill time (>2 seconds)], urine color, urine specific gravity, saliva flow rate, and saliva osmolality. Plasma osmolality and the blood urea nitrogen to creatinine ratio were assessed as reference standards of hydration with 21% of participants classified with water-loss dehydration (plasma osmolality >295 mOsm/kg), 19% classified with water-and-solute-loss dehydration (blood urea nitrogen to creatinine ratio >20), and 60% classified as euhydrated.\n\n\nRESULTS\nAll physical signs showed poor sensitivity (0%-44%) for detecting either form of dehydration, with only low systolic blood pressure demonstrating potential utility for aiding the diagnosis of water-and-solute-loss dehydration [diagnostic odds ratio (OR) = 14.7]. Neither urine color, urine specific gravity, nor saliva flow rate could discriminate hydration status (area under the receiver operating characteristic curve = 0.49-0.57, P > .05). In contrast, saliva osmolality demonstrated moderate diagnostic accuracy (area under the receiver operating characteristic curve = 0.76, P < .001) to distinguish both dehydration types (70% sensitivity, 68% specificity, OR = 5.0 (95% confidence interval 1.7-15.1) for water-loss dehydration, and 78% sensitivity, 72% specificity, OR = 8.9 (95% confidence interval 2.5-30.7) for water-and-solute-loss dehydration).\n\n\nCONCLUSIONS\nWith the exception of low systolic blood pressure, which could aid in the specific diagnosis of water-and-solute-loss dehydration, physical signs and urine markers show little utility to determine if an elderly patient is dehydrated. Saliva osmolality demonstrated superior diagnostic accuracy compared with physical signs and urine markers, and may have utility for the assessment of both water-loss and water-and-solute-loss dehydration in older individuals. It is particularly noteworthy that saliva osmolality was able to detect water-and-solute-loss dehydration, for which a measurement of plasma osmolality would have no diagnostic utility."
},
{
"pmid": "16028571",
"title": "Hydration assessment techniques.",
"abstract": "Water in the human body is essential for metabolism, temperature regulation, and numerous other physiological processes that are consistent with good health. Accurate, precise, and reliable methods to assess body fluid compartments are needed. This review describes the hydration assessment techniques of isotope dilution, neutron activation analysis, bioelectrical impedance, body mass change, thirst, tracer appearance, hematologic indices, and urinary markers. It also provides guidance for selecting techniques that are appropriate for use with unique individuals and situations."
}
] |
Scientific Reports | null | PMC8904510 | 10.1038/s41598-022-08157-5 | An efficient self-attention network for skeleton-based action recognition | There has been significant progress in skeleton-based action recognition. Human skeleton can be naturally structured into graph, so graph convolution networks have become the most popular method in this task. Most of these state-of-the-art methods optimized the structure of human skeleton graph to obtain better performance. Based on these advanced algorithms, a simple but strong network is proposed with three major contributions. Firstly, inspired by some adaptive graph convolution networks and non-local blocks, some kinds of self-attention modules are designed to exploit spatial and temporal dependencies and dynamically optimize the graph structure. Secondly, a light but efficient architecture of network is designed for skeleton-based action recognition. Moreover, a trick is proposed to enrich the skeleton data with bones connection information and make obvious improvement to the performance. The method achieves 90.5% accuracy on cross-subjects setting (NTU60), with 0.89M parameters and 0.32 GMACs of computation cost. This work is expected to inspire new ideas for the field. | Related workSkeleton-based action recognitionThe goal of this task is using skeleton data to recognize the action of instance. The input is skeleton sequence in the form of a graph, and what needs to be requested is the class of action. Skeleton data consists of two parts, one part is a vector composed of joint point positions, and another part is a matrix formed by the connection relationship of the joints.Several years ago, convolution neural networks (CNNs) and random forest (RF) were widely used to deal with the task. But CNNs fail to model the structure of skeleton data properly because skeleton data are naturally embedded in the form of graphs rather than a vector sequence or 2D grids. After firstly applied to this task in ST-GCN14, GCNs have been the mainstream methods and make great achievements. AGC-LSTM15 proposed another idea on how to use GCNs in this task, and step further to higher accuracy. In these algorithms, the graph of nature links plays a significant role. Some researchers optimize the graph structure by adding edges which is hand-designed, such as MS-G3D11. Some other researchers proposed adaptive GCNs7,12,16, which produce the dependencies totally different from the graph of human structure. All in all, these methods tried to solve the problem of dependencies in space. In another view, the major joints locations represent the poses in each frame, and the changes of posture determine the action. The dependencies between frames should also be considered. Some methods added links or made a shift in the features between adjacent frames10,14,17,18. Some others transferred the module that was often used to process time series, such as recurrent neural network (RNN) and long short-term memory (LSTM), to a new one by replacing CNN units with GCN ones15,19. Most recently, some researchers have generated adjacent matrix dynamically by using self-attention mechanism and lower the complexity of networks7,8. However, these researchers discussed the self-attention mechanism only in the spatial dimension.Graph modelGraph is a kind of data structure which models a set of objects (nodes) and their relationships (edges). Recently, researches of analyzing graphs with machine learning have received more and more attention for its wide applications20–22. 
As a unique non-Euclidean data structure for machine learning, graph analysis focuses on node classification, link prediction, and clustering. Inspired by CNNs, which are among the most popular methods in many fields, GCNs were developed. As the input of a GCN, the node signals are embedded in a vector, and their relationships are embedded in a matrix named the adjacent matrix. Graph models can be divided into directed graphs and undirected graphs, and their adjacent matrices differ: the adjacent matrix is symmetric in undirected graphs and not symmetric in directed graphs.Self-attention mechanismSelf-attention mechanism has been successfully used in a variety of tasks. The attention mechanism can be described as $$Attention(Query, Source)=\sum_{i=1}^{L_{x}} Similarity(Query, Key_{i}) \cdot Value_{i}$$23. When Query, Key, and Value are the same, it is the self-attention mechanism. The non-local neural network is a kind of self-attention application in computer vision.In brief, the self-attention mechanism exploits the correlation within a sequence, and each position is computed as the weighted sum of all positions. The weight of every position in the similarity matrix is generated dynamically. The proposed self-attention block is transferred from the non-local neural network. It works like an abstract graph neural network, and the similarity matrix can be seen as a weighted adjacent matrix (a minimal illustrative sketch of this view is given below). Some researchers have discussed the designs and effects of the self-attention mechanism on the task of human skeleton-based action recognition and used it to model spatial dependencies of the human skeleton. However, in addition to spatial dependencies, temporal and spatio-temporal dependencies can also be modeled by the self-attention mechanism. | [
"31095476"
] | [
{
"pmid": "31095476",
"title": "NTU RGB+D 120: A Large-Scale Benchmark for 3D Human Activity Understanding.",
"abstract": "Research on depth-based human activity analysis achieved outstanding performance and demonstrated the effectiveness of 3D representation for action recognition. The existing depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of large-scale training samples, realistic number of distinct class categories, diversity in camera views, varied environmental conditions, and variety of human subjects. In this work, we introduce a large-scale dataset for RGB+D human action recognition, which is collected from 106 distinct subjects and contains more than 114 thousand video samples and 8 million frames. This dataset contains 120 different action classes including daily, mutual, and health-related activities. We evaluate the performance of a series of existing 3D activity analysis methods on this dataset, and show the advantage of applying deep learning methods for 3D-based human action recognition. Furthermore, we investigate a novel one-shot 3D activity recognition problem on our dataset, and a simple yet effective Action-Part Semantic Relevance-aware (APSR) framework is proposed for this task, which yields promising results for recognition of the novel action classes. We believe the introduction of this large-scale dataset will enable the community to apply, adapt, and develop various data-hungry learning techniques for depth-based and RGB+D-based human activity understanding."
}
] |
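To make the weighted-adjacency reading of self-attention above concrete ("the similarity matrix can be seen as a weighted adjacent matrix"), here is a minimal single-head sketch in NumPy. It is not the authors' network: the joint count, projection sizes, and random weights are illustrative assumptions only.

```python
import numpy as np

def self_attention_adjacency(x, w_q, w_k, w_v):
    """Single-head self-attention over skeleton joints.

    x            : (N, C) array, one C-dimensional feature per joint.
    w_q, w_k, w_v: (C, D) projection matrices (learned in a real model).
    Returns A, the row-normalised similarity matrix (a dynamically
    generated weighted adjacency over the N joints), and A @ V,
    i.e. each joint re-expressed as a weighted sum of all joints.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[1])        # pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    a = np.exp(scores)
    a /= a.sum(axis=1, keepdims=True)
    return a, a @ v

# Toy usage: 25 joints, 3 input channels, 8 hidden dimensions (all assumed).
rng = np.random.default_rng(0)
x = rng.normal(size=(25, 3))
w_q, w_k, w_v = (rng.normal(size=(3, 8)) for _ in range(3))
adj, out = self_attention_adjacency(x, w_q, w_k, w_v)
print(adj.shape, out.shape)  # (25, 25) (25, 8)
```

In a GCN-style layer, `adj` would play the role of the adjacency matrix used to aggregate joint features, which is the sense in which the similarity matrix acts as a weighted adjacency.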
Frontiers in Neurorobotics | null | PMC8904726 | 10.3389/fnbot.2022.808222 | Fabric Classification Using a Finger-Shaped Tactile Sensor via Robotic Sliding | Tactile sensing endows the robots to perceive certain physical properties of the object in contact. Robots with tactile perception can classify textures by touching. Interestingly, textures of fine micro-geometry beyond the nominal resolution of the tactile sensors can also be identified through exploratory robotic movements like sliding. To study the problem of fine texture classification, we design a robotic sliding experiment using a finger-shaped multi-channel capacitive tactile sensor. A feature extraction process is presented to encode the acquired tactile signals (in the form of time series) into a low dimensional (≤7D) feature vector. The feature vector captures the frequency signature of a fabric texture such that fabrics can be classified directly. The experiment includes multiple combinations of sliding parameters, i.e., speed and pressure, to investigate the correlation between sliding parameters and the generated feature space. Results show that changing the contact pressure can greatly affect the significance of the extracted feature vectors. Instead, variation of sliding speed shows no apparent effects. In summary, this paper presents a study of texture classification on fabrics by training a simple k-NN classifier, using only one modality and one type of exploratory motion (sliding). The classification accuracy can reach up to 96%. The analysis of the feature space also implies a potential parametric representation of textures for tactile perception, which could be used for the adaption of motion to reach better classification performance. | 2. Related WorksThe problem of discriminating textured objects or materials with the support of tactile sensing has been widely investigated in the literature. Most of the previous works integrate tactile sensors into robot end-effectors which are controlled to interact with the objects of interest. Tactile data collected during the interaction are then processed to extract features for texture classification using machine learning techniques.The type of features extracted from tactile data usually depends on the sensing technology adopted. There are two major trends of methods in the task of texture classification. The first either employs a high-resolution vision-based sensor (Li and Adelson, 2013; Luo et al., 2018; Yuan et al., 2018) or crops the time-series data (Taunyazov et al., 2019) to construct tactile images and directly encode the spatial textures by neural networks (NNs). While the second type of method collects the tactile signals using sensors sensitive to vibrations. Tactile signals are first transformed into the frequency domain and then both temporal and frequency features are extracted to identify textures as in Fishel and Loeb (2012); Khan et al. (2016); Kerr et al. (2018); Massalim et al. (2020).2.1. Spatial Features as ImagesLi and Adelson (2013) directly use a vision-based GelSight sensor to classify 40 different materials. The high-resolution tactile image generated by the sensor captures geometric information on the texture of the specific material. In particular, the authors proposed a novel operator, the Multi Local Binary Patterns, taking both micro and macro structures of the texture into account for feature extraction.Instead of classifying the exact type of material, the work proposed by Yuan et al. 
(2018) aims at recognizing 11 different properties from 153 varied pieces of clothes using a convolutional neural network (CNN) based architecture. Those properties are both physical (softness, thickness, durability, etc.) and semantic (e.g., washing method and wearing season). Moreover, a Kinect RGB-D camera is also used to help explore the clothes autonomously. The results showed great potential in the application of domestic help for clothes management.Alternatively, Taunyazov et al. (2019) proposed an interaction strategy alternating static touches and sliding movements with controlled force, exploring the possibility to extract spatial features from a capacitive sensor using a CNN-LSTM (long-short-term memory) architecture. Experiments are performed on 23 materials using a capacitor-based skin covered on the iCub forearm, reaching 98% classification accuracy. Capacitive tactile sensors are usually more suitable for dexterous manipulations compared to vision-based sensors due to their compact sizes and less deformable contact surfaces. The possibility to apply a vision-based tactile perception method eases the usage of capacitive sensors.Bauml and Tulbure (2019) presented another interesting research in this category. The proposed method makes use of the trendy transfer learning techniques to enable n-shot learning for the task of texture classification. The capability of learning from very few samples by taking advantage of a pre-trained dataset can be very handy for deploying tactile sensing systems on new robotic systems.2.2. Temporal and Frequency FeaturesFishel and Loeb (2012) conducted comprehensive research on texture classification using BioTac. Unlike most of the other works, their features are computed with specific physical meanings as traction, roughness, and fineness. Several combinations of sliding speeds and normal forces are also tested to enable a Bayesian inference.Khan et al. (2016) described a similar experiment with hand-crafted statistical features to identify textures. The research employs a custom finger-shaped capacitive tactile sensor, which is mounted on the probe of a 5-axes machine and controlled to slide on a platform covered with the fabric. Both applied pressure and velocity are controlled for the sliding motions. The statistical features, computed both in frequency and time domains, are used to train a support-vector-machine (SVM) classifier to discriminate 17 different fabrics.Another similar work is followed by Kerr et al. (2018) where PCA based feature extraction is performed on the tactile data. Both pressing and sliding motions are applied to acquire data and several different classifiers are evaluated.A recent work Massalim et al. (2020) tries to not only identify textures but also detect slip and estimate the speed of sliding, using an accelerometer installed on the fingertips of the robotic gripper to record vibration. This work combined multiple deep learning techniques to achieve a decent classification accuracy.2.3. 
SummaryCompared to some of the literature, our work differs mostly in two aspects: (i) the design of the experiments simulates a realistic application scenario where very few constraints are applied to the fabrics and the robotic sliding; (ii) the perception system is computationally very lightweight and can be implemented on a modern quad-core consumer PC; it extracts intrinsic frequency features without the need to train on a large dataset (unlike other deep learning techniques), and the quality of these features is self-explanatory. | [
"22783186",
"8237454",
"24523522",
"32722353",
"22868649",
"24082087"
] | [
{
"pmid": "22783186",
"title": "Bayesian exploration for intelligent identification of textures.",
"abstract": "In order to endow robots with human-like abilities to characterize and identify objects, they must be provided with tactile sensors and intelligent algorithms to select, control, and interpret data from useful exploratory movements. Humans make informed decisions on the sequence of exploratory movements that would yield the most information for the task, depending on what the object may be and prior knowledge of what to expect from possible exploratory movements. This study is focused on texture discrimination, a subset of a much larger group of exploratory movements and percepts that humans use to discriminate, characterize, and identify objects. Using a testbed equipped with a biologically inspired tactile sensor (the BioTac), we produced sliding movements similar to those that humans make when exploring textures. Measurement of tactile vibrations and reaction forces when exploring textures were used to extract measures of textural properties inspired from psychophysical literature (traction, roughness, and fineness). Different combinations of normal force and velocity were identified to be useful for each of these three properties. A total of 117 textures were explored with these three movements to create a database of prior experience to use for identifying these same textures in future encounters. When exploring a texture, the discrimination algorithm adaptively selects the optimal movement to make and property to measure based on previous experience to differentiate the texture from a set of plausible candidates, a process we call Bayesian exploration. Performance of 99.6% in correctly discriminating pairs of similar textures was found to exceed human capabilities. Absolute classification from the entire set of 117 textures generally required a small number of well-chosen exploratory movements (median = 5) and yielded a 95.4% success rate. The method of Bayesian exploration developed and tested in this paper may generalize well to other cognitive problems."
},
{
"pmid": "8237454",
"title": "Extracting object properties through haptic exploration.",
"abstract": "This paper reviews some of our recent research on haptic exploration, perception and recognition of multidimensional objects. We begin by considering the nature of manual exploration in terms of the characteristics of various exploratory procedures (EPs) or stereotypical patterns of hand movements. Next, we explore their consequences for the sequence of EPs selected, for the relative cognitive salience of material versus geometric properties, and for dimensional integration. Finally, we discuss several applications of our research programme to the development of tangible graphics displays for the blind, autonomous and teleoperated haptic robotic systems, and food evaluation in the food industry."
},
{
"pmid": "24523522",
"title": "Natural scenes in tactile texture.",
"abstract": "Sensory systems are designed to extract behaviorally relevant information from the environment. In seeking to understand a sensory system, it is important to understand the environment within which it operates. In the present study, we seek to characterize the natural scenes of tactile texture perception. During tactile exploration complex high-frequency vibrations are elicited in the fingertip skin, and these vibrations are thought to carry information about the surface texture of manipulated objects. How these texture-elicited vibrations depend on surface microgeometry and on the biomechanical properties of the fingertip skin itself remains to be elucidated. Here we record skin vibrations, using a laser-Doppler vibrometer, as various textured surfaces are scanned across the finger. We find that the frequency composition of elicited vibrations is texture specific and highly repeatable. In fact, textures can be classified with high accuracy on the basis of the vibrations they elicit in the skin. As might be expected, some aspects of surface microgeometry are directly reflected in the skin vibrations. However, texture vibrations are also determined in part by fingerprint geometry. This mechanism enhances textural features that are too small to be resolved spatially, given the limited spatial resolution of the neural signal. We conclude that it is impossible to understand the neural basis of texture perception without first characterizing the skin vibrations that drive neural responses, given the complex dependence of skin vibrations on both surface microgeometry and fingertip biomechanics."
},
{
"pmid": "32722353",
"title": "Deep Vibro-Tactile Perception for Simultaneous Texture Identification, Slip Detection, and Speed Estimation.",
"abstract": "Autonomous dexterous manipulation relies on the ability to recognize an object and detect its slippage. Dynamic tactile signals are important for object recognition and slip detection. An object can be identified based on the acquired signals generated at contact points during tactile interaction. The use of vibrotactile sensors can increase the accuracy of texture recognition and preempt the slippage of a grasped object. In this work, we present a Deep Learning (DL) based method for the simultaneous texture recognition and slip detection. The method detects non-slip and slip events, the velocity, and discriminate textures-all within 17 ms. We evaluate the method for three objects grasped using an industrial gripper with accelerometers installed on its fingertips. A comparative analysis of convolutional neural networks (CNNs), feed-forward neural networks, and long short-term memory networks confirmed that deep CNNs have a higher generalization accuracy. We also evaluated the performance of the highest accuracy method for different signal bandwidths, which showed that a bandwidth of 125 Hz is enough to classify textures with 80% accuracy."
},
{
"pmid": "22868649",
"title": "Incremental learning of 3D-DCT compact representations for robust visual tracking.",
"abstract": "Visual tracking usually requires an object appearance model that is robust to changing illumination, pose, and other factors encountered in video. Many recent trackers utilize appearance samples in previous frames to form the bases upon which the object appearance model is built. This approach has the following limitations: 1) The bases are data driven, so they can be easily corrupted, and 2) it is difficult to robustly update the bases in challenging situations. In this paper, we construct an appearance model using the 3D discrete cosine transform (3D-DCT). The 3D-DCT is based on a set of cosine basis functions which are determined by the dimensions of the 3D signal and thus independent of the input video data. In addition, the 3D-DCT can generate a compact energy spectrum whose high-frequency coefficients are sparse if the appearance samples are similar. By discarding these high-frequency coefficients, we simultaneously obtain a compact 3D-DCT-based object representation and a signal reconstruction-based similarity measure (reflecting the information loss from signal reconstruction). To efficiently update the object representation, we propose an incremental 3D-DCT algorithm which decomposes the 3D-DCT into successive operations of the 2D discrete cosine transform (2D-DCT) and 1D discrete cosine transform (1D-DCT) on the input video data. As a result, the incremental 3D-DCT algorithm only needs to compute the 2D-DCT for newly added frames as well as the 1D-DCT along the third dimension, which significantly reduces the computational complexity. Based on this incremental 3D-DCT algorithm, we design a discriminative criterion to evaluate the likelihood of a test sample belonging to the foreground object. We then embed the discriminative criterion into a particle filtering framework for object state inference over time. Experimental results demonstrate the effectiveness and robustness of the proposed tracker."
},
{
"pmid": "24082087",
"title": "Spatial and temporal codes mediate the tactile perception of natural textures.",
"abstract": "When we run our fingers over the surface of an object, we acquire information about its microgeometry and material properties. Texture information is widely believed to be conveyed in spatial patterns of activation evoked across one of three populations of cutaneous mechanoreceptive afferents that innervate the fingertips. Here, we record the responses evoked in individual cutaneous afferents in Rhesus macaques as we scan a diverse set of natural textures across their fingertips using a custom-made rotating drum stimulator. We show that a spatial mechanism can only account for the processing of coarse textures. Information about most natural textures, however, is conveyed through precise temporal spiking patterns in afferent responses, driven by high-frequency skin vibrations elicited during scanning. Furthermore, these texture-specific spiking patterns predictably dilate or contract in time with changes in scanning speed; the systematic effect of speed on neuronal activity suggests that it can be reversed to achieve perceptual constancy across speeds. The proposed temporal coding mechanism involves converting the fine spatial structure of the surface into a temporal spiking pattern, shaped in part by the mechanical properties of the skin, and ascribes an additional function to vibration-sensitive mechanoreceptive afferents. This temporal mechanism complements the spatial one and greatly extends the range of tangible textures. We show that a combination of spatial and temporal mechanisms, mediated by all three populations of afferents, accounts for perceptual judgments of texture."
}
] |
Frontiers in Bioengineering and Biotechnology | null | PMC8904736 | 10.3389/fbioe.2022.806177 | Hybrid Swarming Algorithm With Van Der Waals Force | This paper proposes a hybrid swarming algorithm based on Ant Colony Optimization and Physarum Polycephalum Algorithm. And the Van Der Waals force is first applied to the pheromone update mechanism of the hybrid algorithm. The improved method can prevent premature convergence into the local optimal solution. Simulation results show the proposed approach has excellent in solving accuracy and convergence time. We also compare the improved algorithm with other advanced algorithms and the results show that our algorithm is more accurate than the literature algorithms. In addition, we use the capitals of 35 Asian countries as an example to verify the robustness and versatility of the hybrid algorithm. | Related Work
Section 2.1 describes the Traveling Salesman Problem; section 2.2 introduces the Van Der Waals force; section 2.3 explains the ant colony algorithm. Finally, section 2.4 presents the Physarum Polycephalum model.Traveling Salesman ProblemThe Traveling Salesman Problem can be described as follows: a salesman must visit each city exactly once and finally return to the initial city. The model is to find the travel route with the shortest total distance, i.e., the route satisfying the objective function $$L(C)=\min\Big(\sum_{i=1}^{n-1} d(c_{i},c_{i+1})+d(c_{n},c_{1})\Big) \quad (1)$$ where $c_{i}$ is city number $i$ ($1\le i\le n$), $n$ is the number of cities, and $d(c_{i},c_{j})$ is the length of the distance between city $i$ and city $j$.
Van Der Waals ForcesThe Van Der Waals force is a weak electrical attraction between neutral molecules or atoms. It comes from three parts: 1) the permanent dipole moments of polar molecules; 2) the interaction in which a polar molecule polarizes another molecule, generating an induced dipole moment so that the two attract each other; 3) the movement of electrons in a molecule, which generates an instantaneous dipole moment that instantaneously polarizes neighboring molecules, and the latter in turn enhances the instantaneous dipole moment of the original molecule. Its formula is $$F=\frac{A}{d^{a}}-\frac{B}{d^{b}} \quad (2)$$ where $d$ is the distance between two molecules and $A$, $B$, $a$, $b$ are all freely chosen values, with $a<b$.
Ant Colony OptimizationAnt Colony Optimization is a heuristic bionic algorithm proposed by Dorigo and Caro (1999). The principle is that when ants look for food they leave volatile pheromone on the path, and subsequent ants tend to choose the path with the higher pheromone concentration; as time goes by, more and more ants gather on the shortest path. At a particular moment, the transition probability that ant $k$ chooses node $j$ from node $i$ is $$P_{ij}^{k}(t)=\begin{cases}\frac{[\tau_{ij}(t)]^{\alpha}[\eta_{ij}(t)]^{\beta}}{\sum_{s\in N_{k}}[\tau_{is}(t)]^{\alpha}[\eta_{is}(t)]^{\beta}}, & j\in N_{k}\\ 0, & \text{otherwise}\end{cases} \quad (3)$$ where $\tau_{ij}(t)$ is the pheromone concentration on the path between node $i$ and node $j$, $\eta_{ij}(t)$ is the heuristic information (the reciprocal of the distance between node $i$ and node $j$), $\alpha$ and $\beta$ are the importance factors of the pheromone concentration and of the heuristic information, respectively, and $N_{k}$ is the set of optional nodes. The pheromone update rule for the ants is $$\tau_{ij}(t+1)=(1-\rho)\,\tau_{ij}(t)+\sum_{k=1}^{m}(L_{k})^{-1} \quad (4)$$ where $\rho$ ($0\le\rho\le 1$) is the volatilization coefficient of the pheromone and $L_{k}$ is the length of the path traveled by ant $k$ in this cycle. (A minimal code sketch of Eqs. (3) and (4) is given below.)
Physarum Polycephalum AlgorithmIn the foraging network formed by Physarum Polycephalum, the food sources are regarded as nodes and the spreading hyphae are modeled as pipes with liquid flowing inside. The pressure difference between the two ends of a pipe determines the flow direction of the liquid, and a pipe becomes thicker when the liquid flow rate increases, so the shortest path connecting the food sources is eventually formed. The liquid flow through a pipe can be expressed as $$Q_{ij}=\frac{\pi r_{ij}^{4}(P_{i}-P_{j})}{8\xi L_{ij}}=\frac{D_{ij}}{L_{ij}}(P_{i}-P_{j}) \quad (5)$$ where $\xi$ is the viscosity coefficient of the liquid in the pipe, $Q_{ij}$ is the liquid flow between node $i$ and node $j$, $P_{i}$ and $P_{j}$ are the pressures at nodes $i$ and $j$, $D_{ij}$ is the conductivity between node $i$ and node $j$, $L_{ij}$ is the Euclidean distance between node $i$ and node $j$, and $r_{ij}$ is the radius of the tube. According to Kirchhoff's law, the liquid flow in the network is conserved, so $I_{0}$ is a constant and the flow balance at each node can be expressed as $$\sum_{j=1,\,j\neq i}^{n} Q_{ij}=\begin{cases} I_{0}, & i=\mathrm{in}\\ -I_{0}, & i=\mathrm{out}\\ 0, & \text{otherwise}\end{cases} \quad (6)$$ The change in conductivity over time is expressed as $$\frac{dD_{ij}}{dt}=f(|Q_{ij}|)-kD_{ij} \quad (7)$$ where $k$ is the attenuation rate of the pipeline. | [] | [] |
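As a concrete illustration of how the tour length (Eq. 1), the transition rule (Eq. 3), and the pheromone update (Eq. 4) fit together, the following is a minimal Ant System sketch in Python/NumPy. It is not the paper's hybrid algorithm: it omits the Physarum Polycephalum coupling and the Van Der Waals-based pheromone modification, and all parameter values and helper names are illustrative assumptions.

```python
import numpy as np

def aco_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=3.0, rho=0.5, seed=0):
    """Plain Ant System for the TSP (Eqs. (1), (3), (4) above); no hybridisation."""
    rng = np.random.default_rng(seed)
    n = len(dist)
    eta = 1.0 / (dist + np.eye(n))      # heuristic info: 1/d_ij (diagonal padded)
    tau = np.ones((n, n))               # initial pheromone concentration
    best_tour, best_len = None, np.inf

    def tour_length(tour):              # objective L(C), Eq. (1)
        return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

    for _ in range(n_iters):
        tours = []
        for _k in range(n_ants):
            tour = [int(rng.integers(n))]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:            # Eq. (3): probabilistic next-node choice
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                w = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
                nxt = int(rng.choice(cand, p=w / w.sum()))
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)
        tau *= 1.0 - rho                # Eq. (4): evaporation ...
        for tour in tours:              # ... plus a deposit of 1/L_k on used edges
            L = tour_length(tour)
            if L < best_len:
                best_tour, best_len = tour, L
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a, b] += 1.0 / L
                tau[b, a] += 1.0 / L
    return best_tour, best_len

# Toy usage: 10 random cities in the unit square.
pts = np.random.default_rng(1).random((10, 2))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(aco_tsp(d))
```

The hybrid method described in the record above would further modify how the pheromone matrix is updated (the Van Der Waals-inspired term) and couple it with the Physarum conductivity dynamics of Eqs. (5)-(7), which this sketch deliberately leaves out.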
International Journal of Telemedicine and Applications | null | PMC8904914 | 10.1155/2022/3749413 | Feedback Artificial Shuffled Shepherd Optimization-Based Deep Maxout Network for Human Emotion Recognition Using EEG Signals | Emotion recognition is very important for the humans in order to enhance the self-awareness and react correctly to the actions around them. Based on the complication and series of emotions, EEG-enabled emotion recognition is still a difficult issue. Hence, an effective human recognition approach is designed using the proposed feedback artificial shuffled shepherd optimization- (FASSO-) based deep maxout network (DMN) for recognizing emotions using EEG signals. The proposed technique incorporates feedback artificial tree (FAT) algorithm and shuffled shepherd optimization algorithm (SSOA). Here, median filter is used for preprocessing to remove the noise present in the EEG signals. The features, like DWT, spectral flatness, logarithmic band power, fluctuation index, spectral decrease, spectral roll-off, and relative energy, are extracted to perform further processing. Based on the data augmented results, emotion recognition can be accomplished using the DMN, where the training process of the DMN is performed using the proposed FASSO method. Furthermore, the experimental results and performance analysis of the proposed algorithm provide efficient performance with respect to accuracy, specificity, and sensitivity with the maximal values of 0.889, 0.89, and 0.886, respectively. | 2. Related WorkEmotion encompasses consciousness as well as cognition in all human beings and it plays a very significant role in all aspects of humans. Hence, recognition of emotion has become a very important research area. As part of the proposed work, this section reviews various emotion recognition techniques using EEG signals. This review also provides advantages, challenges, and limitations of the existing emotion recognition approaches.Zhong et al. [5] developed a regularized graph neural networks (RGNN) for the automated emotion recognition. This technique reduced the overfitting problems, but it failed to control the imbalance among the testing sets and the training sets. Ekman and Keltner [2] proposed a firefly integrated optimization algorithm (FIOA) to strengthen the EEG-based emotion recognition. This technique significantly reduced the artificial selection of the work loads. However, this method suffers from computational complexity issues. Sharma et al. [10] designed a LSTM- (long short-term memory-) driven deep learning method for automated emotion recognition. This method did not involve any primary knowledge regarding the functional parameters. However, the major challenge lies in maximizing the processing speed while using larger datasets. Wei et al. [12] designed a simple recurrent unit (SRU) network and ensemble learning technique for the EEG-based emotion recognition. This method achieved comparatively lower computational cost. However, deep learning is highly reliant on computation control, and it utilizes higher time for the training process.Chao and Dong [13] presented an advanced convolutional neural network (CNN) for recognizing the emotions from the multichannel EEG signals. The distinctive grouping technique of filter preserves the regional features with respect to the diverse areas, but this technique suffers from higher computational complexity. Salankar et al. [14] developed an empirical mode decomposition (EMD) for the emotion recognition based on EEG signals. 
This method was more effective for the medical recognition of high- and low-dominance regions in the subjects. However, this method failed to classify the states, such as Alzheimer's, depression, and epilepsy for enhanced outcomes. Yin et al. [15] introduced a graph convolutional neural networks (ECLGCNN) and LSTM for recognizing the emotions using EEG signals. The processing time of this method was low and maintains poor recognition accuracy. Pandey and Seeja [1] devised a deep CNN for recognizing the EEG emotions. Here, frontal electrodes are more effective for recognizing the emotions when compared to all other electrodes. However, this technique failed to apply attention mechanisms on various brain regions in order to achieve effective performance results.Maheshwari et al. [16] proposed the deep CNN architecture; it has eight convolutions, three average pooling, four batch normalization, three spatial dropouts, two dropouts, one global average pooling, and three dense layers. It is validated using three publicly available databases: DEAP. But still it suffers with poor accuracy in classifying various emotions. Hector et al. [17] work, architect, design, implement, and test a handcrafted, hardware convolutional neural network, named BioCNN, optimized for EEG-based emotion detection and other biomedical applications. The EEG signals are generated using a low-cost, off-the-shelf device, namely, Emotiv Epoc+, and then denoised and preprocessed ahead of their use by BioCNN. For training and testing, BioCNN uses three repositories of emotion classification datasets, including the publicly available DEAP and DREAMER datasets. Hu et al. [18] presented a novel convolutional layer, called the scaling layer, which can adaptively extract effective data-driven spectrogram-like features from raw EEG signals. Furthermore, it exploits convolutional kernels scaled from one data-driven pattern to expose a frequency-like dimension to address the shortcomings of prior methods requiring hand-extracted features or their approximations. This has achieved state-of-the-art results across the established DEAP and AMIGOS benchmark datasets. Liu and Fu [19] have proposed an emotion recognition by deeply learned multichannel textural and EEG features. In this work, multichannel features from the EEG signal for human emotion recognition are applied. Here, the EEG signal is generated by sound signal stimulation. Specifically, applying multichannel EEG and textual feature fusion in time domain recognizes different human emotions, where six statistical features in time domain are fused to a feature vector for emotion classification. It conducts EEG and textual-based feature extraction from both time and frequency domains. Various challenges of human emotion recognition approaches are as follows. SRU was developed for the emotion recognition. However, the major challenge lies in selecting the appropriate SRU network parameters, like training parameters and the total nodes based on the trial-and-error technique [12]FIOA offers a hybridized optimization scheme for recognizing the patterns of higher dimensionality datasets, but the experimental information used in this method are simply multiple physiological signals. 
Hence, the major challenge lies in applying the FIOA to automatic pattern detection in medical image data [20]. The EMD technique can be valuable for the medical recognition of low- and high-dominance regions in the subjects, but the major challenge lies in classifying further brain conditions, like sadness, Alzheimer's, and epilepsy [14]. The deep CNN approach was very efficient for independent emotion recognition with respect to classification accuracy when compared with various existing techniques; this can be enhanced by applying attention mechanisms to various regions of the brain to further improve the classification accuracy [1]. Based on the literature review, the ECLGCNN achieved better classification accuracy but explores only the binary categorization of emotions, such as positive/negative valence and low/high arousal; this can be overcome by considering ECLGCNN as a multiclassifier for effectively distinguishing the different states of emotions [15]. | [
"21038950",
"5098954",
"27810626"
] | [
{
"pmid": "21038950",
"title": "Gender differences in implicit and explicit processing of emotional facial expressions as revealed by event-related theta synchronization.",
"abstract": "Emotion information processing may occur in 2 modes that are differently represented in conscious awareness. Fast online processing involves coarse-grained analysis of salient features and is not represented in conscious awareness; offline processing takes hundreds of milliseconds to generate fine-grained analysis and is represented in conscious awareness. These processing modes may be studied using event-related electroencephalogram theta synchronization as a marker of emotion processing. Two experiments were conducted that differed on the mode of emotional information presentation. In the explicit mode, subjects were explicitly instructed to evaluate the emotional content of presented stimuli; in the implicit mode, their attention was directed to other features of the stimulus. In the implicit mode, theta synchronization was most pronounced in the early processing stage, whereas in the explicit mode, it was more pronounced in the late processing stage. The early processing stage was more pronounced in men, whereas the late processing stage was more pronounced in women. Implications of these gender differences in emotion processing for well-documented differences in social behavior are discussed."
},
{
"pmid": "5098954",
"title": "Physiological role of pleasure.",
"abstract": "A given stimulus can induce a pleasant or unpleasant sensation depending on the subject's internal state. The word alliesthesia is proposed to describe this phenomenon. It is, in itself, an adequate motivation for behavior such as food intake or thermoregulation. Therefore, negative regulatory feedback systems, based upon oropharingeal or cutaneous thermal signals are peripheral only in appearance, since the motivational component of the sensation is of internal origin. The internal signals seem to be complex and related to the set points of some regulated variables of the \"milieu interieur,\" like set internal temperature in the case of thermal sensation (15). Alliesthesia can therefore explain the adaptation of these behaviors to their goals. Only three sensations have been studied- thermal, gustatory, and olfactory, but it is probable that alliesthesia also exists in such simple ways as in bringing a signal, usually ignored, to the subject's attention. For example, gastric contractions, not normally perceived, are felt in the state of hunger (16). Since alliesthesia relies on an internal input, it is possible that alliesthesia exists only with sensations related to some constants of the \"milieu interieur\" and therefore would not exist in visual or auditory sensations. As a matter of fact, luminous or auditory stimuli can be pleasing or displeasing in themselves, but there seems to be little variation of pleasure in these sensations, that is, no alliesthesia. There may be some esthetic value linked to these stimuli but it is a striking coincidence that they are in themselves rather neutral and that it is difficult to imagine a constant of the \"milieu interieur\" which could be possibly modified by a visual or an auditive stimulus-such as light of a certain wavelength or sound of a given frequency. In the light of this theory, it is possible to reconsider the nature of the whole conscious experience. The existence of alliesthesia implies the presence of internal signals modifying the concious sensations aroused from peripheral receptors. It is therefore necessary to question the existence of sensations aroused by direct stimulation of central receptors, such as hypothalamic temperature detectors, osmoreceptors, and others. Does their excitation arouse sensations of their own, or does the sensation have to pass through peripheral senses? Only human experimentation could answer this question. In the same way, it is possible that selfstimulation of the brain is pleasant, not by giving a sensation in itself, but because the electrical stimulus (17), renders peripheral stimuli pleasant."
},
{
"pmid": "27810626",
"title": "Unsupervised domain adaptation techniques based on auto-encoder for non-stationary EEG-based emotion recognition.",
"abstract": "In electroencephalography (EEG)-based emotion recognition systems, the distribution between the training samples and the testing samples may be mismatched if they are sampled from different experimental sessions or subjects because of user fatigue, different electrode placements, varying impedances, etc. Therefore, it is difficult to directly classify the EEG patterns with a conventional classifier. The domain adaptation method, which is aimed at obtaining a common representation across training and test domains, is an effective method for reducing the distribution discrepancy. However, the existing domain adaptation strategies either employ a linear transformation or learn the nonlinearity mapping without a consistency constraint; they are not sufficiently powerful to obtain a similar distribution from highly non-stationary EEG signals. To address this problem, in this paper, a novel component, called the subspace alignment auto-encoder (SAAE), is proposed. Taking advantage of both nonlinear transformation and a consistency constraint, we combine an auto-encoder network and a subspace alignment solution in a unified framework. As a result, the source domain can be aligned with the target domain together with its class label, and any supervised method can be applied to the new source domain to train a classifier for classification in the target domain, as the aligned source domain follows a distribution similar to that of the target domain. We compared our SAAE method with six typical approaches using a public EEG dataset containing three affective states: positive, neutral, and negative. Subject-to-subject and session-to-session evaluations were performed. The subject-to-subject experimental results demonstrate that our component achieves a mean accuracy of 77.88% in comparison with a state-of-the-art method, TCA, which achieves 73.82% on average. In addition, the average classification accuracy of SAAE in the session-to-session evaluation for all the 15 subjects in a dataset is 81.81%, an improvement of up to 1.62% on average as compared to the best baseline TCA. The experimental results show the effectiveness of the proposed method relative to state-of-the-art methods. It can be concluded that SAAE is a useful and effective tool for decreasing domain discrepancy and reducing performance degradation across subjects and sessions in the EEG-based emotion recognition field."
}
] |
Frontiers in Big Data | null | PMC8905430 | 10.3389/fdata.2021.806014 | MODIT: MOtif DIscovery in Temporal Networks | Temporal networks are graphs where each edge is linked with a timestamp, denoting when an interaction between two nodes happens. According to the most recently proposed definitions of the problem, motif search in temporal networks consists in finding and counting all connected temporal graphs Q (called motifs) occurring in a larger temporal network T, such that matched target edges follow the same chronological order imposed by edges in Q. In the last few years, several algorithms have been proposed to solve motif search, but most of them are limited to very small or specific motifs due to the computational complexity of the problem. In this paper, we present MODIT (MOtif DIscovery in Temporal Networks), an algorithm for counting motifs of any size in temporal networks, inspired by a very recent algorithm for subgraph isomorphism in temporal networks, called TemporalRI. Experiments show that for big motifs (more than 3 nodes and 3 edges) MODIT can efficiently retrieve them in reasonable time (up to few hours) in many networks of medium and large size and outperforms state-of-the art algorithms. | 1. Introduction and Related WorksNetworks (also named graphs) are tools for the description and analysis of entities, called nodes, that interact with each other by means of edges. There are many types of data that can be represented by graphs, including computer networks, social networks, communication networks, biological networks, and so on. A wide range of domains can be modeled and studied with static networks but many complex systems are fully dynamic, indeed interactions between entities change over time. Systems of this type can be modeled as temporal networks, in which edges between nodes are associated with temporal information such as, for example, the duration of the interaction and the instant in which the interaction begins. Annotations of edges with temporal data is important to understand the formation and the evolution of such systems.In literature, several definitions of temporal networks have been proposed (Holme and Saramaki, 2012; Masuda and Lambiotte, 2020). In few works, these are also referenced as dynamic (Carley et al., 2007), evolutionary (Aggarwal and Subbian, 2014) or time-varying (Casteigts et al., 2011) networks. In this paper, we define temporal network as a multigraph (i.e a graph where two nodes may interact multiple times). Each edge is associated with an integer, called timestamp, which denotes when two nodes interact.In the last few years, there has been a growing interest in analyzing temporal networks and studying their properties. Analysis of temporal networks includes network centrality (Lv et al., 2019; Tsalouchidou et al., 2020), network clustering (Crawford and Milenkovic, 2018), community detection (Rossetti and Cazabet, 2018), link prediction (Divakaran and Mohan, 2020), graph mining (Sun et al., 2019), graph embedding (Torricelli et al., 2020), network sampling (Rocha et al., 2017), random models (Petit et al., 2018; Hiraoka et al., 2020; Singh and Cherifi, 2020), and epidemic spreading (Tizzani et al., 2018; Williams et al., 2019; Masuda and Holme, 2020). Extensive reviews of temporal networks and their main features can be found in Holme and Saramaki (2012, 2019), Masuda and Lambiotte (2020).In this work, we focus on motif search in temporal networks. 
Different definitions of temporal motifs have been proposed so far (Kovanen et al., 2011; Hulovatyy et al., 2015; Paranjape et al., 2017). Here, we follow the most recent definition proposed by Paranjape et al. (2017), which is becoming the most accepted one. A temporal motif is a temporal network where edges denote a succession of events. In addition to the original definition proposed by Paranjape et al. (2017), simultaneous events, represented by edges with equal timestamps, are allowed, provided that such edges do not link the same pair of nodes. Temporal graphs Q1 and Q2 of Figure 1 are two examples of motifs. Applications of Temporal Motif Search include the creation of evolution rules that govern the way the network changes over time (Berlingerio et al., 2009; Ugander et al., 2013) allowing also to identify all the time an edge participates to particular pattern in a time window. A second application consists in the identification of motifs in temporal network at different time resolution to identify patterns at different time scale. Another application consists in temporal network classification using a feature representation based on the temporal motifs distribution (Tu et al., 2018).Figure 1Example of motif Δ-occurrence in a temporal graph T, given Δ = 6. Motif Q1 has exactly one Δ-occurrence in T, which is the subgraph formed by nodes and edges colored in red. Motif Q2, instead, does not Δ-occur in T. In fact, the subgraph with blue nodes and blue edges is isomorphic to Q2 and respects the chronological order imposed by Q2's edges, but its edges are not observed within the time window Δ.Given a time interval Δ, we say that a motif Q Δ-occurs in T, iff: (i) Q is isomorphic (i.e., structurally equivalent) to a subgraph S of T (called an occurrence of Q in T), (ii) edges in S follow the same chronological order imposed by corresponding matched edges in Q, (iii) all interactions in S are observed in a time interval less than or equal to Δ (i.e., they are likely to be related each other). In Figure 1, motif Q1 Δ-occurs (Δ = 6 in the example) in T, while Q2 does not.For a given temporal graph T and time interval Δ, motif search aims at retrieving all motifs that Δ-occurs in T. In addition, for each such motif Q, we also count the number of occurrences of Q in T. It has been shown that Temporal Motif Search (TMS) problem is NP-complete, even for star topologies (Liu et al., 2019). For this reason, motif search is usually restricted to motifs with up to a certain number of nodes and edges. Given Δ = 10, Figure 2 shows all temporal motifs with up to 3 nodes and 3 edges that Δ-occur in a toy temporal graph, together with the corresponding number of occurrences.Figure 2Example of application of the Temporal Motif Search (TMS) problem for a temporal graph T, given Δ = 10, k = 3, and l = 3. For each motif, the relative number of Δ-occurrences in T is reported.Recently, several TMS algorithms have been introduced (Kovanen et al., 2011; Hulovatyy et al., 2015; Paranjape et al., 2017; Liu et al., 2019). However, the proposed solutions are limited to very small motifs or specific topologies.Temporal motifs have been introduced for the first time by Kovanen et al. (2011). Authors define a motif as an ordered set of edges such that: (i) the difference between the timestamps of two consecutive edges must be less than or equal to a certain threshold Δ and (ii) if a node is part of a motif, then all its adjacent edges have to be consecutive (consecutive edge restriction).In Hulovatyy et al. 
(2015) the consecutive edge restriction was relaxed and the authors considered only induced subgraphs, called graphlets, in order to reduce the computational complexity while obtaining approximate results.Paranjape et al. (2017) describes a temporal motif as a sequence of edges ordered by increasing timestamps. More precisely, the authors define a k-node, l-edge, Δ-temporal motif as a sequence of l edges, M = (u1, v1, t1), (u2, v2, t2), …, (ul, vl, tl) that are time-ordered within a Δ duration, i.e., t1 < t2 ⋯ < tl and tl − t1 ≤ Δ, such that the static graph induced by the edges is connected and has k nodes. The authors present an algorithm to efficiently calculate the frequencies of all possible directed temporal motifs with 3 edges. For bigger motifs they use a naive algorithm that first computes static matches, then filters out occurrences which do not match the temporal constraints.To tackle with the NP-completeness of TMS, approximate solutions have been proposed too. Liu et al. (2019) propose a general sampling framework to estimate motif counts. It consists in partitioning time into intervals, finding exact counts of motifs in each interval and weighting counts to get the final estimate, using importance sampling.In this paper, we present a new motif search algorithm, called MODIT (MOtif DIscovery in Temporal networks). The method is inspired by the temporal subgraph matching algorithm TemporalRI (Locicero et al., 2021; Micale et al., 2021). Our algorithm overcomes many of the limitations imposed by other motif search methods. In fact, MODIT is general and can search for motifs of any size. It has no consecutive edge restriction and allows edges with equal timestamps, provided that they do not link the same pair of nodes.The rest of the paper is organized as follows. In section 2, we give preliminary definitions about temporal networks and temporal motif search, then we illustrate MODIT and evaluate its computational complexity. In section 3, we assess the performance of MODIT on a dataset of real networks and compare it with the algorithm presented in Paranjape et al. (2017). Finally, section 4 ends the paper. | [
"29738568",
"26072480",
"21130777",
"29347767",
"32346033"
] | [
{
"pmid": "29738568",
"title": "ClueNet: Clustering a temporal network based on topological similarity rather than denseness.",
"abstract": "Network clustering is a very popular topic in the network science field. Its goal is to divide (partition) the network into groups (clusters or communities) of \"topologically related\" nodes, where the resulting topology-based clusters are expected to \"correlate\" well with node label information, i.e., metadata, such as cellular functions of genes/proteins in biological networks, or age or gender of people in social networks. Even for static data, the problem of network clustering is complex. For dynamic data, the problem is even more complex, due to an additional dimension of the data-their temporal (evolving) nature. Since the problem is computationally intractable, heuristic approaches need to be sought. Existing approaches for dynamic network clustering (DNC) have drawbacks. First, they assume that nodes should be in the same cluster if they are densely interconnected within the network. We hypothesize that in some applications, it might be of interest to cluster nodes that are topologically similar to each other instead of or in addition to requiring the nodes to be densely interconnected. Second, they ignore temporal information in their early steps, and when they do consider this information later on, they do so implicitly. We hypothesize that capturing temporal information earlier in the clustering process and doing so explicitly will improve results. We test these two hypotheses via our new approach called ClueNet. We evaluate ClueNet against six existing DNC methods on both social networks capturing evolving interactions between individuals (such as interactions between students in a high school) and biological networks capturing interactions between biomolecules in the cell at different ages. We find that ClueNet is superior in over 83% of all evaluation tests. As more real-world dynamic data are becoming available, DNC and thus ClueNet will only continue to gain importance."
},
{
"pmid": "26072480",
"title": "Exploring the structure and function of temporal networks with dynamic graphlets.",
"abstract": "MOTIVATION\nWith increasing availability of temporal real-world networks, how to efficiently study these data? One can model a temporal network as a single aggregate static network, or as a series of time-specific snapshots, each being an aggregate static network over the corresponding time window. Then, one can use established methods for static analysis on the resulting aggregate network(s), but losing in the process valuable temporal information either completely, or at the interface between different snapshots, respectively. Here, we develop a novel approach for studying a temporal network more explicitly, by capturing inter-snapshot relationships.\n\n\nRESULTS\nWe base our methodology on well-established graphlets (subgraphs), which have been proven in numerous contexts in static network research. We develop new theory to allow for graphlet-based analyses of temporal networks. Our new notion of dynamic graphlets is different from existing dynamic network approaches that are based on temporal motifs (statistically significant subgraphs). The latter have limitations: their results depend on the choice of a null network model that is required to evaluate the significance of a subgraph, and choosing a good null model is non-trivial. Our dynamic graphlets overcome the limitations of the temporal motifs. Also, when we aim to characterize the structure and function of an entire temporal network or of individual nodes, our dynamic graphlets outperform the static graphlets. Clearly, accounting for temporal information helps. We apply dynamic graphlets to temporal age-specific molecular network data to deepen our limited knowledge about human aging.\n\n\nAVAILABILITY AND IMPLEMENTATION\nhttp://www.nd.edu/∼cone/DG."
},
{
"pmid": "21130777",
"title": "What's in a crowd? Analysis of face-to-face behavioral networks.",
"abstract": "The availability of new data sources on human mobility is opening new avenues for investigating the interplay of social networks, human mobility and dynamical processes such as epidemic spreading. Here we analyze data on the time-resolved face-to-face proximity of individuals in large-scale real-world scenarios. We compare two settings with very different properties, a scientific conference and a long-running museum exhibition. We track the behavioral networks of face-to-face proximity, and characterize them from both a static and a dynamic point of view, exposing differences and similarities. We use our data to investigate the dynamics of a susceptible-infected model for epidemic spreading that unfolds on the dynamical networks of human proximity. The spreading patterns are markedly different for the conference and the museum case, and they are strongly impacted by the causal structure of the network data. A deeper study of the spreading paths shows that the mere knowledge of static aggregated networks would lead to erroneous conclusions about the transmission paths on the dynamical networks."
},
{
"pmid": "29347767",
"title": "Sampling of temporal networks: Methods and biases.",
"abstract": "Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions on hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data."
},
{
"pmid": "32346033",
"title": "weg2vec: Event embedding for temporal networks.",
"abstract": "Network embedding techniques are powerful to capture structural regularities in networks and to identify similarities between their local fabrics. However, conventional network embedding models are developed for static structures, commonly consider nodes only and they are seriously challenged when the network is varying in time. Temporal networks may provide an advantage in the description of real systems, but they code more complex information, which could be effectively represented only by a handful of methods so far. Here, we propose a new method of event embedding of temporal networks, called weg2vec, which builds on temporal and structural similarities of events to learn a low dimensional representation of a temporal network. This projection successfully captures latent structures and similarities between events involving different nodes at different times and provides ways to predict the final outcome of spreading processes unfolding on the temporal structure."
}
] |
Frontiers in Big Data | null | PMC8905631 | 10.3389/fdata.2022.805713 | Balancing Gender Bias in Job Advertisements With Text-Level Bias Mitigation | Despite progress toward gender equality in the labor market over the past few decades, gender segregation in labor force composition and labor market outcomes persists. Evidence has shown that job advertisements may express gender preferences, which may selectively attract potential job candidates to apply for a given post and thus reinforce gendered labor force composition and outcomes. Removing gender-explicit words from job advertisements does not fully solve the problem as certain implicit traits are more closely associated with men, such as ambitiousness, while others are more closely associated with women, such as considerateness. However, it is not always possible to find neutral alternatives for these traits, making it hard to search for candidates with desired characteristics without entailing gender discrimination. Existing algorithms mainly focus on the detection of the presence of gender biases in job advertisements without providing a solution to how the text should be (re)worded. To address this problem, we propose an algorithm that evaluates gender bias in the input text and provides guidance on how the text should be debiased by offering alternative wording that is closely related to the original input. Our proposed method promises broad application in the human resources process, ranging from the development of job advertisements to algorithm-assisted screening of job applications. | 2. Related Works2.1. Gender Bias in Job AdvertisementGender inequality in the labor market is longstanding and well-documented. Although there has been a long-term increase in women's labor force participation over the past few decades, research shows persistent gender segregation across many occupations and industries. Women continue to be underrepresented in senior and managerial positions (Sohrab et al., 2012), are less likely to be promoted and are perceived as less committed to professional careers (Wallace, 2008) and as less suitable to perform tasks in the fields that have been historically male-dominated (Hatmaker, 2013). The hiring process is a significant social encounter, in which employers search for the most “suitable” candidate to fill the position (Kang et al., 2016; Rivera, 2020). Research demonstrates that “suitability” is often defined categorically, is not neutral to bias, and is gendered (McCall, 2005). The wording of job advertisements, in particular, may play a role in generating such gender inequality. For instance, Bem and Bem (1973) and Kuhn et al. (2020) show that job advertisements with explicitly gendered words discourage potential applicants of the opposite gender from applying, even when they are qualified to do so, which in turn reinforces the imbalance. More recent studies (Born and Taris, 2010; Askehave and Zethsen, 2014) have shown that words describing gendered traits and behaviors may also entail gendered responses from potential job applicants. Female students are substantially more attracted to advertisements that contain feminine traits than masculine traits (Born and Taris, 2010). Traits favored in leadership roles are predominately considered to be male-biased, correlating with the gender imbalance in top-management positions (Askehave and Zethsen, 2014). 
It has been shown that such bias co-exists with the salary gap where, on average, job posts that favor masculine traits offer higher salaries compared with job posts that favor feminine traits (Arceo-Gómez et al., 2020). Research also shows that using gender-neutral terms (e.g., police officer) or masculine/feminine pairs (e.g., policeman/policewoman) can help reduce the gender barrier and attract both male and female applicants (Bem and Bem, 1973; Horvath and Sczesny, 2016; Sczesny et al., 2016).2.2. Bias Evaluation at the Text LevelMany studies can be found that collect and identify masculine and feminine words as a measure of gendered wording (Bem and Bem, 1973; Bem, 1974; Gaucher et al., 2011). These word lists are consistent with previous research that examined gender differences in language use (Newman et al., 2008). Given the list of gender-coded words, text-level bias can be quantified by measuring the occurrences of each word in the list. Gaucher et al. (2011) calculated the percentage of masculine and feminine words in the text to produce two separate scores, for male and female biases, respectively, to reveal the fact that job advertisements in male-dominated industries and female-dominated industries exhibit different score pairs. Tang et al. (2017) present a slightly different approach where they assign weights to each gendered word by their level of gender implications that accumulate over the whole text, with the effects of masculine words and feminine words offsetting each other.Another technique of bias evaluation relies on the use of word embeddings. Using this technique, we can evaluate the level of bias owing to the fact that gender stereotype bias can be passed on from corpus to the embedding model through training (Bolukbasi et al., 2016). The Word Embedding Association Test (WEAT), proposed by Caliskan et al. (2017), is an analog to the Implicit Association Test (IAT) used in Psychology studies. The purpose of WEAT is to test and quantify that two groups of target words, e.g., male-dominated professions vs. female-dominated professions, are indeed biased toward two groups of attribute words, e.g., {he}, {she}. A similar strategy is developed in Garg et al. (2018) called Relative Norm Distance (RND) which tests a single group of target words against two groups of attribute words, though the idea is much the same as WEAT. The bias of each word is evaluated by computing the difference in norm distance between the word from a masculine word group and a feminine word group. This approach can be easily extended to the text level by averaging the bias score of each word in the text (Kwak et al., 2021) or taking the average of word vectors prior to bias evaluation. | [
"4823550",
"26215079",
"21058576",
"28408601",
"30886898",
"32229559",
"29615513",
"21381851",
"34395864",
"30886898",
"12185209",
"26869947"
] | [
{
"pmid": "26215079",
"title": "The Synthesis, Characterization, Crystal Structure and Photophysical Properties of a New Meso-BODIPY Substituted Phthalonitrile.",
"abstract": "A new highly fluorescent difluoroboradipyrromethene (BODIPY) dye (4) bearing an phthalonitrile group at meso-position of the chromophoric core has been synthesized starting from 4-(4-meso-dipyrromethene-phenoxy)phthalonitrile (3) which was prepared by the oxidation of 4-(2-meso-dipyrromethane-phenoxy)phthalonitrile (2). The structural, electronic and photophysical properties of the prepared dye molecule were investigated. The final product exhibit noticeable spectroscopic properties which were examined by its absorption and fluorescence spectra. The original compounds prepared in the reaction pathway were characterized by the combination of FT-IR, (1)H and (13)C NMR, UV-vis and MS spectral data. It has been calculated; molecular structure, vibrational frequencies, (1)H and (13)C NMR chemical shifts and HOMO and LUMO energies of the title compound by using B3LYP method with 6-311++G(dp) basis set, as well. The final product (4) was obtained as single crystal which crystallized in the triclinic space group P-1 with a = 9.0490 (8) Å, b = 10.5555 (9) Å, c = 11.7650 (9) Å, α = 77.024 (6)°, β = 74.437 (6)°, γ = 65.211 (6)° and Z = 2. The crystal structure has intermolecular C-H···F weak hydrogen bonds. The singlet oxygen generation ability of the dye (4) was also investigated in different solvents to determine of using in photodynamic therapy (PDT)."
},
{
"pmid": "21058576",
"title": "The impact of the wording of employment advertisements on students' inclination to apply for a job.",
"abstract": "Students' inclination to apply for a job was examined as a function of (1) the wording of the desired candidate's profile specified in the employment advertisement and (2) applicant gender. Previous research found that women are more inclined than men to apply for jobs that include a profile corresponding to their gender (i.e., a profile containing prototypically feminine instead of masculine personal characteristics). Based on Fiedler and Semin's (1996) Linguistic Category Model, we expected that this effect would decrease if the desired profile was worded in terms of behaviors/verbs instead of nouns/ adjectives. ANOVA supported this reasoning for women but not for men. We conclude that organizations may increase the number of women applying for particular jobs by changing the presentation form of the advertisement."
},
{
"pmid": "28408601",
"title": "Semantics derived automatically from language corpora contain human-like biases.",
"abstract": "Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology."
},
{
"pmid": "30886898",
"title": "Effects of Budesonide Combined with Noninvasive Ventilation on PCT, sTREM-1, Chest Lung Compliance, Humoral Immune Function and Quality of Life in Patients with AECOPD Complicated with Type II Respiratory Failure.",
"abstract": "OBJECTIVE\nOur objective is to explore the effects of budesonide combined with noninvasive ventilation on procalcitonin (PCT), soluble myeloid cell triggering receptor-1 (sTREM-1), thoracic and lung compliance, humoral immune function, and quality of life in patients with acute exacerbation of chronic obstructive pulmonary disease (AECOPD) complicated with type II respiratory failure.\n\n\nMETHODS\nThere were 82 patients with AECOPD complicated with type II respiratory failure admitted into our hospital between March, 2016-September, 2017. They were selected and randomly divided into observation group (n=41) and control group (n=41). The patients in the control group received noninvasive mechanical ventilation and the patients in the observation group received budesonide based on the control group. The treatment courses were both 10 days.\n\n\nRESULTS\nThe total effective rate in the observation group (90.25%) was higher than the control group (65.85%) (P<0.05). The scores of cough, expectoration, and dyspnea were decreased after treatment (Observation group: t=18.7498, 23.2195, 26.0043, control group: t=19.9456, 11.6261, 14.2881, P<0.05); the scores of cough, expectoration, and dyspnea in the observation group were lower than the control group after treatment (t=11.6205, 17.4139, 11.6484, P<0.05). PaO2 was increased and PaCO2 was decreased in both groups after treatment (Observation group: t=24.1385, 20.7360, control group: t=11.6606, 9.2268, P<0.05); PaO2 was higher and PaCO2 was lower in the observation group than the control group after treatment (t=10.3209, 12.0115, P<0.05). Serum PCT and sTREM-1 in both groups were decreased after treatment (Observation group: t=16.2174, 12.6698, control group: t=7.2283, 6.1634, P<0.05); serum PCT and sTREM-1 in the observation group were lower than the control group after treatment (t=10.1017, 7.8227, P<0.05). The thoracic and lung compliance in both groups were increased after treatment (Observation group: t=30.5359, 17.8471, control group: t=21.2426, 13.0007, P<0.05); the thoracic and lung compliance in the observation group were higher than the control group after treatment (t=10.8079, 5.9464, P<0.05). IgA and IgG in both groups were increased after treatment (Observation group: t=9.5794, 25.3274, control group: t=5.5000, 4.7943, P<0.05), however IgM was not statistically different after treatment (Observation group: t=0.7845, control group: t=0.1767, P>0.05); IgA and IgG in the observation group were higher than the control group (t=4.9190, 4.7943, P<0.05), however IgM was not statistically different between two groups after treatment (t=0.6168, P>0.05). COPD assessment test (CAT) scores were decreased in both groups after treatment (Observation group: t=20.6781, control group: t=9.0235, P<0.05); CAT score in the observation group was lower than the control group after treatment (t=12.9515, P<0.05). Forced expiratory volume in one second (FEV1%) and forced expiratory volume in one second/ forced expiratory volume in one second (FEV1/FVC) were increased in both groups after treatment (Observation group: t=15.3684, 15.9404, control group: t=10.6640, 12.8979, P<0.05); FEV1% and FEV1/FVC in the observation group were higher than the control group (t=6.9528, 7.3527,P<0.05). The rates of complication were not statistically different between two groups (P>0.05).\n\n\nCONCLUSION\nBudesonide combined with noninvasive mechanical ventilation has good curative effects in treating AECOPE patients complicated with type II respiratory failure. 
It can decrease serum PCT and sTREM-1, increase thoracic lung compliance, and improve the humoral immune function and life quality."
},
{
"pmid": "32229559",
"title": "Progress toward gender equality in the United States has slowed or stalled.",
"abstract": "We examine change in multiple indicators of gender inequality for the period of 1970 to 2018. The percentage of women (age 25 to 54) who are employed rose continuously until ∼2000 when it reached its highest point to date of 75%; it was slightly lower at 73% in 2018. Women have surpassed men in receipt of baccalaureate and doctoral degrees. The degree of segregation of fields of study declined dramatically in the 1970s and 1980s, but little since then. The desegregation of occupations continues but has slowed its pace. Examining the hourly pay of those aged 25 to 54 who are employed full-time, we found that the ratio of women's to men's pay increased from 0.61 to 0.83 between 1970 and 2018, rising especially fast in the 1980s, but much slower since 1990. In sum, there has been dramatic progress in movement toward gender equality, but, in recent decades, change has slowed and on some indicators stalled entirely."
},
{
"pmid": "29615513",
"title": "Word embeddings quantify 100 years of gender and ethnic stereotypes.",
"abstract": "Word embeddings are a powerful machine-learning framework that represents each English word by a vector. The geometric relationship between these vectors captures meaningful semantic relationships between the corresponding words. In this paper, we develop a framework to demonstrate how the temporal dynamics of the embedding helps to quantify changes in stereotypes and attitudes toward women and ethnic minorities in the 20th and 21st centuries in the United States. We integrate word embeddings trained on 100 y of text data with the US Census to show that changes in the embedding track closely with demographic and occupation shifts over time. The embedding captures societal shifts-e.g., the women's movement in the 1960s and Asian immigration into the United States-and also illuminates how specific adjectives and occupations became more closely associated with certain populations over time. Our framework for temporal analysis of word embedding opens up a fruitful intersection between machine learning and quantitative social science."
},
{
"pmid": "21381851",
"title": "Evidence that gendered wording in job advertisements exists and sustains gender inequality.",
"abstract": "Social dominance theory (Sidanius & Pratto, 1999) contends that institutional-level mechanisms exist that reinforce and perpetuate existing group-based inequalities, but very few such mechanisms have been empirically demonstrated. We propose that gendered wording (i.e., masculine- and feminine-themed words, such as those associated with gender stereotypes) may be a heretofore unacknowledged, institutional-level mechanism of inequality maintenance. Employing both archival and experimental analyses, the present research demonstrates that gendered wording commonly employed in job recruitment materials can maintain gender inequality in traditionally male-dominated occupations. Studies 1 and 2 demonstrated the existence of subtle but systematic wording differences within a randomly sampled set of job advertisements. Results indicated that job advertisements for male-dominated areas employed greater masculine wording (i.e., words associated with male stereotypes, such as leader, competitive, dominant) than advertisements within female-dominated areas. No difference in the presence of feminine wording (i.e., words associated with female stereotypes, such as support, understand, interpersonal) emerged across male- and female-dominated areas. Next, the consequences of highly masculine wording were tested across 3 experimental studies. When job advertisements were constructed to include more masculine than feminine wording, participants perceived more men within these occupations (Study 3), and importantly, women found these jobs less appealing (Studies 4 and 5). Results confirmed that perceptions of belongingness (but not perceived skills) mediated the effect of gendered wording on job appeal (Study 5). The function of gendered wording in maintaining traditional gender divisions, implications for gender parity, and theoretical models of inequality are discussed."
},
{
"pmid": "34395864",
"title": "FrameAxis: characterizing microframe bias and intensity with word embedding.",
"abstract": "Framing is a process of emphasizing a certain aspect of an issue over the others, nudging readers or listeners towards different positions on the issue even without making a biased argument. Here, we propose FrameAxis, a method for characterizing documents by identifying the most relevant semantic axes (\"microframes\") that are overrepresented in the text using word embedding. Our unsupervised approach can be readily applied to large datasets because it does not require manual annotations. It can also provide nuanced insights by considering a rich set of semantic axes. FrameAxis is designed to quantitatively tease out two important dimensions of how microframes are used in the text. Microframe bias captures how biased the text is on a certain microframe, and microframe intensity shows how prominently a certain microframe is used. Together, they offer a detailed characterization of the text. We demonstrate that microframes with the highest bias and intensity align well with sentiment, topic, and partisan spectrum by applying FrameAxis to multiple datasets from restaurant reviews to political news. The existing domain knowledge can be incorporated into FrameAxis by using custom microframes and by using FrameAxis as an iterative exploratory analysis instrument. Additionally, we propose methods for explaining the results of FrameAxis at the level of individual words and documents. Our method may accelerate scalable and sophisticated computational analyses of framing across disciplines."
},
{
"pmid": "30886898",
"title": "Effects of Budesonide Combined with Noninvasive Ventilation on PCT, sTREM-1, Chest Lung Compliance, Humoral Immune Function and Quality of Life in Patients with AECOPD Complicated with Type II Respiratory Failure.",
"abstract": "OBJECTIVE\nOur objective is to explore the effects of budesonide combined with noninvasive ventilation on procalcitonin (PCT), soluble myeloid cell triggering receptor-1 (sTREM-1), thoracic and lung compliance, humoral immune function, and quality of life in patients with acute exacerbation of chronic obstructive pulmonary disease (AECOPD) complicated with type II respiratory failure.\n\n\nMETHODS\nThere were 82 patients with AECOPD complicated with type II respiratory failure admitted into our hospital between March, 2016-September, 2017. They were selected and randomly divided into observation group (n=41) and control group (n=41). The patients in the control group received noninvasive mechanical ventilation and the patients in the observation group received budesonide based on the control group. The treatment courses were both 10 days.\n\n\nRESULTS\nThe total effective rate in the observation group (90.25%) was higher than the control group (65.85%) (P<0.05). The scores of cough, expectoration, and dyspnea were decreased after treatment (Observation group: t=18.7498, 23.2195, 26.0043, control group: t=19.9456, 11.6261, 14.2881, P<0.05); the scores of cough, expectoration, and dyspnea in the observation group were lower than the control group after treatment (t=11.6205, 17.4139, 11.6484, P<0.05). PaO2 was increased and PaCO2 was decreased in both groups after treatment (Observation group: t=24.1385, 20.7360, control group: t=11.6606, 9.2268, P<0.05); PaO2 was higher and PaCO2 was lower in the observation group than the control group after treatment (t=10.3209, 12.0115, P<0.05). Serum PCT and sTREM-1 in both groups were decreased after treatment (Observation group: t=16.2174, 12.6698, control group: t=7.2283, 6.1634, P<0.05); serum PCT and sTREM-1 in the observation group were lower than the control group after treatment (t=10.1017, 7.8227, P<0.05). The thoracic and lung compliance in both groups were increased after treatment (Observation group: t=30.5359, 17.8471, control group: t=21.2426, 13.0007, P<0.05); the thoracic and lung compliance in the observation group were higher than the control group after treatment (t=10.8079, 5.9464, P<0.05). IgA and IgG in both groups were increased after treatment (Observation group: t=9.5794, 25.3274, control group: t=5.5000, 4.7943, P<0.05), however IgM was not statistically different after treatment (Observation group: t=0.7845, control group: t=0.1767, P>0.05); IgA and IgG in the observation group were higher than the control group (t=4.9190, 4.7943, P<0.05), however IgM was not statistically different between two groups after treatment (t=0.6168, P>0.05). COPD assessment test (CAT) scores were decreased in both groups after treatment (Observation group: t=20.6781, control group: t=9.0235, P<0.05); CAT score in the observation group was lower than the control group after treatment (t=12.9515, P<0.05). Forced expiratory volume in one second (FEV1%) and forced expiratory volume in one second/ forced expiratory volume in one second (FEV1/FVC) were increased in both groups after treatment (Observation group: t=15.3684, 15.9404, control group: t=10.6640, 12.8979, P<0.05); FEV1% and FEV1/FVC in the observation group were higher than the control group (t=6.9528, 7.3527,P<0.05). The rates of complication were not statistically different between two groups (P>0.05).\n\n\nCONCLUSION\nBudesonide combined with noninvasive mechanical ventilation has good curative effects in treating AECOPE patients complicated with type II respiratory failure. 
It can decrease serum PCT and sTREM-1, increase thoracic lung compliance, and improve the humoral immune function and life quality."
},
{
"pmid": "12185209",
"title": "Psychological aspects of natural language. use: our words, our selves.",
"abstract": "The words people use in their daily lives can reveal important aspects of their social and psychological worlds. With advances in computer technology, text analysis allows researchers to reliably and quickly assess features of what people say as well as subtleties in their linguistic styles. Following a brief review of several text analysis programs, we summarize some of the evidence that links natural word use to personality, social and situational fluctuations, and psychological interventions. Of particular interest are findings that point to the psychological value of studying particles-parts of speech that include pronouns, articles, prepositions, conjunctives, and auxiliary verbs. Particles, which serve as the glue that holds nouns and regular verbs together, can serve as markers of emotional state, social identity, and cognitive styles."
},
{
"pmid": "26869947",
"title": "Can Gender-Fair Language Reduce Gender Stereotyping and Discrimination?",
"abstract": "Gender-fair language (GFL) aims at reducing gender stereotyping and discrimination. Two principle strategies have been employed to make languages gender-fair and to treat women and men symmetrically: neutralization and feminization. Neutralization is achieved, for example, by replacing male-masculine forms (policeman) with gender-unmarked forms (police officer), whereas feminization relies on the use of feminine forms to make female referents visible (i.e., the applicant… he or she instead of the applicant… he). By integrating research on (1) language structures, (2) language policies, and (3) individual language behavior, we provide a critical review of how GFL contributes to the reduction of gender stereotyping and discrimination. Our review provides a basis for future research and for scientifically based policy-making."
}
] |
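The related-work field of the record above describes two families of text-level gender-bias scores: word-list counting (the share of masculine vs. feminine words, as in Gaucher et al.) and embedding-based scoring (averaging per-word relative norm distances, as in Garg et al. and Kwak et al.). The following is a minimal Python sketch of both ideas, not the authors' implementation; the abbreviated word lists and the toy random embedding vectors are hypothetical placeholders standing in for the published gender-coded lists and for real pretrained embeddings.

```python
import numpy as np

# Hypothetical, abbreviated gender-coded word lists (placeholders, not the published lists).
MASCULINE = {"ambitious", "competitive", "dominant", "leader"}
FEMININE = {"considerate", "supportive", "understanding", "interpersonal"}

def wordlist_scores(text):
    """Gaucher-style scoring: fraction of masculine and of feminine words in the text."""
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    n = max(len(tokens), 1)
    masc = sum(t in MASCULINE for t in tokens) / n
    fem = sum(t in FEMININE for t in tokens) / n
    return masc, fem

def relative_norm_distance(word_vec, masc_vecs, fem_vecs):
    """RND-style word bias: difference of average norm distances to the two attribute groups.
    The sign convention here is arbitrary; only relative comparisons are meaningful."""
    d_masc = np.mean([np.linalg.norm(word_vec - v) for v in masc_vecs])
    d_fem = np.mean([np.linalg.norm(word_vec - v) for v in fem_vecs])
    return d_fem - d_masc

def text_bias(tokens, embeddings, masc_vecs, fem_vecs):
    """Text-level bias: average the per-word scores over tokens present in the embedding."""
    scores = [relative_norm_distance(embeddings[t], masc_vecs, fem_vecs)
              for t in tokens if t in embeddings]
    return float(np.mean(scores)) if scores else 0.0

if __name__ == "__main__":
    ad = "We seek an ambitious, competitive leader who is also supportive."
    print(wordlist_scores(ad))  # (0.3, 0.1): share of masculine vs. feminine words

    # Toy 3-dimensional "embeddings" standing in for real pretrained vectors.
    rng = np.random.default_rng(0)
    vocab = ["ambitious", "competitive", "leader", "supportive", "he", "she"]
    embeddings = {w: rng.normal(size=3) for w in vocab}
    masc_vecs = [embeddings["he"]]
    fem_vecs = [embeddings["she"]]
    tokens = [t.strip(".,").lower() for t in ad.split()]
    print(text_bias(tokens, embeddings, masc_vecs, fem_vecs))
```

In practice one would substitute the full gender-coded word lists and pretrained embeddings (e.g., word2vec or GloVe) for the placeholders; the word-list score and the embedding score capture explicit and implicit gendered wording, respectively.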
Scientific Reports | 35264592 | PMC8907242 | 10.1038/s41598-022-07296-z | Noise-assisted variational quantum thermalization | Preparing thermal states on a quantum computer can have a variety of applications, from simulating many-body quantum systems to training machine learning models. Variational circuits have been proposed for this task on near-term quantum computers, but several challenges remain, such as finding a scalable cost-function, avoiding the need of purification, and mitigating noise effects. We propose a new algorithm for thermal state preparation that tackles those three challenges by exploiting the noise of quantum circuits. We consider a variational architecture containing a depolarizing channel after each unitary layer, with the ability to directly control the level of noise. We derive a closed-form approximation for the free-energy of such circuit and use it as a cost function for our variational algorithm. By evaluating our method on a variety of Hamiltonians and system sizes, we find several systems for which the thermal state can be approximated with a high fidelity. However, we also show that the ability for our algorithm to learn the thermal state strongly depends on the temperature: while a high fidelity can be obtained for high and low temperatures, we identify a specific range for which the problem becomes more challenging. We hope that this first study on noise-assisted thermal state preparation will inspire future research on exploiting noise in variational algorithms. | Related workVariational circuits have recently been proposed for thermal state preparation, due to the existence of a natural cost function for this task: the free energy. Using variational circuits to prepare a thermal state presents two challenges specific to this task: (1) finding an ansatz that can prepare mixed states, (2) finding a scalable optimization strategy.Choice of the ansatzA common approach to VQT consists in preparing a purification of the thermal state using a variational circuit that acts on 2N qubits—N system qubits and N ancilla/environment qubits—, and tracing the ancilla qubits out at the end of the circuit15–18. An example of purification often considered in the literature is the thermofield double (TFD) state15,16. For a Hamiltonian H and an inverse temperature $\beta$, it is given by (2) $|\mathrm{TFD}\rangle = \frac{1}{\sqrt{Z}} \sum_n e^{-\beta E_n/2} \, |n\rangle_S \otimes |n\rangle_E$, where the $\{E_n, |n\rangle\}_n$ are pairs of eigenvalue/eigenvector of H, and the subscripts S and E refer to the system and environment, respectively. For instance, Refs.15,16 use a Quantum Approximate Optimization Ansatz (QAOA) ansatz acting on 2N qubits to prepare the TFD state of the transverse-field Ising model, the XY chain, and free fermions. One advantage of this approach is the ability to simulate the TFD, which can be interesting in its own right, for instance for studying black holes26. The obvious disadvantage is that it requires twice as many qubits as the thermal state we want to simulate.
A converse approach consists in starting with a mixed state $\rho_0$ and applying a unitary circuit ansatz on the N qubits of the system. The initial $\rho_0$ can either be fixed19 or modified during the optimization process20,21. In Ref.19, $\rho_0$ is the fixed thermal state of $H_I=\sum_{i=1}^N Z_i$, where $Z_i$ is the Pauli Z operator applied to qubit i of the system. It can easily be prepared using the purification (3) $\bigotimes_j \sqrt{2\cosh(\beta)} \sum_{b \in \{0,1\}^N} e^{(-1)^{1+b}\beta/2} |b\rangle_S |b\rangle_E$. However, since the spectrum does not change when we apply the unitary ansatz, having a static $\rho_0$ freezes the spectrum of the final state. Therefore, if the spectrum of the thermal state we want to approximate is far from the spectrum of $\rho_0$, this approach will fail. In Ref.20, they use the thermal state $\rho_0(\boldsymbol{\varepsilon})$ of $H=\sum_{i=1}^n \varepsilon_i P_i$, where $P_i=\frac{1-Z_i}{2}$, as an initial state, and $\boldsymbol{\varepsilon}=\{\varepsilon_1,\ldots,\varepsilon_n\}$ are parameters optimized during the training process. Finally, Ref.21 proposes to use a unitary with stochastic parameters to prepare $\rho_0$. More precisely, (4) $\rho_0(\boldsymbol{\theta})=V(X_{\boldsymbol{\theta}})|0\rangle\langle 0|V(X_{\boldsymbol{\theta}})^\dagger$, where $V(\boldsymbol{x})$ is a unitary ansatz and $X_{\boldsymbol{\theta}} \sim p_{\boldsymbol{\theta}}$ is a random vector with parametrized density $p_{\boldsymbol{\theta}}$. The density $p_{\boldsymbol{\theta}}$ can be given by a classical model, such as an energy-based model (e.g. a restricted Boltzmann machine) or a normalizing flow, which will be trained to get a $\rho_0$ with a spectrum close to the thermal state of interest.
Optimization strategiesOnce the ansatz has been fixed, the parameters within need to be optimized. Two main approaches have been proposed in the literature: (1) explicitly minimizing the free energy, (2) using imaginary-time evolution. In the following, we describe both these methods.
Free energy methodsThe thermal state is the density matrix that minimizes the free energy. Therefore, in the same way as VQE uses the energy as a cost function, any thermal state preparation method can use the free energy as its cost function15,16,19,21. However, one main difference with VQE is that the free energy cannot be easily estimated. Indeed, the Von Neumann entropy term, as a non-linear function of $\rho$, cannot be turned into an observable, and doing a full quantum state tomography would be very costly. Several methods have been proposed to solve this challenge: computing several Renyi entropies $S_{\alpha}=\frac{1}{1-\alpha}\mathrm{Tr}\left[\rho^{\alpha}\right]$ (using multiple copies of $\rho$) and approximating the Von Neumann entropy with them15,27; computing the Von Neumann entropies locally on a small subsystem15; or approximating the Von Neumann entropy by truncating its Taylor18 or Fourier22 decomposition. In our work, the entropy term does not come from a purification procedure, but from the presence of depolarizing gates in the circuits. This led us to propose a different type of approximation that we will study in “Noise-assisted variational quantum thermalization”.
Imaginary-time evolutionThermal state preparation can be seen as the application of imaginary-time evolution during a time $\Delta t=i\beta/2$ on the maximally-mixed state $\rho_m=\frac{1}{d}\mathrm{I}$, using the decomposition $\rho_{\beta}=\left(\frac{1}{C} e^{-\beta H/2}\right) \left(\frac{1}{d}\mathrm{I}\right) \left(\frac{1}{C} e^{-\beta H/2}\right)$. This imaginary-time evolution can be simulated using a variational circuit and a specific update rule28,29. In Ref.17, the authors use a variational circuit $U(\boldsymbol{\theta})$ on 2N qubits, initialized such that $U(\boldsymbol{\theta}_0)|0\rangle^{\otimes 2N} \approx |\Phi^+\rangle$, where $\Phi^+$ is a maximally-entangled state. An imaginary-time update rule with a small learning rate $\tau$ will lead to a unitary $U(\boldsymbol{\theta}_1)$ such that $U(\boldsymbol{\theta}_1)|0\rangle^{\otimes 2N} \approx \frac{1}{C} e^{-\tau H}|\Phi^+\rangle$. Repeating it during $k=\frac{\beta}{2}$ steps will give the state $U(\boldsymbol{\theta}_k)|0\rangle^{\otimes 2N} \approx \frac{1}{C} e^{-\beta H/2}|\Phi^+\rangle$, which will be the thermal state after tracing out the environment. In Ref.30, the authors also use imaginary-time evolution to prepare the thermal state, but manage to reduce the number of qubits to N when the Hamiltonian is diagonal in the Z-basis. Finally, an ansatz-independent imaginary-time evolution method has been proposed for thermal state preparation31,32.In this work, we optimize the ansatz parameters using the free energy approach. Adapting imaginary-time evolution to a noisy ansatz could however be an interesting alternative, which we leave for future work. | [
"31469277",
"31868415",
"29219599",
"33328669"
] | [
{
"pmid": "31469277",
"title": "Quantum Chemistry in the Age of Quantum Computing.",
"abstract": "Practical challenges in simulating quantum systems on classical computers have been widely recognized in the quantum physics and quantum chemistry communities over the past century. Although many approximation methods have been introduced, the complexity of quantum mechanics remains hard to appease. The advent of quantum computation brings new pathways to navigate this challenging and complex landscape. By manipulating quantum states of matter and taking advantage of their unique features such as superposition and entanglement, quantum computers promise to efficiently deliver accurate results for many important problems in quantum chemistry, such as the electronic structure of molecules. In the past two decades, significant advances have been made in developing algorithms and physical hardware for quantum computing, heralding a revolution in simulation of quantum systems. This Review provides an overview of the algorithms and results that are relevant for quantum chemistry. The intended audience is both quantum chemists who seek to learn more about quantum computing and quantum computing researchers who would like to explore applications in quantum chemistry."
},
{
"pmid": "31868415",
"title": "Variational Thermal Quantum Simulation via Thermofield Double States.",
"abstract": "We present a variational approach for quantum simulators to realize finite temperature Gibbs states by preparing thermofield double (TFD) states. Our protocol is motivated by the quantum approximate optimization algorithm and involves alternating time evolution between the Hamiltonian of interest and interactions which entangle the system and its auxiliary counterpart. As a simple example, we demonstrate that thermal states of the 1D classical Ising model at any temperature can be prepared with perfect fidelity using L/2 iterations, where L is system size. We also show that a free fermion TFD can be prepared with nearly optimal efficiency. Given the simplicity and efficiency of the protocol, our approach enables near-term quantum platforms to access finite temperature phenomena via preparation of thermofield double states."
},
{
"pmid": "29219599",
"title": "Error Mitigation for Short-Depth Quantum Circuits.",
"abstract": "Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits for which these techniques can be applied is limited by the rate at which the errors in the computation are introduced. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so to be as practically relevant in current experiments as possible. The first method, extrapolation to the zero noise limit, subsequently cancels powers of the noise perturbations by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution."
},
{
"pmid": "33328669",
"title": "Spin transport in a tunable Heisenberg model realized with ultracold atoms.",
"abstract": "Simple models of interacting spins have an important role in physics. They capture the properties of many magnetic materials, but also extend to other systems, such as bosons and fermions in a lattice, gauge theories, high-temperature superconductors, quantum spin liquids, and systems with exotic particles such as anyons and Majorana fermions1,2. To study and compare these models, a versatile platform is needed. Realizing such systems has been a long-standing goal in the field of ultracold atoms. So far, spin transport has only been studied in systems with isotropic spin-spin interactions3-12. Here we realize the Heisenberg model describing spins on a lattice, with fully adjustable anisotropy of the nearest-neighbour spin-spin couplings (called the XXZ model). In this model we study spin transport far from equilibrium after quantum quenches from imprinted spin-helix patterns. When spins are coupled only along two of three possible orientations (the XX model), we find ballistic behaviour of spin dynamics, whereas for isotropic interactions (the XXX model), we find diffusive behaviour. More generally, for positive anisotropies, the dynamics ranges from anomalous superdiffusion to subdiffusion, whereas for negative anisotropies, we observe a crossover in the time domain from ballistic to diffusive transport. This behaviour is in contrast with expectations from the linear-response regime and raises new questions in understanding quantum many-body dynamics far away from equilibrium."
}
] |
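The related-work text of the record above repeatedly relies on the fact that the Gibbs state is the density matrix minimizing the free energy, which is why the free energy can serve as a variational cost function. The short NumPy/SciPy sketch below illustrates this on an arbitrary two-qubit Hamiltonian (chosen here purely for illustration, not taken from the paper), comparing the free energy of the exact thermal state with that of the maximally mixed state and the ground-state projector.

```python
import numpy as np
from scipy.linalg import expm

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho ln rho], computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

def free_energy(rho, H, beta):
    """F(rho) = Tr[rho H] - S(rho)/beta, the cost function discussed above."""
    energy = float(np.real(np.trace(rho @ H)))
    return energy - von_neumann_entropy(rho) / beta

def thermal_state(H, beta):
    """Exact Gibbs state rho_beta = exp(-beta H) / Z."""
    rho = expm(-beta * H)
    return rho / np.trace(rho)

if __name__ == "__main__":
    # Arbitrary two-qubit Hamiltonian H = Z1 Z2 + 0.5 (X1 + X2); an assumption for illustration only.
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Z = np.diag([1.0, -1.0])
    I2 = np.eye(2)
    H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))

    beta = 1.0
    rho_thermal = thermal_state(H, beta)
    rho_mixed = np.eye(4) / 4                  # maximally mixed state
    w, v = np.linalg.eigh(H)
    rho_ground = np.outer(v[:, 0], v[:, 0])    # ground-state projector (H is real symmetric)

    # The Gibbs state should attain the lowest free energy of the three.
    for name, rho in [("thermal", rho_thermal), ("maximally mixed", rho_mixed), ("ground state", rho_ground)]:
        print(f"{name:>16s}: F = {free_energy(rho, H, beta):.4f}")
```

Running it prints the lowest free energy for the thermal state, which is exactly the property that the free-energy-based variational methods cited in the related-work text (and the paper's own noise-assisted cost function) exploit.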
Scientific Reports | 35264664 | PMC8907279 | 10.1038/s41598-022-07894-x | Digital twin-driven variant design of a 3C electronic product assembly line | Large-scale personalization is becoming a reality. To ensure market competitiveness and economic benefits, enterprises require rapid response capability and flexible manufacturing operations. However, variant design and production line reconfiguration are complicated because it involves the commissioning, replacement, and adaptive integration of equipment and remodification of control systems. Herein, a digital twin-driven production line variant design is presented. As a new technology, the digital twin can realize the parallel control from the physical world to the digital world and accelerate the design process of the production line through a virtual–real linkage. Simultaneously, the actual production line can be simulated to verify the rationality of the design scheme and avoid cost wastage. Four key technologies are described in detail, and a production line variant design platform based on digital twin is built to support rapid production line variant design. Finally, experiments using a smartphone assembly line as an example are performed; the results demonstrate that the proposed method can realize production line variant design and increase production efficiency. | Related worksVariant design refers to the extraction of an existing design scheme or design plan based on specific modifications to develop a product with a design similar to that of the original. Generally, it does not destroy the basic principles and basic structural characteristics of the original product; instead, it is a type of fusion-based parameter change or partial structural adjustment performed to realize fast, high-quality, and low-cost design6–8. The variant design of products drives functional change through structural change in an agile manner. The change achieved via the variant design begins from the user domain and spreads to the structural domain, functional domain, and use process domain of products. Product family planning, modular parametric variant design9, knowledge-based variant design10, variant design based on product assembly model11, case-based variant design12, and other design methods have emerged in combination with product assembly structure and characteristic product parameters. The key to product variant design is to establish a multidomain and multi-use model of product variant structure and define the evolution process13. Yang et al.14 proposed a rapid modeling method for product skeleton and a parametric design method for products and established an assembly model and product skeleton model template of a series of products based on the similarity principle. Wang et al.15 also studied the application of CBD technology in product variant design and established a basic analytical model of variant design. Through practical application, the deformation design principle and method can reduce design intensity and shorten design time. Compared with the change caused by product variant design, the change caused by the production line is more diversified. In particular, the alteration of users and products triggers a change in the production line structure configuration, equipment action, work-in-process (WIP) movement, control network, manufacturing execution system, and execution engine. 
Liu et al.2 proposed a digital twin-driven rapid design method for automated flow shop design and developed a double-layer iterative coordination mechanism to achieve optimal design performance of functions required by automated flow shop systems. Further, for the design of a process-based intelligent manufacturing system, the CMCO (configuration design-motion planning-control-development-optimization-decoupling) design architecture was proposed, the iterative logic of the CMCO design model was elaborated, and the prototype manufacturing system design platform based on digital twin brother was developed16. Variant design mainly focuses on the variant design of products. The key to the variant design of products is to establish a multidomain and multi-use model of product variation structure and define the evolution process. Compared with the change effect of product variant design, the change effect of the production line is more diversified, including the change from users and products, the change of production line structure configuration, equipment action, WIP movement, control network, and manufacturing execution system. The variant design of production lines is usually studied around a specific topic, such as production line balance17–19, equipment configuration optimization20–22, and layout design23,24. However, the design of the production line must be characterized, reasoned, and decided through the global design idea25. In practice, the variant design of the production line is complex and high-dimensional, not only including configuration design at the static level but also logic design at the dynamic level. Therefore, describing the system from all directions and dimensions and establishing relationships between design dimensions is necessary to achieve a fast, agile and accurate design.Digital twin technology is increasingly being applied in various fields. Many scholars have performed in-depth research and practice on this technology and have reported extensive research results5,26,27. It has powerful simulation capability and can support design tasks or validate system attributes28. Digital twin technology can also effectively realize the fusion and management of multisource heterogeneous dynamic data throughout an entire product lifecycle along with the optimization and decision-making of various product development and production activities. Tao et al.29 proposed a product design framework based on digital twin and demonstrated the effectiveness of the proposed method through an example. Digital twin describes system operation from the virtual and real perspectives and can provide robust support to the customized design, rapid reconstruction, distributed integration, transparent monitoring, virtual operation, and maintenance of workshop production lines. It also opens up a new route for manufacturing system modeling and simulation analysis based on trial-and-error, reducible, near-physical, high-fidelity, and virtual–real synchronous verification and highly interactive iterative optimization. Guo et al.30 proposed a modular method to help construct a digital twin considering the frequent changes in the design stage of a factory. Designers can then evaluate design schemes based on the digital twin and quickly identify design defects, thus saving time. Leng et al.31 proposed a new digital twin-driven joint optimization method for the design and optimization of large automated high-rise warehouses. 
Periodic optimal decisions can be obtained by establishing a joint optimization model and then fed back to the semi-physical simulation engine in the digital twin system to verify the results. Yan et al.32 also proposed a rapid custom design method for a novel furniture panel production line based on a digital twin. This method has the characteristics of interactive virtual reality mapping and fusion. It can provide design guidance and decision support services at the design stage, generate engineering analysis to solve coupling problems, and ultimately generate authoritative design solutions for manufacturing systems. Yi et al.33 also proposed a digital twin reference model for intelligent assembly process design together with a three-layer intelligent assembly application framework based on a digital twin. The working mechanism of assembly process planning, simulation, prediction, and control management in the virtual space layer was discussed in detail. Leng et al.4 proposed a digital twin manufacturing network physical system for the parallel control of intelligent shops in a large-scale personalized mode. Through the physical connection of a distributed digital twin model, all types of manufacturing resources can be integrated to form dynamic autonomous systems and jointly create personalized products.Many studies have shown that digital twin technologies have the characteristics of a design task and verification system. Herein, considering a mobile phone assembly line as an example, a production line variant design method driven by the digital twin technology is proposed. The modeling and simulation capabilities of the digital twin enable rapid variant design and solution validation of the production line, reducing variant design time and research costs. | [] | [] |
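To make the simulation-based verification step described in the record above concrete, the following is a minimal, hypothetical sketch using the SimPy discrete-event simulation library to estimate the throughput of one candidate smartphone assembly line configuration. The station names, cycle times, and arrival interval are invented for illustration and are not taken from the paper or its digital twin platform.

```python
import random
import simpy

# hypothetical stations and nominal cycle times (minutes) of one candidate line design
STATION_CYCLE_MIN = {"screen_mount": 1.0, "board_assembly": 1.4, "final_test": 1.1}

def phone(env, machines, done):
    """One work-in-process phone flowing through the stations in sequence."""
    for (station, cycle), machine in zip(STATION_CYCLE_MIN.items(), machines):
        with machine.request() as req:
            yield req                                             # wait for the station
            yield env.timeout(random.uniform(0.8, 1.2) * cycle)   # processing time
    done.append(env.now)                                          # record completion time

def source(env, machines, done, interarrival=1.2):
    """Release a new phone onto the line at a fixed interval."""
    while True:
        env.process(phone(env, machines, done))
        yield env.timeout(interarrival)

random.seed(1)
env = simpy.Environment()
machines = [simpy.Resource(env, capacity=1) for _ in STATION_CYCLE_MIN]
done = []
env.process(source(env, machines, done))
env.run(until=480)  # simulate one 8-hour shift
print(f"completed phones in one shift: {len(done)}")
```

In a digital twin-driven design loop, a candidate configuration like this would be parameterized from the twin model, and the simulated throughput would be fed back to accept or revise the variant design before committing to physical reconfiguration.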
Scientific Reports | 35264595 | PMC8907310 | 10.1038/s41598-022-07125-3 | An improved Lagrangian relaxation algorithm based SDN framework for industrial internet hybrid service flow scheduling | The Industrial Internet is the key to Industry 4.0, and network control in the industrial internet usually requires high reliability and low latency. The industrial internet ubiquitously connects all relevant Internet of Things (IoT) sensing and actuating devices, allowing for monitoring and control of multiple industrial systems. Unfortunately, guaranteeing very low end-to-end wait times is particularly challenging because the transmissions must be articulated in time. In the industrial internet, multiple streams usually coexist. The amount of data in control business flows is small, while other business flows (e.g., interactive business flows, sensing business flows) typically transmit large amounts of data across the network. These data flows are mainly processed in traditional switches using a queue-based "store-and-forward" mode for data exchange, consuming much bandwidth and filling up the network buffers. This causes delays in the control flow. In our research, we propose a Software Defined Networking (SDN) framework to reduce such delays and ensure real-time delivery of mixed service flows. The scheduling policy is enforced through the northbound Application Programming Interface (API) of the SDN controller so that dynamic network topologies can be accommodated. We use the concept of edge and intermediate switches, where each switch port sends data at a specific time to avoid queuing at intermediate switches. We also introduce an improved Lagrangian relaxation algorithm to select the best path to ensure low latency. Finally, the path rules are deployed to the switches' flow tables through the SDN controller. Our mathematical analysis and simulation evaluation demonstrate that the proposed scheme is more efficient. | Related work. This paper focuses only on the low-latency problem of scheduling mixed service flows in the industrial Internet. Many scholars have investigated the scheduling of flows. Berclaz et al. proposed the well-known K Shortest Paths (KSP) algorithm3, which uses the traditional Shortest Path (SP) algorithm to find the shortest path and then finds the subsequent shortest paths based on the initial shortest path. KSP is a static scheduling algorithm; although it performs well in sparse network topologies, the number of traversed paths needs to be increased in dense topologies to achieve optimality. The traditional optimal shortest path algorithm is often unusable due to its high computational effort and is unsuitable for real-time operation. A novel stream scheduling generation model was proposed by Zaiyu Pang et al.4. The model is designed with two algorithms to suit different application scenarios: the offline algorithm has better schedulability, while the online algorithm consumes less time and has slightly reduced schedulability. Many heuristic search strategies have been proposed to improve the computational efficiency of shortest path search. Misra et al.5 proposed a polynomial-time heuristic algorithm, but it only considered the maximum throughput under the bandwidth constraint and ignored the delay and packet loss rate. Xue et al.6 proposed a heuristic algorithm based on Particle Swarm Optimization (PSO) to solve the scheduling problem.
Nevertheless, the results obtained are not entirely accurate due to the rapid convergence rate, which can lead to a locally optimal solution. Moreover, the results do not reflect the actual performance of PSO. For large-scale scheduling, Pozo et al.7 proposed a segmentation method that decomposes the scheduling problem into smaller problems that can be solved independently. It reduces the running time of the scheduler while maintaining the quality of the schedule and allows larger networks to be scheduled in a shorter period. The work in8 relaxes the scheduling rules, divides the Satisfiability Modulo Theories (SMT) problem into multiple Optimization Modulo Theories (OMT) problems, and reduces the solver runtime to an acceptable level to eliminate scheduling conflicts. Scheduling is an NP-hard problem. To solve it, some researchers used the traditional Lagrange relaxation algorithm9 to transform the relaxation function into a new heuristic function. After analyzing and discussing Lagrange multipliers, we propose multipliers different from those of the traditional Lagrange relaxation algorithm in this paper. To guarantee low latency in the Industrial Internet, the network is traditionally organized as a dedicated Fieldbus network. For this purpose, several real-time Ethernet networks have been proposed, such as ProfiNET10 and TT-Ethernet11. GengLiang et al.12 used a fieldbus scheduling table to construct an algorithm that minimizes communication jitter. The propagation delay, processing delay, and transmission delay in the network can be assumed to be deterministic13. Therefore, Niklas Reusch et al.14 proposed a novel and more flexible window-based scheduling algorithm using a time-aware shaper (TAS) that guarantees bounded delays by integrating worst-case delay analysis. These network architectures limit the non-deterministic queuing delay. For this purpose, Nayak et al.15 used a Software Defined Network (SDN) architecture and received a series of predefined time-triggered flows. They first proposed an Integer Linear Programming (ILP) formulation to address the problem of combined routing and scheduling of time-triggered flows. Subsequently, the same authors proposed an incremental flow scheduling and routing algorithm in their paper16. This algorithm dynamically adds or removes flows by scheduling them one at a time. The authors propose the concept of a basic period, which is then divided into multiple time slots, each of which is large enough to traverse the entire network with Maximum Transmission Unit (MTU)-sized packets. This results in too large a time slot being reserved for each stream and thus a large amount of wasted bandwidth. Moreover, when the network topology is large, time-triggered flows traversing the entire network introduce additional delays. Traditional switches mainly use a queue-based "store-and-forward" mode for data exchange processing, where mixed service flows are scheduled from the switch's "input queue" to different forwarding queues according to priority and then forwarded to the output queue. Inspired by Nayak's scheduling of time slots for flows to avoid network queuing, we propose an SDN network framework to minimize the possibility of queuing. As shown in Fig. 2, the edge switch collects information from end devices and sends it to the SDN controller, which then schedules time slots according to the time-sensitivity of different flows.
In this way, the framework can segregate different types of traffic in time and space, thus avoiding queuing problems at the edge switches. Our work is also inspired by Fan Wang et al.17, who proposed TracForetime-LSH to perform short-term traffic flow prediction using traffic data detected from sensors. We design a low-latency routing algorithm for messaging between intermediate switch links and also for port data flow prediction, to minimize queuing in the intermediate switches. This algorithm effectively eliminates packet loss due to queue overflow while maximizing the number of flows in the network and reducing the end-to-end delay of each flow. [Figure 2: The architecture of industrial SDN.] | [
"21282851"
] | [
{
"pmid": "21282851",
"title": "Multiple Object Tracking Using K-Shortest Paths Optimization.",
"abstract": "Multi-object tracking can be achieved by detecting objects in individual frames and then linking detections across frames. Such an approach can be made very robust to the occasional detection failure: If an object is not detected in a frame but is in previous and following ones, a correct trajectory will nevertheless be produced. By contrast, a false-positive detection in a few frames will be ignored. However, when dealing with a multiple target problem, the linking step results in a difficult optimization problem in the space of all possible families of trajectories. This is usually dealt with by sampling or greedy search based on variants of Dynamic Programming which can easily miss the global optimum. In this paper, we show that reformulating that step as a constrained flow optimization results in a convex problem. We take advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast. This new approach is far simpler formally and algorithmically than existing techniques and lets us demonstrate excellent performance in two very different contexts."
}
] |
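The Lagrangian-relaxation idea for delay-constrained routing discussed in the related work of the record above can be illustrated with a minimal sketch: the delay constraint is folded into the path cost with a multiplier that is updated by a simple subgradient rule. This only illustrates the classical relaxation, not the improved multiplier scheme the paper proposes; the toy topology, costs, and delays below are invented.

```python
import networkx as nx

def delay_constrained_path(G, src, dst, delay_bound, iters=50, step=0.1):
    """Minimal Lagrangian-relaxation sketch for delay-constrained least-cost routing.

    Edge attributes: 'cost' (to minimize) and 'delay' (bounded by delay_bound).
    The delay constraint is relaxed into the objective with multiplier lam,
    updated by a subgradient rule. Illustration only.
    """
    lam, best = 0.0, None
    for _ in range(iters):
        # shortest path under the relaxed weight: cost + lam * delay
        path = nx.dijkstra_path(
            G, src, dst, weight=lambda u, v, d: d["cost"] + lam * d["delay"]
        )
        edges = list(zip(path, path[1:]))
        cost = sum(G[u][v]["cost"] for u, v in edges)
        delay = sum(G[u][v]["delay"] for u, v in edges)
        if delay <= delay_bound and (best is None or cost < best[0]):
            best = (cost, path)  # cheapest feasible path found so far
        # subgradient update: raise lam when the delay bound is violated
        lam = max(0.0, lam + step * (delay - delay_bound))
    return best

# toy topology: a cheap-but-slow route and a costlier-but-fast route from s to t
G = nx.Graph()
G.add_edge("s", "a", cost=1, delay=10)
G.add_edge("a", "t", cost=1, delay=10)
G.add_edge("s", "b", cost=3, delay=2)
G.add_edge("b", "t", cost=3, delay=2)
print(delay_constrained_path(G, "s", "t", delay_bound=8))
```

Each iteration runs only one Dijkstra search, which is part of why relaxation-based schemes are attractive when path rules must be recomputed quickly and pushed to switch flow tables by an SDN controller.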
Frontiers in Psychology | null | PMC8907480 | 10.3389/fpsyg.2022.820813 | What Does Twitter Say About Self-Regulated Learning? Mapping Tweets From 2011 to 2021 | Social network services such as Twitter are important venues that can be used as rich data sources to mine public opinions about various topics. In this study, we used Twitter to collect data on one of the fastest-growing theories in education, namely Self-Regulated Learning (SRL), and carried out further analysis to investigate what Twitter says about SRL. This work uses three main analysis methods: descriptive analysis, topic modeling, and geocoding analysis. The collected dataset consists of a large volume of relevant SRL tweets, 54,070 in total, posted between 2011 and 2021. The descriptive analysis uncovers a growing discussion of SRL on Twitter from 2011 to 2018, which then markedly decreased until the day of collection. For topic modeling, the text mining technique of Latent Dirichlet Allocation (LDA) was applied and revealed insights on computationally processed topics. Finally, the geocoding analysis uncovers a diverse community from all over the world, yet a higher-density representation of users from the Global North was identified. Further implications are discussed in the paper. | Related Work. Self-regulated learning is a skill of self-directed thought, planning, and action that has been identified as one of the critical factors affecting student success in learning processes (Zimmerman, 1990; Winne, 2021; Yusufu and Shakir, 2021). While there are various models of SRL, most models agree that SRL is cyclical and clustered into three phases, namely forethought, performance, and reflection. One of the grounds for the current interest in SRL is the growth of digital, online, and virtual courses in the context of formal and informal learning environments (Lim et al., 2021). This is because students need the skills to "actively make decisions on the metacognitive and cognitive strategies they deploy to monitor and control their learning to achieve their goals" (Lim et al., 2021, p. 2). SRL strategies such as goal setting, time management, and help-seeking are useful and common practices used to explore and investigate SRL processes (Yusufu and Shakir, 2021). Encouraging online collaborative activities through social media platforms to seek help from colleagues was identified as relevant and essential for SRL (Yen et al., 2021). Yen et al. (2021) also found that blogging on social media effectively engages students in self-evaluation and self-reflection, which, as mentioned earlier, are fundamental parts of the SRL phases. With that in mind, social media may encompass important discussions on the theoretical and practical approaches for better self-regulation. Recently, there has been a growing number of Twitter-related research works. Some studies have leveraged Twitter's huge collection of microblog contextual data to address interesting research questions. For example, Chen et al. (2015) analyzed four years of tweets from the official Learning Analytics and Knowledge conference to gain insights into the community. The analysis revealed that Twitter was helpful for identifying trends in learning analytics as well as major personal experiences. Chen et al.
(2015) were able to characterize an escalating trend of student-centered topics on engagement and assessment, as well as cluster tweets into topics using topic modeling to show the diversity of the field of learning analytics. The conversational nature of Twitter has been identified as useful for detecting user networks to discover scientific knowledge across different communities. The study by Díaz-Faes et al. (2019) provided novel evidence on Twitter studies to break new ground for systematic analysis around science. Díaz-Faes et al. (2019) analyzed over 1.3 million unique users' data and 14 million tweets on scientific publications to outline the general activities of Twitter communities and their interactions with scientific outputs based on social media metrics. Major findings of their study revealed significant disciplinary differences in how researchers behave in the social media realm, as well as the development of researchers' scholarly identities. Another example is the study by Garcia and Berton (2021), who used sentiment analysis to explore a large number of COVID-19-related tweets in Brazil and the United States. The researchers identified a general dominance of negative emotions during the COVID-19 pandemic for almost all topics in the United States and Brazil. A key contribution of the Garcia and Berton (2021) study was enriching the Portuguese-language lexicon with keywords related to positive and negative emotions, as well as filling a gap in the literature with new sentiment content for the development of new techniques for processing languages other than English. Perhaps the most popular analysis methods for Twitter in the literature are content analysis and topic modeling (Giachanou and Crestani, 2016). The latter method has been widely used to identify topics from complex yet short textual data. One interesting example of how topic modeling has been used with a large database of tweets is the study by Dahal et al. (2019). The researchers were able to infer different topics of discussion on the issue of climate change and how it is perceived by the general public. Dahal et al. (2019) found that discussions of climate change in the United States are less focused on policy topics than those in European countries. Other examples from the literature used topic modeling to examine themes discussed on Twitter about the COVID-19 pandemic (e.g., Boon-Itt and Skunkan, 2020; Wicke and Bolognesi, 2020). Topic modeling also helped Saura et al. (2021) divide a large corpus of nearly 900,000 tweets on security issues in smart living environments. This study identified 10 topics related to privacy and security breaches in smart living environments such as the Internet of Things. One significant use of topic modeling on Twitter microblogs is identifying key concerns raised by users. For example, Saura et al. (2021) determined that malware, data cloud storage, and cyber-attacks are among the major issues Twitter users reported and that require further attention from manufacturers. With respect to content analysis, Twitter offers various possibilities, for example, hashtag analysis. Hashtags enable users to identify other users based on their interest in parallel topics (Kimmons et al., 2017). As such, hashtags support sharing information in an organized manner, in which resources are curated based on shared interests. Another research study by Kimmons et al.
(2021) examined tweets incorporating the #EdTech hashtag and found that discussions of educational technology have been changing with the present pandemic. COVID-19 appears to have triggered the emerging usage of new terms in educational technology such as "remote learning." The study by Kimmons et al. (2021) also stated that trends in educational technology (i.e., EdTech) had been largely influenced by a small group of active Twitter users during the time of the pandemic. A less popular but interesting analysis method is location analysis based on microblogs (i.e., geocoding). Social media data can be used in geographical research to identify trends, explain patterns, and describe various geographical phenomena (Goldberg, 2008). On Twitter, researchers have used geolocation analysis to map the felt area of earthquakes by examining the tweets generated after a particular time (Earle et al., 2010). Others used geolocation to identify accidents reported on Twitter in large cities (Milusheva et al., 2021). In general, we learned from the literature that exploring the public discussions surrounding the SRL theory using Twitter analysis methods could offer useful information and present alternative perspectives on the theory. Given that, the current study aims to gain a broader understanding of how the SRL theory is discussed in the public affinity space and how it has been argued over the last 10 years. To achieve this goal, our analyses will attempt to answer the following research questions: • What are the general characteristics of the Twitter conversation on SRL? • What are the main topics of interest related to SRL in the Twitter public discussion? • Where do English-based SRL discussions originate from? | [
"33108310",
"27732598",
"31116783",
"31182117",
"33673545",
"33519326",
"30725308",
"33644781",
"34803832",
"33271776",
"33534801",
"24372742",
"29342910",
"29943415",
"32997720"
] | [
{
"pmid": "33108310",
"title": "Public Perception of the COVID-19 Pandemic on Twitter: Sentiment Analysis and Topic Modeling Study.",
"abstract": "BACKGROUND\nCOVID-19 is a scientifically and medically novel disease that is not fully understood because it has yet to be consistently and deeply studied. Among the gaps in research on the COVID-19 outbreak, there is a lack of sufficient infoveillance data.\n\n\nOBJECTIVE\nThe aim of this study was to increase understanding of public awareness of COVID-19 pandemic trends and uncover meaningful themes of concern posted by Twitter users in the English language during the pandemic.\n\n\nMETHODS\nData mining was conducted on Twitter to collect a total of 107,990 tweets related to COVID-19 between December 13 and March 9, 2020. The analyses included frequency of keywords, sentiment analysis, and topic modeling to identify and explore discussion topics over time. A natural language processing approach and the latent Dirichlet allocation algorithm were used to identify the most common tweet topics as well as to categorize clusters and identify themes based on the keyword analysis.\n\n\nRESULTS\nThe results indicate three main aspects of public awareness and concern regarding the COVID-19 pandemic. First, the trend of the spread and symptoms of COVID-19 can be divided into three stages. Second, the results of the sentiment analysis showed that people have a negative outlook toward COVID-19. Third, based on topic modeling, the themes relating to COVID-19 and the outbreak were divided into three categories: the COVID-19 pandemic emergency, how to control COVID-19, and reports on COVID-19.\n\n\nCONCLUSIONS\nSentiment analysis and topic modeling can produce useful information about the trends in the discussion of the COVID-19 pandemic on social media as well as alternative perspectives to investigate the COVID-19 crisis, which has created considerable public awareness. This study shows that Twitter is a good communication channel for understanding both public concern and public awareness about COVID-19. These findings can help health departments communicate information to alleviate specific public concerns about the disease."
},
{
"pmid": "27732598",
"title": "How Are Scientists Using Social Media in the Workplace?",
"abstract": "Social media has created networked communication channels that facilitate interactions and allow information to proliferate within professional academic communities as well as in informal social circumstances. A significant contemporary discussion in the field of science communication is how scientists are using (or might use) social media to communicate their research. This includes the role of social media in facilitating the exchange of knowledge internally within and among scientific communities, as well as externally for outreach to engage the public. This study investigates how a surveyed sample of 587 scientists from a variety of academic disciplines, but predominantly the academic life sciences, use social media to communicate internally and externally. Our results demonstrate that while social media usage has yet to be widely adopted, scientists in a variety of disciplines use these platforms to exchange scientific knowledge, generally via either Twitter, Facebook, LinkedIn, or blogs. Despite the low frequency of use, our work evidences that scientists perceive numerous potential advantages to using social media in the workplace. Our data provides a baseline from which to assess future trends in social media use within the science academy."
},
{
"pmid": "31116783",
"title": "Towards a second generation of 'social media metrics': Characterizing Twitter communities of attention around science.",
"abstract": "'Social media metrics' are bursting into science studies as emerging new measures of impact related to scholarly activities. However, their meaning and scope as scholarly metrics is still far from being grasped. This research seeks to shift focus from the consideration of social media metrics around science as mere indicators confined to the analysis of the use and visibility of publications on social media to their consideration as metrics of interaction and circulation of scientific knowledge across different communities of attention, and particularly as metrics that can also be used to characterize these communities. Although recent research efforts have proposed tentative typologies of social media users, no study has empirically examined the full range of Twitter user's behavior within Twitter and disclosed the latent dimensions in which activity on Twitter around science can be classified. To do so, we draw on the overall activity of social media users on Twitter interacting with research objects collected from the Altmetic.com database. Data from over 1.3 million unique users, accounting for over 14 million tweets to scientific publications, is analyzed. Based on an exploratory and confirmatory factor analysis, four latent dimensions are identified: 'Science Engagement', 'Social Media Capital', 'Social Media Activity' and 'Science Focus'. Evidence on the predominant type of users by each of the four dimensions is provided by means of VOSviewer term maps of Twitter profile descriptions. This research breaks new ground for the systematic analysis and characterization of social media users' activity around science."
},
{
"pmid": "31182117",
"title": "Efficacy and safety of a novel topical agent for gallstone dissolution: 2-methoxy-6-methylpyridine.",
"abstract": "BACKGROUND\nAlthough methyl-tertiary butyl ether (MTBE) is the only clinical topical agent for gallstone dissolution, its use is limited by its side effects mostly arising from a relatively low boiling point (55 °C). In this study, we developed the gallstone-dissolving compound containing an aromatic moiety, named 2-methoxy-6-methylpyridine (MMP) with higher boiling point (156 °C), and compared its effectiveness and toxicities with MTBE.\n\n\nMETHODS\nThe dissolubility of MTBE and MMP in vitro was determined by placing human gallstones in glass containers with either solvent and, then, measuring their dry weights. Their dissolubility in vivo was determined by comparing the weights of solvent-treated gallstones and control (dimethyl sulfoxide)-treated gallstones, after directly injecting each solvent into the gallbladder in hamster models with cholesterol and pigmented gallstones.\n\n\nRESULTS\nIn the in vitro dissolution test, MMP demonstrated statistically higher dissolubility than did MTBE for cholesterol and pigmented gallstones (88.2% vs. 65.7%, 50.8% vs. 29.0%, respectively; P < 0.05). In the in vivo experiments, MMP exhibited 59.0% and 54.3% dissolubility for cholesterol and pigmented gallstones, respectively, which were significantly higher than those of MTBE (50.0% and 32.0%, respectively; P < 0.05). The immunohistochemical stains of gallbladder specimens obtained from the MMP-treated hamsters demonstrated that MMP did not significantly increase the expression of cleaved caspase 9 or significantly decrease the expression of proliferation cell nuclear antigen.\n\n\nCONCLUSIONS\nThis study demonstrated that MMP has better potential than does MTBE in dissolving gallstones, especially pigmented gallstones, while resulting in lesser toxicities."
},
{
"pmid": "33673545",
"title": "Geospatial Analysis of COVID-19: A Scoping Review.",
"abstract": "The outbreak of SARS-CoV-2 in Wuhan, China in late December 2019 became the harbinger of the COVID-19 pandemic. During the pandemic, geospatial techniques, such as modeling and mapping, have helped in disease pattern detection. Here we provide a synthesis of the techniques and associated findings in relation to COVID-19 and its geographic, environmental, and socio-demographic characteristics, following the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) methodology for scoping reviews. We searched PubMed for relevant articles and discussed the results separately for three categories: disease mapping, exposure mapping, and spatial epidemiological modeling. The majority of studies were ecological in nature and primarily carried out in China, Brazil, and the USA. The most common spatial methods used were clustering, hotspot analysis, space-time scan statistic, and regression modeling. Researchers used a wide range of spatial and statistical software to apply spatial analysis for the purpose of disease mapping, exposure mapping, and epidemiological modeling. Factors limiting the use of these spatial techniques were the unavailability and bias of COVID-19 data-along with scarcity of fine-scaled demographic, environmental, and socio-economic data-which restrained most of the researchers from exploring causal relationships of potential influencing factors of COVID-19. Our review identified geospatial analysis in COVID-19 research and highlighted current trends and research gaps. Since most of the studies found centered on Asia and the Americas, there is a need for more comparable spatial studies using geographically fine-scaled data in other areas of the world."
},
{
"pmid": "33519326",
"title": "Topic detection and sentiment analysis in Twitter content related to COVID-19 from Brazil and the USA.",
"abstract": "Twitter is a social media platform with more than 500 million users worldwide. It has become a tool for spreading the news, discussing ideas and comments on world events. Twitter is also an important source of health-related information, given the amount of news, opinions and information that is shared by both citizens and official sources. It is a challenge identifying interesting and useful content from large text-streams in different languages, few works have explored languages other than English. In this paper, we use topic identification and sentiment analysis to explore a large number of tweets in both countries with a high number of spreading and deaths by COVID-19, Brazil, and the USA. We employ 3,332,565 tweets in English and 3,155,277 tweets in Portuguese to compare and discuss the effectiveness of topic identification and sentiment analysis in both languages. We ranked ten topics and analyzed the content discussed on Twitter for four months providing an assessment of the discourse evolution over time. The topics we identified were representative of the news outlets during April and August in both countries. We contribute to the study of the Portuguese language, to the analysis of sentiment trends over a long period and their relation to announced news, and the comparison of the human behavior in two different geographical locations affected by this pandemic. It is important to understand public reactions, information dissemination and consensus building in all major forms, including social media in different countries."
},
{
"pmid": "30725308",
"title": "Self-Regulation in Low- and Middle-Income Countries: Challenges and Future Directions.",
"abstract": "Self-regulation is developed early in life through family and parenting interactions. There has been considerable debate on how to best conceptualize and enhance self-regulation. Many consider self-regulation as the socio-emotional competencies required for healthy and productive living, including the flexibility to regulate emotions, control anger, maintain calm under pressure, and respond adaptively to a variety of situations. Its enhancement is the focus of many child and family interventions. An important limitation of the self-regulation field is that most empirical and conceptual research comes from high-income countries (HICs). Less is known about the manifestation, measurement and role of self-regulation in many collectivistic, rural, or less-developed contexts such as low- and middle-income countries (LMICs). This position paper aims to present an initial review of the existing literature on self-regulation in LMICs, with a focus on parenting, and to describe challenges in terms of measurement and implementation of self-regulation components into existing interventions for parents, children and adolescents in these settings. We conclude by establishing steps or recommendations for conducting basic research to understand how self-regulation expresses itself in vulnerable and low-resource settings and for incorporating components of self-regulation into services in LMICs."
},
{
"pmid": "33644781",
"title": "Trends in Educational Technology: What Facebook, Twitter, and Scopus Can Tell us about Current Research and Practice.",
"abstract": "Using large-scale, public data sources, this editorial provides a high-level description of educational technology trends leading up to and encompassing the year 2020. Data sources included (a) 17.9 million Facebook page posts by K-12 educational institutions in the U.S., (b) 131,760 tweets to the #EdTech hashtag on Twitter, and (c) 29,636 educational technology articles in the Scopus database. We provide a variety of descriptive results in the form of participation frequency charts, keyword matches, URL domain link counts, co-occurring hashtags, tweet text word trees, and common word and bigram frequencies. Results from the analysis of Facebook posts indicated that (a) schools increasingly used the platform over time, (b) the pandemic increased frequency (but not the nature) of use, (c) schools are progressively sharing more media, information, and tools, and (d) some of these tools align with trends identified by Weller (2020) while others do not. Analysis of tweets indicated that (a) discussions in 2020 revolved around \"remote learning\" and related topics, (b) this emphasis shifted or morphed into \"elearning\" and \"online learning\" as the year progressed, (c) shared posts were primarily informational or media-based, and (d) the space was heavily directed by a relatively small group of Superusers. Last, analysis of articles in Scopus indicated that (a) online learning is historically the most-researched topic in the field, (b) the past decade reflects a shift to more \"open\" and \"social\" topics, and (c) there seems to be a lag or disconnect between emergent high-interest technologies and research. Taken together, we conclude that these results show the field's preparation for addressing many challenges of 2020, but propose that, moving forward, we would be better served by embracing greater philosophical plurality and better addressing key issues, including equity and practicality."
},
{
"pmid": "34803832",
"title": "Temporal Assessment of Self-Regulated Learning by Mining Students' Think-Aloud Protocols.",
"abstract": "It has been widely theorized and empirically proven that self-regulated learning (SRL) is related to more desired learning outcomes, e.g., higher performance in transfer tests. Research has shifted to understanding the role of SRL during learning, such as the strategies and learning activities, learners employ and engage in the different SRL phases, which contribute to learning achievement. From a methodological perspective, measuring SRL using think-aloud data has been shown to be more insightful than self-report surveys as it helps better in determining the link between SRL activities and learning achievements. Educational process mining on the basis of think-aloud data enables a deeper understanding and more fine-grained analyses of SRL processes. Although students' SRL is highly contextualized, there are consistent findings of the link between SRL activities and learning outcomes pointing to some consistency of the processes that support learning. However, past studies have utilized differing approaches which make generalization of findings between studies investigating the unfolding of SRL processes during learning a challenge. In the present study with 29 university students, we measured SRL via concurrent think-aloud protocols in a pre-post design using a similar approach from a previous study in an online learning environment during a 45-min learning session, where students learned about three topics and wrote an essay. Results revealed significant learning gain and replication of links between SRL activities and transfer performance, similar to past research. Additionally, temporal structures of successful and less successful students indicated meaningful differences associated with both theoretical assumptions and past research findings. In conclusion, extending prior research by exploring SRL patterns in an online learning setting provides insights to the replicability of previous findings from online learning settings and new findings show that it is important not only to focus on the repertoire of SRL strategies but also on how and when they are used."
},
{
"pmid": "33271776",
"title": "Examining Procrastination among University Students through the Lens of the Self-Regulated Learning Model.",
"abstract": "Generally considered as a prevalent occurrence in academic settings, procrastination was analyzed in association with constructs such as self-efficacy, self-esteem, anxiety, stress, and fear of failure. This study investigated the role played by self-regulated learning strategies in predicting procrastination among university students. To this purpose, the relationships of procrastination with cognitive and metacognitive learning strategies and time management were explored in the entire sample, as well as in male and female groups. Gender differences were taken into account due to the mixed results that emerged in previous studies. This cross-sectional study involved 450 university students (M = 230; F = 220; Mage = 21.08, DS = 3.25) who completed a self-reported questionnaire including a sociodemographic section, the Tuckman Procrastination Scale, the Time Management Scale, and the Metacognitive Self-Regulation and Critical Thinking Scales. Descriptive and inferential analyses were applied to the data. The main findings indicated that temporal and metacognitive components play an important role in students' academic achievement and that, compared to females, males procrastinate more due to poor time management skills and metacognitive strategies. Practical implications were suggested to help students to overcome their dilatory behavior."
},
{
"pmid": "33534801",
"title": "Applying machine learning and geolocation techniques to social media data (Twitter) to develop a resource for urban planning.",
"abstract": "With all the recent attention focused on big data, it is easy to overlook that basic vital statistics remain difficult to obtain in most of the world. What makes this frustrating is that private companies hold potentially useful data, but it is not accessible by the people who can use it to track poverty, reduce disease, or build urban infrastructure. This project set out to test whether we can transform an openly available dataset (Twitter) into a resource for urban planning and development. We test our hypothesis by creating road traffic crash location data, which is scarce in most resource-poor environments but essential for addressing the number one cause of mortality for children over five and young adults. The research project scraped 874,588 traffic related tweets in Nairobi, Kenya, applied a machine learning model to capture the occurrence of a crash, and developed an improved geoparsing algorithm to identify its location. We geolocate 32,991 crash reports in Twitter for 2012-2020 and cluster them into 22,872 unique crashes during this period. For a subset of crashes reported on Twitter, a motorcycle delivery service was dispatched in real-time to verify the crash and its location; the results show 92% accuracy. To our knowledge this is the first geolocated dataset of crashes for the city and allowed us to produce the first crash map for Nairobi. Using a spatial clustering algorithm, we are able to locate portions of the road network (<1%) where 50% of the crashes identified occurred. Even with limitations in the representativeness of the data, the results can provide urban planners with useful information that can be used to target road safety improvements where resources are limited. The work shows how twitter data might be used to create other types of essential data for urban planning in resource poor environments."
},
{
"pmid": "29342910",
"title": "Academic Stress and Self-Regulation among University Students in Malaysia: Mediator Role of Mindfulness.",
"abstract": "Academic stress is the most common emotional or mental state that students experience during their studies. Stress is a result of a wide range of issues, including test and exam burden, a demanding course, a different educational system, and thinking about future plans upon graduation. A sizeable body of literature in stress management research has found that self-regulation and being mindful will help students to cope up with the stress and dodge long-term negative consequences, such as substance abuse. The present study aims to investigate the influence of academic stress, self-regulation, and mindfulness among undergraduate students in Klang Valley, Malaysia, and to identify mindfulness as the mediator between academic stress and self-regulation. For this study, a total of 384 undergraduate students in Klang Valley, Malaysia were recruited. Using Correlational analysis, results revealed that there was a significant relationship between academic stress, self-regulation, and mindfulness. However, using SPSS mediational analysis, mindfulness did not prove the mediator role in the study."
},
{
"pmid": "29943415",
"title": "Self-regulated learning in the clinical context: a systematic review.",
"abstract": "OBJECTIVES\nResearch has suggested beneficial effects of self-regulated learning (SRL) for medical students' and residents' workplace-based learning. Ideally, learners go through a cyclic process of setting learning goals, choosing learning strategies and assessing progress towards goals. A clear overview of medical students' and residents' successful key strategies, influential factors and effective interventions to stimulate SRL in the workplace is missing. This systematic review aims to provide an overview of and a theoretical base for effective SRL strategies of medical students and residents for their learning in the clinical context.\n\n\nMETHODS\nThis systematic review was conducted according to the guidelines of the Association for Medical Education in Europe. We systematically searched PubMed, EMBASE, Web of Science, PsycINFO, ERIC and the Cochrane Library from January 1992 to July 2016. Qualitative and quantitative studies were included. Two reviewers independently performed the review process and assessed the methodological quality of included studies. A total of 3341 publications were initially identified and 18 were included in the review.\n\n\nRESULTS\nWe found diversity in the use of SRL strategies by medical students and residents, which is linked to individual (goal setting), contextual (time pressure, patient care and supervision) and social (supervisors and peers) factors. Three types of intervention were identified (coaching, learning plans and supportive tools). However, all interventions focused on goal setting and monitoring and none on supporting self-evaluation.\n\n\nCONCLUSIONS\nSelf-regulated learning in the clinical environment is a complex process that results from an interaction between person and context. Future research should focus on unravelling the process of SRL in the clinical context and specifically on how medical students and residents assess their progress towards goals."
},
{
"pmid": "32997720",
"title": "Framing COVID-19: How we conceptualize and discuss the pandemic on Twitter.",
"abstract": "Doctors and nurses in these weeks and months are busy in the trenches, fighting against a new invisible enemy: Covid-19. Cities are locked down and civilians are besieged in their own homes, to prevent the spreading of the virus. War-related terminology is commonly used to frame the discourse around epidemics and diseases. The discourse around the current epidemic makes use of war-related metaphors too, not only in public discourse and in the media, but also in the tweets written by non-experts of mass communication. We hereby present an analysis of the discourse around #Covid-19, based on a large corpus tweets posted on Twitter during March and April 2020. Using topic modelling we first analyze the topics around which the discourse can be classified. Then, we show that the WAR framing is used to talk about specific topics, such as the virus treatment, but not others, such as the effects of social distancing on the population. We then measure and compare the popularity of the WAR frame to three alternative figurative frames (MONSTER, STORM and TSUNAMI) and a literal frame used as control (FAMILY). The results show that while the FAMILY frame covers a wider portion of the corpus, among the figurative frames WAR, a highly conventional one, is the frame used most frequently. Yet, this frame does not seem to be apt to elaborate the discourse around some aspects involved in the current situation. Therefore, we conclude, in line with previous suggestions, a plethora of framing options-or a metaphor menu-may facilitate the communication of various aspects involved in the Covid-19-related discourse on the social media, and thus support civilians in the expression of their feelings, opinions and beliefs during the current pandemic."
}
] |
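The LDA topic-modeling step that recurs throughout the related work of the record above can be sketched in a few lines with scikit-learn. The example tweets below are invented stand-ins for the 54,070-tweet SRL corpus; a real pipeline would add tweet-specific cleaning (URLs, mentions, hashtags) and tune the number of topics.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# hypothetical tweets standing in for the SRL corpus
tweets = [
    "goal setting and time management are key self regulated learning strategies",
    "students who monitor their own learning perform better in online courses",
    "new paper on metacognition and self regulation in virtual classrooms",
    "help seeking on social media supports self regulated learning",
    "reflection and self evaluation close the self regulated learning cycle",
]

# bag-of-words representation with English stop words removed
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)

# fit a small LDA model (the number of topics would normally be tuned)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# print the top words per topic
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```

The per-topic top words printed here are the raw material that studies such as those cited above then interpret and label into themes.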
Frontiers in Genetics | null | PMC8908451 | 10.3389/fgene.2022.839949 | An Ensemble Learning Framework for Detecting Protein Complexes From PPI Networks | Detecting protein complexes is one of the keys to understanding cellular organization and processes principles. With high-throughput experiments and computing science development, it has become possible to detect protein complexes by computational methods. However, most computational methods are based on either unsupervised learning or supervised learning. Unsupervised learning-based methods do not need training datasets, but they can only detect one or several topological protein complexes. Supervised learning-based methods can detect protein complexes with different topological structures. However, they are usually based on a type of training model, and the generalization of a single model is poor. Therefore, we propose an Ensemble Learning Framework for Detecting Protein Complexes (ELF-DPC) within protein-protein interaction (PPI) networks to address these challenges. The ELF-DPC first constructs the weighted PPI network by combining topological and biological information. Second, it mines protein complex cores using the protein complex core mining strategy we designed. Third, it obtains an ensemble learning model by integrating structural modularity and a trained voting regressor model. Finally, it extends the protein complex cores and forms protein complexes by a graph heuristic search strategy. The experimental results demonstrate that ELF-DPC performs better than the twelve state-of-the-art approaches. Moreover, functional enrichment analysis illustrated that ELF-DPC could detect biologically meaningful protein complexes. The code/dataset is available for free download from https://github.com/RongquanWang/ELF-DPC. | 1.1 Related WorkDuring the past decade, various computational methods have been presented to identify protein complexes in PPI networks. We will briefly review the related work from three aspects. The first is identifying protein complexes based on unsupervised learning-based methods. Another type of identifying protein complex methods is based on a model optimization-based method. The last type of identifying protein complex methods is based on supervised learning-based methods.1.1.1 Unsupervised Learning-Based MethodsMany researchers hypothesize that subgraphs with different topological structures in PPI networks are factual protein complexes (Wang et al., 2010) such as density, k-clique, and core-attachment structures. Most of these methods are either global heuristic search, local heuristic search, or both. Meanwhile, some methods integrate topological and biological information to further improve the accuracy of detecting protein complexes.Many local heuristic-based methods have been proposed to identify protein complexes. For instance, Altaf-Ul-Amin et al. (Altaf-Ul-Amin et al., 2006) developed DPClus, which generates clusters by ensuring density and checking the periphery of the clusters. Gavin et al. (Gavin et al., 2006) studied the organization of protein complexes, demonstrating that a protein complex generally contains a unique protein complex core and attachment proteins, called a core-attachment structure. Here, proteins in a protein complex core have relatively more reliable interactions among themselves. The attachment proteins are the surrounding proteins of the protein complex core to assist it in performing related functions (Lakizadeh et al., 2015). Wu et al. 
(Wu et al., 2009) proposed a classic protein complex discovery method (COACH) using the core-attachment structure. COACH first detects protein complex cores and then identifies its attachment proteins to form a whole protein complex. Peng et al. (Peng et al., 2014) designed a PageRank Nibble strategy to give adjacent proteins different probabilities with core-attachment structures and proposed WPNCA to predict protein complexes. Nepuse et al. (Nepusz et al., 2012) presented ClusterONE, which utilizes a demanding growth process to mine subgraphs with high cohesiveness that may be protein complexes. Recently, Wang et al. (Wang et al., 2020) presented a new graph clustering method using a local heuristic search strategy to detect static and dynamic protein complexes. These local heuristic methods have strong local searchability, but finding an optimal global solution is difficult.Meanwhile, some global heuristic-based methods have been proposed to identify protein complexes. In 2009, Liu et al. (Liu et al., 2009) used an iterative method to weight PPI networks and developed a maximal clique-based method (CMC) to discover protein complexes from weighted PPI networks. Wang et al. (Wang et al., 2012) were inspired by the hierarchical organization of GO annotations and known protein complexes. Then they proposed OH-PIN, which is based on the concepts of overlapping M-clusters, λ-module, and clustering coefficients to detect both overlapping and hierarchical protein complexes in PPI networks. PC2P (Omranian et al., 2021) is a parameter-free greedy approximation algorithm casts the problem of protein complex detection as a network partitioning into biclique spanned subgraphs, which include both sparse and dense subgraphs. Although these global heuristic search methods have a strong global search ability, they require considerable time and computing resources.Recently, some methods based on network embedding strategies have been used to detect protein complexes. DPC-HCNE (Meng et al., 2019) is a novel protein complex detection method based on hierarchical compressing network embedding and core-attachment structures. It can preserve both the local topological information and global topological information of a PPI network. CPredictor 5.0 (Yao et al., 2019) uses the network embedding method Node2Vec (Grover and Leskovec, 2016) to learn node feature vector representation and then calculates the node embedding similarity and the functional similarity between interacting proteins to construct the weight PPI networks. These methods illustrate that employing the network embedding method could improve the accuracy of protein complex identification.It is well known that PPI networks contain many false-positive and false-negative interactions, i.e., noise. To overcome the noise of the PPI networks, some studies try to exploit biological information, such as gene expression data (Keretsu and Sarmah, 2016), gene ontology (GO) data (Wang et al., 2019; Yao et al., 2019), and subcellular localization data (Lei et al., 2018) to complement the interactions in PPI networks. CPredictor2.0 (Xu et al., 2017) effectively detects protein complexes from PPI networks, and first groups proteins based on functional annotations. Then, it applies the MCL algorithm to detect dense clusters as protein complexes. Zhang et al. (Zhang et al., 2016) calculated the active time point and the active probability of each protein and constructed dynamic PPI networks. Then a novel method was proposed based on the core-attachment structure. 
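As a rough illustration of the core-attachment idea used by COACH-style methods described above, the following sketch greedily grows a dense core around each seed protein and then attaches peripheral proteins connected to a majority of the core. It is a simplified, hypothetical example with arbitrarily chosen thresholds, not the COACH or ELF-DPC procedure.

```python
import networkx as nx

def core_attachment_complexes(G, density_threshold=0.7, attach_ratio=0.5, min_core=3):
    """Simplified core-attachment mining (illustration only).

    For every seed protein, greedily grow a dense core inside its neighborhood,
    then attach peripheral proteins interacting with more than attach_ratio of
    the core members.
    """
    complexes = set()
    for seed in G.nodes():
        core = {seed}
        # consider the seed's neighbors in decreasing degree order
        for v in sorted(G.neighbors(seed), key=G.degree, reverse=True):
            if nx.density(G.subgraph(core | {v})) >= density_threshold:
                core.add(v)
        if len(core) < min_core:
            continue
        attachments = {
            v for v in set(G.nodes()) - core
            if sum(G.has_edge(v, u) for u in core) / len(core) > attach_ratio
        }
        complexes.add(frozenset(core | attachments))
    return [set(c) for c in complexes]

# tiny toy PPI network: a dense triangle core with two peripheral proteins
G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"),
              ("D", "A"), ("D", "B"), ("E", "B"), ("E", "C")])
print(core_attachment_complexes(G))
```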
Zhang et al. (Zhang et al., 2019) proposed a novel method based on the core-attachment structure and seed expansion strategy to identify protein complexes using the topological structure and biological data in static PPI networks. ICJointLE (Zhang et al., 2019) is a novel method to identify protein complexes with the features of joint colocalization and joint coexpression in static PPI networks. NNP (Zhang et al., 2021) is a new method for recognizing protein complexes by topological characteristics and biological characteristics. Some methods (Zaki et al., 2013; Wang et al., 2019) are based on topological information to weight interactions in PPI networks. For example, PEWCC (Zaki et al., 2013) is a novel graph mining method that first assesses the reliability of the interactions and then detects protein complexes based on the concept of the weighted clustering coefficient. These methods have shown that the accuracy of protein complex identification can be significantly improved by integrating network topological structure and multiple biological information.1.1.2 Model Optimization-Based MethodsSeveral recent methods suggested that identifying protein complexes or community structures can be an optimization problem using network topology and protein attributes. For example, RNSC (King et al., 2004) attempts to find an optimal set of partitions of a PPI network graph by employing different cost functions for detecting protein complexes. RSGNM (Zhang et al., 2012) is a regularized sparse generative network model that adds another process that generates propensities into an existing generative network model for protein complex identification. EGCPI (He and Chan, 2016) formulates the problem as an optimization problem to mine the optimal clusters with densely connected vertices in the PPI networks to discover protein complexes. DPCA (Hu et al., 2018) formulates the problem of detecting protein complexes as a constrained optimization problem according to protein complexes’ topological and biological properties. In particular, it is an algorithm with high efficiency and effectiveness. GMFTP (Zhang et al., 2014) is a generative model to simulate the generative processes of topological and biological information, and clusters that maximize the likelihood of generating the given PIN are considered protein complexes. DCAFP (Hu and Chan, 2015) transforms the problem of identifying protein complexes into a constrained optimization problem and introduces an optimization model by considering the integration of functional preferences and dense structures. He et al. (He et al., 2019) introduced a novel graph clustering model called contextual correlation preserving multiview featured graph clustering (CCPMVFGC) for discovering communities in graphs with multiview features, viewwise correlations of pairwise features and the graph topology. VVAMo (He et al., 2021a) is a novel matrix factorization-based model for communities in complex network. It proposes a unified likelihood function for VVAMo and derives an alternating algorithm for learning the optimal parameters of the proposed model. In 2017, Zhang et al. (Zhang et al., 2017) proposed a new firefly clustering algorithm for transforming the protein complex detection problem into an optimization problem. IMA (Wang et al., 2021) is a novel improved memetic algorithm that optimizes a fitness function to detect protein complexes. 
These model optimization-based methods usually have more parameters and variables, and the parameter optimization process is time-consuming. However, these methods also have some significance for us to transform the identification of protein complexes into an optimization problem.1.1.3 Supervised Learning-Based MethodsThe methods mentioned above are either unsupervised learning-based or model optimization-based methods that identify protein complexes using predefined assumptions and determined models. Unsupervised learning-based methods do not need to resolve practical problems, such as insufficient feature extraction from known protein complexes, model selection, and model training. Those methods cannot utilize the information of known protein complexes, and they neglect some other topological protein complexes such as the ‘star’ mode and ‘spoke’ mode and so on. Generally, supervised learning-based methods first train a supervised learning model by extracting features, and then trained supervised learning models are used to search new protein complexes.Many standard protein complex datasets have been obtained in recent years. Therefore, several supervised learning-based methods based on training regression or classification models are proposed to discover protein complexes from PPI networks. For example, Qi et al. (Qi et al., 2008) proposed a framework to learn the parameters of the Bayesian network model for discovering protein complexes. Yu et al. (Yu et al., 2014) presented a supervised learning-based method to detect protein complexes, which used cliques as initial clusters and selected a trained linear regression model to form protein complexes. Lei et al. (Shi et al., 2011) proposed a semisupervised algorithm, and trained a neural network model to detect protein complexes. ClusterEPs (Liu et al., 2016) estimated the possibility of a subgraph being a protein complex by emerging patterns (EPs). Dong et al.(Dong et al., 2018) provided the ClusterSS method, which integrates a trained neural network model and local cohesiveness function to guide the search strategy to identify protein complexes. Liu et al. (Liu et al., 2018) proposed a supervised learning method based on network embeddings and a random forest model for discovering protein complexes. Based on the decision tree, Sikandar et al. (Sikandar et al., 2018) presented a method using biological and topological information to detect protein complexes. Liu et al.(Liu et al., 2021) proposed a novel semisupervised model and a protein complex detection algorithm to identify significant protein complexes with clear module structures from PPI networks. Mei et al. (Mei, 2022) proposed a computational method that combines supervised learning and dense subgraph discovery to predict protein complexes. On the one hand, the accuracy of these detection methods based on semisupervised learning or supervised learning is limited due to the small training dataset. On the other hand, these methods only train a single type of learning model, so these models are not so generalizable and their learning ability has certain limitations.Some existing studies show that graph neural networks (GNNs) methods can effectively learn graph structure and node features. For example, Kipf et al. (Kipf and Welling, 2016) presented a scalable approach for semisupervised learning on graph-structured data. The proposed graph convolutional network (GCN) model is based on an efficient variant of convolutional neural networks. 
Several studies have shown that graph neural network (GNN) methods can effectively learn both graph structure and node features. For example, Kipf and Welling (2016) presented a scalable approach for semisupervised learning on graph-structured data; their graph convolutional network (GCN) model, based on an efficient variant of convolutional neural networks, encodes both graph structure and node features in a way that is useful for semisupervised classification. Zaki et al. (2021) introduced several GCN approaches to improve the detection of protein complexes. Graph attention networks (GATs), which aggregate neighbor nodes through an attention mechanism, adaptively allocate weights to different neighbors and thereby greatly improve the expressive power of GNN models. He et al. (2021b) proposed a class of learning-to-attend strategies, named conjoint attentions (CAs), to construct graph conjoint attention networks (CATs) for GNNs; CAs flexibly incorporate layerwise node features and structural interventions that can be learned outside the GNN to compute appropriate weights for feature aggregation. We will study the detection of protein complexes in PPI networks using GATs in future work.
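For concreteness, the GCN layer-propagation rule of Kipf and Welling and the attention coefficients used by GATs, both referred to above, can be written in commonly used notation as follows, where A is the adjacency matrix, H^(l) the node features at layer l, W^(l) and a learnable parameters, and N(i) the neighborhood of node i.

```latex
% GCN layer propagation with self-loops and symmetric normalization
\[
H^{(l+1)} = \sigma\!\left(\tilde{D}^{-\frac{1}{2}} \tilde{A}\, \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right),
\qquad \tilde{A} = A + I, \qquad \tilde{D}_{ii} = \sum_{j} \tilde{A}_{ij}.
\]

% GAT attention coefficients and attention-weighted neighborhood aggregation
\[
\alpha_{ij} = \frac{\exp\!\bigl(\mathrm{LeakyReLU}\bigl(a^{\top}[W h_i \,\Vert\, W h_j]\bigr)\bigr)}
                   {\sum_{k \in \mathcal{N}(i)} \exp\!\bigl(\mathrm{LeakyReLU}\bigl(a^{\top}[W h_i \,\Vert\, W h_k]\bigr)\bigr)},
\qquad
h_i' = \sigma\!\Bigl(\sum_{j \in \mathcal{N}(i)} \alpha_{ij}\, W h_j\Bigr).
\]
```

In a GAT, the learned coefficients replace the fixed symmetric normalization of the GCN, which is what allows the adaptive weighting of different neighbors described above.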
"15044803",
"16613608",
"15297299",
"17087821",
"23780996",
"15585665",
"29554120",
"9843981",
"19630542",
"16429126",
"11805826",
"12060727",
"16381906",
"28029628",
"31329151",
"17982175",
"26013799",
"29994334",
"27771556",
"15180928",
"16554755",
"26319550",
"22621308",
"34512712",
"19435747",
"26868667",
"30241459",
"28130237",
"14681354",
"22426491",
"19095691",
"18586722",
"22165896",
"19770263",
"14517352",
"29439025",
"23225755",
"31390979",
"34970305",
"34966415",
"19486541",
"11752321",
"29072136",
"30736004",
"22000801",
"23688127",
"31376777",
"24928559",
"27454775"
] | [
{
"pmid": "15044803",
"title": "Structure-based assembly of protein complexes in yeast.",
"abstract": "Images of entire cells are preceding atomic structures of the separate molecular machines that they contain. The resulting gap in knowledge can be partly bridged by protein-protein interactions, bioinformatics, and electron microscopy. Here we use interactions of known three-dimensional structure to model a large set of yeast complexes, which we also screen by electron microscopy. For 54 of 102 complexes, we obtain at least partial models of interacting subunits. For 29, including the exosome, the chaperonin containing TCP-1, a 3'-messenger RNA degradation complex, and RNA polymerase II, the process suggests atomic details not easily seen by homology, involving the combination of two or more known structures. We also consider interactions between complexes (cross-talk) and use these to construct a structure-based network of molecular machines in the cell."
},
{
"pmid": "16613608",
"title": "Development and implementation of an algorithm for detection of protein complexes in large interaction networks.",
"abstract": "BACKGROUND\nAfter complete sequencing of a number of genomes the focus has now turned to proteomics. Advanced proteomics technologies such as two-hybrid assay, mass spectrometry etc. are producing huge data sets of protein-protein interactions which can be portrayed as networks, and one of the burning issues is to find protein complexes in such networks. The enormous size of protein-protein interaction (PPI) networks warrants development of efficient computational methods for extraction of significant complexes.\n\n\nRESULTS\nThis paper presents an algorithm for detection of protein complexes in large interaction networks. In a PPI network, a node represents a protein and an edge represents an interaction. The input to the algorithm is the associated matrix of an interaction network and the outputs are protein complexes. The complexes are determined by way of finding clusters, i. e. the densely connected regions in the network. We also show and analyze some protein complexes generated by the proposed algorithm from typical PPI networks of Escherichia coli and Saccharomyces cerevisiae. A comparison between a PPI and a random network is also performed in the context of the proposed algorithm.\n\n\nCONCLUSION\nThe proposed algorithm makes it possible to detect clusters of proteins in PPI networks which mostly represent molecular biological functional units. Therefore, protein complexes determined solely based on interaction data can help us to predict the functions of proteins, and they are also useful to understand and explain certain biological processes."
},
{
"pmid": "15297299",
"title": "GO::TermFinder--open source software for accessing Gene Ontology information and finding significantly enriched Gene Ontology terms associated with a list of genes.",
"abstract": "SUMMARY\nGO::TermFinder comprises a set of object-oriented Perl modules for accessing Gene Ontology (GO) information and evaluating and visualizing the collective annotation of a list of genes to GO terms. It can be used to draw conclusions from microarray and other biological data, calculating the statistical significance of each annotation. GO::TermFinder can be used on any system on which Perl can be run, either as a command line application, in single or batch mode, or as a web-based CGI script.\n\n\nAVAILABILITY\nThe full source code and documentation for GO::TermFinder are freely available from http://search.cpan.org/dist/GO-TermFinder/."
},
{
"pmid": "17087821",
"title": "Evaluation of clustering algorithms for protein-protein interaction networks.",
"abstract": "BACKGROUND\nProtein interactions are crucial components of all cellular processes. Recently, high-throughput methods have been developed to obtain a global description of the interactome (the whole network of protein interactions for a given organism). In 2002, the yeast interactome was estimated to contain up to 80,000 potential interactions. This estimate is based on the integration of data sets obtained by various methods (mass spectrometry, two-hybrid methods, genetic studies). High-throughput methods are known, however, to yield a non-negligible rate of false positives, and to miss a fraction of existing interactions. The interactome can be represented as a graph where nodes correspond with proteins and edges with pairwise interactions. In recent years clustering methods have been developed and applied in order to extract relevant modules from such graphs. These algorithms require the specification of parameters that may drastically affect the results. In this paper we present a comparative assessment of four algorithms: Markov Clustering (MCL), Restricted Neighborhood Search Clustering (RNSC), Super Paramagnetic Clustering (SPC), and Molecular Complex Detection (MCODE).\n\n\nRESULTS\nA test graph was built on the basis of 220 complexes annotated in the MIPS database. To evaluate the robustness to false positives and false negatives, we derived 41 altered graphs by randomly removing edges from or adding edges to the test graph in various proportions. Each clustering algorithm was applied to these graphs with various parameter settings, and the clusters were compared with the annotated complexes. We analyzed the sensitivity of the algorithms to the parameters and determined their optimal parameter values. We also evaluated their robustness to alterations of the test graph. We then applied the four algorithms to six graphs obtained from high-throughput experiments and compared the resulting clusters with the annotated complexes.\n\n\nCONCLUSION\nThis analysis shows that MCL is remarkably robust to graph alterations. In the tests of robustness, RNSC is more sensitive to edge deletion but less sensitive to the use of suboptimal parameter values. The other two algorithms are clearly weaker under most conditions. The analysis of high-throughput data supports the superiority of MCL for the extraction of complexes from interaction networks."
},
{
"pmid": "23780996",
"title": "Identifying protein complexes and functional modules--from static PPI networks to dynamic PPI networks.",
"abstract": "Cellular processes are typically carried out by protein complexes and functional modules. Identifying them plays an important role for our attempt to reveal principles of cellular organizations and functions. In this article, we review computational algorithms for identifying protein complexes and/or functional modules from protein-protein interaction (PPI) networks. We first describe issues and pitfalls when interpreting PPI networks. Then based on types of data used and main ideas involved, we briefly describe protein complex and/or functional module identification algorithms in four categories: (i) those based on topological structures of unweighted PPI networks; (ii) those based on characters of weighted PPI networks; (iii) those based on multiple data integrations; and (iv) those based on dynamic PPI networks. The PPI networks are modelled increasingly precise when integrating more types of data, and the study of protein complexes would benefit by shifting from static to dynamic PPI networks."
},
{
"pmid": "15585665",
"title": "Global protein function annotation through mining genome-scale data in yeast Saccharomyces cerevisiae.",
"abstract": "As we are moving into the post genome-sequencing era, various high-throughput experimental techniques have been developed to characterize biological systems on the genomic scale. Discovering new biological knowledge from the high-throughput biological data is a major challenge to bioinformatics today. To address this challenge, we developed a Bayesian statistical method together with Boltzmann machine and simulated annealing for protein functional annotation in the yeast Saccharomyces cerevisiae through integrating various high-throughput biological data, including yeast two-hybrid data, protein complexes and microarray gene expression profiles. In our approach, we quantified the relationship between functional similarity and high-throughput data, and coded the relationship into 'functional linkage graph', where each node represents one protein and the weight of each edge is characterized by the Bayesian probability of function similarity between two proteins. We also integrated the evolution information and protein subcellular localization information into the prediction. Based on our method, 1802 out of 2280 unannotated proteins in yeast were assigned functions systematically."
},
{
"pmid": "29554120",
"title": "Predicting protein complexes using a supervised learning method combined with local structural information.",
"abstract": "The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods."
},
{
"pmid": "9843981",
"title": "Cluster analysis and display of genome-wide expression patterns.",
"abstract": "A system of cluster analysis for genome-wide expression data from DNA microarray hybridization is described that uses standard statistical algorithms to arrange genes according to similarity in pattern of gene expression. The output is displayed graphically, conveying the clustering and the underlying expression data simultaneously in a form intuitive for biologists. We have found in the budding yeast Saccharomyces cerevisiae that clustering gene expression data groups together efficiently genes of known similar function, and we find a similar tendency in human data. Thus patterns seen in genome-wide expression experiments can be interpreted as indications of the status of cellular processes. Also, coexpression of genes of known function with poorly characterized or novel genes may provide a simple means of gaining leads to the functions of many genes for which information is not available currently."
},
{
"pmid": "19630542",
"title": "Bootstrapping the interactome: unsupervised identification of protein complexes in yeast.",
"abstract": "Protein interactions and complexes are important components of biological systems. Recently, two genome-wide applications of tandem affinity purification (TAP) in yeast have increased significantly the available information on interactions in complexes. Several approaches have been developed to predict protein complexes from these measurements, which generally depend heavily on additional training data in the form of known complexes. In this article, we present an unsupervised algorithm for the identification of protein complexes which is independent of the availability of such additional complex information. Based on a Bootstrap approach, we calculate intuitive confidence scores for interactions more accurate than all other published scoring methods and predict complexes with the same quality as the best supervised predictions. Although there are considerable differences between the Bootstrap and the best published predictions, the set of consistently identified complexes is more than four times as large as for complexes derived from one data set only. Our results illustrate that meaningful and reliable complexes can be determined from the purification experiments alone. As a consequence, the approach presented in this article is easily applicable to large-scale TAP experiments for any species even if few complexes are already known."
},
{
"pmid": "16429126",
"title": "Proteome survey reveals modularity of the yeast cell machinery.",
"abstract": "Protein complexes are key molecular entities that integrate multiple gene products to perform cellular functions. Here we report the first genome-wide screen for complexes in an organism, budding yeast, using affinity purification and mass spectrometry. Through systematic tagging of open reading frames (ORFs), the majority of complexes were purified several times, suggesting screen saturation. The richness of the data set enabled a de novo characterization of the composition and organization of the cellular machinery. The ensemble of cellular proteins partitions into 491 complexes, of which 257 are novel, that differentially combine with additional attachment proteins or protein modules to enable a diversification of potential functions. Support for this modular organization of the proteome comes from integration with available data on expression, localization, function, evolutionary conservation, protein structure and binary interactions. This study provides the largest collection of physically determined eukaryotic cellular machines so far and a platform for biological data integration and modelling."
},
{
"pmid": "11805826",
"title": "Functional organization of the yeast proteome by systematic analysis of protein complexes.",
"abstract": "Most cellular processes are carried out by multiprotein complexes. The identification and analysis of their components provides insight into how the ensemble of expressed proteins (proteome) is organized into functional units. We used tandem-affinity purification (TAP) and mass spectrometry in a large-scale approach to characterize multiprotein complexes in Saccharomyces cerevisiae. We processed 1,739 genes, including 1,143 human orthologues of relevance to human biology, and purified 589 protein assemblies. Bioinformatic analysis of these assemblies defined 232 distinct multiprotein complexes and proposed new cellular roles for 344 proteins, including 231 proteins with no previous functional annotation. Comparison of yeast and human complexes showed that conservation across species extends from single proteins to their molecular environment. Our analysis provides an outline of the eukaryotic proteome as a network of protein complexes at a level of organization beyond binary interactions. This higher-order map contains fundamental biological information and offers the context for a more reasoned and informed approach to drug discovery."
},
{
"pmid": "12060727",
"title": "Community structure in social and biological networks.",
"abstract": "A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known--a collaboration network and a food web--and find that it detects significant and informative community divisions in both cases."
},
{
"pmid": "16381906",
"title": "MPact: the MIPS protein interaction resource on yeast.",
"abstract": "In recent years, the Munich Information Center for Protein Sequences (MIPS) yeast protein-protein interaction (PPI) dataset has been used in numerous analyses of protein networks and has been called a gold standard because of its quality and comprehensiveness [H. Yu, N. M. Luscombe, H. X. Lu, X. Zhu, Y. Xia, J. D. Han, N. Bertin, S. Chung, M. Vidal and M. Gerstein (2004) Genome Res., 14, 1107-1118]. MPact and the yeast protein localization catalog provide information related to the proximity of proteins in yeast. Beside the integration of high-throughput data, information about experimental evidence for PPIs in the literature was compiled by experts adding up to 4300 distinct PPIs connecting 1500 proteins in yeast. As the interaction data is a complementary part of CYGD, interactive mapping of data on other integrated data types such as the functional classification catalog [A. Ruepp, A. Zollner, D. Maier, K. Albermann, J. Hani, M. Mokrejs, I. Tetko, U. Güldener, G. Mannhaupt, M. Münsterkötter and H. W. Mewes (2004) Nucleic Acids Res., 32, 5539-5545] is possible. A survey of signaling proteins and comparison with pathway data from KEGG demonstrates that based on these manually annotated data only an extensive overview of the complexity of this functional network can be obtained in yeast. The implementation of a web-based PPI-analysis tool allows analysis and visualization of protein interaction networks and facilitates integration of our curated data with high-throughput datasets. The complete dataset as well as user-defined sub-networks can be retrieved easily in the standardized PSI-MI format. The resource can be accessed through http://mips.gsf.de/genre/proj/mpact."
},
{
"pmid": "28029628",
"title": "Evolutionary Graph Clustering for Protein Complex Identification.",
"abstract": "This paper presents a graph clustering algorithm, called EGCPI, to discover protein complexes in protein-protein interaction (PPI) networks. In performing its task, EGCPI takes into consideration both network topologies and attributes of interacting proteins, both of which have been shown to be important for protein complex discovery. EGCPI formulates the problem as an optimization problem and tackles it with evolutionary clustering. Given a PPI network, EGCPI first annotates each protein with corresponding attributes that are provided in Gene Ontology database. It then adopts a similarity measure to evaluate how similar the connected proteins are taking into consideration the network topology. Given this measure, EGCPI then discovers a number of graph clusters within which proteins are densely connected, based on an evolutionary strategy. At last, EGCPI identifies protein complexes in each discovered cluster based on the homogeneity of attributes performed by pairwise proteins. EGCPI has been tested with several real data sets and the experimental results show EGCPI is very effective on protein complex discovery, and the evolutionary clustering is helpful to identify protein complexes in PPI networks. The software of EGCPI can be downloaded via: https://github.com/hetiantian1985/EGCPI."
},
{
"pmid": "31329151",
"title": "Contextual Correlation Preserving Multiview Featured Graph Clustering.",
"abstract": "Graph clustering, which aims at discovering sets of related vertices in graph-structured data, plays a crucial role in various applications, such as social community detection and biological module discovery. With the huge increase in the volume of data in recent years, graph clustering is used in an increasing number of real-life scenarios. However, the classical and state-of-the-art methods, which consider only single-view features or a single vector concatenating features from different views and neglect the contextual correlation between pairwise features, are insufficient for the task, as features that characterize vertices in a graph are usually from multiple views and the contextual correlation between pairwise features may influence the cluster preference for vertices. To address this challenging problem, we introduce in this paper, a novel graph clustering model, dubbed contextual correlation preserving multiview featured graph clustering (CCPMVFGC) for discovering clusters in graphs with multiview vertex features. Unlike most of the aforementioned approaches, CCPMVFGC is capable of learning a shared latent space from multiview features as the cluster preference for each vertex and making use of this latent space to model the inter-relationship between pairwise vertices. CCPMVFGC uses an effective method to compute the degree of contextual correlation between pairwise vertex features and utilizes view-wise latent space representing the feature-cluster preference to model the computed correlation. Thus, the cluster preference learned by CCPMVFGC is jointly inferred by multiview features, view-wise correlations of pairwise features, and the graph topology. Accordingly, we propose a unified objective function for CCPMVFGC and develop an iterative strategy to solve the formulated optimization problem. We also provide the theoretical analysis of the proposed model, including convergence proof and computational complexity analysis. In our experiments, we extensively compare the proposed CCPMVFGC with both classical and state-of-the-art graph clustering methods on eight standard graph datasets (six multiview and two single-view datasets). The results show that CCPMVFGC achieves competitive performance on all eight datasets, which validates the effectiveness of the proposed model."
},
{
"pmid": "17982175",
"title": "Gene Ontology annotations at SGD: new data sources and annotation methods.",
"abstract": "The Saccharomyces Genome Database (SGD; http://www.yeastgenome.org/) collects and organizes biological information about the chromosomal features and gene products of the budding yeast Saccharomyces cerevisiae. Although published data from traditional experimental methods are the primary sources of evidence supporting Gene Ontology (GO) annotations for a gene product, high-throughput experiments and computational predictions can also provide valuable insights in the absence of an extensive body of literature. Therefore, GO annotations available at SGD now include high-throughput data as well as computational predictions provided by the GO Annotation Project (GOA UniProt; http://www.ebi.ac.uk/GOA/). Because the annotation method used to assign GO annotations varies by data source, GO resources at SGD have been modified to distinguish data sources and annotation methods. In addition to providing information for genes that have not been experimentally characterized, GO annotations from independent sources can be compared to those made by SGD to help keep the literature-based GO annotations current."
},
{
"pmid": "26013799",
"title": "A density-based clustering approach for identifying overlapping protein complexes with functional preferences.",
"abstract": "BACKGROUND\nIdentifying protein complexes is an essential task for understanding the mechanisms of proteins in cells. Many computational approaches have thus been developed to identify protein complexes in protein-protein interaction (PPI) networks. Regarding the information that can be adopted by computational approaches to identify protein complexes, in addition to the graph topology of PPI network, the consideration of functional information of proteins has been becoming popular recently. Relevant approaches perform their tasks by relying on the idea that proteins in the same protein complex may be associated with similar functional information. However, we note from our previous researches that for most protein complexes their proteins are only similar in specific subsets of categories of functional information instead of the entire set. Hence, if the preference of each functional category can also be taken into account when identifying protein complexes, the accuracy will be improved.\n\n\nRESULTS\nTo implement the idea, we first introduce a preference vector for each of proteins to quantitatively indicate the preference of each functional category when deciding the protein complex this protein belongs to. Integrating functional preferences of proteins and the graph topology of PPI network, we formulate the problem of identifying protein complexes into a constrained optimization problem, and we propose the approach DCAFP to address it. For performance evaluation, we have conducted extensive experiments with several PPI networks from the species of Saccharomyces cerevisiae and Human and also compared DCAFP with state-of-the-art approaches in the identification of protein complexes. The experimental results show that considering the integration of functional preferences and dense structures improved the performance of identifying protein complexes, as DCAFP outperformed the other approaches for most of PPI networks based on the assessments of independent measures of f-measure, Accuracy and Maximum Matching Rate. Furthermore, the function enrichment experiments indicated that DCAFP identified more protein complexes with functional significance when compared with approaches, such as PCIA, that also utilize the functional information.\n\n\nCONCLUSIONS\nAccording to the promising performance of DCAFP, the integration of functional preferences and dense structures has made it possible to identify protein complexes more accurately and significantly."
},
{
"pmid": "29994334",
"title": "Efficiently Detecting Protein Complexes from Protein Interaction Networks via Alternating Direction Method of Multipliers.",
"abstract": "Protein complexes are crucial in improving our understanding of the mechanisms employed by proteins. Various computational algorithms have thus been proposed to detect protein complexes from protein interaction networks. However, given massive protein interactome data obtained by high-throughput technologies, existing algorithms, especially those with additionally consideration of biological information of proteins, either have low efficiency in performing their tasks or suffer from limited effectiveness. For addressing this issue, this work proposes to detect protein complexes from a protein interaction network with high efficiency and effectiveness. To do so, the original detection task is first formulated into an optimization problem according to the intuitive properties of protein complexes. After that, the framework of alternating direction method of multipliers is applied to decompose this optimization problem into several subtasks, which can be subsequently solved in a separate and parallel manner. An algorithm for implementing this solution is then developed. Experimental results on five large protein interaction networks demonstrated that compared to state-of-the-art protein complex detection algorithms, our algorithm outperformed them in terms of both effectiveness and efficiency. Moreover, as number of parallel processes increases, one can expect an even higher computational efficiency for the proposed algorithm with no compromise on effectiveness."
},
{
"pmid": "27771556",
"title": "Weighted edge based clustering to identify protein complexes in protein-protein interaction networks incorporating gene expression profile.",
"abstract": "Protein complex detection from protein-protein interaction (PPI) network has received a lot of focus in recent years. A number of methods identify protein complexes as dense sub-graphs using network information while several other methods detect protein complexes based on topological information. While the methods based on identifying dense sub-graphs are more effective in identifying protein complexes, not all protein complexes have high density. Moreover, existing methods focus more on static PPI networks and usually overlook the dynamic nature of protein complexes. Here, we propose a new method, Weighted Edge based Clustering (WEC), to identify protein complexes based on the weight of the edge between two interacting proteins, where the weight is defined by the edge clustering coefficient and the gene expression correlation between the interacting proteins. Our WEC method is capable of detecting highly inter-connected and co-expressed protein complexes. The experimental results of WEC on three real life data shows that our method can detect protein complexes effectively in comparison with other highly cited existing methods.\n\n\nAVAILABILITY\nThe WEC tool is available at http://agnigarh.tezu.ernet.in/~rosy8/shared.html."
},
{
"pmid": "15180928",
"title": "Protein complex prediction via cost-based clustering.",
"abstract": "MOTIVATION\nUnderstanding principles of cellular organization and function can be enhanced if we detect known and predict still undiscovered protein complexes within the cell's protein-protein interaction (PPI) network. Such predictions may be used as an inexpensive tool to direct biological experiments. The increasing amount of available PPI data necessitates an accurate and scalable approach to protein complex identification.\n\n\nRESULTS\nWe have developed the Restricted Neighborhood Search Clustering Algorithm (RNSC) to efficiently partition networks into clusters using a cost function. We applied this cost-based clustering algorithm to PPI networks of Saccharomyces cerevisiae, Drosophila melanogaster and Caenorhabditis elegans to identify and predict protein complexes. We have determined functional and graph-theoretic properties of true protein complexes from the MIPS database. Based on these properties, we defined filters to distinguish between identified network clusters and true protein complexes.\n\n\nCONCLUSIONS\nOur application of the cost-based clustering algorithm provides an accurate and scalable method of detecting and predicting protein complexes within a PPI network."
},
{
"pmid": "16554755",
"title": "Global landscape of protein complexes in the yeast Saccharomyces cerevisiae.",
"abstract": "Identification of protein-protein interactions often provides insight into protein function, and many cellular processes are performed by stable protein complexes. We used tandem affinity purification to process 4,562 different tagged proteins of the yeast Saccharomyces cerevisiae. Each preparation was analysed by both matrix-assisted laser desorption/ionization-time of flight mass spectrometry and liquid chromatography tandem mass spectrometry to increase coverage and accuracy. Machine learning was used to integrate the mass spectrometry scores and assign probabilities to the protein-protein interactions. Among 4,087 different proteins identified with high confidence by mass spectrometry from 2,357 successful purifications, our core data set (median precision of 0.69) comprises 7,123 protein-protein interactions involving 2,708 proteins. A Markov clustering algorithm organized these interactions into 547 protein complexes averaging 4.9 subunits per complex, about half of them absent from the MIPS database, as well as 429 additional interactions between pairs of complexes. The data (all of which are available online) will help future studies on individual proteins as well as functional genomics and systems biology."
},
{
"pmid": "26319550",
"title": "CAMWI: Detecting protein complexes using weighted clustering coefficient and weighted density.",
"abstract": "Detection of protein complexes is very important to understand the principles of cellular organization and function. Recently, large protein-protein interactions (PPIs) networks have become available using high-throughput experimental techniques. These networks make it possible to develop computational methods for protein complex detection. Most of the current methods rely on the assumption that protein complex as a module has dense structure. However complexes have core-attachment structure and proteins in a complex core share a high degree of functional similarity, so it expects that a core has high weighted density. In this paper we present a Core-Attachment based method for protein complex detection from Weighted PPI Interactions using clustering coefficient and weighted density. Experimental results show that the proposed method, CAMWI improves the accuracy of protein complex detection."
},
{
"pmid": "22621308",
"title": "Towards the identification of protein complexes and functional modules by integrating PPI network and gene expression data.",
"abstract": "BACKGROUND\nIdentification of protein complexes and functional modules from protein-protein interaction (PPI) networks is crucial to understanding the principles of cellular organization and predicting protein functions. In the past few years, many computational methods have been proposed. However, most of them considered the PPI networks as static graphs and overlooked the dynamics inherent within these networks. Moreover, few of them can distinguish between protein complexes and functional modules.\n\n\nRESULTS\nIn this paper, a new framework is proposed to distinguish between protein complexes and functional modules by integrating gene expression data into protein-protein interaction (PPI) data. A series of time-sequenced subnetworks (TSNs) is constructed according to the time that the interactions were activated. The algorithm TSN-PCD was then developed to identify protein complexes from these TSNs. As protein complexes are significantly related to functional modules, a new algorithm DFM-CIN is proposed to discover functional modules based on the identified complexes. The experimental results show that the combination of temporal gene expression data with PPI data contributes to identifying protein complexes more precisely. A quantitative comparison based on f-measure reveals that our algorithm TSN-PCD outperforms the other previous protein complex discovery algorithms. Furthermore, we evaluate the identified functional modules by using \"Biological Process\" annotated in GO (Gene Ontology). The validation shows that the identified functional modules are statistically significant in terms of \"Biological Process\". More importantly, the relationship between protein complexes and functional modules are studied.\n\n\nCONCLUSIONS\nThe proposed framework based on the integration of PPI data and gene expression data makes it possible to identify protein complexes and functional modules more effectively. Moveover, the proposed new framework and algorithms can distinguish between protein complexes and functional modules. Our findings suggest that functional modules are closely related to protein complexes and a functional module may consist of one or multiple protein complexes. The program is available at http://netlab.csu.edu.cn/bioinfomatics/limin/DFM-CIN/index.html."
},
{
"pmid": "34512712",
"title": "Identifying Protein Complexes With Clear Module Structure Using Pairwise Constraints in Protein Interaction Networks.",
"abstract": "The protein-protein interaction (PPI) networks can be regarded as powerful platforms to elucidate the principle and mechanism of cellular organization. Uncovering protein complexes from PPI networks will lead to a better understanding of the science of biological function in cellular systems. In recent decades, numerous computational algorithms have been developed to identify protein complexes. However, the majority of them primarily concern the topological structure of PPI networks and lack of the consideration for the native organized structure among protein complexes. The PPI networks generated by high-throughput technology include a fraction of false protein interactions which make it difficult to identify protein complexes efficiently. To tackle these challenges, we propose a novel semi-supervised protein complex detection model based on non-negative matrix tri-factorization, which not only considers topological structure of a PPI network but also makes full use of available high quality known protein pairs with must-link constraints. We propose non-overlapping (NSSNMTF) and overlapping (OSSNMTF) protein complex detection algorithms to identify the significant protein complexes with clear module structures from PPI networks. In addition, the proposed two protein complex detection algorithms outperform a diverse range of state-of-the-art protein complex identification algorithms on both synthetic networks and human related PPI networks."
},
{
"pmid": "19435747",
"title": "Complex discovery from weighted PPI networks.",
"abstract": "MOTIVATION\nProtein complexes are important for understanding principles of cellular organization and function. High-throughput experimental techniques have produced a large amount of protein interactions, which makes it possible to predict protein complexes from protein-protein interaction (PPI) networks. However, protein interaction data produced by high-throughput experiments are often associated with high false positive and false negative rates, which makes it difficult to predict complexes accurately.\n\n\nRESULTS\nWe use an iterative scoring method to assign weight to protein pairs, and the weight of a protein pair indicates the reliability of the interaction between the two proteins. We develop an algorithm called CMC (clustering-based on maximal cliques) to discover complexes from the weighted PPI network. CMC first generates all the maximal cliques from the PPI networks, and then removes or merges highly overlapped clusters based on their interconnectivity. We studied the performance of CMC and the impact of our iterative scoring method on CMC. Our results show that: (i) the iterative scoring method can improve the performance of CMC considerably; (ii) the iterative scoring method can effectively reduce the impact of random noise on the performance of CMC; (iii) the iterative scoring method can also improve the performance of other protein complex prediction methods and reduce the impact of random noise on their performance; and (iv) CMC is an effective approach to protein complex prediction from protein interaction network.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "26868667",
"title": "Using contrast patterns between true complexes and random subgraphs in PPI networks to predict unknown protein complexes.",
"abstract": "Most protein complex detection methods utilize unsupervised techniques to cluster densely connected nodes in a protein-protein interaction (PPI) network, in spite of the fact that many true complexes are not dense subgraphs. Supervised methods have been proposed recently, but they do not answer why a group of proteins are predicted as a complex, and they have not investigated how to detect new complexes of one species by training the model on the PPI data of another species. We propose a novel supervised method to address these issues. The key idea is to discover emerging patterns (EPs), a type of contrast pattern, which can clearly distinguish true complexes from random subgraphs in a PPI network. An integrative score of EPs is defined to measure how likely a subgraph of proteins can form a complex. New complexes thus can grow from our seed proteins by iteratively updating this score. The performance of our method is tested on eight benchmark PPI datasets and compared with seven unsupervised methods, two supervised and one semi-supervised methods under five standards to assess the quality of the predicted complexes. The results show that in most cases our method achieved a better performance, sometimes significantly."
},
{
"pmid": "30241459",
"title": "Identifying protein complexes based on node embeddings obtained from protein-protein interaction networks.",
"abstract": "BACKGROUND\nProtein complexes are one of the keys to deciphering the behavior of a cell system. During the past decade, most computational approaches used to identify protein complexes have been based on discovering densely connected subgraphs in protein-protein interaction (PPI) networks. However, many true complexes are not dense subgraphs and these approaches show limited performances for detecting protein complexes from PPI networks.\n\n\nRESULTS\nTo solve these problems, in this paper we propose a supervised learning method based on network node embeddings which utilizes the informative properties of known complexes to guide the search process for new protein complexes. First, node embeddings are obtained from human protein interaction network. Then the protein interactions are weighted through the similarities between node embeddings. After that, the supervised learning method is used to detect protein complexes. Then the random forest model is used to filter the candidate complexes in order to obtain the final predicted complexes. Experimental results on real human and yeast protein interaction networks show that our method effectively improves the performance for protein complex detection.\n\n\nCONCLUSIONS\nWe provided a new method for identifying protein complexes from human and yeast protein interaction networks, which has great potential to benefit the field of protein complex detection."
},
{
"pmid": "28130237",
"title": "Identification of protein complexes by integrating multiple alignment of protein interaction networks.",
"abstract": "MOTIVATION\nProtein complexes are one of the keys to studying the behavior of a cell system. Many biological functions are carried out by protein complexes. During the past decade, the main strategy used to identify protein complexes from high-throughput network data has been to extract near-cliques or highly dense subgraphs from a single protein-protein interaction (PPI) network. Although experimental PPI data have increased significantly over recent years, most PPI networks still have many false positive interactions and false negative edge loss due to the limitations of high-throughput experiments. In particular, the false negative errors restrict the search space of such conventional protein complex identification approaches. Thus, it has become one of the most challenging tasks in systems biology to automatically identify protein complexes.\n\n\nRESULTS\nIn this study, we propose a new algorithm, NEOComplex ( NE CC- and O rtholog-based Complex identification by multiple network alignment), which integrates functional orthology information that can be obtained from different types of multiple network alignment (MNA) approaches to expand the search space of protein complex detection. As part of our approach, we also define a new edge clustering coefficient (NECC) to assign weights to interaction edges in PPI networks so that protein complexes can be identified more accurately. The NECC is based on the intuition that there is functional information captured in the common neighbors of the common neighbors as well. Our results show that our algorithm outperforms well-known protein complex identification tools in a balance between precision and recall on three eukaryotic species: human, yeast, and fly. As a result of MNAs of the species, the proposed approach can tolerate edge loss in PPI networks and even discover sparse protein complexes which have traditionally been a challenge to predict.\n\n\nAVAILABILITY AND IMPLEMENTATION\nhttp://acolab.ie.nthu.edu.tw/bionetwork/NEOComplex.\n\n\nCONTACT\[email protected] or [email protected].\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "14681354",
"title": "MIPS: analysis and annotation of proteins from whole genomes.",
"abstract": "The Munich Information Center for Protein Sequences (MIPS-GSF), Neuherberg, Germany, provides protein sequence-related information based on whole-genome analysis. The main focus of the work is directed toward the systematic organization of sequence-related attributes as gathered by a variety of algorithms, primary information from experimental data together with information compiled from the scientific literature. MIPS maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the database of complete cDNAs (German Human Genome Project, NGFN), the database of mammalian protein-protein interactions (MPPI), the database of FASTA homologies (SIMAP), and the interface for the fast retrieval of protein-associated information (QUIPOS). The Arabidopsis thaliana database, the rice database, the plant EST databases (MATDB, MOsDB, SPUTNIK), as well as the databases for the comprehensive set of genomes (PEDANT genomes) are described elsewhere in the 2003 and 2004 NAR database issues, respectively. All databases described, and the detailed descriptions of our projects can be accessed through the MIPS web server (http://mips.gsf.de)."
},
{
"pmid": "22426491",
"title": "Detecting overlapping protein complexes in protein-protein interaction networks.",
"abstract": "We introduce clustering with overlapping neighborhood expansion (ClusterONE), a method for detecting potentially overlapping protein complexes from protein-protein interaction data. ClusterONE-derived complexes for several yeast data sets showed better correspondence with reference complexes in the Munich Information Center for Protein Sequence (MIPS) catalog and complexes derived from the Saccharomyces Genome Database (SGD) than the results of seven popular methods. The results also showed a high extent of functional homogeneity."
},
{
"pmid": "19095691",
"title": "Up-to-date catalogues of yeast protein complexes.",
"abstract": "Gold standard datasets on protein complexes are key to inferring and validating protein-protein interactions. Despite much progress in characterizing protein complexes in the yeast Saccharomyces cerevisiae, numerous researchers still use as reference the manually curated complexes catalogued by the Munich Information Center of Protein Sequences database. Although this catalogue has served the community extremely well, it no longer reflects the current state of knowledge. Here, we report two catalogues of yeast protein complexes as results of systematic curation efforts. The first one, denoted as CYC2008, is a comprehensive catalogue of 408 manually curated heteromeric protein complexes reliably backed by small-scale experiments reported in the current literature. This catalogue represents an up-to-date reference set for biologists interested in discovering protein interactions and protein complexes. The second catalogue, denoted as YHTP2008, comprises 400 high-throughput complexes annotated with current literature evidence. Among them, 262 correspond, at least partially, to CYC2008 complexes. Evidence for interacting subunits is collected for 68 complexes that have only partial or no overlap with CYC2008 complexes, whereas no literature evidence was found for 100 complexes. Some of these partially supported and as yet unsupported complexes may be interesting candidates for experimental follow up. Both catalogues are freely available at: http://wodaklab.org/cyc2008/."
},
{
"pmid": "18586722",
"title": "Protein complex identification by supervised graph local clustering.",
"abstract": "MOTIVATION\nProtein complexes integrate multiple gene products to coordinate many biological functions. Given a graph representing pairwise protein interaction data one can search for subgraphs representing protein complexes. Previous methods for performing such search relied on the assumption that complexes form a clique in that graph. While this assumption is true for some complexes, it does not hold for many others. New algorithms are required in order to recover complexes with other types of topological structure.\n\n\nRESULTS\nWe present an algorithm for inferring protein complexes from weighted interaction graphs. By using graph topological patterns and biological properties as features, we model each complex subgraph by a probabilistic Bayesian network (BN). We use a training set of known complexes to learn the parameters of this BN model. The log-likelihood ratio derived from the BN is then used to score subgraphs in the protein interaction graph and identify new complexes. We applied our method to protein interaction data in yeast. As we show our algorithm achieved a considerable improvement over clique based algorithms in terms of its ability to recover known complexes. We discuss some of the new complexes predicted by our algorithm and determine that they likely represent true complexes.\n\n\nAVAILABILITY\nMatlab implementation is available on the supporting website: www.cs.cmu.edu/~qyj/SuperComplex."
},
{
"pmid": "22165896",
"title": "Protein complex detection with semi-supervised learning in protein interaction networks.",
"abstract": "BACKGROUND\nProtein-protein interactions (PPIs) play fundamental roles in nearly all biological processes. The systematic analysis of PPI networks can enable a great understanding of cellular organization, processes and function. In this paper, we investigate the problem of protein complex detection from noisy protein interaction data, i.e., finding the subsets of proteins that are closely coupled via protein interactions. However, protein complexes are likely to overlap and the interaction data are very noisy. It is a great challenge to effectively analyze the massive data for biologically meaningful protein complex detection.\n\n\nRESULTS\nMany people try to solve the problem by using the traditional unsupervised graph clustering methods. Here, we stand from a different point of view, redefining the properties and features for protein complexes and designing a \"semi-supervised\" method to analyze the problem. In this paper, we utilize the neural network with the \"semi-supervised\" mechanism to detect the protein complexes. By retraining the neural network model recursively, we could find the optimized parameters for the model, in such a way we can successfully detect the protein complexes. The comparison results show that our algorithm could identify protein complexes that are missed by other methods. We also have shown that our method achieve better precision and recall rates for the identified protein complexes than other existing methods. In addition, the framework we proposed is easy to be extended in the future.\n\n\nCONCLUSIONS\nUsing a weighted network to represent the protein interaction network is more appropriate than using a traditional unweighted network. In addition, integrating biological features and topological features to represent protein complexes is more meaningful than using dense subgraphs. Last, the \"semi-supervised\" learning model is a promising model to detect protein complexes with more biological and topological features available."
},
{
"pmid": "19770263",
"title": "How and when should interactome-derived clusters be used to predict functional modules and protein function?",
"abstract": "MOTIVATION\nClustering of protein-protein interaction networks is one of the most common approaches for predicting functional modules, protein complexes and protein functions. But, how well does clustering perform at these tasks?\n\n\nRESULTS\nWe develop a general framework to assess how well computationally derived clusters in physical interactomes overlap functional modules derived via the Gene Ontology (GO). Using this framework, we evaluate six diverse network clustering algorithms using Saccharomyces cerevisiae and show that (i) the performances of these algorithms can differ substantially when run on the same network and (ii) their relative performances change depending upon the topological characteristics of the network under consideration. For the specific task of function prediction in S.cerevisiae, we demonstrate that, surprisingly, a simple non-clustering guilt-by-association approach outperforms widely used clustering-based approaches that annotate a protein with the overrepresented biological process and cellular component terms in its cluster; this is true over the range of clustering algorithms considered. Further analysis parameterizes performance based on the number of annotated proteins, and suggests when clustering approaches should be used for interactome functional analyses. Overall our results suggest a re-examination of when and how clustering approaches should be applied to physical interactomes, and establishes guidelines by which novel clustering approaches for biological networks should be justified and evaluated with respect to functional analysis.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "14517352",
"title": "Protein complexes and functional modules in molecular networks.",
"abstract": "Proteins, nucleic acids, and small molecules form a dense network of molecular interactions in a cell. Molecules are nodes of this network, and the interactions between them are edges. The architecture of molecular networks can reveal important principles of cellular organization and function, similarly to the way that protein structure tells us about the function and organization of a protein. Computational analysis of molecular networks has been primarily concerned with node degree [Wagner, A. & Fell, D. A. (2001) Proc. R. Soc. London Ser. B 268, 1803-1810; Jeong, H., Tombor, B., Albert, R., Oltvai, Z. N. & Barabasi, A. L. (2000) Nature 407, 651-654] or degree correlation [Maslov, S. & Sneppen, K. (2002) Science 296, 910-913], and hence focused on single/two-body properties of these networks. Here, by analyzing the multibody structure of the network of protein-protein interactions, we discovered molecular modules that are densely connected within themselves but sparsely connected with the rest of the network. Comparison with experimental data and functional annotation of genes showed two types of modules: (i) protein complexes (splicing machinery, transcription factors, etc.) and (ii) dynamic functional units (signaling cascades, cell-cycle regulation, etc.). Discovered modules are highly statistically significant, as is evident from comparison with random graphs, and are robust to noise in the data. Our results provide strong support for the network modularity principle introduced by Hartwell et al. [Hartwell, L. H., Hopfield, J. J., Leibler, S. & Murray, A. W. (1999) Nature 402, C47-C52], suggesting that found modules constitute the \"building blocks\" of molecular networks."
},
{
"pmid": "29439025",
"title": "Thermal proximity coaggregation for system-wide profiling of protein complex dynamics in cells.",
"abstract": "Proteins differentially interact with each other across cellular states and conditions, but an efficient proteome-wide strategy to monitor them is lacking. We report the application of thermal proximity coaggregation (TPCA) for high-throughput intracellular monitoring of protein complex dynamics. Significant TPCA signatures observed among well-validated protein-protein interactions correlate positively with interaction stoichiometry and are statistically observable in more than 350 annotated human protein complexes. Using TPCA, we identified many complexes without detectable differential protein expression, including chromatin-associated complexes, modulated in S phase of the cell cycle. Comparison of six cell lines by TPCA revealed cell-specific interactions even in fundamental cellular processes. TPCA constitutes an approach for system-wide studies of protein complexes in nonengineered cells and tissues and might be used to identify protein complexes that are modulated in diseases."
},
{
"pmid": "23225755",
"title": "Construction and application of dynamic protein interaction network based on time course gene expression data.",
"abstract": "In recent years, researchers have tried to inject dynamic information into static protein interaction networks (PINs). The paper first proposes a three-sigma method to identify active time points of each protein in a cellular cycle, where three-sigma principle is used to compute an active threshold for each gene according to the characteristics of its expression curve. Then a dynamic protein interaction network (DPIN) is constructed, which includes the dynamic changes of protein interactions. To validate the efficiency of DPIN, MCL, CPM, and core attachment algorithms are applied on two different DPINs, the static PIN and the time course PIN (TC-PIN) to detect protein complexes. The performance of each algorithm on DPINs outperforms those on other networks in terms of matching with known complexes, sensitivity, specificity, f-measure, and accuracy. Furthermore, the statistics of three-sigma principle show that 23-45% proteins are active at a time point and most proteins are active in about half of cellular cycle. In addition, we find 94% essential proteins are in the group of proteins that are active at equal or great than 12 timepoints of GSE4987, which indicates the potential existence of feedback mechanisms that can stabilize the expression level of essential proteins and might provide a new insight for predicting essential proteins from dynamic protein networks."
},
{
"pmid": "31390979",
"title": "A seed-extended algorithm for detecting protein complexes based on density and modularity with topological structure and GO annotations.",
"abstract": "BACKGROUND\nThe detection of protein complexes is of great significance for researching mechanisms underlying complex diseases and developing new drugs. Thus, various computational algorithms have been proposed for protein complex detection. However, most of these methods are based on only topological information and are sensitive to the reliability of interactions. As a result, their performance is affected by false-positive interactions in PPINs. Moreover, these methods consider only density and modularity and ignore protein complexes with various densities and modularities.\n\n\nRESULTS\nTo address these challenges, we propose an algorithm to exploit protein complexes in PPINs by a Seed-Extended algorithm based on Density and Modularity with Topological structure and GO annotations, named SE-DMTG to improve the accuracy of protein complex detection. First, we use common neighbors and GO annotations to construct a weighted PPIN. Second, we define a new seed selection strategy to select seed nodes. Third, we design a new fitness function to detect protein complexes with various densities and modularities. We compare the performance of SE-DMTG with that of thirteen state-of-the-art algorithms on several real datasets.\n\n\nCONCLUSION\nThe experimental results show that SE-DMTG not only outperforms some classical algorithms in yeast PPINs in terms of the F-measure and Jaccard but also achieves an ideal performance in terms of functional enrichment. Furthermore, we apply SE-DMTG to PPINs of several other species and demonstrate the outstanding accuracy and matching ratio in detecting protein complexes compared with other algorithms."
},
{
"pmid": "34970305",
"title": "An Improved Memetic Algorithm for Detecting Protein Complexes in Protein Interaction Networks.",
"abstract": "Identifying the protein complexes in protein-protein interaction (PPI) networks is essential for understanding cellular organization and biological processes. To address the high false positive/negative rates of PPI networks and detect protein complexes with multiple topological structures, we developed a novel improved memetic algorithm (IMA). IMA first combines the topological and biological properties to obtain a weighted PPI network with reduced noise. Next, it integrates various clustering results to construct the initial populations. Furthermore, a fitness function is designed based on the five topological properties of the protein complexes. Finally, we describe the rest of our IMA method, which primarily consists of four steps: selection operator, recombination operator, local optimization strategy, and updating the population operator. In particular, IMA is a combination of genetic algorithm and a local optimization strategy, which has a strong global search ability, and searches for local optimal solutions effectively. The experimental results demonstrate that IMA performs much better than the base methods and existing state-of-the-art techniques. The source code and datasets of the IMA can be found at https://github.com/RongquanWang/IMA."
},
{
"pmid": "34966415",
"title": "A New Method for Recognizing Protein Complexes Based on Protein Interaction Networks and GO Terms.",
"abstract": "Motivation: A protein complex is the combination of proteins which interact with each other. Protein-protein interaction (PPI) networks are composed of multiple protein complexes. It is very difficult to recognize protein complexes from PPI data due to the noise of PPI. Results: We proposed a new method, called Topology and Semantic Similarity Network (TSSN), based on topological structure characteristics and biological characteristics to construct the PPI. Experiments show that the TSSN can filter the noise of PPI data. We proposed a new algorithm, called Neighbor Nodes of Proteins (NNP), for recognizing protein complexes by considering their topology information. Experiments show that the algorithm can identify more protein complexes and more accurately. The recognition of protein complexes is vital in research on evolution analysis. Availability and implementation: https://github.com/bioinformatical-code/NNP."
},
{
"pmid": "19486541",
"title": "A core-attachment based method to detect protein complexes in PPI networks.",
"abstract": "BACKGROUND\nHow to detect protein complexes is an important and challenging task in post genomic era. As the increasing amount of protein-protein interaction (PPI) data are available, we are able to identify protein complexes from PPI networks. However, most of current studies detect protein complexes based solely on the observation that dense regions in PPI networks may correspond to protein complexes, but fail to consider the inherent organization within protein complexes.\n\n\nRESULTS\nTo provide insights into the organization of protein complexes, this paper presents a novel core-attachment based method (COACH) which detects protein complexes in two stages. It first detects protein-complex cores as the \"hearts\" of protein complexes and then includes attachments into these cores to form biologically meaningful structures. We evaluate and analyze our predicted protein complexes from two aspects. First, we perform a comprehensive comparison between our proposed method and existing techniques by comparing the predicted complexes against benchmark complexes. Second, we also validate the core-attachment structures using various biological evidence and knowledge.\n\n\nCONCLUSION\nOur proposed COACH method has been applied on two different yeast PPI networks and the experimental results show that COACH performs significantly better than the state-of-the-art techniques. In addition, the identified complexes with core-attachment structures are demonstrated to match very well with existing biological knowledge and thus provide more insights for future biological study."
},
{
"pmid": "11752321",
"title": "DIP, the Database of Interacting Proteins: a research tool for studying cellular networks of protein interactions.",
"abstract": "The Database of Interacting Proteins (DIP: http://dip.doe-mbi.ucla.edu) is a database that documents experimentally determined protein-protein interactions. It provides the scientific community with an integrated set of tools for browsing and extracting information about protein interaction networks. As of September 2001, the DIP catalogs approximately 11 000 unique interactions among 5900 proteins from >80 organisms; the vast majority from yeast, Helicobacter pylori and human. Tools have been developed that allow users to analyze, visualize and integrate their own experimental data with the information about protein-protein interactions available in the DIP database."
},
{
"pmid": "29072136",
"title": "An effective approach to detecting both small and large complexes from protein-protein interaction networks.",
"abstract": "BACKGROUND\nPredicting protein complexes from protein-protein interaction (PPI) networks has been studied for decade. Various methods have been proposed to address some challenging issues of this problem, including overlapping clusters, high false positive/negative rates of PPI data and diverse complex structures. It is well known that most current methods can detect effectively only complexes of size ≥3, which account for only about half of the total existing complexes. Recently, a method was proposed specifically for finding small complexes (size = 2 and 3) from PPI networks. However, up to now there is no effective approach that can predict both small (size ≤ 3) and large (size >3) complexes from PPI networks.\n\n\nRESULTS\nIn this paper, we propose a novel method, called CPredictor2.0, that can detect both small and large complexes under a unified framework. Concretely, we first group proteins of similar functions. Then, the Markov clustering algorithm is employed to discover clusters in each group. Finally, we merge all discovered clusters that overlap with each other to a certain degree, and the merged clusters as well as the remaining clusters constitute the set of detected complexes. Extensive experiments have shown that the new method can more effectively predict both small and large complexes, in comparison with the state-of-the-art methods.\n\n\nCONCLUSIONS\nThe proposed method, CPredictor2.0, can be applied to accurately predict both small and large protein complexes."
},
{
"pmid": "30736004",
"title": "Accurately Detecting Protein Complexes by Graph Embedding and Combining Functions with Interactions.",
"abstract": "Identifying protein complexes is helpful for understanding cellular functions and designing drugs. In the last decades, many computational methods have been proposed based on detecting dense subgraphs or subnetworks in Protein-Protein Interaction Networks (PINs). However, the high rate of false positive/negative interactions in PINs prevents from the achievement of satisfactory detection results directly from PINs, because most of such existing methods exploit mainly topological information to do network partitioning. In this paper, we propose a new approach for protein complex detection by merging topological information of PINs and functional information of proteins. We first split proteins to a number of protein groups from the perspective of protein functions by using FunCat data. Then, for each of the resulting protein groups, we calculate two protein-protein similarity matrices: one is computed by using graph embedding over a PIN, the other is by using GO terms, and combine these two matrices to get an integrated similarity matrix. Following that, we cluster the proteins in each group based on the corresponding integrated similarity matrix, and obtain a number of small protein clusters. We map these clusters of proteins onto the PIN, and get a number of connected subgraphs. After a round of merging of overlapping subgraphs, finally we get the detected complexes. We conduct empirical evaluation on four PPI datasets (Collins, Gavin, Krogan, and Wiphi) with two complex benchmarks (CYC2008 and MIPS). Experimental results show that our method performs better than the state-of-the-art methods."
},
{
"pmid": "22000801",
"title": "A degree-distribution based hierarchical agglomerative clustering algorithm for protein complexes identification.",
"abstract": "Since cellular functionality is typically envisioned as having a hierarchical structure, we propose a framework to identify modules (or clusters) within protein-protein interaction (PPI) networks in this paper. Based on the within-module and between-module edges of subgraphs and degree distribution, we present a formal module definition in PPI networks. Using the new module definition, an effective quantitative measure is introduced for the evaluation of the partition of PPI networks. Because of the hierarchical nature of functional modules, a hierarchical agglomerative clustering algorithm is developed based on the new measure in order to solve the problem of complexes detection within PPI networks. We use gold standard sets of protein complexes to validate the biological significance of predicted complexes. A comprehensive comparison is performed between our method and other four representative methods. The results show that our algorithm finds more protein complexes with high biological significance and a significant improvement. Furthermore, the predicted complexes by our method, whether dense or sparse, match well with known biological characteristics."
},
{
"pmid": "23688127",
"title": "Protein complex detection using interaction reliability assessment and weighted clustering coefficient.",
"abstract": "BACKGROUND\nPredicting protein complexes from protein-protein interaction data is becoming a fundamental problem in computational biology. The identification and characterization of protein complexes implicated are crucial to the understanding of the molecular events under normal and abnormal physiological conditions. On the other hand, large datasets of experimentally detected protein-protein interactions were determined using High-throughput experimental techniques. However, experimental data is usually liable to contain a large number of spurious interactions. Therefore, it is essential to validate these interactions before exploiting them to predict protein complexes.\n\n\nRESULTS\nIn this paper, we propose a novel graph mining algorithm (PEWCC) to identify such protein complexes. Firstly, the algorithm assesses the reliability of the interaction data, then predicts protein complexes based on the concept of weighted clustering coefficient. To demonstrate the effectiveness of the proposed method, the performance of PEWCC was compared to several methods. PEWCC was able to detect more matched complexes than any of the state-of-the-art methods with higher quality scores.\n\n\nCONCLUSIONS\nThe higher accuracy achieved by PEWCC in detecting protein complexes is a valid argument in favor of the proposed method. The datasets and programs are freely available at http://faculty.uaeu.ac.ae/nzaki/Research.htm."
},
{
"pmid": "31376777",
"title": "A method for identifying protein complexes with the features of joint co-localization and joint co-expression in static PPI networks.",
"abstract": "Identifying protein complexes in static protein-protein interaction (PPI) networks is essential for understanding the underlying mechanism of biological processes. Proteins in a complex are co-localized at the same place and co-expressed at the same time. We propose a novel method to identify protein complexes with the features of joint co-localization and joint co-expression in static PPI networks. To achieve this goal, we define a joint localization vector to construct a joint co-localization criterion of a protein group, and define a joint gene expression to construct a joint co-expression criterion of a gene group. Moreover, the functional similarity of proteins in a complex is an important characteristic. Thus, we use the CC-based, MF-based, and BP-based protein similarities to devise functional similarity criterion to determine whether a protein is functionally similar to a protein cluster. Based on the core-attachment structure and following to seed expanding strategy, we use four types of biological data including PPI data with reliability score, protein localization data, gene expression data, and gene ontology annotations, to identify protein complexes. The experimental results on yeast data show that comparing with existing methods our proposed method can efficiently and exactly identify more protein complexes, especially more protein complexes of sizes from 2 to 6. Furthermore, the enrichment analysis demonstrates that the protein complexes identified by our method have significant biological meaning."
},
{
"pmid": "24928559",
"title": "Detecting overlapping protein complexes based on a generative model with functional and topological properties.",
"abstract": "BACKGROUND\nIdentification of protein complexes can help us get a better understanding of cellular mechanism. With the increasing availability of large-scale protein-protein interaction (PPI) data, numerous computational approaches have been proposed to detect complexes from the PPI networks. However, most of the current approaches do not consider overlaps among complexes or functional annotation information of individual proteins. Therefore, they might not be able to reflect the biological reality faithfully or make full use of the available domain-specific knowledge.\n\n\nRESULTS\nIn this paper, we develop a Generative Model with Functional and Topological Properties (GMFTP) to describe the generative processes of the PPI network and the functional profile. The model provides a working mechanism for capturing the interaction structures and the functional patterns of proteins. By combining the functional and topological properties, we formulate the problem of identifying protein complexes as that of detecting a group of proteins which frequently interact with each other in the PPI network and have similar annotation patterns in the functional profile. Using the idea of link communities, our method naturally deals with overlaps among complexes. The benefits brought by the functional properties are demonstrated by real data analysis. The results evaluated using four criteria with respect to two gold standards show that GMFTP has a competitive performance over the state-of-the-art approaches. The effectiveness of detecting overlapping complexes is also demonstrated by analyzing the topological and functional features of multi- and mono-group proteins.\n\n\nCONCLUSIONS\nBased on the results obtained in this study, GMFTP presents to be a powerful approach for the identification of overlapping protein complexes using both the PPI network and the functional profile. The software can be downloaded from http://mail.sysu.edu.cn/home/[email protected]/dai/others/GMFTP.zip."
},
{
"pmid": "27454775",
"title": "A method for predicting protein complex in dynamic PPI networks.",
"abstract": "BACKGROUND\nAccurate determination of protein complexes has become a key task of system biology for revealing cellular organization and function. Up to now, the protein complex prediction methods are mostly focused on static protein protein interaction (PPI) networks. However, cellular systems are highly dynamic and responsive to cues from the environment. The shift from static PPI networks to dynamic PPI networks is essential to accurately predict protein complex.\n\n\nRESULTS\nThe gene expression data contains crucial dynamic information of proteins and PPIs, along with high-throughput experimental PPI data, are valuable for protein complex prediction. Firstly, we exploit gene expression data to calculate the active time point and the active probability of each protein and PPI. The dynamic active information is integrated into high-throughput PPI data to construct dynamic PPI networks. Secondly, a novel method for predicting protein complexes from the dynamic PPI networks is proposed based on core-attachment structural feature. Our method can effectively exploit not only the dynamic active information but also the topology structure information based on the dynamic PPI networks.\n\n\nCONCLUSIONS\nWe construct four dynamic PPI networks, and accurately predict many well-characterized protein complexes. The experimental results show that (i) the dynamic active information significantly improves the performance of protein complex prediction; (ii) our method can effectively make good use of both the dynamic active information and the topology structure information of dynamic PPI networks to achieve state-of-the-art protein complex prediction capabilities."
}
] |
PLoS Computational Biology | 35226665 | PMC8912900 | 10.1371/journal.pcbi.1009912 | Objective quantification of nerves in immunohistochemistry specimens of thyroid cancer utilising deep learning | Accurate quantification of nerves in cancer specimens is important to understand cancer behaviour. Typically, nerves are manually detected and counted in digitised images of thin tissue sections from excised tumours using immunohistochemistry. However, the images are of a large size with nerves having substantial variation in morphology that renders accurate and objective quantification difficult using existing manual and automated counting techniques. Manual counting is precise, but time-consuming, susceptible to inconsistency and has a high rate of false negatives. Existing automated techniques using digitised tissue sections and colour filters are sensitive; however, they have a high rate of false positives. In this paper we develop a new automated nerve detection approach, based on a deep learning model with an augmented classification structure. This approach involves pre-processing to extract the image patches for the deep learning model, followed by pixel-level nerve detection utilising the proposed deep learning model. Outcomes assessed were a) sensitivity of the model in detecting manually identified nerves (expert annotations), and b) the precision of additional model-detected nerves. The proposed deep learning model based approach results in a sensitivity of 89% and a precision of 75%. The code and pre-trained model are publicly available at https://github.com/IA92/Automated_Nerves_Quantification. | 2 Related work Many CNN based approaches have been applied to object detection tasks, for example, object detection in photographs [9–11], organ detection in medical images [12, 13] or mitosis, cytoplasm and nuclei detection in stained whole slide images (WSIs) [14–16]. However, to the best of our knowledge, no CNN based approach has been applied specifically to the nerve detection and quantification problem. In this section we present the rationale for our approach based on object detection problems that are similar to the nerve detection problem. 2.1 Colour thresholding In image processing it is typical to convert an image from the red, green and blue (RGB) colour space to the hue, saturation and value (HSV) colour space for the purpose of colour image segmentation and/or thresholding [17]. This is because the HSV colour space organises colour in a similar way to the perception of the human eye [17], in that luma/intensity information is separated from chroma/colour information in the HSV colour space [18]. This makes colour range definition more straightforward in the HSV colour space than in the RGB colour space. To perform colour thresholding (i.e. filtering) on a WSI in a typical image processing program, e.g. ImageJ [19], an expert takes a sample of the target instances to initialise the colour filter range in the HSV colour space. Then, the expert adjusts the threshold limit manually until the desired segmentation output is obtained.
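To make the colour-thresholding workflow of section 2.1 concrete, the sketch below shows how such a filter could be implemented with OpenCV in Python. It is a minimal illustration only: the HSV range (chosen to roughly cover brown staining), the morphological clean-up and the placeholder file name are assumptions made for this example, not the settings used in this work.

```python
# Minimal sketch of HSV colour thresholding on one image patch (illustrative only).
import cv2
import numpy as np

def threshold_patch_hsv(patch_bgr: np.ndarray,
                        lower_hsv=(5, 50, 50),      # assumed lower bound for brown staining
                        upper_hsv=(30, 255, 200)):  # assumed upper bound for brown staining
    """Return a binary mask of pixels whose HSV values fall inside the given range."""
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)        # separate intensity (V) from chroma (H, S)
    mask = cv2.inRange(hsv,
                       np.array(lower_hsv, dtype=np.uint8),
                       np.array(upper_hsv, dtype=np.uint8))  # 255 inside the range, 0 outside
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # suppress isolated noisy pixels

# Example usage on one patch (file name is a placeholder):
# patch = cv2.imread("patch.png")
# mask = threshold_patch_hsv(patch)
# n_labels, _ = cv2.connectedComponents(mask)
# print("candidate stained objects:", n_labels - 1)          # subtract the background label
```

In practice the lower and upper HSV bounds would be initialised from expert-sampled pixels and adjusted iteratively until the segmentation looks acceptable, exactly as described for ImageJ above.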
2.2 Object detection and localisation in a WSI In computer vision, an object quantification task is usually formulated as an object detection or localisation task [20, 21]. Some of the most successful approaches to object detection and localisation rely on a multi-scale sliding window (i.e. exhaustive search) [22–24], selective search [9, 10, 25, 26] or deep learning models [9–11, 22–26]. Deep learning models such as U-Net and FCN have been shown to be successful in various WSI segmentation applications [27], with U-Net proven to be superior [28]. U-Net has also been shown to outperform human experts for lymphocyte detection in immunohistochemically stained tissue sections of breast, colon and prostate cancer [28]. The superiority of U-Net over the FCN has also been demonstrated in the application of renal tissue segmentation [29]. 2.3 Supervised learning To develop a supervised learning model for segmentation, including a CNN, a complete pixel-wise annotated dataset is required as the ideal supervision information [30]. However, as a complete pixel-wise annotated dataset is often unavailable in real-world applications, a basic assumption, e.g. a cluster assumption or a manifold assumption, can be adopted to annotate the non-annotated data for training [30]. It is also crucial to ensure that the data for each class is balanced, i.e. intra- and inter-class data [31, 32]. Where the data is highly skewed with respect to the number of annotations in each class, the training data samples should be carefully selected to ensure that the representations of each class are balanced [32]. Data balancing can be achieved by controlling the proportion of the data samples of each class to be equal [32].
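The balancing step described above can be sketched briefly. The snippet below illustrates equal-proportion undersampling on an assumed two-class patch labelling (background vs. nerve); it is a sketch of the general idea rather than the data-selection code used in this work.

```python
# Minimal sketch of class balancing by undersampling the majority class (illustrative only).
import numpy as np

def balance_by_undersampling(labels: np.ndarray, seed: int = 0) -> np.ndarray:
    """Return indices of a subset in which every class appears equally often."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(labels, return_counts=True)
    n_per_class = counts.min()                          # size of the rarest class
    keep = [rng.choice(np.flatnonzero(labels == c),     # sample each class without replacement
                       size=n_per_class, replace=False)
            for c in classes]
    return np.sort(np.concatenate(keep))

# Toy example with a skewed label vector (0 = background patch, 1 = nerve patch):
labels = np.array([0] * 950 + [1] * 50)
balanced_idx = balance_by_undersampling(labels)
print(np.bincount(labels[balanced_idx]))                # -> [50 50]
```

Oversampling or augmenting the rarer class is an equally common alternative when discarding majority-class samples is undesirable.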
"28292437",
"23846904",
"26009480",
"30776178",
"32001748",
"23711540",
"18436370",
"24902592",
"27295650",
"26158062",
"25966470",
"26153368",
"32377220",
"32129846",
"20634557",
"26353135",
"33049577",
"31476576",
"31499320",
"29203879",
"23370772",
"9474319"
] | [
{
"pmid": "28292437",
"title": "Nerve Dependence: From Regeneration to Cancer.",
"abstract": "Nerve dependence has long been described in animal regeneration, where the outgrowth of axons is necessary to the reconstitution of lost body parts and tissue remodeling in various species. Recent discoveries have demonstrated that denervation can suppress tumor growth and metastasis, pointing to nerve dependence in cancer. Regeneration and cancer share similarities in regard to the stimulatory role of nerves, and there are indications that the stem cell compartment is a preferred target of innervation. Thus, the neurobiology of cancer is an emerging discipline that opens new perspectives in oncology."
},
{
"pmid": "23846904",
"title": "Autonomic nerve development contributes to prostate cancer progression.",
"abstract": "Nerves are a common feature of the microenvironment, but their role in tumor growth and progression remains unclear. We found that the formation of autonomic nerve fibers in the prostate gland regulates prostate cancer development and dissemination in mouse models. The early phases of tumor development were prevented by chemical or surgical sympathectomy and by genetic deletion of stromal β2- and β3-adrenergic receptors. Tumors were also infiltrated by parasympathetic cholinergic fibers that promoted cancer dissemination. Cholinergic-induced tumor invasion and metastasis were inhibited by pharmacological blockade or genetic disruption of the stromal type 1 muscarinic receptor, leading to improved survival of the mice. A retrospective blinded analysis of prostate adenocarcinoma specimens from 43 patients revealed that the densities of sympathetic and parasympathetic nerve fibers in tumor and surrounding normal tissue, respectively, were associated with poor clinical outcomes. These findings may lead to novel therapeutic approaches for prostate cancer."
},
{
"pmid": "26009480",
"title": "Nerve fibers infiltrate the tumor microenvironment and are associated with nerve growth factor production and lymph node invasion in breast cancer.",
"abstract": "Infiltration of the tumor microenvironment by nerve fibers is an understudied aspect of breast carcinogenesis. In this study, the presence of nerve fibers was investigated in a cohort of 369 primary breast cancers (ductal carcinomas in situ, invasive ductal and lobular carcinomas) by immunohistochemistry for the neuronal marker PGP9.5. Isolated nerve fibers (axons) were detected in 28% of invasive ductal carcinomas as compared to only 12% of invasive lobular carcinomas and 8% of ductal carcinomas in situ (p = 0.0003). In invasive breast cancers, the presence of nerve fibers was observed in 15% of lymph node negative tumors and 28% of lymph node positive tumors (p = 0.0031), indicating a relationship with the metastatic potential. In addition, there was an association between the presence of nerve fibers and the expression of nerve growth factor (NGF) in cancer cells (p = 0.0001). In vitro, breast cancer cells were able to induce neurite outgrowth in PC12 cells, and this neurotrophic activity was partially inhibited by anti-NGF blocking antibodies. In conclusion, infiltration by nerve fibers is a feature of the tumor microenvironment that is associated with aggressiveness and involves NGF production by cancer cells. The potential participation of nerve fibers in breast cancer progression needs to be further considered."
},
{
"pmid": "30776178",
"title": "Reduction of intrapancreatic neural density in cancer tissue predicts poorer outcome in pancreatic ductal carcinoma.",
"abstract": "Neural invasion is one of the malignant features contributing to locally advanced and/or metastatic disease progression in patients with pancreatic ductal adenocarcinoma (PDAC). Few studies exist on the distribution and state of nerve fibers in PDAC tissue and their clinicopathological impacts. The aim of the present study was to investigate the clinicopathological characteristics and prognostic value of intrapancreatic neural alterations in patients with PDAC. We retrospectively analyzed 256 patients with PDAC who underwent macroscopic curative surgery. Nerve fibers, immunolabeled with a specific neural marker GAP-43, were digitally counted and compared among PDAC, chronic pancreatitis (CP) and normal pancreatic tissues. Interlobular nerve fibers were apparently hypertrophic in both CP and PDAC, although intrapancreatic neural density and nerve number decreased characteristically in PDAC. They tended to decrease toward the center of the tumor. Kaplan-Meier survival analyses revealed a statistically significant correlation between low neural density and shorter overall survival (OS) (P = 0.014), and between high neural invasion and shorter OS (P = 0.017). Neural density (P = 0.04; HR = 1.496; 95% CI 1.018-2.199) and neural invasion ratio (P = 0.064; HR = 1.439; 95% CI .980-2.114) were prognostic factors of shorter OS in the multivariate analysis. These findings suggest low intrapancreatic neural density in patients with PDAC as an independent prognosticator, which may represent aggressive tumor behavior. Furthermore, we propose a simple, practical and reproducible method (to measure neural density and the neural invasion ratio during conventional histopathological diagnosis of PDAC), which has been validated using another cohort (n = 81)."
},
{
"pmid": "32001748",
"title": "Innervation of papillary thyroid cancer and its association with extra-thyroidal invasion.",
"abstract": "Nerves are emerging regulators of cancer progression and in several malignancies innervation of the tumour microenvironment is associated with tumour aggressiveness. However, the innervation of thyroid cancer is unclear. Here, we investigated the presence of nerves in thyroid cancers and the potential associations with clinicopathological parameters. Nerves were detected by immunohistochemistry using the pan-neuronal marker PGP9.5 in whole-slide sections of papillary thyroid cancer (PTC) (n = 75), compared to follicular thyroid cancer (FTC) (n = 13), and benign thyroid tissues (n = 26). Nerves were detected in most normal thyroid tissues and thyroid cancers, but nerve density was increased in PTC (12 nerves/cm2 [IQR 7-21]) compared to benign thyroid (6 nerves/cm2 [IQR: 3-10]) (p = 0.001). In contrast, no increase in nerve density was observed in FTC. In multivariate analysis, nerve density correlated positively with extrathyroidal invasion (p < 0.001), and inversely with tumour size (p < 0.001). The majority of nerves were adrenergic, although cholinergic and peptidergic innervation was detected. Perineural invasion was present in 35% of PTC, and was independently associated with extrathyroidal invasion (p = 0.008). This is the first report of infiltration of nerves into the tumour microenvironment of thyroid cancer and its association with tumour aggressiveness. The role of nerves in thyroid cancer pathogenesis should be further investigated."
},
{
"pmid": "23711540",
"title": "Computerized quantification and planimetry of prostatic capsular nerves in relation to adjacent prostate cancer foci.",
"abstract": "BACKGROUND\nPerineural invasion is discussed as a significant route of extraprostatic extension in prostate cancer (PCa). Recent in vitro studies suggested a complex mechanism of neuroepithelial interaction.\n\n\nOBJECTIVE\nThe present study was intended to investigate whether the concept of neuroepithelial interaction can be supported by a quantitative analysis and planimetry of capsular nerves in relation to adjacent PCa foci.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nWhole-mount sections of the prostate were created from patients undergoing non-nerve-sparing laparoscopic radical prostatectomy. For each prostate, adjacent sections were created and stained both to identify capsular nerves (S100) and to localize cancer foci (hematoxylin and eosin).\n\n\nOUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS\nComputerized quantification and planimetry of capsular nerves (ImageJ software) were performed after applying a digital grid to define 12 capsular sectors. For statistical analyses, mixed linear models were calculated using the SAS 9.3 software package.\n\n\nRESULTS AND LIMITATIONS\nSpecimens of 33 prostates were investigated. A total of 1957 capsular nerves and a total capsular nerve surface area of 26.44 mm(2) were measured. The major proportion was found in the dorsolateral (DL) region (p<0.001). Adjacent tumor was associated with a statistically significant higher capsular nerve count compared with the capsules of tumor-free sectors (p<0.005). Similar results were shown for capsular nerve surface area (p<0.006). Subsequent post hoc analyses at the sector level revealed that the effect of tumor on capsular nerve count or nerve surface area is most pronounced in the DL region.\n\n\nCONCLUSIONS\nThe presence of PCa foci resulted in a significantly increased capsular nerve count and capsular nerve surface area compared with tumor-free sectors. The present study supports former in vitro findings suggesting that the presence of PCa lesions may lead to complex neuroepithelial interactions resulting in PCa-induced nerve growth."
},
{
"pmid": "18436370",
"title": "Topographical anatomy of periprostatic and capsular nerves: quantification and computerised planimetry.",
"abstract": "BACKGROUND\nThe exact distribution of periprostatic autonomic nerves is under debate.\n\n\nOBJECTIVE\nTo study the topographical anatomy of autonomic nerves of the periprostatic tissue and the capsule of the prostate (CAP).\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nWhole-mount sections of 30 prostates from patients having undergone non-nerve-sparing radical prostatectomy were investigated after immunohistochemical nerve staining. Sections from the base, the middle, and the apex were evaluated. All sections were divided into 12 sectors, which were combined into the following regions: ventral, ventrolateral, dorsolateral, and dorsal.\n\n\nMEASUREMENTS\nQuantification of periprostatic and capsular nerves was performed within the sectors. Computerised planimetry of the total periprostatic nerve surface area of each region was performed (Image-J software, Wayne Rasband, National Institute of Health, USA).\n\n\nRESULTS AND LIMITATIONS\nA total of 3514, 3860, and 3902 periprostatic nerves was counted at the base, the middle, and the apex, respectively (p=0.068). The ratio of periprostatic nerves to capsular nerves was 3.6, 2.1, and 1.9 at the base, the middle, and the apex, respectively (p=0.004). Computerised planimetry revealed a significant decrease in total nerve surface area from the base over the middle towards the apex, with 241.79, 133.64, and 89.50mm(2) (p=0.004). The percentage of total nerve surface area was highest dorsolaterally (84.1%, 75.1%, and 74.5% at base, middle, and apex, respectively) but variable: Up to 39.9% of nerve surface area was found ventrolaterally and up to 45.5% in the dorsal position. The study is limited by the fact that autonomic nerve distribution was only investigated from the base to the apex of the prostate.\n\n\nCONCLUSIONS\nPeriprostatic nerve distribution is variable, with a high percentage of nerves in the ventrolateral and dorsal positions. Total periprostatic nerve surface area decreases from the base towards the apex due to nerves leaving the NVB branching into the prostate. This can only be discovered by nerve planimetry, not by quantification."
},
{
"pmid": "24902592",
"title": "A semi-automated technique for labeling and counting of apoptosing retinal cells.",
"abstract": "BACKGROUND\nRetinal ganglion cell (RGC) loss is one of the earliest and most important cellular changes in glaucoma. The DARC (Detection of Apoptosing Retinal Cells) technology enables in vivo real-time non-invasive imaging of single apoptosing retinal cells in animal models of glaucoma and Alzheimer's disease. To date, apoptosing RGCs imaged using DARC have been counted manually. This is time-consuming, labour-intensive, vulnerable to bias, and has considerable inter- and intra-operator variability.\n\n\nRESULTS\nA semi-automated algorithm was developed which enabled automated identification of apoptosing RGCs labeled with fluorescent Annexin-5 on DARC images. Automated analysis included a pre-processing stage involving local-luminance and local-contrast \"gain control\", a \"blob analysis\" step to differentiate between cells, vessels and noise, and a method to exclude non-cell structures using specific combined 'size' and 'aspect' ratio criteria. Apoptosing retinal cells were counted by 3 masked operators, generating 'Gold-standard' mean manual cell counts, and were also counted using the newly developed automated algorithm. Comparison between automated cell counts and the mean manual cell counts on 66 DARC images showed significant correlation between the two methods (Pearson's correlation coefficient 0.978 (p < 0.001), R Squared = 0.956. The Intraclass correlation coefficient was 0.986 (95% CI 0.977-0.991, p < 0.001), and Cronbach's alpha measure of consistency = 0.986, confirming excellent correlation and consistency. No significant difference (p = 0.922, 95% CI: -5.53 to 6.10) was detected between the cell counts of the two methods.\n\n\nCONCLUSIONS\nThe novel automated algorithm enabled accurate quantification of apoptosing RGCs that is highly comparable to manual counting, and appears to minimise operator-bias, whilst being both fast and reproducible. This may prove to be a valuable method of quantifying apoptosing retinal cells, with particular relevance to translation in the clinic, where a Phase I clinical trial of DARC in glaucoma patients is due to start shortly."
},
{
"pmid": "27295650",
"title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.",
"abstract": "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features-using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available."
},
{
"pmid": "26158062",
"title": "Mitosis detection in breast cancer pathology images by combining handcrafted and convolutional neural network features.",
"abstract": "Breast cancer (BCa) grading plays an important role in predicting disease aggressiveness and patient outcome. A key component of BCa grade is the mitotic count, which involves quantifying the number of cells in the process of dividing (i.e., undergoing mitosis) at a specific point in time. Currently, mitosis counting is done manually by a pathologist looking at multiple high power fields (HPFs) on a glass slide under a microscope, an extremely laborious and time consuming process. The development of computerized systems for automated detection of mitotic nuclei, while highly desirable, is confounded by the highly variable shape and appearance of mitoses. Existing methods use either handcrafted features that capture certain morphological, statistical, or textural attributes of mitoses or features learned with convolutional neural networks (CNN). Although handcrafted features are inspired by the domain and the particular application, the data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any of the handcrafted features. On the other hand, CNN is computationally more complex and needs a large number of labeled training instances. Since handcrafted features attempt to model domain pertinent attributes and CNN approaches are largely supervised feature generation methods, there is an appeal in attempting to combine these two distinct classes of feature generation strategies to create an integrated set of attributes that can potentially outperform either class of feature extraction strategies individually. We present a cascaded approach for mitosis detection that intelligently combines a CNN model and handcrafted features (morphology, color, and texture features). By employing a light CNN model, the proposed approach is far less demanding computationally, and the cascaded strategy of combining handcrafted features and CNN-derived features enables the possibility of maximizing the performance by leveraging the disconnected feature sets. Evaluation on the public ICPR12 mitosis dataset that has 226 mitoses annotated on 35 HPFs ([Formula: see text] magnification) by several pathologists and 15 testing HPFs yielded an [Formula: see text]-measure of 0.7345. Our approach is accurate, fast, and requires fewer computing resources compared to existent methods, making this feasible for clinical use."
},
{
"pmid": "25966470",
"title": "Accurate Segmentation of Cervical Cytoplasm and Nuclei Based on Multiscale Convolutional Network and Graph Partitioning.",
"abstract": "In this paper, a multiscale convolutional network (MSCN) and graph-partitioning-based method is proposed for accurate segmentation of cervical cytoplasm and nuclei. Specifically, deep learning via the MSCN is explored to extract scale invariant features, and then, segment regions centered at each pixel. The coarse segmentation is refined by an automated graph partitioning method based on the pretrained feature. The texture, shape, and contextual information of the target objects are learned to localize the appearance of distinctive boundary, which is also explored to generate markers to split the touching nuclei. For further refinement of the segmentation, a coarse-to-fine nucleus segmentation framework is developed. The computational complexity of the segmentation is reduced by using superpixel instead of raw pixels. Extensive experimental results demonstrate that the proposed cervical nucleus cell segmentation delivers promising results and outperforms existing methods."
},
{
"pmid": "26153368",
"title": "The ImageJ ecosystem: An open platform for biomedical image analysis.",
"abstract": "Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available-from commercial to academic, special-purpose to Swiss army knife, small to large-but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem."
},
{
"pmid": "32377220",
"title": "DiSCount: computer vision for automated quantification of Striga seed germination.",
"abstract": "BACKGROUND\nPlant parasitic weeds belonging to the genus Striga are a major threat for food production in Sub-Saharan Africa and Southeast Asia. The parasite's life cycle starts with the induction of seed germination by host plant-derived signals, followed by parasite attachment, infection, outgrowth, flowering, reproduction, seed set and dispersal. Given the small seed size of the parasite (< 200 μm), quantification of the impact of new control measures that interfere with seed germination relies on manual, labour-intensive counting of seed batches under the microscope. Hence, there is a need for high-throughput assays that allow for large-scale screening of compounds or microorganisms that adversely affect Striga seed germination.\n\n\nRESULTS\nHere, we introduce DiSCount (Digital Striga Counter): a computer vision tool for automated quantification of total and germinated Striga seed numbers in standard glass fibre filter assays. We developed the software using a machine learning approach trained with a dataset of 98 manually annotated images. Then, we validated and tested the model against a total dataset of 188 manually counted images. The results showed that DiSCount has an average error of 3.38 percentage points per image compared to the manually counted dataset. Most importantly, DiSCount achieves a 100 to 3000-fold speed increase in image analysis when compared to manual analysis, with an inference time of approximately 3 s per image on a single CPU and 0.1 s on a GPU.\n\n\nCONCLUSIONS\nDiSCount is accurate and efficient in quantifying total and germinated Striga seeds in a standardized germination assay. This automated computer vision tool enables for high-throughput, large-scale screening of chemical compound libraries and biological control agents of this devastating parasitic weed. The complete software and manual are hosted at https://gitlab.com/lodewijk-track32/discount_paper and the archived version is available at Zenodo with the DOI 10.5281/zenodo.3627138. The dataset used for testing is available at Zenodo with the DOI 10.5281/zenodo.3403956."
},
{
"pmid": "32129846",
"title": "DeepPod: a convolutional neural network based quantification of fruit number in Arabidopsis.",
"abstract": "BACKGROUND\nHigh-throughput phenotyping based on non-destructive imaging has great potential in plant biology and breeding programs. However, efficient feature extraction and quantification from image data remains a bottleneck that needs to be addressed. Advances in sensor technology have led to the increasing use of imaging to monitor and measure a range of plants including the model Arabidopsis thaliana. These extensive datasets contain diverse trait information, but feature extraction is often still implemented using approaches requiring substantial manual input.\n\n\nRESULTS\nThe computational detection and segmentation of individual fruits from images is a challenging task, for which we have developed DeepPod, a patch-based 2-phase deep learning framework. The associated manual annotation task is simple and cost-effective without the need for detailed segmentation or bounding boxes. Convolutional neural networks (CNNs) are used for classifying different parts of the plant inflorescence, including the tip, base, and body of the siliques and the stem inflorescence. In a post-processing step, different parts of the same silique are joined together for silique detection and localization, whilst taking into account possible overlapping among the siliques. The proposed framework is further validated on a separate test dataset of 2,408 images. Comparisons of the CNN-based prediction with manual counting (R2 = 0.90) showed the desired capability of methods for estimating silique number.\n\n\nCONCLUSIONS\nThe DeepPod framework provides a rapid and accurate estimate of fruit number in a model system widely used by biologists to investigate many fundemental processes underlying growth and reproduction."
},
{
"pmid": "20634557",
"title": "Object detection with discriminatively trained part-based models.",
"abstract": "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function."
},
{
"pmid": "26353135",
"title": "Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.",
"abstract": "Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 × 224) input image. This requirement is \"artificial\" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, \"spatial pyramid pooling\", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 × faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition."
},
{
"pmid": "33049577",
"title": "Deep neural network models for computational histopathology: A survey.",
"abstract": "Histopathological images contain rich phenotypic information that can be used to monitor underlying mechanisms contributing to disease progression and patient survival outcomes. Recently, deep learning has become the mainstream methodological choice for analyzing and interpreting histology images. In this paper, we present a comprehensive review of state-of-the-art deep learning approaches that have been used in the context of histopathological image analysis. From the survey of over 130 papers, we review the field's progress based on the methodological aspect of different machine learning strategies such as supervised, weakly supervised, unsupervised, transfer learning and various other sub-variants of these methods. We also provide an overview of deep learning based survival models that are applicable for disease-specific prognosis tasks. Finally, we summarize several existing open datasets and highlight critical challenges and limitations with current deep learning approaches, along with possible avenues for future research."
},
{
"pmid": "31476576",
"title": "Learning to detect lymphocytes in immunohistochemistry with deep learning.",
"abstract": "The immune system is of critical importance in the development of cancer. The evasion of destruction by the immune system is one of the emerging hallmarks of cancer. We have built a dataset of 171,166 manually annotated CD3+ and CD8+ cells, which we used to train deep learning algorithms for automatic detection of lymphocytes in histopathology images to better quantify immune response. Moreover, we investigate the effectiveness of four deep learning based methods when different subcompartments of the whole-slide image are considered: normal tissue areas, areas with immune cell clusters, and areas containing artifacts. We have compared the proposed methods in breast, colon and prostate cancer tissue slides collected from nine different medical centers. Finally, we report the results of an observer study on lymphocyte quantification, which involved four pathologists from different medical centers, and compare their performance with the automatic detection. The results give insights on the applicability of the proposed methods for clinical use. U-Net obtained the highest performance with an F1-score of 0.78 and the highest agreement with manual evaluation (κ=0.72), whereas the average pathologists agreement with reference standard was κ=0.64. The test set and the automatic evaluation procedure are publicly available at lyon19.grand-challenge.org."
},
{
"pmid": "31499320",
"title": "RMDL: Recalibrated multi-instance deep learning for whole slide gastric image classification.",
"abstract": "The whole slide histopathology images (WSIs) play a critical role in gastric cancer diagnosis. However, due to the large scale of WSIs and various sizes of the abnormal area, how to select informative regions and analyze them are quite challenging during the automatic diagnosis process. The multi-instance learning based on the most discriminative instances can be of great benefit for whole slide gastric image diagnosis. In this paper, we design a recalibrated multi-instance deep learning method (RMDL) to address this challenging problem. We first select the discriminative instances, and then utilize these instances to diagnose diseases based on the proposed RMDL approach. The designed RMDL network is capable of capturing instance-wise dependencies and recalibrating instance features according to the importance coefficient learned from the fused features. Furthermore, we build a large whole-slide gastric histopathology image dataset with detailed pixel-level annotations. Experimental results on the constructed gastric dataset demonstrate the significant improvement on the accuracy of our proposed framework compared with other state-of-the-art multi-instance learning methods. Moreover, our method is general and can be extended to other diagnosis tasks of different cancer types based on WSIs."
},
{
"pmid": "29203879",
"title": "QuPath: Open source software for digital pathology image analysis.",
"abstract": "QuPath is new bioimage analysis software designed to meet the growing need for a user-friendly, extensible, open-source solution for digital pathology and whole slide image analysis. In addition to offering a comprehensive panel of tumor identification and high-throughput biomarker evaluation tools, QuPath provides researchers with powerful batch-processing and scripting functionality, and an extensible platform with which to develop and share new algorithms to analyze complex tissue images. Furthermore, QuPath's flexible design makes it suitable for a wide range of additional image analysis applications across biomedical research."
},
{
"pmid": "23370772",
"title": "Interobserver variability and the effect of education in the histopathological diagnosis of differentiated vulvar intraepithelial neoplasia.",
"abstract": "No published data concerning intraobserver and interobserver variability in the histopathological diagnosis of differentiated vulvar intraepithelial neoplasia (DVIN) are available, although it is widely accepted to be a subtle and difficult histopathological diagnosis. In this study, the reproducibility of the histopathological diagnosis of DVIN is evaluated. Furthermore, we investigated the possible improvement of the reproducibility after providing guidelines with histological characteristics and tried to identify histological characteristics that are most important in the recognition of DVIN. A total number of 34 hematoxylin and eosin-stained slides were included in this study and were analyzed by six pathologists each with a different level of education. Slides were reviewed before and after studying a guideline with histological characteristics of DVIN. Kappa statistics were used to compare the interobserver variability. Pathologists with a substantial agreement were asked to rank items by usefulness in the recognition of DVIN. The interobserver agreement during the first session varied between 0.08 and 0.54, which slightly increased during the second session toward an agreement between -0.01 and 0.75. Pathologists specialized in gynecopathology reached a substantial agreement (kappa 0.75). The top five of criteria indicated to be the most useful in the diagnosis of DVIN included: atypical mitosis in the basal layer, basal cellular atypia, dyskeratosis, prominent nucleoli and elongation and anastomosis of rete ridges. In conclusion, the histopathological diagnosis of DVIN is difficult, which is expressed by low interobserver agreement. Only in experienced pathologists with training in gynecopathology, kappa values reached a substantial agreement after providing strict guidelines. Therefore, it should be considered that specimens with an unclear diagnosis and/or clinical suspicion for DVIN should be revised by a pathologist specialized in gynecopathology. When adhering to suggested criteria the diagnosis of DVIN can be made easier."
},
{
"pmid": "9474319",
"title": "Inter-observer and intra-observer agreement in the interpretation of visual fields in glaucoma.",
"abstract": "Visual field changes are one of the main parameters used to monitor progression of glaucoma. This study assesses the degree of intra-observer and inter-observer agreement among nine observers in grading visual fields in glaucoma patients using a visual field system previously described by Jay. The results show a median inter-observer agreement of 61% (median kappa = 0.52) and a median intra-observer agreement of 72% (median kappa = 0.65). This system for grading fields in glaucoma has a high degree of intra-observer agreement, suggesting it is a useful system for longitudinal follow-up of patients by a single observer. The higher degree of disagreement between observers points to the need for careful pretraining of observers in clinical management and research where the results from visual field examinations are to be graded by more than one clinician."
}
] |
Scientific Reports | 35273215 | PMC8913668 | 10.1038/s41598-022-07692-5 | Multiscale heterogeneous optimal lockdown control for COVID-19 using geographic information | We study the problem of synthesizing lockdown policies—schedules of maximum capacities for different types of activity sites—to minimize the number of deceased individuals due to a pandemic within a given metropolitan statistical area (MSA) while controlling the severity of the imposed lockdown. To synthesize and evaluate lockdown policies, we develop a multiscale susceptible, infected, recovered, and deceased model that partitions a given MSA into geographic subregions, and that incorporates data on the behaviors of the populations of these subregions. This modeling approach allows for the analysis of heterogeneous lockdown policies that vary across the different types of activity sites within each subregion of the MSA. We formulate the synthesis of optimal lockdown policies as a nonconvex optimization problem and we develop an iterative algorithm that addresses this nonconvexity through sequential convex programming. We empirically demonstrate the effectiveness of the developed approach by applying it to six of the largest MSAs in the United States. The developed heterogeneous lockdown policies not only reduce the number of deceased individuals by up to 45 percent over a 100 day period in comparison with three baseline lockdown policies that are less heterogeneous, but they also impose lockdowns that are less severe. | Related work
COVID-19 epidemic modeling
Since the initial outbreak of COVID-19, there has been extensive research into modeling its spread within a population21–26. Commonly used compartmental models partition the population into labeled groups, each of which describes a different phase of infection2,27–32. For example, the SIR model separates the population into three compartments, those who are susceptible to the virus (S), those who are currently infectious (I), and those who have been removed from the model’s consideration (R)33.
Given such a partition of the population, systems of ordinary differential equations are often used to model the dynamics of the disease’s spread. By including additional compartments in the model, and thus refining the partition of the population into more detailed categories, predictions and analysis of specific changes and quantities of interest can be made. For example, partitioning the population into different age categories allows for the analysis of age-specific targeted lockdown policies2. Giordano et al.32 consider eight categories in their compartmental model, allowing for discrimination between infected individuals depending on whether they have been diagnosed and on the severity of their symptoms; this refinement aims to enable the model to reflect the observed high number of asymptomatic individuals who are still able to cause transmissions.
While such compartmental models provide an easy-to-interpret means of analyzing the spread of COVID-19, they may only be used in the context of relatively large populations. Conversely, agent-based models instead encode rules for agents—simulated members of the population—to follow. These models simulate the spread of the disease resulting from these behaviors34–40. Agent-based models allow for simulation of the effectiveness of behavioral interventions on the level of individual members of the population, such as mask-wearing and social distancing requirements within an enclosed space.
Several papers have considered further extensions of compartmental models. Chang et al.41 model transmission within a network where households make contact at common points of interest. Karaivanov42 combines a social network model with an SIR model to provide a more realistic model of the interactions within a population, as opposed to the uniform mixing assumed by most SIR models. However, none of the above papers consider the problem of optimal control. Table 1 summarizes several representative references for COVID-19 modeling.
Table 1. List of representative references for COVID-19 modeling. Compartmental: SIR2, SEIR27, SIQS30, SUQC31, SIDARTHE32. Agent-based: 34–40. Network-based: 12,41–44. Other: 45–50.
Table 2. List of representative references for COVID-19 control. Lockdown: single-scale 1,2,13,14,51; optimality 1,2; geographic information 13,14; demographic information 2,51. Testing: single-scale 3,15,52; optimality 3; geographic information 15; demographic information 52. Vaccination: single-scale 4–9,16; optimality 4–6; geographic information 16; demographic information 4–6.
COVID-19 related control
Several papers have investigated the problem of control analysis for various COVID-19 related policies, including lockdowns, testing, and vaccine distribution. Sardar et al.53 use compartment-style pandemic models to study the effect of lockdowns on the spread of COVID-19. However, they do not synthesize lockdown control policies, nor do they study geographically heterogeneous lockdowns. Chatzimanolakis et al.54 and Buhat et al.55 both study the problem of optimally distributing test kits under limited supply. Other works study the trade-offs of allocating vaccinations to either high-risk or high-transmission age groups in the context of SIR models3–6. Goldenbogen et al.56 study a human-human interaction network and analyze the optimal policy for vaccine distribution.
Similar to our work, several papers have studied the problem of synthesizing or evaluating lockdown policies within various epidemiological models to balance the tradeoff between viral spread and economic impact.
Alvarez et al.1 study the problem of minimizing the number of deceased individuals in a basic SIR model while controlling the impact on the economy. Acemoglu et al.2 extend this work to consider differing dynamics and control among age groups. Both of these works, along with other COVID-19 control-related papers13,14,41,51,57,58, only consider a spatially homogeneous population and control policy. Similar to the regional model considered by Della Rossa12, we consider a hierarchical model allowing for region-specific dynamics and control. Table 2 summarizes several representative references for COVID-19 control. | [
"33397941",
"33037190",
"33414495",
"33780289",
"32616574",
"32574303",
"32703315",
"32171059",
"33052167",
"32322102",
"31932805",
"32647358",
"31911652",
"33633491",
"33171481",
"32770169",
"16292310"
] | [
{
"pmid": "33397941",
"title": "Anomalous collapses of Nares Strait ice arches leads to enhanced export of Arctic sea ice.",
"abstract": "The ice arches that usually develop at the northern and southern ends of Nares Strait play an important role in modulating the export of Arctic Ocean multi-year sea ice. The Arctic Ocean is evolving towards an ice pack that is younger, thinner, and more mobile and the fate of its multi-year ice is becoming of increasing interest. Here, we use sea ice motion retrievals from Sentinel-1 imagery to report on the recent behavior of these ice arches and the associated ice fluxes. We show that the duration of arch formation has decreased over the past 20 years, while the ice area and volume fluxes along Nares Strait have both increased. These results suggest that a transition is underway towards a state where the formation of these arches will become atypical with a concomitant increase in the export of multi-year ice accelerating the transition towards a younger and thinner Arctic ice pack."
},
{
"pmid": "33037190",
"title": "A network model of Italy shows that intermittent regional strategies can alleviate the COVID-19 epidemic.",
"abstract": "The COVID-19 epidemic hit Italy particularly hard, yielding the implementation of strict national lockdown rules. Previous modelling studies at the national level overlooked the fact that Italy is divided into administrative regions which can independently oversee their own share of the Italian National Health Service. Here, we show that heterogeneity between regions is essential to understand the spread of the epidemic and to design effective strategies to control the disease. We model Italy as a network of regions and parameterize the model of each region on real data spanning over two months from the initial outbreak. We confirm the effectiveness at the regional level of the national lockdown strategy and propose coordinated regional interventions to prevent future national lockdowns, while avoiding saturation of the regional health systems and mitigating impact on costs. Our study and methodology can be easily extended to other levels of granularity to support policy- and decision-makers."
},
{
"pmid": "33414495",
"title": "Plasma Hsp90 levels in patients with systemic sclerosis and relation to lung and skin involvement: a cross-sectional and longitudinal study.",
"abstract": "Our previous study demonstrated increased expression of Heat shock protein (Hsp) 90 in the skin of patients with systemic sclerosis (SSc). We aimed to evaluate plasma Hsp90 in SSc and characterize its association with SSc-related features. Ninety-two SSc patients and 92 age-/sex-matched healthy controls were recruited for the cross-sectional analysis. The longitudinal analysis comprised 30 patients with SSc associated interstitial lung disease (ILD) routinely treated with cyclophosphamide. Hsp90 was increased in SSc compared to healthy controls. Hsp90 correlated positively with C-reactive protein and negatively with pulmonary function tests: forced vital capacity and diffusing capacity for carbon monoxide (DLCO). In patients with diffuse cutaneous (dc) SSc, Hsp90 positively correlated with the modified Rodnan skin score. In SSc-ILD patients treated with cyclophosphamide, no differences in Hsp90 were found between baseline and after 1, 6, or 12 months of therapy. However, baseline Hsp90 predicts the 12-month change in DLCO. This study shows that Hsp90 plasma levels are increased in SSc patients compared to age-/sex-matched healthy controls. Elevated Hsp90 in SSc is associated with increased inflammatory activity, worse lung functions, and in dcSSc, with the extent of skin involvement. Baseline plasma Hsp90 predicts the 12-month change in DLCO in SSc-ILD patients treated with cyclophosphamide."
},
{
"pmid": "33780289",
"title": "Spatial Inequities in COVID-19 Testing, Positivity, Confirmed Cases, and Mortality in 3 U.S. Cities : An Ecological Study.",
"abstract": "BACKGROUND\nPreliminary evidence has shown inequities in coronavirus disease 2019 (COVID-19)-related cases and deaths in the United States.\n\n\nOBJECTIVE\nTo explore the emergence of spatial inequities in COVID-19 testing, positivity, confirmed cases, and mortality in New York, Philadelphia, and Chicago during the first 6 months of the pandemic.\n\n\nDESIGN\nEcological, observational study at the ZIP code tabulation area (ZCTA) level from March to September 2020.\n\n\nSETTING\nChicago, New York, and Philadelphia.\n\n\nPARTICIPANTS\nAll populated ZCTAs in the 3 cities.\n\n\nMEASUREMENTS\nOutcomes were ZCTA-level COVID-19 testing, positivity, confirmed cases, and mortality cumulatively through the end of September 2020. Predictors were the Centers for Disease Control and Prevention Social Vulnerability Index and its 4 domains, obtained from the 2014-2018 American Community Survey. The spatial autocorrelation of COVID-19 outcomes was examined by using global and local Moran I statistics, and estimated associations were examined by using spatial conditional autoregressive negative binomial models.\n\n\nRESULTS\nSpatial clusters of high and low positivity, confirmed cases, and mortality were found, co-located with clusters of low and high social vulnerability in the 3 cities. Evidence was also found for spatial inequities in testing, positivity, confirmed cases, and mortality. Specifically, neighborhoods with higher social vulnerability had lower testing rates and higher positivity ratios, confirmed case rates, and mortality rates.\n\n\nLIMITATIONS\nThe ZCTAs are imperfect and heterogeneous geographic units of analysis. Surveillance data were used, which may be incomplete.\n\n\nCONCLUSION\nSpatial inequities exist in COVID-19 testing, positivity, confirmed cases, and mortality in 3 large U.S. cities.\n\n\nPRIMARY FUNDING SOURCE\nNational Institutes of Health."
},
{
"pmid": "32616574",
"title": "The challenges of modeling and forecasting the spread of COVID-19.",
"abstract": "The coronavirus disease 2019 (COVID-19) pandemic has placed epidemic modeling at the forefront of worldwide public policy making. Nonetheless, modeling and forecasting the spread of COVID-19 remains a challenge. Here, we detail three regional-scale models for forecasting and assessing the course of the pandemic. This work demonstrates the utility of parsimonious models for early-time data and provides an accessible framework for generating policy-relevant insights into its course. We show how these models can be connected to each other and to time series data for a particular region. Capable of measuring and forecasting the impacts of social distancing, these models highlight the dangers of relaxing nonpharmaceutical public health interventions in the absence of a vaccine or antiviral therapies."
},
{
"pmid": "32574303",
"title": "A Simulation of a COVID-19 Epidemic Based on a Deterministic SEIR Model.",
"abstract": "An epidemic disease caused by a new coronavirus has spread in Northern Italy with a strong contagion rate. We implement an SEIR model to compute the infected population and the number of casualties of this epidemic. The example may ideally regard the situation in the Italian Region of Lombardy, where the epidemic started on February 24, but by no means attempts to perform a rigorous case study in view of the lack of suitable data and the uncertainty of the different parameters, namely, the variation of the degree of home isolation and social distancing as a function of time, the initial number of exposed individuals and infected people, the incubation and infectious periods, and the fatality rate. First, we perform an analysis of the results of the model by varying the parameters and initial conditions (in order for the epidemic to start, there should be at least one exposed or one infectious human). Then, we consider the Lombardy case and calibrate the model with the number of dead individuals to date (May 5, 2020) and constrain the parameters on the basis of values reported in the literature. The peak occurs at day 37 (March 31) approximately, with a reproduction ratio R0 of 3 initially, 1.36 at day 22, and 0.8 after day 35, indicating different degrees of lockdown. The predicted death toll is approximately 15,600 casualties, with 2.7 million infected individuals at the end of the epidemic. The incubation period providing a better fit to the dead individuals is 4.25 days, and the infectious period is 4 days, with a fatality rate of 0.00144/day [values based on the reported (official) number of casualties]. The infection fatality rate (IFR) is 0.57%, and it is 2.37% if twice the reported number of casualties is assumed. However, these rates depend on the initial number of exposed individuals. If approximately nine times more individuals are exposed, there are three times more infected people at the end of the epidemic and IFR = 0.47%. If we relax these constraints and use a wider range of lower and upper bounds for the incubation and infectious periods, we observe that a higher incubation period (13 vs. 4.25 days) gives the same IFR (0.6 vs. 0.57%), but nine times more exposed individuals in the first case. Other choices of the set of parameters also provide a good fit to the data, but some of the results may not be realistic. Therefore, an accurate determination of the fatality rate and characteristics of the epidemic is subject to knowledge of the precise bounds of the parameters. Besides the specific example, the analysis proposed in this work shows how isolation measures, social distancing, and knowledge of the diffusion conditions help us to understand the dynamics of the epidemic. Hence, it is important to quantify the process to verify the effectiveness of the lockdown."
},
{
"pmid": "32703315",
"title": "SEIR model for COVID-19 dynamics incorporating the environment and social distancing.",
"abstract": "OBJECTIVE\nCoronavirus disease 2019 (COVID-19) is a pandemic respiratory illness spreading from person-to-person caused by a novel coronavirus and poses a serious public health risk. The goal of this study was to apply a modified susceptible-exposed-infectious-recovered (SEIR) compartmental mathematical model for prediction of COVID-19 epidemic dynamics incorporating pathogen in the environment and interventions. The next generation matrix approach was used to determine the basic reproduction number [Formula: see text]. The model equations are solved numerically using fourth and fifth order Runge-Kutta methods.\n\n\nRESULTS\nWe found an [Formula: see text] of 2.03, implying that the pandemic will persist in the human population in the absence of strong control measures. Results after simulating various scenarios indicate that disregarding social distancing and hygiene measures can have devastating effects on the human population. The model shows that quarantine of contacts and isolation of cases can help halt the spread on novel coronavirus."
},
{
"pmid": "32171059",
"title": "Early dynamics of transmission and control of COVID-19: a mathematical modelling study.",
"abstract": "BACKGROUND\nAn outbreak of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has led to 95 333 confirmed cases as of March 5, 2020. Understanding the early transmission dynamics of the infection and evaluating the effectiveness of control measures is crucial for assessing the potential for sustained transmission to occur in new areas. Combining a mathematical model of severe SARS-CoV-2 transmission with four datasets from within and outside Wuhan, we estimated how transmission in Wuhan varied between December, 2019, and February, 2020. We used these estimates to assess the potential for sustained human-to-human transmission to occur in locations outside Wuhan if cases were introduced.\n\n\nMETHODS\nWe combined a stochastic transmission model with data on cases of coronavirus disease 2019 (COVID-19) in Wuhan and international cases that originated in Wuhan to estimate how transmission had varied over time during January, 2020, and February, 2020. Based on these estimates, we then calculated the probability that newly introduced cases might generate outbreaks in other areas. To estimate the early dynamics of transmission in Wuhan, we fitted a stochastic transmission dynamic model to multiple publicly available datasets on cases in Wuhan and internationally exported cases from Wuhan. The four datasets we fitted to were: daily number of new internationally exported cases (or lack thereof), by date of onset, as of Jan 26, 2020; daily number of new cases in Wuhan with no market exposure, by date of onset, between Dec 1, 2019, and Jan 1, 2020; daily number of new cases in China, by date of onset, between Dec 29, 2019, and Jan 23, 2020; and proportion of infected passengers on evacuation flights between Jan 29, 2020, and Feb 4, 2020. We used an additional two datasets for comparison with model outputs: daily number of new exported cases from Wuhan (or lack thereof) in countries with high connectivity to Wuhan (ie, top 20 most at-risk countries), by date of confirmation, as of Feb 10, 2020; and data on new confirmed cases reported in Wuhan between Jan 16, 2020, and Feb 11, 2020.\n\n\nFINDINGS\nWe estimated that the median daily reproduction number (Rt) in Wuhan declined from 2·35 (95% CI 1·15-4·77) 1 week before travel restrictions were introduced on Jan 23, 2020, to 1·05 (0·41-2·39) 1 week after. Based on our estimates of Rt, assuming SARS-like variation, we calculated that in locations with similar transmission potential to Wuhan in early January, once there are at least four independently introduced cases, there is a more than 50% chance the infection will establish within that population.\n\n\nINTERPRETATION\nOur results show that COVID-19 transmission probably declined in Wuhan during late January, 2020, coinciding with the introduction of travel control measures. As more cases arrive in international locations with similar transmission potential to Wuhan before these control measures, it is likely many chains of transmission will fail to establish initially, but might lead to new outbreaks eventually.\n\n\nFUNDING\nWellcome Trust, Health Data Research UK, Bill & Melinda Gates Foundation, and National Institute for Health Research."
},
{
"pmid": "33052167",
"title": "The threshold of a deterministic and a stochastic SIQS epidemic model with varying total population size.",
"abstract": "In this paper, a stochastic and a deterministic SIS epidemic model with isolation and varying total population size are proposed. For the deterministic model, we establish the threshold R 0. When R 0 is less than 1, the disease-free equilibrium is globally stable, which means the disease will die out. While R 0 is greater than 1, the endemic equilibrium is globally stable, which implies that the disease will spread. Moreover, there is a critical isolation rate δ*, when the isolation rate is greater than it, the disease will be eliminated. For the stochastic model, we also present its threshold R 0 . When R 0 is less than 1, the disease will disappear with probability one. While R 0 is greater than 1, the disease will persist. We find that stochastic perturbation of the transmission rate (or the valid contact coefficient) can help to reduce the spread of the disease. That is, compared with stochastic model, the deterministic epidemic model overestimates the spread capacity of disease. We further find that there exists a critical the stochastic perturbation intensity of the transmission rate σ*, when the stochastic perturbation intensity of the transmission rate is bigger than it, the disease will disappear. At last, we apply our theories to a realistic disease, pneumococcus amongst homosexuals, carry out numerical simulations and obtain the empirical probability density under different parameter values. The critical isolation rate δ* is presented. When the isolation rate δ is greater than δ*, the pneumococcus amongst will be eliminated."
},
{
"pmid": "32322102",
"title": "Modelling the COVID-19 epidemic and implementation of population-wide interventions in Italy.",
"abstract": "In Italy, 128,948 confirmed cases and 15,887 deaths of people who tested positive for SARS-CoV-2 were registered as of 5 April 2020. Ending the global SARS-CoV-2 pandemic requires implementation of multiple population-wide strategies, including social distancing, testing and contact tracing. We propose a new model that predicts the course of the epidemic to help plan an effective control strategy. The model considers eight stages of infection: susceptible (S), infected (I), diagnosed (D), ailing (A), recognized (R), threatened (T), healed (H) and extinct (E), collectively termed SIDARTHE. Our SIDARTHE model discriminates between infected individuals depending on whether they have been diagnosed and on the severity of their symptoms. The distinction between diagnosed and non-diagnosed individuals is important because the former are typically isolated and hence less likely to spread the infection. This delineation also helps to explain misperceptions of the case fatality rate and of the epidemic spread. We compare simulation results with real data on the COVID-19 epidemic in Italy, and we model possible scenarios of implementation of countermeasures. Our results demonstrate that restrictive social-distancing measures will need to be combined with widespread testing and contact tracing to end the ongoing COVID-19 pandemic."
},
{
"pmid": "32647358",
"title": "Revealing COVID-19 transmission in Australia by SARS-CoV-2 genome sequencing and agent-based modeling.",
"abstract": "In January 2020, a novel betacoronavirus (family Coronaviridae), named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was identified as the etiological agent of a cluster of pneumonia cases occurring in Wuhan City, Hubei Province, China1,2. The disease arising from SARS-CoV-2 infection, coronavirus disease 2019 (COVID-19), subsequently spread rapidly causing a worldwide pandemic. Here we examine the added value of near real-time genome sequencing of SARS-CoV-2 in a subpopulation of infected patients during the first 10 weeks of COVID-19 containment in Australia and compare findings from genomic surveillance with predictions of a computational agent-based model (ABM). Using the Australian census data, the ABM generates over 24 million software agents representing the population of Australia, each with demographic attributes of an anonymous individual. It then simulates transmission of the disease over time, spreading from specific infection sources, using contact rates of individuals within different social contexts. We report that the prospective sequencing of SARS-CoV-2 clarified the probable source of infection in cases where epidemiological links could not be determined, significantly decreased the proportion of COVID-19 cases with contentious links, documented genomically similar cases associated with concurrent transmission in several institutions and identified previously unsuspected links. Only a quarter of sequenced cases appeared to be locally acquired and were concordant with predictions from the ABM. These high-resolution genomic data are crucial to track cases with locally acquired COVID-19 and for timely recognition of independent importations once border restrictions are lifted and trade and travel resume."
},
{
"pmid": "31911652",
"title": "U1 snRNP regulates cancer cell migration and invasion in vitro.",
"abstract": "Stimulated cells and cancer cells have widespread shortening of mRNA 3'-untranslated regions (3'UTRs) and switches to shorter mRNA isoforms due to usage of more proximal polyadenylation signals (PASs) in introns and last exons. U1 snRNP (U1), vertebrates' most abundant non-coding (spliceosomal) small nuclear RNA, silences proximal PASs and its inhibition with antisense morpholino oligonucleotides (U1 AMO) triggers widespread premature transcription termination and mRNA shortening. Here we show that low U1 AMO doses increase cancer cells' migration and invasion in vitro by up to 500%, whereas U1 over-expression has the opposite effect. In addition to 3'UTR length, numerous transcriptome changes that could contribute to this phenotype are observed, including alternative splicing, and mRNA expression levels of proto-oncogenes and tumor suppressors. These findings reveal an unexpected role for U1 homeostasis (available U1 relative to transcription) in oncogenic and activated cell states, and suggest U1 as a potential target for their modulation."
},
{
"pmid": "33633491",
"title": "An agent-based model of the interrelation between the COVID-19 outbreak and economic activities.",
"abstract": "As of July 2020, COVID-19 caused by SARS-COV-2 is spreading worldwide, causing severe economic damage. While minimizing human contact is effective in managing outbreaks, it causes severe economic losses. Strategies to solve this dilemma by considering the interrelation between the spread of the virus and economic activities are urgently needed to mitigate the health and economic damage. Here, we propose an abstract agent-based model of the COVID-19 outbreak that accounts for economic activities. The computational simulation of the model recapitulates the trade-off between the health and economic damage associated with voluntary restraint measures. Based on the simulation results, we discuss how the macroscopic dynamics of infection and economics emerge from individuals' behaviours. We believe our model can serve as a platform for discussing solutions to the above-mentioned dilemma."
},
{
"pmid": "33171481",
"title": "Mobility network models of COVID-19 explain inequities and inform reopening.",
"abstract": "The coronavirus disease 2019 (COVID-19) pandemic markedly changed human mobility patterns, necessitating epidemiological models that can capture the effects of these changes in mobility on the spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)1. Here we introduce a metapopulation susceptible-exposed-infectious-removed (SEIR) model that integrates fine-grained, dynamic mobility networks to simulate the spread of SARS-CoV-2 in ten of the largest US metropolitan areas. Our mobility networks are derived from mobile phone data and map the hourly movements of 98 million people from neighbourhoods (or census block groups) to points of interest such as restaurants and religious establishments, connecting 56,945 census block groups to 552,758 points of interest with 5.4 billion hourly edges. We show that by integrating these networks, a relatively simple SEIR model can accurately fit the real case trajectory, despite substantial changes in the behaviour of the population over time. Our model predicts that a small minority of 'superspreader' points of interest account for a large majority of the infections, and that restricting the maximum occupancy at each point of interest is more effective than uniformly reducing mobility. Our model also correctly predicts higher infection rates among disadvantaged racial and socioeconomic groups2-8 solely as the result of differences in mobility: we find that disadvantaged groups have not been able to reduce their mobility as sharply, and that the points of interest that they visit are more crowded and are therefore associated with higher risk. By capturing who is infected at which locations, our model supports detailed analyses that can inform more-effective and equitable policy responses to COVID-19."
},
{
"pmid": "32770169",
"title": "Using a real-world network to model localized COVID-19 control strategies.",
"abstract": "Case isolation and contact tracing can contribute to the control of COVID-19 outbreaks1,2. However, it remains unclear how real-world social networks could influence the effectiveness and efficiency of such approaches. To address this issue, we simulated control strategies for SARS-CoV-2 transmission in a real-world social network generated from high-resolution GPS data that were gathered in the course of a citizen-science experiment3,4. We found that tracing the contacts of contacts reduced the size of simulated outbreaks more than tracing of only contacts, but this strategy also resulted in almost half of the local population being quarantined at a single point in time. Testing and releasing non-infectious individuals from quarantine led to increases in outbreak size, suggesting that contact tracing and quarantine might be most effective as a 'local lockdown' strategy when contact rates are high. Finally, we estimated that combining physical distancing with contact tracing could enable epidemic control while reducing the number of quarantined individuals. Our findings suggest that targeted tracing and quarantine strategies would be most efficient when combined with other control measures such as physical distancing."
},
{
"pmid": "16292310",
"title": "Superspreading and the effect of individual variation on disease emergence.",
"abstract": "Population-level analyses often use average quantities to describe heterogeneous systems, particularly when variation does not arise from identifiable groups. A prominent example, central to our current understanding of epidemic spread, is the basic reproductive number, R(0), which is defined as the mean number of infections caused by an infected individual in a susceptible population. Population estimates of R(0) can obscure considerable individual variation in infectiousness, as highlighted during the global emergence of severe acute respiratory syndrome (SARS) by numerous 'superspreading events' in which certain individuals infected unusually large numbers of secondary cases. For diseases transmitted by non-sexual direct contacts, such as SARS or smallpox, individual variation is difficult to measure empirically, and thus its importance for outbreak dynamics has been unclear. Here we present an integrated theoretical and statistical analysis of the influence of individual variation in infectiousness on disease emergence. Using contact tracing data from eight directly transmitted diseases, we show that the distribution of individual infectiousness around R(0) is often highly skewed. Model predictions accounting for this variation differ sharply from average-based approaches, with disease extinction more likely and outbreaks rarer but more explosive. Using these models, we explore implications for outbreak control, showing that individual-specific control measures outperform population-wide measures. Moreover, the dramatic improvements achieved through targeted control policies emphasize the need to identify predictive correlates of higher infectiousness. Our findings indicate that superspreading is a normal feature of disease spread, and to frame ongoing discussion we propose a rigorous definition for superspreading events and a method to predict their frequency."
}
] |
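As a minimal illustration of the compartmental models discussed in the lockdown-control row above (this is the standard textbook SIR system, not the multiscale SIRD model developed in that article, which adds a deceased compartment and region-specific parameters), the dynamics can be written as

\[
\frac{dS}{dt} = -\beta \frac{S I}{N}, \qquad
\frac{dI}{dt} = \beta \frac{S I}{N} - \gamma I, \qquad
\frac{dR}{dt} = \gamma I, \qquad N = S + I + R,
\]

where \(\beta\) is the transmission rate and \(\gamma\) is the recovery rate. Lockdown interventions are commonly modeled as time-varying reductions of the effective \(\beta\).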
Scientific Reports | 35273245 | PMC8913695 | 10.1038/s41598-022-07846-5 | RGB-D based multi-modal deep learning for spacecraft and debris recognition | Recognition of space objects including spacecraft and debris is one of the main components in the space situational awareness (SSA) system. Various tasks such as satellite formation, on-orbit servicing, and active debris removal require object recognition to be done perfectly. The recognition task in actual space imagery is highly complex because the sensing conditions are largely diverse. The conditions include various backgrounds affected by noise, several orbital scenarios, high contrast, low signal-to-noise ratio, and various object sizes. To address the problem of space recognition, this paper proposes a multi-modal learning solution using various deep learning models. To extract features from RGB images that have spacecraft and debris, various convolutional neural network (CNN) based models such as ResNet, EfficientNet, and DenseNet were explored. Furthermore, RGB based vision transformer was demonstrated. Additionally, End-to-End CNN was used for classification of depth images. The final decision of the proposed solution combines the two decisions from RGB based and Depth-based models. The experiments were carried out using a novel dataset called SPARK which was generated under a realistic space simulation environment. The dataset includes various images with eleven categories, and it is divided into 150 k of RGB images and 150 k of depth images. The proposed combination of RGB based vision transformer and Depth-based End-to-End CNN showed higher performance and better results in terms of accuracy (85%), precision (86%), recall (85%), and F1 score (84%). Therefore, the proposed multi-modal deep learning is a good feasible solution to be utilized in real tasks of SSA system. | Related work
The task of target recognition should be performed autonomously to minimize the risk of collision in space13. Vision-based sensors such as cameras2,14–16 are the most significant components of an SSA system for observing visual data and building data-driven AI solutions. Various methods have been proposed in previous research to track and monitor active and inactive satellites on the one hand, and to remove space debris on the other. LiDAR sensors have also been used for debris removal, target detection, and pose estimation2,15–17. Pose estimation methods match a 3D spacecraft wireframe (the target) to a 2D image by matching visual features extracted from both the image and the wireframe18. The Perspective-n-Point (PnP) problem is then solved to recover the pose18. Conventional computer vision algorithms such as the Sobel and Canny detectors were used to extract edge features19,20. Traditional machine learning techniques have also been applied to pose estimation, for example principal component analysis (PCA)21: the PCA representation of a query spacecraft image is compared with the ground-truth poses in the dataset for matching.
Object detection and image classification are two main computer vision tasks used to detect objects, compute their bounding boxes, and predict their categories. Deep learning algorithms have produced better results than conventional computer vision algorithms because they learn and extract features automatically. Therefore, deep learning algorithms have been used in space applications to recognize spacecraft and debris for various purposes.
Pre-trained convolutional neural networks were among the deep learning models used to estimate spacecraft pose22,23, for example GoogLeNet CNN24,25. To determine the translation and rotation of a space object relative to a camera, VGG CNN26,27 was trained and tested on a synthetic dataset. Furthermore, to estimate the pose of an uncooperative spacecraft without 3D information and to predict the bounding boxes of space objects, ResNet CNN was demonstrated18,28.
The performance of deep learning and its generalization ability depend on the size of the data fed to the model. The dataset should be large to produce the expected improvement over traditional machine learning methods. In space applications, the cost of spacecraft data acquisition is high. Therefore, various synthetic datasets have been proposed for 6D pose estimation, including the Unreal Rendered Spacecraft On-Orbit (URSO) dataset29 and the Spacecraft Pose Estimation Dataset (SPEED)30,31.
In addition to the cost of space data acquisition, object tracking is a complex task because the surrounding spacecraft or targets vary widely in size. To address these problems, researchers have focused on the data acquisition process to collect images of space objects such as spacecraft and debris. For instance, a high-resolution synthetic spacecraft dataset was generated using the Unity3D game engine simulator32. To provide a sufficiently large labelled space dataset, the novel SPARK dataset was created specifically for space object classification2,3. The SPARK dataset depicts a realistic Earth and the objects surrounding it. ResNet28 and EfficientNet33 were demonstrated as pre-trained CNNs on the SPARK dataset under several scenarios2. The three scenarios are: (1) random initialization of the models and training from scratch; (2) feature extraction, freezing the backbone and training only the classifier in the top layers; and (3) fine-tuning the whole model, including the backbone and classifier, starting from the pre-trained weights. They found that models trained on both RGB and depth images performed better than single-modality models2. | [] | [] |
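The abstract in the row above states that the final decision combines the outputs of the RGB-based and depth-based models, but the excerpt does not specify the fusion rule. A minimal late-fusion sketch is given below, assuming a simple weighted average of class probabilities over the eleven SPARK categories; the weight of 0.5, the function names, and the use of NumPy are illustrative assumptions, not the paper's implementation.

import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D vector of class logits.
    z = logits - np.max(logits)
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

def fuse_decisions(rgb_logits, depth_logits, rgb_weight=0.5):
    # Late (decision-level) fusion: weighted average of per-class probabilities.
    p_rgb = softmax(rgb_logits)
    p_depth = softmax(depth_logits)
    p_fused = rgb_weight * p_rgb + (1.0 - rgb_weight) * p_depth
    return int(np.argmax(p_fused)), p_fused

# Example with random stand-in logits for the 11 SPARK classes.
rng = np.random.default_rng(0)
rgb_logits = rng.normal(size=11)    # e.g., output of an RGB vision transformer head
depth_logits = rng.normal(size=11)  # e.g., output of a depth end-to-end CNN head
predicted_class, probabilities = fuse_decisions(rgb_logits, depth_logits)
print(predicted_class, probabilities.round(3))

Averaging probabilities rather than hard labels lets the more confident modality dominate when the two models disagree, which is one common motivation for decision-level fusion.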