Dataset columns: accession_id (string, length 9–11); pmid (string, length 1–8); introduction (string, length 0–134k); methods (string, length 0–208k); results (string, length 0–357k); discussion (string, length 0–357k); conclusion (string, length 0–58.3k); front (string, length 0–30.9k); body (string, length 0–573k); back (string, length 0–126k); license (string, 4 classes); retracted (string, 2 classes); last_updated (string, length 19); citation (string, length 14–94); package_file (string, length 0–35).
PMC10787969
38218836
Introduction Lentil (Lens culinaris Medik.) is a diploid annual legume (2n = 2x = 14) whose grains are known for their richness in proteins, minerals (Fe, K, Zn, P) and fiber [1]. Lentil grains are an important component of the daily diet for large populations in North Africa, sub-Saharan Africa, the Middle East, and the Indian subcontinent [2]. Regular consumption of lentil could help overcome mineral deficits for more than half of the world's population [3–5]. Lentil crop residues can also be used as livestock feed [6, 7]. Moreover, lentil is a nitrogen-fixing legume that helps enhance soil fertility and promotes the sustainability of agricultural systems [8, 9]. World lentil production averaged 6.315 million tons between 2017 and 2021, distributed across the five continents as follows: Asia (42%), America (42%), Oceania (10%), Africa (3%), and Europe (3%) [10]. The domestication of lentil began around 7000 B.C. [11] in the Near East from wild populations of Lens orientalis found in the mountains between Syria and Turkey [12–14]. After domestication, lentil, together with other important staple crops such as pea, faba bean, chickpea, wheat, and barley, spread from the Near East to Greece, Central Europe, Egypt, Central Asia, and India. It arrived in Morocco from Central Europe via the Mediterranean islands in the ninth century [12, 15], whereas Canada and the USA only started growing lentil in 1969 and 1916, respectively [16]. During domestication, several characteristics were targeted, especially seed dormancy and pod indehiscence [17]. Lentil has traditionally been cultivated in Morocco using mostly local varieties selected by farmers on the basis of quality, yield, adaptation and other desired characteristics [18]. In Morocco, lentil is currently grown as a rainfed crop in rotation with cereals. The average cultivated area is around 40,000 ha, with an annual production ranging from 28,163 to 41,602 tons between 2017 and 2021 [10]. The human population is expected to grow to 10 billion by 2050, which will put a strain on the world's resources [19]. Climate change and the emergence of new diseases and parasites threaten the productivity of agriculture worldwide [20–22]. To meet the needs of this growing population, breeders are investigating efficient methods for developing new cultivars with genetic resistance to diseases and to different abiotic stresses. However, the conventional breeding approaches adopted for enhancing lentil productivity are time-consuming and require many years to release new, adapted cultivars. Extending the photoperiod is one of the methods that reduce the duration of the plant cycle [23, 24]. Many studies have demonstrated that an extended photoperiod benefits breeding by accelerating flowering and shortening the plant life cycle in safflower [25], strawberry [26], soybean [27], barley [28], wheat [29], chickpea [30], faba bean [31] and lentil [23, 32, 33]. The extended photoperiod can be achieved with artificial light [34]. In lentil, it has been reported that the reduction in time to flowering is favored by a photoperiod with a light intensity of around 500 μmol m−2 s−1 [35], with durations of 16, 18, 20 and 22 h of light and 8, 6, 4 and 2 h of dark, respectively [23, 24, 36, 37]. Lentil is a long-day or day-neutral plant [38–40].
For breeding programs based on conventional methods under normal environmental conditions in greenhouses and fields, the development of homozygous lines from segregating populations after hybridization takes 7–9 years if only one generation is produced per year, whereas a prolonged photoperiod with continuous illumination can reduce time to flowering and accelerate growth, resulting in a shorter life cycle [23]. Selecting genotypes with early flowering, early development, and high yield is among the challenges breeders face in adapting the crop life cycle to the available growing season [41, 42]. The main determinant of crop life cycle duration is the period between sowing and flowering, which is regulated by temperature, photoperiod, genotype, and the interactions between these factors [43]. Temperature influences the expression of transcription factors that directly affect floral induction [44]. In soybean, the light environment in which the plant is cultivated strongly influences genotype performance, making it the most crucial factor [45]. To our knowledge, no studies have so far evaluated the genetic variability of the response to extended photoperiod using lentil genotypes of different latitudinal origins. The objectives of our study were (1) to analyze the genetic diversity of 80 landraces from three latitudinal origins (low, medium and high latitudes) and from different countries (Russia, Serbia, Ukraine, Montenegro, Belgium, Armenia, Chile, Ethiopia, India, Iran, Afghanistan, Morocco, Italy, Turkey, and Greece) in response to the application of an extended photoperiod regime, and (2) to evaluate the sensitivity to photoperiod and select accessions that are better adapted to extended photoperiod, in order to use them as parents for rapid generation turnover in speed breeding growth chambers.
Material and methods Plant material A total of 80 lentil (Lens culinaris Medik.) accessions from different countries (Afghanistan [3], Armenia [1], Belgium [5], Chile [13], Ethiopia [3], Greece [3], India [1], Iran [4], Italy [6], Morocco [27], Montenegro [2], Russia [3], Serbia [2], Turkey [6], Ukraine [1]) were characterized under extended photoperiod. The accessions were classified into three latitudinal origins: Low (0°–20°), Medium (21°–40°) and High (41°–60°) (Fig. 1; Table 1). The low, medium and high categories refer to latitude relative to the equator (0°), and therefore to the natural flowering photoperiod at the original latitude. All the accessions used in this study come from our gene bank based at the National Institute of Agricultural Research in Settat, Morocco. Photo-thermal regime and plant growth conditions The experiment was carried out in a growth chamber at the Laboratory of Food Legume Breeding, Regional Center of Settat, National Institute for Agricultural Research (INRA Morocco), over 150 days, from seed germination to plant physiological maturity. A completely randomized block design was used, with each variety planted three times. In all, three separate planting sessions were carried out, with three plants per pot for each accession, all subjected to a prolonged photoperiod treatment of 22 h of light at 25 °C and 2 h of darkness at 25 °C. Lighting was provided by light-emitting diode (LED) lamps (Standard ECO SLIM LED; 36 lamps of 9 W, each providing about 14.81–18.51 μmol m−2 s−1 of light intensity) (Fig. 2). The accessions were planted in plastic pots (500 ml capacity) filled with 2/3 soil and 1/3 peat compost. During the experiment, plants were irrigated every 4–7 days depending on the growth stage of the crop and the corresponding water consumption. Plants were harvested at physiological maturity, and qualitative and quantitative measurements were then made. Early vegetative growth, development stages and phenological characterization The percentage of green canopy cover (GCC), corresponding to the proportion of the ground covered by plants, was measured using Canopeo, a mobile application that performs automatic color-threshold analysis of images and video, classifying all pixels in the image based on blue-to-green (B/G) and red-to-green (R/G) color values [46]. Seedling vigor (SV) was estimated on a 1–5 scale modified from [47] for photoperiodic stress (1 = very poor, 2 = poor, 3 = medium, 4 = good, 5 = very good). Time to flowering (TF) was measured as the number of days from sowing to the appearance of the first flower. Time of pod set (TPS) was measured as the number of days from sowing to the appearance of the first pod. Time to maturity (TM) was measured as the number of days from sowing to the yellowing and desiccation of the plant and the pods. Harvest index (HI) was calculated as: Harvest index = Grain yield/Biological yield, where grain yield is the number and weight of seeds (g) and biological yield is the dry weight of the aerial part measured after drying in an oven at 70 °C for 48 h. The vegetative stage length (VGS) corresponds to the number of days after sowing until the appearance of the first flower, while the reproductive stage length (RPS) corresponds to the number of days from the appearance of the first flower to the formation of the first pod.
Finally, the seed filling stage length (SFS) corresponds to the number of days from the appearance of the first pod until 80% maturity. Statistical analysis For each parameter, descriptive statistics and analysis of variance were performed to test the effects of genotype and latitude under speed breeding by extended photoperiod. In addition, to assess the hypothesis of differentiation of the accessions according to their geographic origins in response to the application of the extended photoperiod, and to determine the contribution of each trait in discriminating between origins, a canonical discriminant analysis was carried out using the Statistical Package for the Social Sciences (SPSS) software, version 21 for Windows. Graphs were produced using Microsoft Excel and SPSS, while R software with the "agricolae" package [48] was used for the analysis of variance. Duncan's post-hoc test, via the "multcomp" R package [49], was used to test the differences between the different light intensity treatments. The principal component analysis was performed using the R packages "FactoMineR" and "factoextra" [50].
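To make this workflow concrete, the sketch below simulates a data set with the trait abbreviations defined above (GCC, SV, TF, TPS, TM, HI, VGS, RPS, SFS) and runs the variance analysis, Duncan grouping and PCA in R, with MASS::lda standing in for the SPSS canonical discriminant analysis. It is a minimal illustration under these assumptions, not the authors' actual script or data.

```r
# Minimal sketch with simulated data; column names follow the trait abbreviations above.
library(agricolae)    # Duncan's multiple range test
library(FactoMineR)   # principal component analysis
library(factoextra)   # PCA visualisation
library(MASS)         # linear discriminant analysis (stand-in for the SPSS CDA)

set.seed(42)
lat_by_acc <- sample(c("Low", "Medium", "High"), 80, replace = TRUE)
dat <- data.frame(
  Accession = factor(rep(sprintf("ACC%02d", 1:80), each = 3)),   # 80 accessions x 3 replicates
  Latitude  = factor(rep(lat_by_acc, each = 3)),
  GCC = runif(240, 0.4, 5.6), SV  = sample(2:5, 240, replace = TRUE),
  TF  = rnorm(240, 80, 15),   TPS = rnorm(240, 85, 18), TM = rnorm(240, 100, 20),
  HI  = runif(240, 0, 0.24),  VGS = rnorm(240, 80, 15),
  RPS = runif(240, 3, 13),    SFS = runif(240, 6, 25)
)

# ANOVA for time to flowering; latitudinal origin is fitted first because it is a
# property of each accession (sequential sums of squares)
fit_tf <- aov(TF ~ Latitude + Accession, data = dat)
summary(fit_tf)

# Duncan grouping of the three latitudinal origins for time to flowering
duncan.test(fit_tf, "Latitude", console = TRUE)

# Discriminant analysis of latitudinal origin from the nine measured traits
lda_fit <- lda(Latitude ~ GCC + SV + TF + TPS + TM + HI + VGS + RPS + SFS, data = dat)

# PCA on accession means of the nine traits, with a biplot of accessions and traits
traits <- c("GCC", "SV", "TF", "TPS", "TM", "HI", "VGS", "RPS", "SFS")
means  <- aggregate(dat[, traits], by = list(Accession = dat$Accession), FUN = mean)
pca    <- PCA(means[, traits], scale.unit = TRUE, graph = FALSE)
fviz_pca_biplot(pca, repel = TRUE)
```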
Results Genetic variation of vegetative and phenological traits The analysis of variance revealed a significant effect (p ≤ 0.05) of genotype on all traits, and significant variation was also observed according to the latitudinal origins for all traits except reproductive stage and seed filling stage lengths (Table 2). Genetic variation of accessions from the latitudinal origins In this study, an analysis of genetic variation was carried out on accessions from the three distinct latitudinal origins: low, medium and high latitudes. To visualize and compare this genetic variation, we used boxplots (Fig. 3). Observation of these boxplots revealed interesting trends in genetic variation between latitudinal origins. Important differences in trait distributions are clearly discernible between the Low, Medium and High latitudinal origins (Fig. 3; Tables 2, 3). These results suggest a distinct genetic structuring between the latitudinal groups, highlighting the potential influence of environmental and latitudinal factors on the genetic diversity of the studied accessions. They strengthen our understanding of genetic diversity in a geographical context, which could have important implications for the preservation and future use of these genetic resources in crop improvement and biodiversity conservation programs. Genetic variability of vegetative growth under extended photoperiod Vegetative cover showed significant differences among the 80 accessions: those of low latitudinal origin had a low percentage of vegetative cover (1.6%), while those of medium latitudinal origin had a higher percentage (3.66%); the same trend was observed for seedling vigor (Table 3). For vegetative cover and seedling vigor, two distinct groups were identified by the Duncan test (Table 3). Genetic variability of phenological stages under extended photoperiod According to the Duncan test, significant differences among the studied accessions were observed for the phenological stages between the Low, Medium and High latitudes (Table 3). For time to flowering, the Low latitude accessions were the earliest, flowering on average 69 days after sowing, while the High latitude accessions were the latest, flowering on average 96 days after sowing (Table 3). For time of pod set, the Low latitude accessions were the earliest, averaging 62 days after sowing, in contrast to the High latitude accessions, which were the latest, averaging 104 days after sowing (Table 3). Time to maturity also differed among the accessions: the Low latitude accessions were the earliest, averaging 75 days after sowing, whereas the High latitude accessions were the latest, averaging 121 days after sowing. Regarding the vegetative, reproductive and seed filling stages, the Low latitude accessions had the lowest values (fewest days). It should be noted that some accessions of Medium latitudinal origin (18%) and High latitudinal origin (57%) failed to flower, while all accessions of Low latitudinal origin flowered (Fig. 4).
Variation of harvest index according to genotype under extended photoperiod The yield and biomass measurements allowed us to estimate the harvest index for the different accessions, with very highly significant differences: the High latitude accessions presented the lowest index (0.011), while the Low latitude accessions presented the highest (0.12), and three distinct groups were identified by the Duncan test (Table 3). Correlation between different traits The Pearson correlation method was used to examine the relationships between the variables in our study (Fig. 5). We observed a Pearson correlation coefficient of 0.79 between GCC and SV, indicating a strong positive correlation; this suggests that an increase in vigor is generally associated with an increase in canopy cover, and vice versa. Similarly, the phenological traits showed strong positive correlations with each other: TF and VGS (1.00), TF and RPS (0.72), TF and TPS (1.00), and TF and TM (0.99). In contrast, HI had a correlation coefficient of −0.70 with TF, VGS and TPS, indicating a negative correlation and suggesting that a prolongation of the phenological stages has a negative influence on yield under extended photoperiod. Multivariate analysis Canonical discriminant analysis To test the hypothesis of differentiation of the studied accessions according to their latitudinal origins, a canonical discriminant analysis was carried out, providing a graphical view that illustrated the existence of groups, using the origin of accessions (Low, Medium and High latitudes) as the dependent variable and time to flowering, time of pod set, time to maturity, harvest index, green canopy cover, seedling vigor, vegetative stage length, reproductive stage length and seed filling stage length as explanatory variables. The first two functions were significant: for the first function, Wilks' lambda = 0.57, Chi-square = 41.48 and P < 0.001, and for the second function, Wilks' lambda = 0.76, Chi-square = 20.44 and P < 0.001. The first function explained 50.8% and the second function 49.2% of the total variance, corresponding to canonical correlations of 0.5 and 0.49, respectively. The two-dimensional (2D) scatter diagram of the discriminant space (canonical plot) (Fig. 6) presents the distribution of samples separated by the first two functions. Based on the standardized coefficients of the discriminant function analysis (DFA), the accessions of High latitudinal origin were most heavily weighted in the negative part of DFA-F1, while those of Medium latitudinal origin were most heavily weighted in its positive part. The accessions of Low latitudinal origin were clearly distinguished from the other origins by function 2. These results suggest that the discriminant analysis successfully identified distinct characteristics between the groups, enabling them to be effectively discriminated in this reduced two-dimensional space. Principal component analysis Principal component analysis was applied to the mean values of all variables under the extended photoperiod treatment. The first two components explained 59% (PCA1) and 19.4% (PCA2) of the total variance (Fig. 7A). The first principal component was highly and positively correlated with time to flowering (0.98), vegetative stage length (0.98), time of pod set (0.99), and time to maturity (0.98), while highly and negatively correlated with harvest index (−0.75).
The second principal component was strongly and positively correlated with green canopy cover (0.93) and with seedling vigor (0.91).
Discussion Many studies on lentil have focused on the effect of genotype on physiological and morphological traits under normal temperature and photoperiod conditions, in either controlled environments or the field. However, studies on the sensitivity to prolonged photoperiod are limited. Therefore, our study aimed to investigate the effect of genotype and latitudinal origin of different lentil accessions in an extended photoperiod environment. Implications of latitudinal origin on photoperiodic response The impact of light conditions on flowering and plant development is of great importance, particularly in the context of plant adaptation to varying climates and daylengths. Plants are sensitive to the duration of daylight and darkness, which influences the start of flowering [51]. Long or short photoperiods can modify the flowering period according to the specific needs of each species or cultivar. For instance, rice plants from equatorial regions prefer shorter days to start flowering, while those from regions further from the equator require longer days [52]. Moreover, plants have the ability to adjust to environmental variations, including changes in photoperiod. When transplanted to new environments, plants can recalibrate their internal circadian clocks to adjust to local photoperiods [53]. Plants have photoreceptors, such as phytochromes and cryptochromes, that enable them to detect light, including red (R) and blue (B) light [54]. When activated by light, these photoreceptors initiate specific signaling pathways, acting as molecular switches. Light signals are integrated into the plant circadian clock, composed of genes and proteins that regulate gene expression throughout the day; among these genes, FLOWERING LOCUS T (FT) plays a key role in regulating flowering in response to specific light signals [55]. The diverse latitudinal origins of the lentil accessions in our study could potentially contribute to the variations observed in their photoperiodic responses. Geographic latitude, longitude and climate may influence the natural photoperiod to which these accessions have adapted over generations [27, 56]. This adaptation could have caused variations in their sensitivity to extended photoperiods, as shown by [57], who found that some wild lentil genotypes are less sensitive than cultivated ones to light quality. The response to photoperiodism is a major factor in determining the timing of flowering and is governed by the complex interplay between the internal circadian rhythm and external day length, which varies with geographical latitude [27]. Impact of extended photoperiod on genetic variation of phenological and reproductive stages The various lentil genotypes studied come from different geographical origins and have distinct genetic characteristics, resulting in varying sensitivity to environmental factors such as extended photoperiod. Previous research [32] shows that genotypes from different origins can show increased or reduced sensitivity to specific environmental conditions such as long days, vernalization and temperature. The application of an extended light duration induces early flowering in long-day plants (LDP) such as lentil and chickpea [58]. Similar results have been reported by [23] for advanced lines, local populations, and wild accessions (Lens orientalis) of lentil. According to [31], the time from sowing to flowering differed significantly among accessions and also varied with the photoperiodic regimes.
Moreover, [48] studied the responses of cultivated and wild-type lentil accessions in a growth chamber under controlled conditions (22 °C/16 h during the day and 16 °C/8 h at night) in different light environments (red/far-red ratio (R/FR) and photosynthetically active radiation (PAR)); the authors showed that time to flowering was significantly influenced by genotype, light environment, and the interaction between them, and that this is related to the origin of each accession. In the present study, several lentil accessions showed a significant delay in their flowering process, or even failed to flower within the experimental period. The reason for this response lies mainly in the photoperiod, i.e., the duration of light and darkness to which these plants are subjected. The accessions from higher latitudes, such as Russia, showed a marked tendency to delay flowering beyond 80 days, and 57% of these plants failed to flower in this experiment (Fig. 4), even under a prolonged photoperiod. This is explained by their genetic adaptation to environments where flowering occurs under a notably prolonged natural photoperiod, with up to 17 h of natural light, as shown in Table 1. Therefore, as LDP they may need to be given a light duration longer than their natural one, i.e., more than 22 h of light or continuous light (24 h), during the period between plant emergence and flowering, in order to respond to this light duration and accelerate their flowering [59, 60]. Another hypothesis is a possible need for vernalization in this group of accessions, a process in which prolonged exposure of lentil seeds to cold temperatures may be necessary to induce and accelerate flowering [32]. If the genetic variability observed in the response to photoperiodism is associated with a potential role of vernalization, it may highlight the complex interaction between genetic and environmental factors in the regulation of flowering. Further research questions that need future exploration arise regarding the flowering response of these accessions and their progenies (after crosses) to extended photoperiod under speed breeding. In contrast, accessions from Ethiopia and India, located at lower latitudes, have evolved naturally to prosper in more balanced light conditions, with days and nights typically lasting 12 to 14 h during their flowering period, as shown in Table 1. Previous studies of the lentil response to photoperiod have shown that genotypes from subtropical regions were less sensitive to variations in daylength [61]. These observations highlight the essential impact of plants' genetic adaptation to their local environment, particularly with regard to the length of day and night, knowledge that is crucial to breeding and selection programs aimed at developing varieties adapted to specific regions. Furthermore, the influence of photoperiod on plant growth and development is mainly linked to the regulation of the long-day-dependent flowering pathway, such as the FLOWERING LOCUS T (FT) pathway [55]. Rather than directly accelerating photosynthesis, a prolonged photoperiod promotes the transition to the early flowering phase by modulating this signaling pathway. Significant genetic variability was observed for the duration of the different development stages. The extended photoperiod significantly influenced the development stages of each genotype, as shown by [62].
Vegetative stage length and reproductive stage length were positively correlated with genotype earliness, whereas the seed filling stage showed the opposite trend; this is explained by time compensation, whereby the duration of the vegetative phase determines the rate and duration of the seed filling stage, as shown by [63]. Effect of extended photoperiod on genetic variability of vegetative growth In our study, high genetic variability (p < 0.001) was observed for green canopy cover among the studied accessions. Green canopy cover, determined with Canopeo, is a very important parameter for biomass estimation based on the percentage of green plant cover [64]. Moreover, the photoperiod regime can influence the growth and development of the plant, as shown by [24]. Significant genetic diversity was also observed among accessions with regard to seedling vigor. This trend has also been observed in lentil plants exposed to drought stress and well-watered conditions in previous studies [47, 65]; it indicates that when plants are subjected to stress conditions, whether controlled or not, they grow differently, either tolerating the stress or being sensitive to it. The importance of our study coincides with current efforts in the field of speed breeding and genetic improvement. The significant genetic variability observed for key traits such as flowering time, developmental stages and harvest index under extended photoperiod conditions has important implications for accelerated breeding. As breeders and researchers work to develop crop varieties with improved performance and yield potential, it is crucial to understand the genetic basis of rapid development. Our results provide valuable information on the possibility of exploiting genetic diversity to speed up the breeding process and obtain the desired characteristics through speed breeding (SB) techniques. By revealing the complex interaction between genetic diversity and photoperiodic responses, our study offers a valuable gateway to the targeted manipulation of reproductive traits, enabling the rapid creation of high-yielding crop varieties.
Conclusion The results of this study indicate that extended photoperiod strongly influenced the growth and development of the different lentil genotypes, and large genetic variability in the response to prolonged photoperiod was observed among the accessions. Genotypes of Low latitudinal origin showed early flowering and maturity and high yield, and therefore higher adaptability and easier use under extended photoperiod conditions without any initial flowering-induction strategy (e.g., vernalization). In contrast, many other accessions, especially those of High and Medium latitudinal origin, did not flower during this experiment; using these accessions under extended photoperiod would therefore be difficult and would require additional initial steps, such as vernalization, that could slow down the speed breeding process. Hence, our results suggest that Low latitudinal and some Medium latitudinal accessions are recommended for breeding programs applying extended photoperiod to accelerate plant growth and flowering.
Lentil is an important pulse that contributes to global food security and the sustainability of farming systems. Hence, it is important to increase the production of this crop, especially in the context of climate change, through plant breeding aimed at developing high-yielding and climate-smart cultivars. However, conventional plant breeding approaches are time- and resource-consuming. Thus, speed breeding techniques enabling rapid generation turnover could help accelerate the development of new varieties. The application of an extended photoperiod, prolonging the duration of the plant's exposure to light and shortening the dark phase, is among the simplest speed breeding techniques. In this study, the genetic variability of the response to extended photoperiod (22 h of light/2 h of dark at 25 °C) of a lentil collection of 80 landraces from low (0°–20°), medium (21°–40°) and high (41°–60°) latitudinal origins was investigated. Significant genetic variation was observed between accessions for time to flowering (40–120 days), time of pod set (45–130 days), time to maturity (64–150 days), harvest index (0–0.24), green canopy cover (0.39–5.62), seedling vigor (2–5), vegetative stage length (40–120 days), reproductive stage length (3–13 days), and seed filling stage length (6–25 days). Overall, the accessions of Low latitudinal origin responded favorably to the extended photoperiod, with almost all accessions flowering, while 18% and 57% of accessions originating from medium and high latitudinal areas, respectively, did not reach the flowering stage. These results enhance our understanding of lentil responses to photoperiodism under controlled conditions and are expected to play an important role in speed breeding based on the described protocol, helping lentil breeding programs choose appropriate initial treatments, such as vernalization, depending on the origin of the accession. Keywords
Acknowledgements Many thanks to the team at the Laboratory of Food Legumes Breeding at the Regional Center of Agricultural Research in Settat. Your help and expertise were very important to our study. Thank you for your wonderful collaboration, it really made our work great! Author contributions MM Conceptualization, methodology, software, validation, formal analysis, investigation, data curation, writing—original draft preparation, writing—review and editing. OI Conceptualization, methodology, validation, formal analysis, writing—review and editing, supervision, funding acquisition. AB Validation, formal analysis, writing—review and editing, supervision, funding acquisition. BB Writing—review and editing, supervision. Funding This research received no external funding. Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-15 23:43:46
Plant Methods. 2024 Jan 13; 20:9
oa_package/79/55/PMC10787969.tar.gz
PMC10787970
38218943
Introduction Asthma is a global bronchial inflammatory disease that affects individuals in all age groups, and its prevalence has shown an increasing trend in many countries [1]. Asthma is a heterogeneous disease characterized clinically by reversible bronchoconstriction and airway hyperresponsiveness [2]. Asthma can be classified into four phenotypes based on the predominant type of inflammatory cells in the sputum: eosinophilic asthma (EA), neutrophilic asthma (NA), paucigranulocytic asthma (PA), and mixed granulocytic asthma (MA) [3]. Distinguishing the asthma phenotypes facilitates the analysis of clinical features, biological markers, and individualized treatment. NA is usually associated with more severe asthma, glucocorticoid resistance, and poor prognosis [3]. Therefore, identifying relevant biomarkers and developing therapeutic strategies for NA are key research imperatives. The interleukin (IL)-36 family belongs to the IL-1 superfamily and comprises three endogenous agonists, IL-36α, -β, and -γ, which promote inflammatory cell infiltration through signaling at the IL-36 receptor (IL-36R) [4]. Under physiological conditions, low levels of IL-36 cytokine expression can be observed in organs such as the skin, intestine, lung and brain; during inflammation, IL-36 receptor agonists are predominantly expressed by keratinocytes, epithelial cells, and inflammatory monocytes/macrophages [5]. IL-36 cytokines are activated by cathepsin G, elastase, and proteinase-3, which are mainly released by activated neutrophils [6]. Studies have indicated the potential involvement of IL-36 in a wide range of inflammatory and oncogenic processes in the skin, lung, kidney, liver, and intestine, mediated via activation of immune and non-immune cells such as T cells, keratinocytes, and epithelial cells [7]. In a mouse model of unilateral ureteral obstruction, IL-36α was found to activate the IL-23/IL-17 axis, amplify inflammation, and promote the development of renal lesions; we hypothesized that a similar phenomenon may occur in the context of asthma [8]. One study found that IL-36γ promotes allergic rhinitis by enhancing eosinophil infiltration and that IL-36α is involved in the allergic inflammatory response by regulating Th17 [9]. There are many similarities in the pathogenesis of allergic asthma and allergic rhinitis, and these common diseases frequently occur together [10]. As mentioned above, the heterogeneity of asthma and of IL-36 may lead to inconsistency between the results of experimental studies. Therefore, in this study, we compared the sputum concentrations of IL-36 in asthmatic and healthy non-asthmatic individuals, and investigated the relationship between IL-36 and associated inflammatory cytokines. Furthermore, we investigated the sputum concentrations of IL-36 in patients with different asthma phenotypes.
Methods Study population The diagnosis of asthma was based on the Global Initiative for Asthma (GINA) guidelines, i.e., current episodes of respiratory symptoms, evidence of variable airflow obstruction, and clinical diagnosis [11]. This study required sputum induction maneuvers; therefore, only asthmatic patients in a mild, controlled stage were enrolled. The exclusion criteria were: (1) pregnant women; (2) patients with severe cardiovascular diseases; (3) malignant tumors; (4) active tuberculosis or interstitial lung disease; (5) history of oral corticosteroid or antibiotic therapy in the past year; (6) exacerbation of asthma within the 4-week period immediately preceding the study; and (7) change of treatment within the previous 4 weeks. In addition, age- and sex-matched healthy non-asthmatic subjects were recruited as controls. All asthma patients and healthy non-asthmatic subjects were recruited from the Second Hospital of Jilin University. All subjects completed a bronchodilator test prior to enrolment. All subjects were of Mongolian (East Asian) ethnicity. All subjects completed questionnaires covering treatment history, smoking history, and the presence of respiratory symptoms. All subjects provided written informed consent. The Ethics Committee of the Second Hospital of Jilin University granted ethical approval for this study (2016-34). Sputum collection After adequate cleaning of the oral cavity and pharynx, all participants inhaled ultrasonically nebulized hypertonic saline (4.5%) for 15 min to induce sputum. The induced sputum was collected into Petri dishes and the sputum plugs were isolated. Dithiothreitol (DTT) was added to lyse the sputum plugs and the volume of the sputum plugs was recorded. After 30 min of rotational mixing at room temperature, phosphate-buffered saline (PBS, pH 7.4) at 4 times the sputum volume was added and mixed. The suspension was filtered (60 μm) and centrifuged at 400×g for 10 min, and the supernatant was stored at −80 °C for subsequent experiments. Sputum cell smears were prepared by cell precipitation, fixed in methanol for 10 min, rinsed, stained with hematoxylin for 30 s, rinsed with a Chromotrope 2R (C2R acid)-paraffin mixture for 20 min, rinsed again, air-dried, and sealed with neutral resin [12, 13]. Measurement of IL-36 and other cytokines The concentrations of IL-36α, IL-36β, IL-36γ, IL-36Ra, and IL-1β were measured using commercial human ELISA kits (CUSABIO, China). The IL-2, IL-4, IL-6, IL-9, IL-10, IL-13, IL-17A, IL-17F, IL-22, IFN-γ, and TNF-α concentrations were determined using the Multi-Analyte Flow Assay Kit (BioLegend, USA) with a Cytometric Bead Array (CBA). The above assay steps were performed according to the manufacturers' recommended protocols. Asthma phenotype classification The numbers of the various inflammatory cells in the induced sputum smears were observed microscopically and recorded. Patients with neutrophils ≥ 61% in sputum were categorized as NA, patients with eosinophils ≥ 3% in sputum as EA, patients with eosinophils < 3% and neutrophils < 61% in sputum as PA, and patients with eosinophils ≥ 3% and neutrophils ≥ 61% in sputum as MA [14]. Statistical analysis All data were analyzed using the Statistical Package for the Social Sciences (SPSS) for Windows, version 20 (SPSS Inc., IL, USA).
Non-normally distributed continuous variables were subjected to logarithmic transformation, after which statistical analysis was performed on the normally distributed log-transformed data. Normally distributed variables were expressed as mean ± standard deviation (SD), and statistical analysis was performed using ANOVA with the least significant difference (LSD) test. Non-normally distributed variables were expressed as median and interquartile range (IQR), and statistical analysis was performed using the Kruskal-Wallis H test with Bonferroni correction or the Mann-Whitney U test. Categorical variables were analyzed using the Chi-squared test. Correlations between the inflammatory factors in sputum supernatant, and correlations of the inflammatory factors with lung function and with inflammatory cells in sputum, were analyzed using partial correlation. P values < 0.05 were considered indicative of statistical significance.
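As an illustration of this analysis pipeline, the minimal R sketch below applies the phenotype cut-offs and non-parametric tests described above to simulated data. The data frame and its column names (eos_pct, neut_pct, IL36a, neutrophils, age) are hypothetical placeholders, and partial correlation via the ppcor package stands in for the SPSS procedure actually used.

```r
# Minimal sketch with simulated data; the data frame and column names are assumed.
library(ppcor)   # partial correlation (stand-in for the SPSS partial-correlation procedure)

set.seed(1)
sputum <- data.frame(
  eos_pct     = runif(78, 0, 10),          # illustrative sputum eosinophil %
  neut_pct    = runif(78, 30, 90),         # illustrative sputum neutrophil %
  neutrophils = rpois(78, 50),             # illustrative neutrophil counts
  age         = round(runif(78, 20, 80)),
  IL36a       = rlnorm(78, 3, 0.5)         # illustrative IL-36 alpha concentration
)

# Phenotype rule from the Methods: neutrophils >= 61% and/or eosinophils >= 3%.
# Note: "NA" is the string label for neutrophilic asthma here, not R's missing value.
classify_phenotype <- function(eos_pct, neut_pct) {
  ifelse(eos_pct >= 3 & neut_pct >= 61, "MA",
  ifelse(neut_pct >= 61,                "NA",
  ifelse(eos_pct >= 3,                  "EA", "PA")))
}
sputum$phenotype <- classify_phenotype(sputum$eos_pct, sputum$neut_pct)

# Non-normally distributed cytokine: Kruskal-Wallis test across the four phenotypes,
# followed by pairwise Mann-Whitney (Wilcoxon rank-sum) tests with Bonferroni correction.
kruskal.test(IL36a ~ phenotype, data = sputum)
pairwise.wilcox.test(sputum$IL36a, sputum$phenotype, p.adjust.method = "bonferroni")

# Partial correlation between IL-36 alpha and sputum neutrophil count, controlling for age
# (age is an assumed covariate, used here only for illustration).
pcor.test(sputum$IL36a, sputum$neutrophils, sputum$age)
```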
Results Clinical characteristics of asthmatic patients and healthy non-asthmatic controls A total of 62 patients with asthma (27 males and 35 females) were included in this study, and sixteen healthy volunteers (10 males and 6 females) were enrolled in the control group. There were no significant between-group differences with respect to the baseline clinical data (P > 0.05). The predicted and post-bronchodilator values of forced expiratory volume in 1 s (FEV1) in the asthma group were significantly lower than those in the healthy non-asthmatic control group (P < 0.001 and 0.002, respectively). The numbers of eosinophils, neutrophils, macrophages, and lymphocytes in the induced sputum were significantly greater in the asthma group than in the control group (eosinophils: P < 0.001; neutrophils, macrophages, lymphocytes: P = 0.001) (Table 1). In the asthma group, the concentrations of IL-36α and IL-36β were significantly higher (P = 0.003 and 0.031), while the IL-36Ra concentration was significantly lower than in the control group (P < 0.001). However, there was no significant between-group difference with respect to IL-36γ concentration (P = 0.603). The concentrations of IL-10, IL-13, and IL-17A in the asthma group were significantly lower than those in the control group (IL-10: P = 0.043; IL-13: P = 0.014; IL-17A: P = 0.026). There were no significant between-group differences with respect to the other inflammatory factors (Fig. 1). Clinical features of inflammatory phenotypes in asthma The asthma group was further divided into EA, MA, NA, and PA groups based on the examination of induced sputum; the clinical characteristics of these groups were comparable (P > 0.05) (Table 2 and Fig. 2). Concentrations of IL-36 and other inflammatory mediators in asthma phenotypes Sputum IL-36α and IL-36β concentrations in the NA group were significantly higher than those in the PA and EA groups. Sputum IL-1β concentrations in the NA, PA, and MA groups were significantly higher than that in the EA group. Sputum IL-13 and IL-10 concentrations in the NA group were significantly lower than those in the PA and EA groups. Sputum IL-6 concentration in the NA group was significantly higher than that in the EA group. The concentrations of the other inflammatory factors were comparable among the groups (Fig. 3). Association between IL-36 and inflammatory cells We compared the inflammatory mediators and the concentrations of inflammatory cells in the induced sputum. IL-36α and IL-36β showed positive correlations with sputum neutrophils and total cell count (TCC) (R = 0.689, P < 0.01; R = 0.304, P = 0.008; R = 0.689, P < 0.042; R = 0.253, P = 0.026). In addition, there was a significant positive correlation between IL-36α and IL-36β (R = 0.658, P < 0.01) (Fig. 4). Association of IL-36 with other inflammatory mediators We compared the concentrations of IL-36 and other inflammatory mediators in the sputum supernatant. IL-36α, IL-36β, and IL-36γ showed strong positive correlations with IL-6, TNF-α, and IL-17A, respectively (R = 0.592, 0.451, and 0.431; P < 0.01) (Table 3). Association of other inflammatory mediators In addition, our study also innovatively performed multiple comparisons of IL-2, IL-4, IL-6, IL-9, IL-10, IL-13, IL-17A, IL-17F, IL-22, IFN-γ, and TNF-α (Fig. 5). We found significant positive correlations between IL-2 and IL-4 (R = 0.614), IL-9 (R = 0.710), IL-10 (R = 0.275), IL-13 (R = 0.327), IL-17A (R = 0.307), IL-17F (R = 0.628), IL-22 (R = 0.540), and IFN-γ (R = 0.546) (P < 0.05).
IL-1β had a significant positive correlation with IL-6 (R = 0.271; P < 0.05). IL-13 had significant positive correlations with IL-2 (R = 0.327), IL-4 (R = 0.272), IL-10 (R = 0.553), and IL-17F (R = 0.279) (P < 0.05). IL-4 had significant positive correlations with IL-2 (R = 0.614), IL-9 (R = 0.365), IL-10 (R = 0.350), IL-13 (R = 0.272), IL-17A (R = 0.506), IL-17F (R = 0.811), IL-22 (R = 0.738), and IFN-γ (R = 0.500) (P < 0.05). IL-6 was significantly and positively correlated with IL-1β and IFN-γ (R = 0.271 and 0.446; P < 0.05). IL-9 had significant positive correlations with IL-2 (R = 0.710), IL-4 (R = 0.365), IFN-γ (R = 0.377), and IL-17F (R = 0.314) (P < 0.05). IL-10 had significant positive correlations with IL-2 (R = 0.275), IL-13 (R = 0.553), IFN-γ (R = 0.363), and IL-17F (R = 0.258) (P < 0.05). IFN-γ was significantly correlated with IL-2 (R = 0.546), IL-4 (R = 0.500), IL-6 (R = 0.446), IL-9 (R = 0.377), IL-10 (R = 0.363), IL-17A (R = 0.418), IL-17F (R = 0.525), IL-22 (R = 0.432), and TNF-α (R = 0.366) (P < 0.05). TNF-α had significant positive correlations with IL-17A, IL-22, and IFN-γ (R = 0.333, 0.292, and 0.366; P < 0.05). IL-17A had significant positive correlations with IL-2 (R = 0.307), IL-4 (R = 0.506), IL-10 (R = 0.258), IL-17F (R = 0.512), IL-22 (R = 0.378), IFN-γ (R = 0.418), and TNF-α (R = 0.333) (P < 0.05). IL-17F showed significant positive correlations with IL-2 (R = 0.628), IL-4 (R = 0.811), IL-9 (R = 0.314), IL-10 (R = 0.472), IL-13 (R = 0.279), IL-17A (R = 0.512), IL-22 (R = 0.755), and IFN-γ (R = 0.525) (P < 0.05). IL-22 was significantly correlated with IL-2 (R = 0.540), IL-4 (R = 0.738), IL-10 (R = 0.321), IFN-γ (R = 0.432), TNF-α (R = 0.292), IL-17A (R = 0.378), and IL-17F (R = 0.755) (P < 0.05).
Discussion The involvement of IL-36 in the pathogenesis of autoimmune diseases is well established; however, its role in the pathogenesis of asthma is not well characterized. IL-1Rrp2 is the common binding receptor for all IL-36 isoforms, and IL-36α, IL-36β, and IL-36γ compete with IL-36Ra for binding to this receptor [15]. In our study, asthmatic patients had higher sputum IL-36α and IL-36β concentrations, and a lower IL-36Ra concentration, compared to healthy non-asthmatic controls. In a mouse model of S. aureus-induced epidermal inflammation, IL-36α and IL-4 released from keratinocytes were found to promote B-cell IgE secretion, plasma cell differentiation, and elevated serum IgE concentrations; these changes were significantly attenuated in IL-36R-deficient transgenic mice and in wild-type mice treated with anti-IL-36R antagonistic antibodies [16]. Our results support this study; however, there is a paucity of studies on IL-36 isoforms in different asthmatic phenotypes. Therefore, we sought to investigate whether IL-36 concentrations differed among asthma phenotypes and, if so, whether these differences could be explained by the heterogeneity of asthma inflammation or by differences in asthma phenotypes. We further examined the concentrations of the various IL-36 subtypes in the sputum supernatant of patients with different asthmatic phenotypes. Interestingly, sputum IL-36α and IL-36β concentrations were significantly higher in the NA group than in the PA and EA groups. However, there were no significant differences between the phenotypes with respect to sputum IL-36γ and IL-36Ra. Moreover, IL-36α and IL-36β showed a positive correlation with sputum neutrophils and TCC. These findings indicate a key role of IL-36 isoforms in inducing the infiltration and activity of neutrophils in asthma, and underline their involvement in the pathophysiology of airway inflammation in the asthmatic phenotypes. IL-36α has a pro-inflammatory effect on the lung. One study found that the neutrophil environment can activate IL-36α and IL-36γ [17]. Intratracheal administration of IL-36α in a mouse model was found to activate the NF-κB and MAPK pathways and induce neutrophil chemokine expression, ultimately leading to neutrophil influx [18, 19]. In addition, IL-36 pro-inflammatory factors can promote the expression of neutrophil chemokines such as CXCL8, CXCL1, and CXCL2, which induce neutrophil recruitment [19, 20]. IL-36 induces the production of pro-inflammatory factors such as IL-1β, TNF-α, IL-12, and IL-23. IL-36β is involved not only in inducing Th1 cell polarization but also in the Th1 immune response following mycobacterial infection [21]. These studies are consistent with our findings. In addition, IL-36 cytokines have been shown to be mainly involved in the Th1 immune response, while the in vivo expression of IL-36α and IL-36β promotes neutrophil recruitment in asthmatic airways [22, 23]. IL-36R expression is increased in naive CD4+ T cells, and IL-36β, together with IL-12, promotes the Th1 polarization of naive CD4+ T cells [21]. IL-36 has now been shown to be involved in the polarization of Th17 cells [22]. IL-36α and IL-17 form a strong feedback loop in skin inflammation signaling [24]. Our study also found a significant positive correlation between IL-36γ and IL-17A concentrations. It has been found that the level of IL-36γ increases after IL-17 stimulation [9].
Perhaps IL-36γ and IL-36β are jointly involved in an enhanced feedback loop with IL-17 that activates the immune response in asthma. IL-36α, IL-36β, IL-36γ, and IL-36Ra may be involved in the pathogenesis of the asthma phenotypes via different pathways and may be important biological targets for asthma therapy. Our study also had an interesting finding. It is well known that IL-13 and IL-17 are classical pro-inflammatory cytokines, usually expressed at higher levels in asthma patients. However, in our study, IL-13 and IL-17 levels were lower in the asthma group. This contradictory result is the reason for the further differentiation of asthma into four subtypes in our study. The heterogeneity of asthma leads to such contradictory results; therefore, further studies differentiating asthma into subtypes are important for individualized and precise treatment. In our study, we found that IL-13 and IL-10 levels were lower in neutrophilic asthma. IL-13 is a cytokine secreted mainly by Th2 cells, typically accompanying Th2 asthma, and it correlates with the severity of asthma, including eosinophilic airway inflammation, mucus secretion, airway hyperresponsiveness, and remodeling. In addition, anti-IL-13 therapy plays a significant role in targeted asthma therapy. CCL11 (eotaxin-1) and CCL17 promote IL-13-mediated eosinophil and leukocyte infiltration into the lung [25–28]. One study found significantly increased IL-13 in the BALF, lung biopsy specimens, and sputum of asthmatics; however, further differentiation of asthma subtypes revealed that IL-13 was not increased in non-eosinophilic asthma [28, 29]. IL-10 is a cytokine with both anti-inflammatory and pro-inflammatory effects and is mainly produced by activated monocytes, peripheral blood T cells, B lymphocytes, macrophages, mast cells, eosinophils, and dendritic cells. In asthma, IL-10 can negatively regulate the inflammatory response mediated by Th2 and Th17 cells and can alleviate the severity of neutrophilic asthma [30]. Due to the complex function of IL-36, the results of different studies may not be consistent with each other. A previous study found significantly increased expression of serum IL-36 cytokine mRNA and protein in patients with allergic rhinitis and asthma [31, 32], which is consistent with our study. However, serum IL-36γ and IL-36R mRNA and protein expression were also significantly elevated in patients with allergic rhinitis, which differs from our findings. These inconsistent findings may be attributable to the different proportions of patients with different asthma phenotypes in the study samples. In addition, our study also innovatively performed multiple comparisons of IL-2, IL-4, IL-6, IL-9, IL-10, IL-13, IL-17A, IL-17F, IL-22, IFN-γ, and TNF-α (Fig. 4). The cytokines that are closely related to the IL-36 isoforms are described below. IL-1β plays a pro-inflammatory role in the pathogenesis of asthma. IL-1β expression has been found in the lavage fluid, epithelial cells, and alveolar macrophages of asthmatic patients. IL-1β is a regulator of airway hyperresponsiveness in asthma and can mediate eosinophilic inflammation by inducing chemokines and cytokines. In addition, IL-1β is also involved in neutrophil-mediated inflammation [33]. IL-1β can promote the production of IL-6 and chemokines in the lung, recruit neutrophils, and promote the inflammatory response [34]. In addition, the pathogenesis of neutrophilic asthma is associated with IL-1β/IL-17-induced neutrophil activation [35, 36].
We have determined that the pro-inflammatory factor IL-36 can promote neutrophil aggregation in asthmatic airway inflammation, but the exact underlying mechanisms are not clear [22]. Therefore, we further examined asthma-associated inflammatory factors and assessed their correlation with IL-36. We observed that IL-36α was positively correlated with IL-6, IL-36β with TNF-α, and IL-36γ with IL-17A. IL-6 is known to induce neutrophil recruitment, and its level increases with increasing neutrophil numbers [37]. IL-36α has been shown to induce the assembly of MyD88-linked molecules into complexes and to activate the JNK, MAPK, and ERK1/2 signaling pathways, thereby enhancing IL-6 expression [38]. Studies have shown that, in the airway epithelium, IL-36α and IL-36γ promote IL-1β, IL-17A, and TNF-α, an effect mediated through Toll-like receptors 2/6, 3, 4, and 5; this is consistent with our findings [39]. We also found a positive correlation between IL-36β and TNF-α, which had not been reported in previous experiments. In vitro, treatment of cultured human keratinocytes with TNF-α and IL-17A resulted in significantly higher levels of IL-36α and IL-36γ, forming a positive feedback loop with Th17 cytokines, which also stimulated the production of pro-inflammatory cytokines such as TNF-α, IL-6, and IL-8 [40]. TNF-α is produced by a variety of pro-inflammatory cells and structural cells during the pathogenesis of asthma and is mainly associated with the Th1 response. It also acts with IL-17A to produce CXCL8, which promotes neutrophil aggregation, and is associated with the inflammatory mechanisms and airway hyperresponsiveness of neutrophilic asthma [41–44]. It plays an important role in airway remodeling and the inflammatory response, and promotes neutrophil and eosinophil migration by inducing pro-inflammatory factors and adhesion molecules such as vascular cell adhesion molecule 1 and intercellular adhesion molecule 1 [45]. Our study supports these results in that IL-36β showed a positive correlation with TNF-α. IL-17A is a characteristic cytokine of Th17 cells, and a previous study described the association of IL-36 with Th17 cellular responses. In our study, IL-36γ showed a positive correlation with IL-17A concentration, supporting the point mentioned above. However, our study found no significant differences in TNF-α and IL-17A concentrations between the asthma group and healthy non-asthmatic controls, or between the different asthma phenotypes. This may be related to our sample size and the regional characteristics of the asthma patients, and further underlines the heterogeneity of asthma. TNF-α, IL-17A, and IL-1β act in concert with IL-36 to regulate Th1 cell responses by sharing downstream signals through pathways such as JNK, MAPK, ERK, p38, and NF-κB [24, 39]. IL-36 may promote airway neutrophil aggregation and airway inflammation through the IL-6/IL-17A/TNF-α axis. Further exploration of the role of IL-36 receptor blockers in animal models of asthma and in in vitro experiments is required to better characterize the role of IL-36 in asthma. Based on our results, we suggest that IL-36 is associated with neutrophil recruitment in the airways and that IL-36 exacerbates the asthmatic airway inflammatory response via Th1-related cytokines. These results may serve as a basis for further investigation of the different pathophysiological mechanisms of IL-36 in NA and EA in the future.
Conclusions Our study indicates the involvement of IL-36α and IL-36β in the pathophysiology of airway inflammation in asthma, which is likely mediated via promotion of neutrophil recruitment in the airways. Our findings provide insights into the inflammatory pathways of neutrophilic asthma and identify a potential therapeutic target for the asthma phenotypes. However, more in vivo and in vitro experiments are required to investigate the role of IL-36 in various asthma phenotypes to assess the potential of IL-36-based therapeutic targets in asthma.
The interleukin (IL)-36 family is closely associated with inflammation and consists of IL-36α, IL-36β, IL-36γ, and IL-36Ra. The role of IL-36 in the context of asthma and asthmatic phenotypes is not well characterized. We examined sputum IL-36 levels in patients with different asthma phenotypes in order to unravel the mechanism of IL-36 in these phenotypes. Our objective was to investigate the induced sputum IL-36α, IL-36β, IL-36γ, and IL-36Ra concentrations in patients with mild asthma, and to analyze the relationship of these markers with lung function and other cytokines in patients with different asthma phenotypes. Induced sputum samples were collected from patients with mild controlled asthma (n = 62; 27 males; age 54.77 ± 15.49 years) and healthy non-asthmatic controls (n = 16; 10 males; age 54.25 ± 14.60 years). Inflammatory cell counts in sputum were determined. The concentrations of IL-36 and other cytokines in the sputum supernatant were measured by ELISA and Cytometric Bead Array. This is the first study to report the differential expression of the different IL-36 isoforms in different asthma phenotypes. IL-36α and IL-36β concentrations were significantly higher in the asthma group (P = 0.003 and 0.031), while IL-36Ra concentrations were significantly lower (P < 0.001), compared to healthy non-asthmatic controls. Sputum IL-36α and IL-36β concentrations in the neutrophilic asthma group were significantly higher than those in the paucigranulocytic asthma (n = 24) and eosinophilic asthma (n = 23) groups. IL-36α and IL-36β showed positive correlations with sputum neutrophils and total cell count (R = 0.689, P < 0.01; R = 0.304, P = 0.008; R = 0.689, P < 0.042; R = 0.253, P = 0.026). In conclusion, IL-36α and IL-36β may contribute to asthmatic airway inflammation by promoting neutrophil recruitment in the airways. Our study provides insights into the inflammatory pathways of neutrophilic asthma and identifies a potential therapeutic target. Keywords
Abbreviations IL: Interleukin; TNF: Tumor necrosis factor; ELISA: Enzyme-linked immunosorbent assay; BMI: Body mass index; AHR: Airway hyperresponsiveness; ACQ: Asthma control questionnaire; FeNO: Fractional exhaled nitric oxide; FEV1: Forced expiratory volume in 1 s; FVC: Forced vital capacity; TCC: Total cell count; SD: Standard deviation; ANOVA: Analysis of variance; LSD: Least significant difference; IQR: Interquartile range; OR: Odds ratio; CI: Confidence interval; ICS: Inhaled corticosteroid; NA: Neutrophilic asthma; EA: Eosinophilic asthma; PA: Paucigranulocytic asthma; MA: Mixed granulocytic asthma. Acknowledgements Not applicable. Author contributions PG contributed to the conception of the study. WL and JYL drafted the manuscript. HND, ZDW and YQH reviewed and revised it critically for important intellectual content. All authors revised the manuscript critically and approved the final version. Funding This research was funded by the Natural Science Foundation of Jilin Province (20210101460JC), the National Natural Science Foundation of China (82070037), the Jilin Province Natural Science Foundation (202000201384JC), the Jilin Province Development and Reform Commission Plan (2019C047-7), and the Jilin Provincial Department of Finance, Provincial Talent Project (2019SCZT033). The design of the study and the writing of the manuscript were performed in accordance with the rules of the funding bodies. Data availability All data generated or analyzed during this study are included in this article. Declarations Ethics approval and consent to participate This study was approved by the Ethics Committee of the Second Hospital of Jilin University (approval number: 2016-34). Written informed consent was obtained from all subjects prior to their enrollment. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:46
Allergy Asthma Clin Immunol. 2024 Jan 13; 20:3
oa_package/e2/e5/PMC10787970.tar.gz
PMC10787971
38218798
Introduction Heart failure (HF) is a prevalent ailment worldwide, and despite substantial advancements in medical technology over the past few decades, HF holds the global record for the highest fatality rates [ 1 , 2 ]. HF imposes a significant global burden, impacting more than 64 million individuals worldwide and incurring an annual cost exceeding 100 billion US dollars [ 3 – 5 ]. Research reveals that one out of every five individuals will encounter HF during their lifetime, and approximately half of these HF patients will not survive beyond five years [ 6 , 7 ]. Consequently, it becomes evident that HF shoulders a substantial share of the burden in terms of CVD-related morbidity, mortality, and healthcare expenditures [ 8 ]. Hence, given the global prevalence and significant burden of HF, it is necessary to assess HF-specific mortality and its associated risk factors. HF presents a debilitating state in which the heart’s inability to pump blood to adequately meet the body’s demands leads to the failure of multiple organs and eventual fatality [ 9 , 10 ]. Survival in patients with HF is a significant concern. Studies have shown that HF leads to a substantial loss of life expectancy, with comorbidities playing a major role in this loss [ 7 , 11 , 12 ]. A collection of factors, including lifestyle elements (such as inadequate diet, sedentary habits, smoking, and drug abuse), preexisting medical conditions (e.g., diabetes mellitus, hypertension, hyperlipidemia), physiological anomalies, and therapeutic interventions (such as radiation or chemotherapy), can contribute to the development of HF [ 9 , 13 ]. Analyzing modifiable risk factors can offer valuable insights into effective treatment and preventive measures to improve HF patient survival. Therefore, knowing the distribution of these factors holds significant importance. Despite the existence of numerous studies conducted in some regions, there are limited data in Iran. The topic becomes even more relevant when the risk factors for mortality are examined according to the specific cause of death. The variety of causes of death in patients with HF is high. Therefore, competing risk models can be used to investigate and analyze the time to death of patients. Competing risks refer to a situation in which an individual or unit is at risk of several types of events, but only one of them can actually occur. The Cox proportional hazards (PH) model is commonly used for competing risks analysis. The survival function estimator in this model, conditional on the covariates, assumes a constant proportional hazard. This means that the relative hazard between individuals remains constant over time. This assumption may not hold in practical scenarios where risks change over time. Additionally, in the estimation of survival probability, the application of traditional survival analysis methods such as CoxPH may lead to biases due to ignoring the competing risks that are present [ 14 , 15 ]. CoxPH is by far the most commonly used survival model in competing risks. However, it has limited compatibility with specific probability distributions for survival times. In such cases, the accelerated failure time (AFT) model can be a realistic alternative [ 16 ]. On the other hand, the AFT model shifts the focus to quantifying the direct influence of covariates on survival time, which is distinct from the hazard assessment in the Cox PH model [ 17 ]. Within the framework of the PH model, it is not feasible to make predictions without an estimate of the baseline hazard function.
Therefore, solely reporting coefficients, which is a common practice, prevents others from predicting survival. As the AFT model follows a log-linear structure, one can easily calculate a point estimate of survival for given covariates. Recent research has focused on improving the Cox PH model in competing risks. Some papers discuss a combination of Cox and Bayesian survival models to enhance both model interpretability and predictive power [ 18 , 19 ]. S.N. Al-Aziz et al. introduced a Bayesian methodology for analyzing competing risk data, utilizing a generalized log-logistic baseline distribution for the proportional hazard (PH) specification [ 20 ]. Traditional statistical inference techniques typically rely on estimating parameters using available data, with the maximum likelihood estimator (MLE) often being the preferred method. However, when dealing with survival data, it is important to consider the past information available, such as the medical history of patients. The MLE cannot incorporate prior information in data analysis. In contrast, Bayesian reasoning is renowned for its ability to incorporate prior information. Additionally, Bayesian methods provide more accurate estimation results than MLE [ 21 ]. Bayesian survival analysis in competing risks encompasses a range of models and techniques that aid in comprehending the duration of events and the factors that impact them [ 22 ]. Considering the limitations of the Cox model, another purpose of this study is to combine the AFT method with the Bayesian approach in a competing risks framework. On the other hand, very few studies have simultaneously explored these three approaches (competing risks, parametric models, and Bayesian analysis) when investigating risk factors for the survival of patients with HF. Therefore, the current study using the Bayesian AFT approach was designed to predict patient survival based on the cause of death and identify risk factors, specifically differentiating between causes of death (HF-related mortality and non-HF-related mortality).
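The following display is a standard textbook formulation of the AFT model, included here only to make the log-linear structure and the survival-time prediction mentioned above concrete; it is not reproduced from the study itself.

```latex
\[
\log T_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + \sigma\varepsilon_i ,
\qquad
S(t \mid \mathbf{x}_i) = S_0\!\left(\frac{t}{\exp(\mathbf{x}_i^{\top}\boldsymbol{\beta})}\right).
\]
```

Under this formulation, each covariate multiplies the expected survival time by its time ratio $\exp(\beta_j)$, and a point estimate of $S(t \mid \mathbf{x})$ is available once the error distribution of $\varepsilon$ is chosen (extreme value for Weibull, logistic for log-logistic, normal for log-normal).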
Methodology Study area The study was conducted in the Rajaie Cardiovascular Medical and Research Center (RCMRC), Tehran, Iran, which is considered one of the largest tertiary centers for cardiovascular medicine in the Middle East and includes many departments, among them the heart failure and transplantation department. Study design and population In this retrospective study, data were derived from the Rajaie Acute Systolic Heart Failure Registry (RASHF), the first HF registry in Iran. This registry was started in RCMRC, based on data from hospitalized patients with acute HF diagnoses. The data were collected and recorded in dedicated forms designed by the medical Information Technology team of the center. The data of interest of the RASHF registry include the following items: medical and drug history of patients, type of HF presentation (decompensated or de novo), cardiomyopathy type (nonischemic or ischemic), admission-time vital signs, initial clinical symptoms (dyspnea, chest pain, edema, etc.), precipitating factors of acute HF, laboratory findings during admission, baseline electrocardiogram and echocardiographic findings, medications during hospitalization and at discharge, in-hospital course and outcome status. The hospital information system (HIS) [ 23 ] was utilized to identify all patients enrolled in the RASHF registry from March 2018 to August 2018. The mortality status of the identified individuals was examined and followed up for up to five years (June 2023). In cases where the hospital records or death registration system lacked sufficient information, efforts were made to contact the individuals themselves or their families to complete the missing details. Utmost care was taken to handle this communication sensitively and without causing any discomfort to the individual or their family. The process was conducted indirectly to ensure that the sensitive nature of the event was respected and that information about the event’s status was obtained discreetly. Inclusion criteria Patients with a diagnosis of acute HF with reduced ejection fraction (HFrEF) based on international HF guidelines who were enrolled in the RASHF registry. Exclusion criteria Patients for whom sufficient information was not recorded in their files and individuals who had not received any treatment. Ending time Patients with HF who were enrolled in the study were followed up for mortality status for up to five years (June 2023) and categorized by the cause of death. Individuals whose mortality status was uncertain were censored. This means that the survival data are right-censored. According to the approach of this study, the cause of death was categorized into “HF-related mortality” and “non-HF-related mortality” as competing risks. Additionally, we considered in-hospital mortality. HF-related mortality Death due to HF complications such as causes of decompensation (infection, pulmonary emboli, electrolyte disturbance, etc.), low cardiac output state and shock, and arrhythmias. Non-HF-related mortality Death due to other causes (non-HF), for example, brain stroke, cancer, or old age. Statistical analysis In this study, categorical variables are reported as frequencies and percentages, and numeric variables are reported as medians. In addition, we considered the trend effect for ordinal categorical variables. Survival rates across variables were compared through the implementation of a log-rank test. In this study, we used the Bayesian parametric AFT method with competing risks analysis.
Employing the Bayesian AFT method in competing risks survival analysis leads to the creation of more accurate survival models, allowing us to examine the effects of different variables with greater precision, specifically in terms of the cause of death differentiation. In this approach, separate Bayesian models for competing risks are considered, and an appropriate distribution for survival time is selected to conduct the analysis (Fig. 1 ). Time ratio (TR A ): cause-specific TR for HF-related mortality; time ratio (TR B ): cause-specific TR for non-HF-related mortality. Bayesian models were compared using the deviance information criterion (DIC) to identify the best-fitting model; lower DIC values indicate a better fit to the data [ 18 ]. This part of the analysis was carried out using R 4.3.0 software utilizing the spBayesSurv package [ 24 ]. The significance level was set at 0.05. Then, the association between survival time and other variables was analyzed by univariate and multivariable Bayesian AFT regression by cause of death. These parts of the analysis were conducted using Stata 17 software (StataCorp, College Station, Texas, USA). Bayesian survival analysis is a method for calculating the probability of an event occurring based on prior information related to events associated with that phenomenon. The parameters include the regression coefficients of the variables. Various prior distributions can be considered for them. Determining the appropriate form of the prior can often be challenging. There is no definitive rule for selecting the best prior distribution to formulate the Bayes estimator. However, in cases where only limited or vague knowledge about the parameters is available, a noninformative prior can be employed [ 21 ]. In this study, we utilized sensitivity analysis for the optimal selection and tuning of the prior distribution variance. The reason for using noninformative prior distributions is often to allow the data to speak for themselves, ensuring that inferences are not influenced by external information unrelated to the current data. Consequently, all resulting inferences were entirely objective rather than subjective. Prior distribution $\pi(\theta)$ In this study, we utilized a normal distribution with a large variance (mean 0 and variance 10,000; noninformative) as the prior distribution for the regression coefficients [ 21 ]. Likelihood $L(\beta \mid X, t)$ The likelihood equation is as follows: $L(\beta \mid X, t) = \prod_{i=1}^{n} f(t_i)^{\delta_i}\, S(t_i)^{1-\delta_i}$, where $\delta_i$ is the censoring indicator (0 = censored and 1 = death) and $z_i = (\log t_i - x_i^{\top}\beta)/\sigma$. In Weibull regression, $S(t_i) = \exp\{-\exp(z_i)\}$ and $f(t_i) = \frac{1}{\sigma t_i}\exp(z_i)\exp\{-\exp(z_i)\}$; in log-logistic regression, $S(t_i) = \{1+\exp(z_i)\}^{-1}$ and $f(t_i) = \frac{\exp(z_i)}{\sigma t_i\{1+\exp(z_i)\}^{2}}$; in log-normal regression, $S(t_i) = 1-\Phi(z_i)$ and $f(t_i) = \frac{1}{\sigma t_i}\phi(z_i)$, where $\Phi$ and $\phi$ are the standard normal distribution and density functions. Posterior distribution The posterior distribution is proportional to the product of the prior distribution and the likelihood. Variables in the study In this study, death was considered an event of interest. The response variable was the survival time of HF patients (in months), which was defined as the difference between the time of diagnosis and the time to one of the events “HF-related mortality” and “non-HF-related mortality”. The variables in this study were categorized into three groups: demographic, disease symptoms, and clinical factors. Demographic variables: Age (years), sex, employment status, education level, place of residence, and marital status. Disease symptom variables: dyspnea, chest pain, limb swelling, temperature, and heart rate.
Clinical variables: history of hypertension, history of diabetes mellitus (DM), coronary artery disease (CAD), history of hyperlipidemia, smoking, chronic kidney disease (CKD), atrial fibrillation (AF), stroke, and acute decompensated HF (ADHF).
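As a rough illustration of the cause-specific set-up described above, the sketch below fits Weibull AFT models for the two causes of death on simulated data, treating death from the competing cause as censoring. It uses the frequentist survreg() function from the R survival package as a stand-in for the Bayesian estimation (spBayesSurv/Stata) actually used in the study, and all variable names and values are hypothetical.

```r
# Minimal sketch of cause-specific Weibull AFT modelling with two competing
# causes of death. Simulated data; survreg() is a frequentist stand-in for
# the Bayesian AFT fit (spBayesSurv / Stata) used in the study.
library(survival)

set.seed(1)
n  <- 435
df <- data.frame(
  time  = rexp(n, rate = 0.02),                        # survival time in months
  cause = sample(c("alive", "HF", "nonHF"), n, TRUE),  # "alive" = censored
  age   = rnorm(n, 57, 13),
  adhf  = rbinom(n, 1, 0.3)                            # acute decompensated HF
)

# Cause-specific models: the competing cause is treated as censored.
fit_hf    <- survreg(Surv(time, cause == "HF")    ~ age + adhf, data = df, dist = "weibull")
fit_nonhf <- survreg(Surv(time, cause == "nonHF") ~ age + adhf, data = df, dist = "weibull")

# In an AFT model, exp(coefficient) is the time ratio (TR).
exp(coef(fit_hf))     # cause-specific TRs for HF-related mortality
exp(coef(fit_nonhf))  # cause-specific TRs for non-HF-related mortality
```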
Results Participant characteristics The median survival time for the patients was 43.40 months. Out of 435 HF patients, 61.1% were male. The mean age of the patients was 56.57 years, ranging from 14 to 95 years. In addition, 86% of the patients had education levels below a diploma, 92% lived in the city, and 90% were married. In addition, 34% of patients presented to the hospital with dyspnea, while 88.3% reported chest pain, 89% exhibited limb swelling, 11% of patients had a heart rate < 60 beats/min, 25% of patients had a heart rate greater than 100 beats/min, and only 10% of patients had a temperature > 37.5 degrees Celsius (see Table 1 for more information). Comparison of mortality rates and participant characteristics between two causes of death At the end of the follow-up time, 24.6% of the patients were still alive, and the mortality rates due to HF and non-HF were 36.8% and 22.3%, respectively. In HF-related mortality, 64% were unemployed patients, 64% had education below the diploma level, 63% lived in the city, and 62% were married. In HF-related mortality, 61.5%, 62%, and 63% of patients had presented to the hospital with dyspnea, chest pain, and limb swelling, respectively. In non-HF-related mortality, 36% were employed patients, 36% had education below the diploma level, 37% lived in the city, 38% were married and 38%, 38%, and 37% had symptoms of dyspnea, chest pain, and limb swelling, respectively. The average body temperature was 36.56 degrees Celsius for patients who had HF-related mortality and 36.75 degrees Celsius for patients who had non-HF-related mortality (see Table 1 for more information). In HF-related mortality, the 1-, 3-, and 5-year survival rates were 80.66% (95% CI: 0.76–0.84), 68.03% (95% CI: 0.63–0.72), and 59.52% (95% CI: 0.54–0.64), respectively, and in non-HF-related mortality, they were 91.78% (95% CI: 0.88–0.94), 79.08% (95% CI: 0.74–0.83), and 70.29% (95% CI: 0.64–0.75), respectively. Outcome rates The mortality rate for HF and non-HF increased significantly with increasing age. Chest pain, hyperlipidemia, and chronic kidney disease were associated with higher outcome rates for both causes of death; however, certain variables exhibited elevated mortality rates only for non-HF-related mortality, and these differences did not reach statistical significance for HF-related mortality (significance threshold: P < 0.05) (see Table 2 for more information by cause of death). Bayesian model selection criteria According to the DIC values (Table 3 ), the Bayesian Weibull AFT model had the best fit to the HF dataset among the three models. Univariable Bayesian AFT competing risk parametric model Table 4 shows the final results of the univariable Bayesian Weibull AFT regression. In HF-related mortality, the survival time of patients was statistically significantly affected by age (TR = 0.98), chest pain (TR = 0.30), temperature (< 36 degrees Celsius) (TR = 0.51), hyperlipidemia (TR = 0.30), and ADHF (TR = 0.08). In non-HF-related mortality, age (TR = 0.97), chest pain (TR = 0.32), hypertension (TR = 0.53), CAD (TR = 0.52), hyperlipidemia (TR = 0.54), CKD (TR = 0.38), and AF (TR = 0.53) showed a significant relationship with reducing the survival time of patients. Subsequently, all significant variables determined through univariate analysis were incorporated into the multivariate parametric modeling approach. Sensitivity analysis Considering the sensitivity analysis results, there was a difference of more than 10% in most variables.
Therefore, given the sample size and the sensitivity of the analysis to variance changes, results were reported for both causes of death with a larger variance (10,000). This choice allows us to effectively represent the variations in the results (Tables 5 and 6 ). Additionally, considering the study aims, a larger variance can be a more appropriate choice for better examining and understanding the effects of variables. Multivariable Bayesian AFT competing risk parametric model Based on the results of the best model, with increasing age, the survival time of patients was shorter for HF-related mortality [time ratio (TR) = 0.98, 95% confidence interval (CI): 0.96–0.99]. In addition, patients who had ADHF [TR = 0.11, 95% (CI): 0.01–0.44] were associated with a lower survival time for HF-related mortality. Chest pain in HF-related mortality [TR = 0.41, 95% (CI): 0.10–0.96] and in non-HF-related mortality [TR = 0.38, 95% (CI): 0.12–0.86] was associated with a lower survival time. The next significant variable in HF-related mortality was hyperlipidemia (yes): [TR = 0.34, 95% (CI): 0.13–0.64], and in non-HF-related mortality hyperlipidemia (yes): [TR = 0.60, 95% (CI): 0.37–0.90]. In the Weibull survival model, the presence of hyperlipidemia was associated with a 66% and a 40% decrease in the survival time of patients for HF-related and non-HF-related mortality, respectively. In other words, hyperlipidemia increases the risk of both causes of death. CAD [TR = 0.65, 95% (CI): 0.38–0.98], CKD [TR = 0.52, 95% (CI): 0.28–0.87], and AF [TR = 0.53, 95% (CI): 0.32–0.81] were other variables that were directly related to the reduction in survival time of patients with non-HF-related mortality (Table 7 ).
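As a worked reading of the multivariable time ratios above (this is just the standard AFT interpretation, not an additional analysis from the study), a TR below 1 shortens the expected survival time by $(1-\mathrm{TR})\times 100\%$:

```latex
\[
\mathrm{TR}_{\text{hyperlipidemia, HF-related}} = 0.34
\;\Rightarrow\; (1-0.34)\times 100\% = 66\% \text{ shorter survival time},
\]
\[
\mathrm{TR}_{\text{hyperlipidemia, non-HF-related}} = 0.60
\;\Rightarrow\; (1-0.60)\times 100\% = 40\% \text{ shorter survival time}.
\]
```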
Discussion In this study, we investigated the survival risk factors in patients with HF using a Bayesian parametric survival modeling approach. Using the Bayesian approach for competing risks has advantages compared with other survival modeling methods. In this manner, by utilizing prior information and background knowledge about the parameters in the analysis of patient survival times, broken down by the cause of death, more precise estimates can be provided. Moreover, it allows for examining the uncertainty in estimates for each parameter and continually updating them with new data. Additionally, this approach provides high flexibility and allows different survival models to be specified easily by altering the distributions and functions in competing risks AFT models. This enables researchers to consider a broader and more diverse range of variables for examination, categorized by the cause of death. Therefore, Bayesian parametric models provide valuable tools for understanding the relationship between heart disease and survival outcomes [ 25 , 26 ]. In our dataset, among all the parametric models examined for both causes of death (HF-related mortality and non-HF-related mortality), the Weibull model outperformed the other models. Parametric models have been widely used in the analysis of survival data, including in the context of heart disease. These models specify the distribution of the time to event in terms of unknown parameters. Other studies have likewise found the Weibull distribution suitable for proportional hazard models in the analysis of HF data [ 27 , 28 ]. However, in some other studies, the Bayesian log-normal AFT model was found to be the best fit for analyzing the HF dataset [ 29 ]. In the current study, in HF-related mortality, the 1-, 3-, and 5-year survival rates were 80.66%, 68.03%, and 59.52%, respectively, and in non-HF-related mortality, they were 91.78%, 79.08%, and 70.29%, respectively. In line with this study, Jones NR et al. found that the survival rates for patients with chronic HF at 1, 2, and 5 years were 86.5%, 72.6%, and 56.7%, respectively [ 7 , 30 ]. Despite improvements in survival over the years, mortality associated with HF remains high [ 30 ]. Morbidity and mortality remain high for patients with HF, with a five-year mortality rate of approximately 50% [ 31 ]. It remains a prevalent condition among older adults, with a significant five-year mortality risk. Understanding the broader implications of HF can guide research, resource allocation, and policy-making for noncommunicable disease mitigation [ 32 ]. In this study, for patients who had mortality due to HF between 2018 and 2023, as age increased, the survival rate of patients decreased. Similar to our results, some research has demonstrated a clear relationship between age and survival rates among patients with HF [ 31 , 33 – 36 ]. The median age of our patients with both causes of death was less than 60 years, and the predominant sex was male. In a study in Asia, HF was more prevalent in men, and patients were younger than those in studies from Europe and the US [ 37 ]. HF is a common and growing health problem, with a prevalence that increases with age; it affects approximately 2% of the adult population and doubles in prevalence with each decade of age [ 38 ]. This can be caused by additional chronic ailments, weakness of the immune system due to old age, and delayed diagnosis in elderly patients.
Therefore, preventive strategies targeting HF risk factors should be prioritized for individuals aged 50 and above. Chest pain and hyperlipidemia were associated with a lower survival time. Chest pain is a common sign in patients with HF. Some studies have also reported that chest pain serves as a sign of exacerbation and worsening of patients’ cardiac conditions [ 39 ]. Hyperlipidemia emerged as another noteworthy factor associated with mortality, displaying an inverse correlation with patient survival time. Hyperlipidemia in adulthood is associated with an increased risk of mortality from future HF. This result aligns with findings from earlier research, which likewise indicated a negative relationship between hyperlipidemia and patient survival [ 36 , 40 , 41 ]. The association between hyperlipidemia and HF as a risk factor for mortality is significant in patients with HF. Hyperlipidemia can lead to the formation of fatty deposits in the walls of coronary arteries, impairing heart function and causing damage to the blood vessels and heart muscle. Other studies have shown similar results [ 42 , 43 ]. Therefore, controlling hyperlipidemia can help increase the survival time of patients with HF. Such measures include proper nutrition, regular exercise, and consistent use of lipid-lowering medications. ADHF was another factor associated with the survival time of patients with HF-related mortality. ADHF is a type of HF that requires urgent medical attention and hospitalization [ 44 ]. ADHF is the leading cause of hospital admissions in patients older than 65 years and is associated with poor outcomes, including rehospitalization and death [ 45 ]. The majority of patients with ADHF have a previous history of HF and present with symptoms and/or signs of congestion and normal or increased blood pressure [ 46 ]. Different classification criteria have been proposed for ADHF, reflecting the clinical heterogeneity of the syndrome, including classifications based on the history of HF, systolic blood pressure upon presentation, and the presence or absence of congestion and peripheral hypoperfusion [ 47 ]. CAD, CKD, and AF had a significant relationship with survival time in non-HF-related mortality in our study. Other studies have shown similar results; patients who have both CAD and HF are at a heightened risk of health complications, including mortality events [ 43 ]. Our study examined the relationship between CKD and mortality in patients with HF, with CKD emerging as a severe complication of HF. Individuals afflicted by both conditions exhibit more unfavorable outcomes, including a higher risk of mortality compared with those with a single condition [ 41 ]. CKD patients face an escalated likelihood of HF development, and the coexistence of HF in CKD patients exacerbates their prognosis [ 48 ]. In this study, AF was one of the significant factors contributing to non-HF-related mortality. According to a study, AF and HF are common cardiac conditions that often co-occur, sharing risk factors. AF can worsen HF, as seen in more than 50% of AF patients [ 49 ]. Therefore, preventing AF in HF involves lifestyle changes (changes in dietary patterns, increased physical activity, reduced consumption of drugs or alcohol, stress management, and improved sleep quality), screening, and optimal therapy [ 48 ]. Strengths and limitations The RASHF registry stands as the inaugural heart failure registry in Iran, and the data derived from it hold a unique value within our country.
The study’s strengths lie in its highly suitable sample, extended follow-up period, and utilization of Bayesian and AFT statistical techniques to identify risk groups. This study is an example of the significant utility of relative survival within HF research, particularly in competing risks. The findings of this study are reinforced by the appropriate sample size of patients visiting this hospital who come from all over the country and Iran’s neighboring countries. Therefore, the study draws on a more diverse and representative dataset, thereby enhancing the study’s generalizability. It also enables robust trend analysis and a comprehensive grasp of the broader impact of the topic. The main limitation of this study was the inadequate recording of the cause of death. To address this, researchers established contact with individuals or their families based on hospital record information to verify and ensure the accuracy of their status. To prevent bias in data collection and information bias, patient records were reviewed without knowledge of their final status, except for cases where hospital death had occurred.
Conclusion In this study, using a Bayesian approach, we concluded that chest pain and hyperlipidemia are significant risk factors for both HF-related and non-HF-related mortality. Furthermore, we have discussed risk factors separately for each cause of death. Exploring the survival duration of patients with HF by cause of death offers a valuable approach to tackling societal health issues, as it reveals factors linked to mortality. The findings of this study can heighten awareness regarding determinants that contribute to the cause of death in individuals with HF. Moreover, these scientific insights can be shared with health authorities, enabling policymakers to enhance public comprehension of factors that worsen the risk of HF-related mortality. This awareness is crucial because early screening and timely interventions can facilitate effective prevention, treatment, and preservation of lives.
Purpose Heart failure (HF) is a widespread ailment and is a primary contributor to hospital admissions. The focus of this study was to identify factors affecting the long-term survival of patients with HF, anticipate patient outcomes through cause-of-death analysis, and identify risk elements for preventive measures. Methods A total of 435 HF patients were enrolled from the medical records of the Rajaie Cardiovascular Medical and Research Center, covering data collected between March and August 2018. After a five-year follow-up (July 2023), patient outcomes were assessed based on the cause of death. Survival analysis was performed with the accelerated failure time (AFT) method under a Bayesian approach in the presence of competing risks. Results Based on the results of the best model for HF-related mortality, age [time ratio = 0.98, confidence interval 95%: 0.96–0.99] and ADHF [TR = 0.11, 95% (CI): 0.01–0.44] were associated with a lower survival time. Chest pain in HF-related mortality [TR = 0.41, 95% (CI): 0.10–0.96] and in non-HF-related mortality [TR = 0.38, 95% (CI): 0.12–0.86] was associated with a lower survival time. The next significant variable in HF-related mortality was hyperlipidemia (yes): [TR = 0.34, 95% (CI): 0.13–0.64], and in non-HF-related mortality hyperlipidemia (yes): [TR = 0.60, 95% (CI): 0.37–0.90]. CAD [TR = 0.65, 95% (CI): 0.38–0.98], CKD [TR = 0.52, 95% (CI): 0.28–0.87], and AF [TR = 0.53, 95% (CI): 0.32–0.81] were other variables that were directly related to the reduction in survival time of patients with non-HF-related mortality. Conclusion The study identified distinct predictive factors for overall survival among patients with HF-related mortality or non-HF-related mortality. This differentiated approach based on the cause of death contributes to the estimation of patient survival time and provides valuable insights for clinical decision-making. Keywords
Acknowledgements The authors express their sincere gratitude to the Research Deputy of Rajaie Cardiovascular, Medical, and Research Center, the HIS Department of the hospital, and the specialized cardiologist for HF for their invaluable collaboration. Informed consent All participants, or their legal guardians, provided informed written consent on registration in the database. Additionally, all methods were carried out according to relevant guidelines and regulations. Authors’ contributions Conceptualization: SN, MAJ, SM, EH. Data curation: SN, SM. Formal analysis: SN, MAJ. Methodology: SN, MAJ, EH. Writing – original draft: SN, MAJ. Writing – review & editing: SN, MAJ, SM, EH. Funding Not applicable. Availability of data and materials The datasets used in the current study are available from the corresponding author upon reasonable request. Declarations Ethics approval and consent to participate This study was approved by the ethics committee of the School of Medical Sciences – Tarbiat Modares University under the approval ID IR.MODARES.REC.1402.012. The participants’ privacy was preserved. All the processes were approved by international agreements (World Medical Association, Declaration of Helsinki, Ethical Principles for Medical Research Involving Human Subjects). Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:46
BMC Cardiovasc Disord. 2024 Jan 13; 24:45
oa_package/05/51/PMC10787971.tar.gz
PMC10787972
38218866
Introduction Bladder cancer (BLCA) is the most frequently diagnosed malignant tumor of the genitourinary system [ 1 ]. Although unprecedented progress has been made in early diagnosis of BLCA tumors and multiple treatment options have been established (such as surgery and intravesical BCG) for primary BLCA in the past decade, the high recurrence rate and poor prognosis of this disease remain major challenges, especially for individuals diagnosed with muscle-invasive BLCA [ 2 ]. Long-term regular infusion of chemotherapy drugs after surgery is the most effective preventive care to reduce the recurrence of nonmuscle-invasive BLCA, while postoperative chemotherapy is the first-line therapeutic strategy for BLCA with progression. However, due to intratumor heterogeneity and chemotherapy resistance, the efficacy of the current treatment methods for BLCA is largely limited, and the 5-year survival rate is still unsatisfactory [ 3 , 4 ]. The median survival of patients who received the most common chemotherapy regimen, gemcitabine and cisplatin (GC scheme), was limited to 14 months [ 5 ]. Therefore, exploring the mechanism of tumor resistance is crucial for discovering new targets for chemotherapy sensitivity and promoting the progress of precision therapy. It is well known that metabolic reprogramming is a hallmark of cancer [ 6 ]. Increasing evidence has shown that cancer cell response to treatment is controlled by the metabolic state, suggesting that targeting metabolism-related pathways could overcome resistance by controlling the metabolic state [ 7 ]. Li Y et al. reported that the GLUT1/ALDOB/G6PD axis regulates glucose metabolism reprogramming and promotes chemotherapy resistance in pancreatic cancer [ 8 ]. Zhou et al. proved the important role of lipid metabolism during the process of cancer resistance in the treatment of castration-resistant prostate cancer [ 9 ]. Wong TL et al. confirmed that SCD1 promotes the formation of lipid droplets, contributing to 5-fluorouracil and cisplatin resistance in gastric cancer [ 10 ]. Solanki S et al. identified amino acid metabolism as essential in the cellular reprogramming process of chemoresistance in patients with chemotherapy-resistant colon cancer [ 11 ]. However, there are few studies on metabolism-related pathways in chemotherapy resistance in BLCA. BLCA cells rely on their own unique metabolic transformation to maintain the energy needed for their growth and proliferation [ 12 ]. At the same time, the metabolism of bladder cancer is characterized by increased fatty acid synthesis and pentose phosphate pathway activity, and decreased AMP-activated protein kinase and Krebs cycle activity. The mRNA modification of PKM2 promotes glucose metabolism in BLCA [ 13 ]. Afonso J et al. described that glucose metabolism could be a target to improve BLCA immunotherapy [ 14 ]. However, there is a lack of systematic analysis on the relationship between the potential mechanism of chemotherapy resistance and metabolic reprogramming in BLCA. In this study, we obtained drug resistance-related differentially expressed genes by RNA sequencing of an established gemcitabine-resistant bladder cancer cell line, combined them with metabolism-related genes (MRGs), and then established a prognostic model through Cox and LASSO regression analyses and validated it in several BLCA databases. Our studies found that this model is a very accurate predictor of overall survival (OS), and is significantly related to metabolic reprogramming, gene mutation, and the tumor microenvironment.
In addition, FASN was considered the representative gene of RM-RM. We proved that FASN promoted BLCA gemcitabine resistance, while TVB-3166, an inhibitor of FASN, reversed BLCA gemcitabine resistance in vitro and in vivo. In summary, we provide a new model for predicting survival and guiding therapeutic strategies for BLCA patients.
Materials and methods Cell culture and reagents T24 and UMUC3 cells were obtained from the American Type Culture Collection (ATCC, Manassas, VA). These BLCA cells were cultured in DMEM (for UMUC3 and UMUC3-R) and RPMI-1640 (for T24 and T24-R) media supplemented with 10% fetal bovine serum (Gibco, USA). Lentiviruses used to knock down FASN and the corresponding vector were purchased from GeneChem. All procedures strictly followed the manufacturer's instructions. Gemcitabine (HY-17026) and TVB-3166 were purchased from MCE. Establishment of gemcitabine-resistant cell lines Two BLCA cell lines (T24 and UMUC3) were first incubated with gemcitabine at several concentrations (0–20 μg/ml) for 48 h. The IC50 values were calculated from the absorbance readings. Then, the cells were cultured with gemcitabine at the IC50 concentration, and the above steps were repeated. The resistance index (RI) was calculated as the IC50 of the drug-resistant cells divided by the IC50 of the wild-type (WT) cells. An RI of 1–5 indicated low drug resistance, 5–15 indicated moderate drug resistance, and more than 15 indicated high drug resistance. Previous studies have found that the IC50 values of gemcitabine-resistant cells in two cell lines are 5 to 10 times higher or more compared to WT cells [ 38 ]. When the RI was > 5, the drug-resistant cell lines were considered successfully constructed. RNA sequencing Three biological replicates were used for RNA sequencing after the establishment of the T24 gemcitabine-resistant cell line. This sequencing was completed by APT (APPLIED PROTEIN TECHNOLOGY). By means of the R “limma” (Version 3.54.0) package [ 39 ], we identified differentially expressed genes associated with gemcitabine resistance (p < 0.05, |fold change| > 2). Data acquisition The MRGs were collected from MSigDB [ 40 ]. The TCGA BLCA database was acquired from UCSC Xena as the training set, and two BLCA datasets, GSE69795 [ 41 ] and GSE31684 [ 42 ], were acquired from GEO as validation sets. The marker genes of endothelial cells and fibroblasts were collected from the literature, the CellMarker database and the R “xCell” (Version 1.1.0) package [ 43 ]. Visualization of differentially expressed genes The volcano plot and heatmap were generated with the R “ggplot2” (Version 3.4.0) package [ 44 ] to demonstrate the distribution of DEGs. Moreover, a Venn diagram showed the intersection of the gemcitabine resistance-related differentially expressed genes (R-DEGs) and the metabolism-related genes (MRGs), yielding the resistance and metabolism-related differentially expressed genes (RM-DEGs). Enrichment analysis of genes GO analysis and KEGG analysis were carried out by the R “clusterProfiler” (Version 4.6.0) package to deeply study the major molecular functions and significantly enriched pathways of the DEGs [ 45 ]. P < 0.05 was taken as the threshold for significance. Unsupervised clustering analysis We used the R “ConsensusClusterPlus” (Version 1.62.0) package to perform consensus clustering analysis [ 46 ]. Establishment and validation of the drug resistance and metabolism-related prognosis risk assessment model (RM-RM) First, we screened out the main genes that correlated with the OS of BLCA patients from RM-DEGs by using univariate Cox regression. Then, the R “glmnet” (Version 4.1–6) package [ 47 ] was used for LASSO Cox regression to avoid overfitting and to narrow the number of factors for predicting OS.
Finally, we further evaluated the genes that were recognized by LASSO regression using multiple Cox regression analysis, obtained seven key genes, and used them to create a prognostic risk model based on drug resistance and metabolism. The drug resistance and metabolism-related risk score (RM-RS) formula is described below: RM-RS = ∑ (β × Exp), in which β and Exp represent the regression coefficient and the standardized gene expression, respectively. Survival analysis In accordance with the median RM-RS, patients with BLCA were subdivided into a high RM-RS group and a low RM-RS group. Kaplan–Meier (KM) survival analysis was used to assess the difference in OS between the RM-RS groups. In addition, ROC curves were used to estimate the prognostic value of the RM-RS using the R “survivalROC” package. Subsequently, through the R “survival” (Version 3.5–5) package, we carried out the analysis of independent factors affecting BLCA prognosis by univariate and multivariate regression. The above methods were validated in two independent datasets. Immunohistochemical staining assay We performed an immunohistochemical staining assay on the tissue chips in accordance with a previously described method [ 48 ]. The antibodies we used included anti-GPC2 (1:200, AF2304SP, Goat, IgG, Novus), anti-CNOT6L (1:75, abs108959, Rabbit, IgG, Absin), anti-FASN (1:300, 66,591-1-Ig, Mouse, IgG, Proteintech), anti-MAP2 (1:2500, 66,846-1-Ig, Rabbit, IgG, Proteintech), anti-BMP6 (1:500, bs-10090R, Rabbit, IgG, Bioss), anti-CARD10 (1:300, bs-7081R, Rabbit, IgG, Bioss), anti-GALNT12 (1:100, ab201196, Rabbit, IgG, Abcam), anti-IgG (ab238004, Mouse, Abcam), anti-IgG (A7007, Goat, Beyotime), and anti-IgG (30,000-0-AP, Rabbit, Proteintech). The IRS (value, 0–12) was calculated by multiplying the staining strength grade by the positive area grade. The staining strength grades were defined as follows: 0, negative; 1, weak; 2, moderate; and 3, strong. The positive area grade was described as follows: zero-grade, less than 5%; first-grade, 5% to 25%; second-grade, 26% to 50%; third-grade, 51% to 75%; and fourth-grade, greater than 75%. GSEA and ssGSEA GSEA (Gene Set Enrichment Analysis) was carried out by means of the “clusterProfiler” package and GSEA software (4.3.2) to reveal the relevant signaling pathways, and visualization was implemented by means of the R “enrichplot” package (Version 1.20.0) and the “ggplot2” package. The ssGSEA (single sample Gene Set Enrichment Analysis) was performed by the R “GSVA” package (Version 1.48.3), and a pathway score for each sample was obtained from its gene expression profile. In order to explore the metabolic pathways of BLCA gene sets, we searched for relevant papers [ 49 , 50 ] and data in the MSigDB. Gene mutation analysis We obtained somatic mutation information using the TCGA BLCA database. Meanwhile, using the R “Maftools” (Version 2.14.0) package, we analyzed various differences in mutations in the two RM-RS subgroups [ 51 ]. Analysis of the TME In order to evaluate the immune and stromal scores of each BLCA patient, we used different algorithms and online tools: xCell, MCP-counter, and EPIC. Subsequently, we evaluated the infiltration of different cells in the two subgroups by box plot visualization. Finally, we performed an association analysis by means of the R “corrplot” package (Version 0.92) to assess the association between RM-RS and characteristic cell marker genes.
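A compact sketch of the LASSO Cox selection and RM-RS scoring steps described in the "Establishment and validation" and "Survival analysis" subsections above is shown below. It runs on simulated data with the survival and glmnet packages; the dimensions, gene names and object names are illustrative and are not taken from the study.

```r
# Sketch of the RM-RM pipeline: LASSO Cox selection of OS-related RM-DEGs,
# then RM-RS = sum(beta * standardized expression), split at the median.
# Simulated data; a mild survival signal is planted in the first two genes.
library(survival)
library(glmnet)

set.seed(42)
n <- 366; p <- 134                                   # patients x candidate genes
expr <- matrix(rnorm(n * p), n, p,
               dimnames = list(NULL, paste0("gene", seq_len(p))))
time   <- rexp(n, rate = 0.05 * exp(0.5 * expr[, 1] + 0.3 * expr[, 2]))
status <- rbinom(n, 1, 0.6)                          # 1 = death, 0 = censored

# LASSO Cox regression with cross-validated penalty selection.
cvfit <- cv.glmnet(expr, Surv(time, status), family = "cox", alpha = 1)
b     <- as.matrix(coef(cvfit, s = "lambda.min"))
sel   <- rownames(b)[b[, 1] != 0]                    # genes retained by LASSO

# Risk score as a weighted sum of standardized expression values.
rm_rs <- as.numeric(scale(expr[, sel, drop = FALSE]) %*% b[sel, 1])
group <- ifelse(rm_rs > median(rm_rs), "high", "low")

# Compare OS between the high and low RM-RS groups (log-rank test).
survdiff(Surv(time, status) ~ group)
```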
Multiplex immunofluorescence staining assay We performed an immunofluorescence staining assay on the tissue in accordance with the method illustrated previously [ 52 ]. The antibodies we used included anti-FASN (1:200, 66,591-1-Ig, ProteinTech), anti-BMP6 (1:3,000, bs-10090R, Bioss), anti-CXCL12 (1:200, 17,402-1-AP, ProteinTech) and anti-CD34 (1:200, ab81219, Abcam). Through ImageJ software analysis, we obtained corrected total cell fluorescence (CTCF) to evaluate the content of protein expression in BLCA and adjacent tissues. Prediction of drug sensitivity With the purpose of predicting the sensitivity of two risk subgroups to multiple chemotherapeutic drugs, we jointly analyzed the TCGA database, Genomics of Drug Sensitivity in Cancer (GDSC) and Cancer Therapeutics Response Portal (CTRP) data. By means of the R “oncoPredict” package [ 53 ], we obtained the IC50 of each sample in two different RM-RS groups for hundreds of drugs. Statistical analyses Our data processing was performed by R software (version 4.2.1). Human samples With the approval of the Ethics Committee of the First Affiliated Hospital of Zhengzhou University, we collected BLCA tissues and normal bladder tissues from BLCA patients; some samples were stored at -80 °C and others were embedded in paraffin. BODIPY staining First, we placed cells or fresh tissues in 4% paraformaldehyde solution. Subsequently, we incubated cells or tissues with BODIPY and DAPI in the dark for 30 min and 10 min, respectively. Then ImageJ software was used for analysis. FASN, FFA, TG and T-CHO measurement assay The levels of FASN were assessed by enzyme-linked immunosorbent assay (ELISA) in accordance with the FASN ELISA kit’s instructions (Abcam, ab279412). The contents of FFAs, TGs and T-CHO were correspondingly assessed by an FFA assay kit, TG assay kit, and T-CHO assay kit (Nanjing Jiancheng Bioengineering), in accordance with the instructions. Western blotting The proteins of cells and tissues were extracted using RIPA buffer containing phosphatase and protease inhibitors. Subsequently, 30 μg of protein was loaded onto a Bis–Tris gel for electrophoresis. Then, we transferred the proteins to polyvinylidene fluoride (PVDF) membranes and blocked the membranes in 5% skim milk. After that, we incubated the membrane with primary antibodies overnight, incubated it with the secondary antibody for 1 h, and exposed the membrane. The antibodies included anti-FASN (1:1,000, 66,591-1-Ig, ProteinTech) and anti-β-actin (1:10,000, 20,536-1-AP, Proteintech). Cell proliferation assays Gemcitabine-resistant cell lines (T24 and UMUC3 cells) treated with TVB-3166 (1 μmol) or transfected with shRNA were treated with gemcitabine (5 μg/ml). Cell viability was determined by using Cell Counting Kit-8 (CCK-8) in accordance with the manufacturer's instructions [ 54 ]. Drug sensitivity test After transfection with shRNA for 48 h or treatment with TVB-3166 (1 μmol), gemcitabine-resistant cell lines (T24 and UMUC3 cells) were treated with gemcitabine for 24 h at six concentrations (1 μg, 2 μg, 4 μg, 8 μg, 16 μg and 32 μg per ml). Their viabilities were detected by CCK-8 according to the guidelines provided by the manufacturer. Colony formation assay Different reagents were added as needed: gemcitabine (5 μg/ml) and TVB-3166 (1 μmol). A total of 1000 cells were cultured per well of a 6-well plate for 1 week, followed by colony analysis.
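The differential-expression screen described in the "RNA sequencing" and "Visualization of differentially expressed genes" subsections above (limma, p < 0.05 and |fold change| > 2) can be sketched as follows. The expression matrix and sample labels are simulated, and for raw RNA-seq counts a limma-voom (or similar normalization) step would normally precede this.

```r
# Sketch of the limma screen for gemcitabine-resistance DEGs
# (p < 0.05 and |fold change| > 2, i.e. |log2FC| > 1).
# Simulated, already log2-normalized expression; 3 resistant vs 3 parental replicates.
library(limma)

set.seed(7)
expr <- matrix(rnorm(3000 * 6, mean = 8), nrow = 3000,
               dimnames = list(paste0("gene", 1:3000),
                               c("R1", "R2", "R3", "WT1", "WT2", "WT3")))
expr[1:50, 1:3] <- expr[1:50, 1:3] + 2               # spike in some resistance DEGs

group  <- factor(c("resistant", "resistant", "resistant",
                   "parental",  "parental",  "parental"),
                 levels = c("parental", "resistant"))
design <- model.matrix(~ group)

fit <- eBayes(lmFit(expr, design))
tab <- topTable(fit, coef = "groupresistant", number = Inf)

# Resistance-associated DEGs under the thresholds used in the study.
degs <- subset(tab, P.Value < 0.05 & abs(logFC) > 1)
head(degs)
```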
Results Identification of gemcitabine resistance and metabolism-related differentially expressed genes in BLCA With the purpose of studying the molecular biological changes in BLCA cells after gemcitabine resistance, we obtained drug-resistant differential genes by RNA sequencing of the established gemcitabine-resistant BLCA cell line (Fig. 1 A). Subsequently, we performed GO (Additional file 1 : Figure S1A) and KEGG (Fig. 1 B) enrichment analyses. The KEGG results revealed that these genes were related to lipids, fatty acid metabolism, cholesterol metabolism and amino acid metabolism. The top ten GO terms were enriched in cholesterol synthesis and metabolism, the response to lipids, the extracellular matrix, and the response to chemicals, among others. With the purpose of studying the metabolic changes in BLCA cells after gemcitabine resistance, we further screened 597 resistance- and metabolism-related differentially expressed genes (RM-DEGs, Fig. 1 C). Patients with BLCA were divided into two subgroups on the basis of RM-DEG expression in the TCGA BLCA database by consensus clustering (Additional file 1 : Figure S1B–E). The two subgroups included 177 and 189 patients (Fig. 1 D, E), respectively. Through KM analysis, we discovered noteworthy differences in OS between the two subgroups (Fig. 1 F). Then, we carried out further functional analysis of RM-DEGs. The KEGG analysis revealed that the RM-DEGs were significantly linked with fatty acid biosynthesis, steroid biosynthesis, the PPAR signaling pathway, ferroptosis and other metabolic pathways (Fig. 1 G). The top ten GO enrichment pathways mainly included the following aspects (Fig. 1 H): biological processes included fatty acid, steroid and purine nucleotide metabolic processes; cellular components included the extracellular matrix, endoplasmic reticulum and lipid droplets; molecular functions included glycosyltransferase activity, oxidoreductase activity and extracellular matrix binding activity. In short, the above results revealed that metabolic reprogramming of tumors plays a significant role in drug resistance progression and in the overall survival of BLCA patients. Establishment of the RM-RM to predict the OS of BLCA patients First, by analyzing the relationship between each single gene and the OS of BLCA, we selected 134 OS-related RM-DEGs (P < 0.05, Additional file 2 : Form S1). As shown in Additional file 1 : Figure S2A, most of the OS-related RM-DEGs were closely correlated, indicating that the progression of BLCA resistance involves a global metabolic rearrangement. Then, we performed LASSO Cox regression analysis on OS-related RM-DEGs to further narrow the range of the primary genes that predict prognostic risk. As shown in Fig. 2 A, B, 28 genes were retained after LASSO regularization, which reduces overfitting and bias. Finally, through the multivariable Cox regression analysis, we obtained 7 independent prognostic genes. As shown in Fig. 2 C, the hazard ratios and 95% confidence intervals of four genes were below 1, and those of the remaining three genes were above 1. This suggested that GPC2, CNOT6L, GALNT12 and CARD10 were independent protective factors and that FASN, MAP2 and BMP6 were independent risk factors.
Through the gene coefficients obtained from the multivariable Cox regression analysis, we constructed the RM-RM, with the drug resistance and metabolism-related risk score (RM-RS) = (−0.16) × GPC2 expression + (−0.65) × CNOT6L expression + 0.42 × FASN expression + 0.18 × MAP2 expression + (−0.15) × GALNT12 expression + 0.18 × BMP6 expression + (−0.13) × CARD10 expression. After calculating the risk score, we divided 366 BLCA patients into a high-hazard cluster and a low-hazard cluster in accordance with the median of the RM-RS (Fig. 2 D). As shown in Fig. 2 E, the OS of the high-hazard cluster was clearly shorter than that of the low-hazard cluster. Compared with patients with a low RM-RS, patients with a high RM-RS usually had a poorer prognosis (Fig. 2 F). The results showed that the areas under the curve (AUCs) were 0.74, 0.75, and 0.76 at the first, third, and fifth years, respectively (Fig. 2 G). The ROC curve suggested that RM-RM had good sensitivity and specificity and was better than other clinical parameters (Fig. 2 H). These clinical parameters included sex, age, T stage, N stage, M stage and clinical stage. Finally, we carried out univariate regression and multivariate regression analyses. The results (Fig. 2I) showed that RM-RM was closely related to OS and was potentially the most meaningful independent predictor for BLCA. As shown in Fig. 2 J and Additional file 1 : Figure S2B, we found that the distribution of RM-RS was broadly consistent with the distribution of other clinical findings. We also found that as RM-RS increased, the expression of FASN, MAP2 and BMP6 increased, and that of GPC2, CNOT6L, GALNT12 and CARD10 decreased. Justification of the prognostic value of the RM-RM in two BLCA databases and real-world study In order to verify the prognostic value of the RM-RM, we examined two databases containing OS data of BLCA patients: GSE69795 and GSE31684. According to the calculation formula of RM-RS obtained above, we also calculated the RM-RS of each patient, and divided the patients into high-hazard clusters and low-hazard clusters in accordance with the RM-RS (Additional file 1 : Figure S3A). Similar to the TCGA dataset, we assessed the relationship between the RM-RS and the survival rate. The results demonstrated a noteworthy difference between the two clusters (Fig. 3 A), and the higher the RM-RS, the worse the prognosis of the patients (Additional file 1 : Figure S3B). As shown in Fig. 3 B, RM-RM had excellent diagnostic value for both short- and long-term survival. For the two independent validation sets, RM-RM was also superior to the available clinical features in terms of diagnostic sensitivity and specificity (Fig. 3 D). Next, we carried out univariate regression and multivariate regression analyses in accordance with the two databases. Although the clinical data of the two validation sets are not as comprehensive as the TCGA database, RM-RM was still the best independent predictor of OS among the available clinical variables in the two independent BLCA cohorts. In addition, as shown in Additional file 1 : Figure S3C, we obtained results consistent with the TCGA database in the two validation sets by analyzing the correlations among RM-RS, clinical characteristics and the expression of the independent prognostic genes. Finally, we further verified the prognostic value of the RM-RM by using collected tissue samples in a real-world study. As shown in Fig.
3 E and Additional file 1 : Figure S3D, immunohistochemistry (IHC) was performed to detect the protein expression of the genes in RM-RM, and a protein-based risk score (pRM-RS) was obtained according to the immunoreactive scores of the genes in RM-RM. The results were consistent with the TCGA database. FASN, MAP2 and BMP6 were highly expressed in bladder cancer, while GPC2, CNOT6L, GALNT12 and CARD10 were expressed at low levels in bladder cancer (Fig. 3 F). According to pRM-RS, BLCA patients with survival data were divided into two subgroups. The KM survival analysis also directly revealed that the OS of the high-hazard cluster was notably shorter than that of the low-hazard cluster (Fig. 3 G). Further analysis showed that pRM-RS was closely related to grade, T stage, M stage and clinical stage. In summary, we concluded that RM-RM had a high prognostic value for BLCA. The molecular function and mechanism of RM-RM in BLCA To analyze the molecular function of RM-RM, we completed GSEA and found that the risk model was strongly linked with the incidence, recurrence, distant metastasis, tumor proliferation and angiogenesis of BLCA (Fig. 4 A). With the purpose of further analyzing the mechanism of the model, we performed gene expression analysis on the two risk subgroups and obtained 878 significantly differentially expressed genes (DEGs), of which 687 genes were overexpressed in the high-hazard subgroup and 191 genes were overexpressed in the low-hazard subgroup (Fig. 4 B). Through KEGG analysis, we discovered that the DEGs were strongly connected with drug metabolism, regulation of lipolysis in adipocytes, galactose metabolism and the PPAR signaling pathway (Fig. 4 C). As shown in Additional file 1 : Figure S4A, the results of GO analysis demonstrated that these DEGs were strongly linked with the response to fibroblast growth factor, intermediate filament organization, cellular response to xenobiotic stimulus and intermediate filament cytoskeleton organization, suggesting that the two subgroups in RM-RM had different microenvironments and tumor stroma. Energy metabolism is an important support for tumor function. Through ssGSEA, we obtained the metabolic score of each BLCA patient in the TCGA database (Fig. 4 D). As can be seen from Fig. 4 E and Additional file 1 : Figure S4B, the high-hazard subgroup scored considerably higher than the low-hazard subgroup in terms of fatty acid synthesis, monocarboxylic acid transport and ATPase (Resp. complex V). In the GSEA of representative metabolic pathways, we obtained the same results (Fig. 4 F). The results suggested that the high-hazard subgroup was mainly involved in fatty acid synthesis, while the low-hazard subgroup correlated with phosphoinositide metabolism. The two subgroups also differed in amino acid metabolism. RM-RM is correlated with the mutation and tumor microenvironment characteristics of BLCA Gene mutations can lead to the development of mutant cells which may have some selective advantages over adjacent cells. To explore the connection between gene mutations and drug resistance in BLCA, we analyzed gene mutations in the RM-RM subgroups, as shown in Fig. 5 A. By comparing the top twenty genes with the highest mutation rates, we found noteworthy differences in gene mutation levels between the two groups. Missense variation was the most frequent category, and the results demonstrated no obvious differences in the transitions and transversions of mutant genes between the two subgroups.
Among the six transition and transversion categories in the subgroups, C>T transitions accounted for the highest proportion. By comparing the mutation probability of the two subgroups, we identified the ten most differentially mutated genes (Fig. 5 B). These gene mutations may be an important factor leading to the progression of drug resistance in BLCA. The tumor microenvironment (TME) mainly includes tumor cells, tumor extracellular matrix, immune cells, cancer-associated fibroblasts (CAFs), cancer-associated adipocytes and tumor-derived endothelial cells (TECs). The TME can be subdivided into an immunological microenvironment led by immune cells and a nonimmunological microenvironment led by cancer-associated fibroblasts. As shown in Fig. 5 C, the stroma score of the high RM-RS cluster was higher than that of the low RM-RS cluster, suggesting that the nonimmunological microenvironment led by fibroblasts in the high-hazard cluster was more active than that in the low-hazard cluster. The TME regulatory pathways enriched by GSEA for RM-RM mainly included positive regulation of fibroblast proliferation, drug responses, carcinoma-associated fibroblasts and angiogenesis, as shown in Additional file 1 : Figure S5A. Then, we used three classical algorithms: xCell, MCP-counter and EPIC, to calculate the proportions of TME cells in BLCA patients from the TCGA database (Fig. 5 D and Additional file 1 : Figure S5B). We found that CAFs, endothelial cells, adipocytes and CD4 + T cells were more abundant in the high RM-RS subgroup. As shown in Fig. 5 E, F and Additional file 1 : Figure S5C, the RM-RS was strongly associated with the marker gene expression of TECs and CAFs. The higher the expression of the risk genes (FASN and BMP6) (Fig. 5 G), the higher the expression of an endothelial cell marker gene (CD34) and a fibroblast marker gene (CXCL12). Sensitivity of Drugs in the two RM-RS Subgroups Given that the previous KEGG analysis (Fig. 4 C) demonstrated that RM-RS is involved in drug metabolism, we further studied the different sensitivities of BLCA individuals to drugs in the different RM-RS subgroups. First and foremost, we comprehensively analyzed the pathways of drug metabolism relevant to BLCA resistance through GSEA in the two RM-RS groups (Fig. 6 A). These results indicated that the high RM-RS cluster was correlated with drug response, aging, hypoxia, and doxorubicin resistance pathways, while the low RM-RS group correlated with endocrine therapy resistance, DNA repair, and decreased resistance to gefitinib. As shown in Fig. 6 B, the MSI score of the high RM-RS cluster was significantly lower than that of the low RM-RS cluster, and the exclusion score was significantly higher than that of the low RM-RS cluster. The results suggested that the immune escape potential of the high RM-RS group was enhanced, and that the effect of immunotherapy drugs may be poorer. As shown in Additional file 1 : Figure S5D, there was no meaningful difference in dysfunction scores between the two subgroups. To provide treatment guidance for different BLCA clusters, we compared the sensitivity of the two RM-RS clusters to various anticancer drugs. For chemotherapeutic drugs commonly used in BLCA, such as gemcitabine, carboplatin, docetaxel and epirubicin, the drug sensitivity of high RM-RS individuals was significantly lower than that of low RM-RS individuals (Fig. 6 C).
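A minimal sketch of the kind of between-group comparison that underlies Fig. 6C is given below. In the actual workflow the predicted IC50 values would come from the oncoPredict output described in the Methods; here they are simulated, and the column names are hypothetical.

```r
# Compare predicted gemcitabine IC50 between the high and low RM-RS groups
# (simulated stand-in for the oncoPredict-derived IC50 values).
set.seed(3)
drug_sens <- data.frame(
  rm_rs_group      = rep(c("high", "low"), each = 183),
  gemcitabine_ic50 = c(rlnorm(183, meanlog = 1.2, sdlog = 0.4),   # less sensitive
                       rlnorm(183, meanlog = 0.8, sdlog = 0.4))   # more sensitive
)

# Nonparametric comparison and a simple visualization.
wilcox.test(gemcitabine_ic50 ~ rm_rs_group, data = drug_sens)
boxplot(gemcitabine_ic50 ~ rm_rs_group, data = drug_sens,
        xlab = "RM-RS group", ylab = "Predicted gemcitabine IC50")
```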
We then identified drugs predicted to be effective in each subgroup. The high-risk ( Fig. 6 D ) subgroup was sensitive to BRD2/3/4 inhibitors (e.g., OTX015_1626), tankyrase inhibitors (e.g., WIKI4_1940), B-RafV600E inhibitors (e.g., PLX-4720_1036), and HMG-CoA reductase inhibitors (e.g., lovastatin), while the low-risk ( Fig. 6 E ) subgroup was sensitive to PARP inhibitors (e.g., Olaparib_1017), tyrosine kinase inhibitors (e.g., Gefitinib_1010), vincristine, and maleimide analogs (e.g., MIRA-1_1931). Upregulation of FASN promotes drug resistance and poor prognosis in BLCA Through the analysis of the molecular function and mechanism of RM-RM in bladder cancer, we found that BLCA resistance is closely related to lipid metabolism. To further uncover the connection between the RM-RM genes and gemcitabine resistance in BLCA, we established another gemcitabine-resistant bladder cancer cell line (UMUC3) (Additional file 1 : Figure S6A). The BODIPY assay ( Fig. 7 A ) was conducted, and the results validated our prediction that lipid metabolism in drug-resistant cells is more active. We measured the content of free fatty acids (FFAs), triglycerides (TGs), and total cholesterol (T-CHO) in gemcitabine-resistant cells and parental BLCA cells ( Fig. 7 B ) , and the results showed lipid accumulation in T24 gemcitabine-resistant (T24-R) cells and UMUC3 gemcitabine-resistant (UMUC3-R) cells. Through the establishment of the RM-RM, we found that FASN had the highest risk ratio ( Fig. 2 C ) . As shown in Fig. 7 C, FASN is overexpressed in drug-resistant BLCA cells. To confirm the role of FASN in the development of gemcitabine resistance, we first established T24 and UMUC3 gemcitabine-resistant cells with stable knockdown of FASN ( Fig. 7 D ) . Subsequently, we treated T24-R and UMUC3-R cells, with or without FASN knockdown, with gemcitabine. The results showed that T24-R and UMUC3-R cells were refractory to gemcitabine, whereas T24-R and UMUC3-R cells with FASN knockdown had restored sensitivity to gemcitabine, indicating that FASN promotes gemcitabine resistance in BLCA ( Fig. 7 E ) . These results also indicated that FASN overexpression promotes the proliferation of BLCA cells. The gemcitabine sensitivity of FASN-knockdown T24-R and UMUC3-R cells was consistent with the above results ( Fig. 7 F ) . In the colony formation assay shown in Fig. 7 G, after FASN expression was knocked down, the colony-forming ability of the drug-resistant cells was inhibited, and the inhibitory effect was more obvious under gemcitabine treatment, while the control group was not sensitive to gemcitabine (Additional file 1 : Figure S6B). Through the BODIPY assay ( Fig. 7 H ) and the determination of lipid content (Fig. 7I), we further verified the relationship between FASN expression and cellular lipid metabolism; the intracellular lipid content decreased after FASN knockdown. Therefore, we conclude that FASN promotes drug resistance progression in BLCA cells by driving lipid accumulation. To further verify our prediction, we conducted in vivo tumor formation experiments in mice ( Fig. 7 J ) . As shown in Fig. 7 K, L, after FASN knockdown, the tumor growth rate was significantly repressed and the resistance of the tumor to gemcitabine was reversed. Altogether, the above results revealed that knockdown of FASN inhibits tumorigenesis of gemcitabine-resistant BLCA cells in vitro and in vivo.
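The shift in gemcitabine sensitivity measured by CCK-8 is commonly summarised as an IC50 from a dose-response fit. The sketch below fits a four-parameter logistic curve to hypothetical viability values with SciPy; the doses and readings are placeholders, and the code illustrates the general approach rather than the authors' analysis.

# Fit a four-parameter logistic dose-response curve to CCK-8 viability data (illustrative sketch).
# Dose and viability values below are hypothetical placeholders, not measured data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    # standard 4-parameter logistic model for viability vs. drug concentration
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # uM, hypothetical
viability = np.array([0.98, 0.95, 0.90, 0.78, 0.55, 0.35, 0.20, 0.12])

params, _ = curve_fit(four_pl, dose, viability, p0=[0.1, 1.0, 1.0, 1.0], maxfev=10000)
print(f"estimated IC50 = {params[2]:.2f} uM")
# Comparing the fitted IC50 of resistant cells with and without FASN knockdown quantifies how far
# gemcitabine sensitivity is restored.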
TVB-3166 inhibited BLCA progression and reversed gemcitabine resistance TVB-3166 is an orally active, reversible and selective inhibitor of FASN. As shown in Additional file 1 : Figure S6C, under the action of TVB-3166, the FASN content of T24-R and UMUC3-R cells was significantly reduced. The results of plate cloning experiments and CCK-8 assays indicated that TVB-3166 could almost completely eliminate the influence of FASN on the proliferation and gemcitabine resistance of T24-R and UMUC3-R cells (Fig. 8 A–C), which also shows that TVB-3166 reversed gemcitabine resistance in BLCA. Regarding the lipid changes after TVB-3166 treatment, the BODIPY assay ( Fig. 8 D ) and the determination of lipid content ( Fig. 8 E ) showed lower lipid accumulation in the treatment group than in the control group. Next, in in vivo experiments, we obtained consistent results ( Fig. 8 F–H ) . These results demonstrated that, compared with the control group, the volume, proliferation rate and mass of subcutaneous tumors treated with TVB-3166 decreased, while the volume, proliferation rate and mass of subcutaneous tumors treated with gemcitabine alone were not significantly different from those of the control group. The volume, proliferation rate and mass of subcutaneous tumors in mice treated with gemcitabine after TVB-3166 treatment decreased significantly, indicating that TVB-3166 improved the sensitivity of gemcitabine-resistant BLCA cells to gemcitabine. The ELISA results (Fig. 8I) showed that TVB-3166 reduced tumor FASN levels, consistent with the in vitro results. BODIPY staining and IHC detection were carried out on the subcutaneous tumors ( Fig. 8 J ) . TVB-3166 treatment reduced lipid accumulation, inhibited cell proliferation and increased the apoptosis rate. Thus, compared with the control group, the proliferation and apoptosis rates of xenograft tumor cells treated with gemcitabine alone did not change significantly, whereas in tumors treated with gemcitabine after TVB-3166 the proliferation rate was inhibited and the apoptosis rate increased, indicating that TVB-3166 reversed gemcitabine resistance.
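Xenograft growth comparisons of the kind summarised above are usually derived from serial caliper measurements. The sketch below uses the common V = 0.5 × length × width² convention, a hypothetical measurements.csv and hypothetical group labels; the paper does not state its exact volume formula, so these are assumptions for illustration only.

# Summarise xenograft growth from caliper measurements (illustrative sketch; assumptions noted above).
# Hypothetical measurements.csv columns: group, mouse, day, length_mm, width_mm.
import pandas as pd
from scipy.stats import mannwhitneyu

m = pd.read_csv("measurements.csv")
m["volume_mm3"] = 0.5 * m["length_mm"] * m["width_mm"] ** 2      # common ellipsoid approximation

# take the last recorded measurement for each mouse as the endpoint volume
endpoint = m.sort_values("day").groupby(["group", "mouse"], as_index=False).last()
combo = endpoint.loc[endpoint["group"] == "TVB3166_then_gemcitabine", "volume_mm3"]
gem = endpoint.loc[endpoint["group"] == "gemcitabine_only", "volume_mm3"]

stat, p = mannwhitneyu(combo, gem, alternative="two-sided")
print(f"endpoint volume (median): combination = {combo.median():.0f} mm3, "
      f"gemcitabine alone = {gem.median():.0f} mm3, p = {p:.3g}")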
Discussion Gemcitabine is one of the most commonly used chemotherapy drugs in cancer, including BLCA. The emergence of gemcitabine resistance remains a major challenge in the treatment of tumor patients [ 15 ]. Drug-resistant cancers, under pharmacological pressure, develop complex molecular mechanisms that counteract treatment [ 16 ]. Gu J et al. found a novel therapeutic target to overcome gemcitabine resistance in pancreatic cancer [ 17 ]. Studies have found that cisplatin resistance in BLCA is related to epigenetic mechanisms such as DNA methylation, noncoding RNA regulation, m6A modification and posttranslational modification. Cocetta V et al. described the relationship between cisplatin resistance and cancer metabolism in detail [ 18 ]. However, there is a lack of systematic studies on gemcitabine resistance in BLCA cells. In our study, we identified and analyzed RM-DEGs based on RNA sequencing of gemcitabine-resistant BLCA cells and metabolic-related genes (MRGs). We also constructed and validated an RM-RM for predicting the OS of BLCA patients using several BLCA databases. Tumor cell metabolism is a variable, plastic, and adaptive phenotypic trait. It is the result of a combination of internal and external factors that enable cancer cells to survive, invade the body, and acquire resistance to antineoplastic drugs [ 19 ]. Bacci M et al. described the role of abnormal lipid metabolism in shaping the antitumor treatment response and maintaining drug resistance [ 20 ]. Considering the essential role of tumor metabolism in chemotherapy resistance, we collected all MRGs from the MSigDB, established a gemcitabine-resistant BLCA cell line, systematically characterized the metabolic features of BLCA resistance, and designed an RM-RM based on OS to provide precise prognostic information and treatment guidance for BLCA patients. In our research, we first identified and characterized RM-DEGs in the TCGA BLCA dataset. The RM-DEGs were mostly associated with fatty acid and amino acid metabolism. Notably, these RM-DEGs were also enriched in extracellular matrix organization and drug metabolic processes. According to these RM-DEGs, we subdivided BLCA patients into two clusters with notable differences in OS. These findings indicated the heterogeneity of BLCA metabolism and showed that BLCA patients with different modes of metabolism have different prognoses. Then, by means of univariate, LASSO and multivariate Cox regression analyses, we selected seven central RM-DEGs related to survival. Based on these seven genes, the TCGA dataset was used as the training set to create an RM-RM for predicting the survival of BLCA patients. The results showed that RM-RS was closely related to T, N, M and clinical stage, revealing that the deterioration of BLCA was associated with the reprogramming of tumor metabolism. Afterward, we used a variety of analytical methods to further demonstrate that RM-RM was a reliable independent predictor with the highest accuracy compared with other clinical indicators. Subsequently, two GEO datasets were used to further verify that RM-RM could be a promising clinical predictor for BLCA treatment. To explore the potential of RM-RM for clinical translation, we used immunohistochemistry to detect protein expression levels in clinical specimens. RM-RS in these patients was closely linked with prognosis and clinical characteristics.
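The univariate screening, LASSO selection and multivariate Cox modeling described above can be sketched with the lifelines package. The sketch below is an illustration under stated assumptions (a hypothetical expr_os.csv of per-patient gene expression with os_time and os_event columns, and a fixed penalty that would normally be chosen by cross-validation); it is not the authors' scripts.

# Sketch of prognostic-gene selection and risk-score construction (illustrative, not the authors' code).
# Assumes a hypothetical expr_os.csv with gene expression columns plus os_time and os_event.
import pandas as pd
from lifelines import CoxPHFitter

data = pd.read_csv("expr_os.csv")
genes = [c for c in data.columns if c not in ("os_time", "os_event")]

# 1) univariate screening: keep genes with p < 0.05 in single-gene Cox models
keep = []
for g in genes:
    cph = CoxPHFitter()
    cph.fit(data[[g, "os_time", "os_event"]], duration_col="os_time", event_col="os_event")
    if cph.summary.loc[g, "p"] < 0.05:
        keep.append(g)

# 2) LASSO-penalised Cox on the screened genes (penalty strength assumed here, tuned by CV in practice)
lasso = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
lasso.fit(data[keep + ["os_time", "os_event"]], duration_col="os_time", event_col="os_event")
selected = lasso.params_[lasso.params_.abs() > 1e-6].index.tolist()

# 3) multivariate Cox on the selected genes; risk score = sum(coefficient x expression)
final = CoxPHFitter()
final.fit(data[selected + ["os_time", "os_event"]], duration_col="os_time", event_col="os_event")
data["risk_score"] = (data[selected] * final.params_[selected]).sum(axis=1)
print(final.summary[["coef", "exp(coef)", "p"]])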
The metabolism of BLCA patients represents a key issue for cancer research. Cao D et al. found that some genes, by inhibiting glucose metabolism, repressed tumor proliferation and enhanced cisplatin-induced apoptosis of BLCA cells [ 21 ]. We divided BLCA patients into two subgroups according to RM-RS and performed GSEA and ssGSEA analyses. We found that gemcitabine resistance in BLCA cells was closely related to lipid metabolism: patients in the high RM-RS group showed more active lipid synthesis than those in the low RM-RS group. Through gene mutation analysis, we uncovered considerable differences between the two RM-RS subgroups. Previous studies have shown that ARID1A gene alterations may mediate resistance to platinum-based chemotherapy and to estrogen receptor degraders/modulators [ 22 ]. Our study also identified the ten genes with the most obvious differential mutations, including ARID1A. The specific mechanisms of these genes require further research. In addition, current research has shown that the TME plays a vital role in the process of tumor drug resistance [ 23 ] and has also demonstrated crosstalk between the metabolic reprogramming of cancer cells and changes in the TME [ 24 , 25 ]. Saw PE et al. proposed targeting cancer-associated fibroblasts (CAFs) to overcome anticancer drug resistance [ 26 ]. Notably, in our study, we found that endothelial cells and fibroblasts markedly infiltrated the TME of the high RM-RS subgroup. This may provide a new therapeutic target for patients with chemotherapy-resistant BLCA. We also found that the high RM-RS group was insensitive to a variety of classic chemotherapy regimens but was sensitive to other drugs, such as a B-RafV600E inhibitor (PLX-4720_1036) and a lipid-lowering drug (lovastatin). By predicting the different sensitivities of the two groups to anticancer drugs, we could provide suitable drugs for patients with different metabolic profiles, suggesting the potential application of RM-RM in clinical guidance in the future. The key genes in the RM-RM include FASN (Fatty Acid Synthase), MAP2 (Microtubule Associated Protein 2), BMP6 (Bone Morphogenetic Protein 6), GPC2 (Glypican 2), CNOT6L (CCR4-NOT Transcription Complex Subunit 6 Like), GALNT12 (Polypeptide N-Acetylgalactosaminyltransferase 12) and CARD10 (Caspase Recruitment Domain Family Member 10). We found that FASN, MAP2, and BMP6 were upregulated in BLCA tissues, while GPC2, CNOT6L, GALNT12 and CARD10 were downregulated. FASN is an essential enzyme in fatty acid synthesis [ 27 ]. It not only plays a vital role in lipid metabolism but is also related to tumor proliferation. In addition, FASN can modulate the immune microenvironment and participate in epithelial-mesenchymal transition, thereby regulating tumor progression [ 28 ]. Li Y et al. found that FASN was associated with sorafenib resistance in patients with liver cancer [ 29 ]. MAP2 belongs to the MAP2/Tau family of microtubule-associated proteins, which is involved in the recruitment of signaling proteins and the modulation of microtubule-mediated transport [ 30 ]. Pulkkinen HH et al. found that BMP proteins regulate angiogenesis and endothelial cell proliferation [ 31 ]. GPC2 is a promising pan-tumor therapeutic target [ 32 ]. Katsumura S et al. found that CNOT6L coordinates energy intake and expenditure in response to stimulation [ 33 ]. Guda K et al.
identified GALNT12 mutations in colon cancer patients and explored their function in the occurrence and progression of colon cancer [ 34 ]. CARD10 mediates the occurrence and progression of various kinds of cancers [ 35 ]. Zhu L et al. revealed that CARD10 also plays a crucial role in the formation of a growth factor signaling axis, involving TBKBP1 and TBK1, that mediates immunosuppression and tumorigenesis [ 36 ]. FASN, as a representative gene, was further verified as a promoting factor for gemcitabine resistance in vitro and in vivo. Previous research has shown that, by acting on oncogenic signaling and gene expression, the FASN inhibitor TVB-3166 enhances antitumor efficacy in various xenograft tumor models [ 37 ]. Our study further demonstrated that TVB-3166 can reverse gemcitabine resistance. In summary, this study constructed an RM-RM with high accuracy for predicting OS and treatment response in patients with bladder cancer. We hope that the constructed RM-RM can provide guidance in the treatment of BLCA patients.
Bladder cancer (BLCA) is the most frequent malignant tumor of the genitourinary system. Postoperative chemotherapy drug perfusion and systemic chemotherapy are important means of treating BLCA. However, once drug resistance occurs, BLCA develops rapidly after recurrence. BLCA cells rely on unique metabolic reprogramming to maintain their growth and proliferation. However, the relationship between changes in metabolic pattern and drug resistance in BLCA is unclear, and this problem currently lacks systematic research. In our research, we identified and analyzed resistance- and metabolism-related differentially expressed genes (RM-DEGs) based on RNA sequencing of a gemcitabine-resistant BLCA cell line and metabolic-related genes (MRGs). Then, we established a drug resistance- and metabolism-related model (RM-RM) through regression analysis to predict the overall survival of BLCA. We also confirmed that RM-RM had a significant correlation with tumor metabolism, gene mutations, the tumor microenvironment, and adverse drug reactions. Patients with a high drug resistance- and metabolism-related risk score (RM-RS) showed more active lipid synthesis than those with a low RM-RS. Further in vitro and in vivo studies were implemented using Fatty Acid Synthase (FASN), a representative gene that promotes gemcitabine resistance, and its inhibitor (TVB-3166), which can reverse this resistance effect. Statement of Significance The RM-RM helps to accurately predict survival and can be used to guide BLCA patients toward an appropriate treatment option, and inhibiting the FASN-mediated fatty acid synthesis pathway is presented as a potential therapeutic strategy for BLCA. Supplementary Information The online version contains supplementary material available at 10.1186/s12967-024-04867-8.
Supplementary Information
Author contributions CHG, FYT, and LJZ conceived the study. LJZ, KXD, and YHD designed experiments. LJZ, KXD, YHD, and YMZ performed experiments. LJZ, KXD, YMZ, and YBL assisted with animal experiments. MDR, YHL, and WBP helped to obtain BLCA patients’ clinical information. LJZ, KXD, LLZ, RHZ, and DPF analyzed the data. LJZ and KXD wrote the manuscript, and all authors reviewed and approved the manuscript for publication. Funding This work was supported by grants from the National Natural Sciences Foundation of China (NO. 82203099 to L.J.Z., NO. 82173294 to C.H.G.), the Training Program for Middle-aged and Young Discipline Leaders of Health of Henan Province (NO. HNSWJW-2021004 to C.H.G.); the Key Program Jointly Built by Henan Province and the Ministry of Medical Science and Technology (NO. SBGJ202102127 to C.H.G. and SBGJ202102095 to F.Y.T.); the Training Program of Young and Middle-aged Health Science and Technology Innovation Excellent Youth (NO. YXKC2021033 to C.H.G.); the Program of International Training of High-level Talents of Henan Province (NO. 202207 to C.H.G.); the Science and Technology Research and Development Plan Joint Foundation of Henan Province (NO. 222301420017 to C.H.G.); the Key Project of Research and Practice of Education and Teaching Reform of Zhengzhou University (NO. 2022ZZUJG082 to C.H.G.); the Professional Degree Graduate Quality Teaching Case Project of Henan Province (NO. YJS2023AL013 to C.H.G.); the Funding for Scientific Research and Innovation Team of The First Affiliated Hospital of Zhengzhou University (NO. QNCXTD2023023 to C.H.G.); the Key Technologies R & D Program of Henan Province (NO. 232102521032 to C.H.G.); the Basic Research Incubation Program for Young Teachers of Zhengzhou University (NO. JC21854035 to F.Y.T.); and the Joint Construction Project between Medical Science and Technology Research Project of Henan Province (No. LHGJ20220335 to L.J.Z.). Availability of data and materials All data are available in a public, open access repository. R and other custom scripts for analyzing data are available upon reasonable request. Declarations Ethics approval and consent to participate Not applicable. Animal experiments The subcutaneous xenograft model was approved by the Ethics Committee of the Experimental Animal Center of Zhengzhou University. Male BALB/c nude mice (approximately 4 weeks old), purchased from Beijing Weitong Lihua Experimental Animal Technology, were divided into 8 groups with 5 mice per group. Four groups were used to study the effect of FASN knockdown on the reversal of gemcitabine resistance in vivo: group I (DMSO: DMSO), group II (DMSO: gemcitabine), group III (FASN knockdown: DMSO) and group IV (FASN knockdown: gemcitabine). Four groups were used to study the effect of TVB-3166 on the reversal of gemcitabine resistance in vivo: group I (DMSO: DMSO), group II (DMSO: gemcitabine), group III (DMSO: TVB-3166) and group IV (gemcitabine: TVB-3166). In this experiment, 2 × 10 6 BLCA cells were injected subcutaneously into each mouse. After that, the tumor volume was recorded every 5 days. Drug therapy was initiated when the average tumor size reached 100–200 mm 3 . After 50 days, we removed the subcutaneous tumors from the mice. After measurement and recording, we stored part of each tumor in a − 80 °C freezer and embedded the rest in paraffin. Mice treated with gemcitabine were intraperitoneally injected with 50 mg/kg gemcitabine every 2 days.
Mice treated with TVB-3166 were intragastrically administered 60 mg/kg TVB-3166 daily for approximately 5 weeks. Consent for publication All authors agree to publish. Competing interests All the authors declared that they had no competing interests.
CC BY
no
2024-01-15 23:43:46
J Transl Med. 2024 Jan 13; 22:55
oa_package/78/36/PMC10787972.tar.gz
PMC10787973
38218759
Introduction Currently, 5.32 billion people in the world use a smartphone, and 4 out of 5 mobile devices are active and permanently connected to the Internet [ 1 ]. In addition, three-quarters of the planet’s inhabitants use social networks as communication channels and for social interaction [ 2 – 4 ]. Numerous studies have demonstrated the usefulness of new technologies and social networks in areas such as information exchange and socialisation [ 5 ], mental health [ 6 ], improved self-esteem [ 7 ], emotional benefit [ 4 ], self-expression and increased quality of life for individuals [ 8 , 9 ]. New technologies and social networks have improved living standards, and changes in people’s consumption habits have greatly boosted the development of tourism [ 10 , 11 ], which has further benefited the hospitality sector [ 12 , 13 ]. An increasing number of studies are addressing the need to limit the use of IT and reduce digital hyperconnection [ 3 , 14 , 15 ]. In recent years, these lines of research have focused on proposing therapies to combat the adverse health and wellbeing effects of this technological addiction [ 13 , 16 ]. Research on addiction to the Internet and social media stands out in the scientific literature [ 11 , 17 ], analysing the mechanical attitudes and behaviour of users who lack self-control and self-awareness [ 18 ]. Other studies have revealed the negative impacts of IT on society, such as the influence of fake news [ 9 ], polarization of public opinion [ 19 ], data protection and privacy (especially in the health sector, where patient data are highly sensitive and there are concerns that anonymisation is not sufficient to preserve patient privacy), cybercrime, addiction to being connected [ 20 ], the obsessive attraction of social media [ 21 ], hyperconnection of cyber workers [ 22 ], control of big data and virtual monetary systems without financial regulation [ 12 , 23 ], the new domain of Artificial Intelligence (AI) [ 24 ] and the use of virtual reality [ 2 ]. The tourism sector has responded to the growing demand for digital disconnection trips and holidays by offering Digital Free Tourism (DFT) experiences [ 25 ]. DFT has become an attractive tourism market and an emerging business opportunity. Existing lines of research have studied the application of new technologies in business and the hospitality sector but have not considered the concept of digital disconnection and wellbeing in a holiday context [ 15 , 17 , 26 ]. In the field of tourism, the motivations of tourists on a DFT trip have been examined, as have the effects that a DFT experience can have on well-being [ 7 , 27 – 29 ] and on health [ 9 , 16 , 30 ]. The results have identified DFT attributes and provided important findings that can inform strategies in the tourism sector and its promotion as an emerging and future market [ 11 , 31 – 33 ]. This phenomenon emerged in 2013 in the United States and extended throughout the world in only a few years, becoming a global emerging market opportunity for the tourism sector, for wellness and health and for economic sustainability [ 4 , 34 – 36 ]. DFT accommodation and travel agencies offer services for technological disconnection that limit access to information, with alternative activities, exclusive stays free of electronic devices or therapies such as yoga, hiking, mindfulness and pilates, which aim to improve the well-being of customers [ 15 , 37 ].
There are strategies that try to help users temporarily disassociate from their digital devices or use them in a balanced and responsible way [ 38 – 40 ]. However, there are barriers to the decision to take a DFT trip, and there are few studies on the behavioural intention of tourists to use experiences that limit the use of smartphones [ 3 , 41 ]. This study addresses a new problem at the intersection of technology and tourism. The methodology is based on an exploratory analysis, building on previous studies, using a survey of potential Spanish tourists. The scientific production of studies on digital detox, and of specific studies on Digital Free Tourism, is very scarce. The use of structural equations to evaluate the results of the questionnaires is a novelty of this work, as it employs a pioneering statistical analysis applying PLS-SEM to DFT [ 42 ]. The aim of this study is to examine the opportunities that DFT can bring to the tourism sector for tourism service providers and, in turn, to investigate the influence on tourists’ behavioural intention (BI) of the DFT attributes linked to social and family engagement, relaxation and wellbeing, and connection with nature. It also studies the impact of BI on DFT economic sustainability in the new economic scenario and the complex relationship between digital technologies and tourism. In summary, we use a quantitative approach that investigates the attitudes and motivations of potential DFT tourists by employing a new dimension, sustainability, as a cornerstone of DFT attributes and examining its relationship with the behavioural intention of these potential tourists. This research offers a thoughtful perspective to understand how providers can leverage DFT strategies to achieve greater appeal to potential travellers. The drive for new technologies and digitalisation has led to the need to rethink business models and segments oriented towards the sustainability and viability of tourism resources. In this sense, service providers are making efforts to turn their destinations into ideal destinations that meet the needs and provide the experiences that their potential customers seek [ 12 , 17 ]. With these premises, tourism managers are starting to promote sustainable strategies in line with their clients’ expectations. Environmental and economic sustainability is a priority objective for new manifestations of tourism, such as DFT, which proposes and promotes the revaluation of authentic resources [ 43 , 44 ]. For these reasons, the research population comprises potential clients of providers that promote these sustainable strategies and of digital detox experiences. The research questions are as follows: (1) to investigate whether, in the new digital era, DFT can offer a competitive advantage in attracting tourists and be a driver of economic sustainability; (2) to identify DFT as a new business model and a generator of business initiatives that promote health and wellness tourism; and (3) to expand the field of knowledge of DFT by adding economic sustainability as a factor influencing the behavioural intention of tourists seeking DFT experiences. After this introduction, the existing scientific literature about DFT is reviewed. Then, the methodology of the research and the data collection system, using an online questionnaire of 426 tourists, are described, the results obtained are discussed, and the conclusions of the study are presented.
Methods The objective of this research is to advance knowledge of new structures of motivational factors that can explain a tourist’s decision to make a DFT trip. To this end, we investigate whether family and social engagement and health and relaxation have a positive impact on the behavioural intention of the potential tourist and whether this intention in turn influences sustainability, given the importance of DFT in the new economic framework. For this purpose, a quantitative approach has been used with an online survey including question areas from previous studies [ 7 , 15 , 24 , 29 ]. The questionnaire investigates the profile, attitudes and motivations of DFT tourists [ 4 , 71 , 72 ]. This allows tourism service providers and managers to consult this research and adapt marketing strategies to tourists who demand these types of wellness and health services. Data collection The answers to questions about the proposed relationships and the influence of each dimension of DFT sustainability were measured with a five-point Likert scale (5 = “strongly agree”, 1 = “strongly disagree”) [ 73 ]. The study uses a conceptual model that analyses the interrelationships of the variables that contribute to behavioural intention for the DFT experience. The methodology employed is a questionnaire, in an attempt to reach a broad audience. In our research, we conceptualise DFT sustainability in a pioneering study, using an analysis of PLS-SEM results that can contribute to critical debates in technology and tourism studies. Common method bias was assessed using Harman’s single-factor test [ 43 ]. The model is used to analyse the influence of the above variables on economic sustainability and sustainable tourism. The theoretical model proposed above (see Fig. 1 ) connects the social and family engagement, nature connectedness and health-relaxation variables to behavioural intention for DFT and its contribution to economic sustainability. The indicators selected in previous studies were also analysed, and the most important studies and elements in the literature were reviewed [ 7 , 15 , 24 , 29 ]. To measure sustainability, the scales proposed in previous work were adapted [ 4 , 24 , 34 ], with items such as “DFT experiences generate profitability for the tourism sector”, “DFT is a driver of future economic sustainability”, “DFT promotes new jobs” and “DFT creates new companies and entrepreneurs”. This study uses the proposal of [ 38 ] to evaluate health relaxation. For nature connectedness, the items were proposed using the work of [ 7 , 15 , 26 ]. Social and family engagement items were adapted from those used by [ 11 , 16 , 38 ]. The analysis of behavioural intention was based on previous work by [ 7 , 15 , 29 , 49 ]. The items included for each construct are shown below in Tables 1 and 2 . The measurement scales, developed and adapted from the previous literature, are also shown in Table 2 . Sampling procedure A specially created online questionnaire was used in the research, and respondents were asked to answer questions about DFT. It is important to note that the questionnaires were anonymous. The questionnaire was previously validated with experts from the tourism sector and academics using a Google Forms format. Of the experts, 5 are academics from the University of Extremadura, 2 are researchers from the Lisbon Research Centre and 5 are professionals from the Spanish tourism sector. The procedure used non-probabilistic convenience sampling.
The questionnaire about tourist destinations, entrepreneurship, mindfulness, relaxation and meditation was advertised on social networks in Spain with the corresponding permission and rights of the respondent. Stratified sampling by age group was used in training sessions for businesspeople, academics and entrepreneurs, as well as public administration staff and industry professionals, who were given the questionnaire for research purposes and collaboration with the study. The data were collected between July and October 2022 and were first analysed for missing values. Of the 435 questionnaires received, 9 were eliminated due to incomplete or unanswered items and did not count towards the total sample. In the end, 426 questionnaires with valid responses were obtained. In the preparation, wording, order and characteristics of the questionnaire, the recommendations of [ 74 ] were taken into account. In particular, a control question was included, and questionnaires that did not pass it were eliminated. Likewise, an item was added to control for error, which turned out to be lower than the level indicated by these authors. Statistical analysis The statistical programs SPSS and Smart PLS 4 were used to analyse the results [ 42 ]. All questionnaire variables were pre-coded. IBM SPSS Statistics 26.0 was used for the descriptive analysis of the data, and Smart PLS 4 was used to confirm the relationships in the model and the research hypotheses [ 75 ]. PLS is an efficient way to analyse data using the SEM methodology, since it suits the theoretical and empirical conditions of behavioural and social science and is especially applicable when the conditions for a closed system are not met [ 76 ]. PLS was chosen for several reasons: first, PLS imposes no requirement of normality on the data and is a suitable technique for predicting dependent variables in small samples, given a certain degree of quality in the model [ 77 ]. Furthermore, PLS is more appropriate when the objective is to predict and investigate relatively new phenomena [ 78 ], as is the case of DFT and technology in the tourism sector, and it is also applied in business management research [ 66 , 76 ].
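The common method bias check mentioned in the data-collection subsection (Harman's single-factor test) is usually run by loading all questionnaire items onto one unrotated factor and inspecting the variance it explains. The sketch below approximates this with a principal component analysis in Python; the file responses.csv and its one-column-per-item layout are hypothetical, and the 50% rule of thumb is a convention rather than a result from this study.

# Harman's single-factor check for common method bias (illustrative sketch).
# Assumes a hypothetical responses.csv with one numeric column per Likert item (values 1-5).
import pandas as pd
from sklearn.decomposition import PCA

items = pd.read_csv("responses.csv")
standardised = (items - items.mean()) / items.std(ddof=0)
pca = PCA(n_components=1)
pca.fit(standardised)
share = pca.explained_variance_ratio_[0]
print(f"variance explained by the first unrotated factor: {share:.1%}")
# A single factor explaining more than ~50% of total variance is conventionally taken as a warning sign.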
Results Analysis of the measurement model The reliability and validity of the proposed model are checked to verify that the observed variables accurately measure the theoretical concepts. All the constructs are reflective, and the items show adequate reliability, with all factor loadings greater than 0.505 [ 79 ] and values between 0.759 and 0.949. Bootstrapping was used to obtain the t statistics, and all loadings were significant (99.99%). The calculations of Cronbach’s alpha for each of the constructs gave values higher than 0.7, the established minimum [ 80 ]. These values were between 0.869 (social and family engagement) and 0.920 (health-relaxation). Composite reliability indicated internal consistency, because all the constructs had values greater than 0.9, above the proposed minimum of 0.7 (Hair et al. 2011). The average variance extracted (AVE) ranged between 0.718 (social and family engagement) and 0.890 (behavioural intention), which verifies convergent validity, as all values are greater than the minimum of 0.50 (Fornell & Larcker, 1981) (see Table 2 ). An AVE ≥ 0.5 means that the construct explains more than half the variance of its indicators [ 42 , 81 , 82 ]. Table 3 shows that all the indicators used in the research meet the requirements established for discriminant validity, since the diagonal values are all higher than the other values in the same columns and rows [ 79 ]. In addition, the heterotrait-monotrait criterion (HTMT) was calculated to assess discriminant validity. HTMT values must be less than 1 to show that two factors are distinct [ 77 ]. Table 3 (final columns) shows that all variables had discriminant validity according to the HTMT criterion. From the results obtained, the measurement model was considered to have sufficient levels of validity and reliability, and the evaluation of the structural model could proceed. Structural model analysis Once the measurement model validity has been verified, the structural model of the different constructs is analysed to evaluate the coefficients and path significance [ 80 ]. The R 2 values, which represent the explained variance of the latent dependent variables, verify that the endogenous constructs of the model are predictive and explanatory [ 83 ] (see Table 4 ). The model explains 61.5% of the variance in nature connectedness, 36.9% in behavioural intention and 63.8% in economic sustainability. A two-tailed Student’s t-distribution was used to assess the significance of the β coefficients, using a bootstrapping process with 5000 samples [ 80 ]. The standardized β path coefficients of the model are greater than 0.2 [ 84 ] or have t values greater than 1.96, apart from the relationship between social and family engagement and behavioural intention. This means that all the hypotheses proposed in the structural model were significant except for the hypothesis about the relationship between social and family engagement and behavioural intention, which does not reach the minimum accepted value for the t statistic (see Table 5 ). Similarly, the p-values are all below the 0.05 level of significance, except for H3, the positive influence of social and family engagement on behavioural intention. The value obtained (0.41) exceeds the 0.05 threshold, so the hypothesis is not supported, which means that the confidence level is lower than 95%.
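The reliability and convergent-validity figures above follow standard formulas. The sketch below computes Cronbach's alpha from raw item scores, and composite reliability (CR) and AVE from standardised outer loadings; the item data and loadings used here are hypothetical placeholders and will not reproduce the values reported in the tables.

# Cronbach's alpha, composite reliability (CR) and AVE for one reflective construct (illustrative sketch).
import numpy as np
import pandas as pd

def cronbach_alpha(items):
    # items: DataFrame with one column per indicator of the construct
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def cr_and_ave(loadings):
    # loadings: standardised outer loadings of the construct's indicators
    lam = np.asarray(loadings, dtype=float)
    error = 1 - lam ** 2
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + error.sum())
    ave = (lam ** 2).mean()
    return cr, ave

# Placeholder data: real item scores would come from the survey and loadings from the PLS estimation.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(100, 4)), columns=["rel1", "rel2", "rel3", "rel4"])
print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
cr, ave = cr_and_ave([0.85, 0.88, 0.90, 0.82])
print(f"CR = {cr:.3f}, AVE = {ave:.3f}")     # conventional thresholds: alpha/CR > 0.7, AVE > 0.5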
Discussion The first and second hypotheses are validated, which shows that both ENG and REL have a positive influence on NAT [ 27 , 32 , 41 ]. This coincides with the findings of [ 29 , 49 ] and therefore validates the research hypotheses. Other authors, however, consider that the constant need for commitment to the family is an obstacle to enjoyment and creates an obligation to communicate. This can cause frustration and discomfort and means that tourists are under pressure because they do not have the necessary language skills to communicate [ 71 , 78 ]. This negative feeling is strongest in a natural, isolated and unconnected environment [ 50 , 85 ]. However, it has been proposed that this drawback does not influence behavioural intention towards connection with nature in DFT. Being in a cabin in the forest can help visitors gain self-knowledge and immerse themselves in the environment, but this does not happen in places such as hotels or urban resorts, where the feeling of being in a natural environment can be blurred, reducing the enjoyment of the natural environment. On the other hand, social commitment, defined as the process of establishing and improving ties with family and friends, has a positive influence on tourist motivation to participate in a DFT trip and therefore has an influence on tourist intention to experience DFT [ 7 , 29 , 72 ]. However, contrary to what is proposed in the third research hypothesis, ENG does not positively influence BI. Some authors affirm that it is not a predictor of DFT intention [ 50 ] because social bonding does not necessarily occur due to the DFT experience but is gained from the different activities that tourists do together in the company of others while on holiday. This may be because our family and friends are connected to the Internet and social media, and the best way to connect with them is digitally; thus, in these circumstances, being disconnected does not benefit social relationships [ 15 , 48 ]. On the other hand, the results suggest that nature connectedness and health relaxation contribute positively to behavioural intention, especially the first construct, so the fourth and fifth hypotheses are validated. These results are consistent with the idea that health relaxation and nature connectivity during a trip are decisive factors when recommending or repeating a DFT trip. The idea that the feeling of unity with the natural environment is an attractive reason for a DFT experience has been proposed in the scientific literature [ 6 , 7 ]. On the other hand, relaxation influences the motivation to make a DFT trip. A better sensory experience, a feeling of freedom and relaxation are possible rewards after engaging in activities without digital media [ 39 ]. Relaxation means feeling peaceful and quiet while refreshing the body and mind, which is in line with studies that have found that relaxation can motivate tourists to go sightseeing without digital devices [ 32 , 49 ]. These studies suggest that tourists’ intention not to use digital devices during their holidays originates in the belief that a DFT trip will allow a person to feel relaxed and mindful, allow them to express themselves and help them avoid technostress [ 29 , 68 ]. Other studies also support this theory about the benefits of DFT for improving health and well-being and increasing relaxation and satisfaction [ 6 , 26 , 38 ]. In addition, relaxation and mindfulness have positive impacts on tourists’ intention to travel while limiting the use of technology [ 3 , 9 ].
In other lines of research, it is concluded that the excessive use of new technologies minimizes commitment to family and social relationships [ 29 ]. The sixth hypothesis predicts that behavioural intention positively influences economic sustainability [ 34 , 54 ]. The results reveal (β = 0.xxx, t = 2.xxxx) that the hypothesis is supported. These data are in line with the studies of [ 24 , 40 , 70 ]. Figure 2 presents the model with the confirmed relationships of the research, the path results and their statistical significance. The above data justify proposing a new tourism product based on the voluntary absence of technology during a trip [ 7 , 16 ] to promote the sustainable economy of a territory [ 4 ], because behavioural intention clearly influences economic sustainability. This confirms what most authors in the peer-reviewed literature propose [ 14 , 15 ] for five of the research hypotheses used in this study. Four different elements of motivation that positively affect behavioural intention to go on a DFT trip have been identified. These are economic sustainability, social and family engagement, nature connectedness and health relaxation.
Conclusions Theoretical implications This study makes a theoretical contribution by positioning DFT as a driver for attracting potential tourists, helping service providers to offer efficient, sustainable services that support the health and wellbeing demanded by tourists who wish to disconnect digitally. DFT can be a driver of economic sustainability and health and wellness therapy in tourism in the digital age. Innovative technologies are increasingly important as a fundamental part of the tourist experience, and this study contributes to the scientific literature on the topic and adds to the limited number of studies on the motivation of tourists to go on a DFT trip. It advances knowledge by proposing a new structure of motivational factors that could explain the decision of a tourist to make a DFT trip. To this end, it empirically shows how variables such as social and family commitment, connection with nature, relaxation or a preference for economic sustainability influence the decision to make a trip that is free from technology and digital devices. Study participants consistently indicated the positive impacts that the temporary abandonment of digital devices can have during holiday periods. This empirical study also expands the lines of research on DFT and proposes new dimensions, such as disconnection from work, privacy or sustainable tourism and their positive impacts on the decision to disconnect digitally while taking a trip, to lay theoretical foundations for future studies of DFT. The study shows great variation in travellers’ desire to disconnect, as some already want to disconnect digitally, while others live attached to their devices and make them an integral part of their lives. Much of the debate about hyperconnectedness and the ubiquity of new technologies is reflected in the empirical data of this research, which lead to the conclusion that the decision to disconnect through DFT is complex: it is not just an individual choice, and other factors inhibit voluntary disconnection, influenced mainly by the social environment of work and family. Practical implications Being disconnected while travelling is an added value for DFT tourists. This has obvious advantages and means it can become part of the creation and design of products and DFT service packages with companies in the sector. All this can result in increased productivity and a contribution to well-being, sustainability and an improved lifestyle. At the same time, it offers an opportunity for small and medium-sized companies to turn the disadvantage of a lack of technology into a defining advantage for their product. DFT proposes an adequate use of existing resources that can be improved with efficient strategies and does not require large infrastructure or investment. Therefore, the practical findings of this research are that digital connections alter the travel experience and that recent technologies are being adopted rapidly in tourism. The omnipresence of digital connections is also changing, and a social transition is beginning around the connection-disconnection dilemma in tourism. Limitations and future research DFT is an alternative and emerging trend that companies and the tourism sector can use to adapt their offers to changing market needs. Disconnecting from the digital world, for leisure and for treatment, can be used to create a catalogue of services that can generate new jobs and specialize areas, spaces and regions for this type of tourism.
First, this study is limited to a single target audience made up of people of legal age who travel regularly. The complexity of making the decision to disconnect remains latent, since most of the scientific literature focuses on opinions and not on empirical data. There are individual choices and an age bias that allow us to distinguish digital profiles such as natives, immigrants, Generation Z, and millennials. There is also only limited empirical research in this area [ 7 , 27 , 32 , 86 ]. Second, potential DFT travellers supplied the data collected in this quantitative study. Future research could develop this conceptual model with travellers who have already taken a DFT trip and check their degree of loyalty and their recommendations to future tourists, which would allow the factors of intention for these experiences to be researched further. A temporary digital disconnection is accepted and considered positive. Encouraging self-awareness, control and moderation at different types of DFT accommodation (resort, hotel, mountain hut, rural accommodation), for various sizes of travel groups (singles, couples, with family, with friends), together with a research agenda of travel-related factors, can all be used to predict enjoyable elements for DFT travellers and therefore suggest a future roadmap including other conceptual models, such as well-being, DFT experience, and loyalty, that could influence decision-making and give a predictive model for DFT traveller services and products. Third, the conceptual framework can be useful for the future of tourist destinations that promote or specialize in DFT by generating a collaborative ecosystem that would allow the results of other studies to be extended, such as creating a network of disconnected tourist destinations or potential use by addiction treatment centres. However, some caveats help to focus the study aims. Responses about a disconnection experience may be affected by recall and forgetfulness bias in a hypothetical situation and may differ from how the traveller actually behaves once disconnected, so this area may warrant future lines of research examining how tourists on a DFT experience trip behave. Studies still need to analyse patterns in the intentions of tourists regarding digital disconnection experiences. The relationships between diverse types of DFT in various places around the world can suggest lines and areas of future research whose results can be used by professionals in the tourism sector to make pragmatic efforts to meet the potential DFT demand in the market. The aim is to strengthen remote areas with limited means of communication and without current tourism development, which are often rural and undeveloped areas away from busy tourist routes and mass tourism destinations. It is also an opportunity for combined destinations to establish a catalogue of innovative DFT services with the following characteristics: a lack of, or limited access to, IT, with leisure activities in an exclusive and healthy environment. This would allow entities to plan strategies and alternatives for tourism development and marketing policies focused on sustainability, relaxation and social and family commitment as valuable elements of well-being when taking part in the experience.
Background The excessive use of information technologies (IT) and online digital devices is causing symptoms of burnout, anxiety, stress and dependency that affect the physical and mental health of our society, extending to leisure time and work relationships. Digital free tourism (DFT) is a phenomenon that emerges as a solution to technostress and the pathologies derived from digital hyperconnection. The objective of this research is to advance knowledge of new structures of motivational factors that can explain the decision of a tourist to make a DFT trip. To this end, it is investigated whether family and social engagement and health and relaxation have a positive impact on the behavioural intention of the potential tourist and whether this influences sustainability, given the importance of DFT in the new economic framework. Methods With a quantitative approach, the methodology used consisted of an online questionnaire among potential travelers. IBM SPSS Statistics 22.0 statistical software was used to evaluate the data obtained and confirm the relationships of the model and the research hypotheses. Results The results of the questionnaire assessed the contribution of each construct to the tourist’s behavioural intention and decision to undertake a DFT experience. Conclusions DFT can be a driver of economic sustainability and health therapy in tourism in the digital age. This study aims to expand the lines of research on DFT and determine the complex factors that can lead a tourist to participate in the DFT experience. The results obtained can help managers of companies in the sector to offer more efficient and sustainable services that contribute to the health and wellbeing of tourists as a differentiating factor. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-023-17584-6.
Background Social and family engagement Every individual wants to have social relationships with others [ 15 , 25 ]. Therefore, the influence exerted by family and friends is very important in our lives, since it can directly affect decision-making and condition attitudes and behaviour [ 41 ]. Several studies have shown that enjoying a vacation with family or friends is beneficial for social relationships [ 16 ]. In addition, vacations help create close bonds that increase sociability [ 29 ], promote face-to-face communication [ 45 ], build trust [ 26 ] and generate social and family commitment [ 39 ]. Jiang and Balaji’s (2021) research identifies several reasons for tourists to participate in a DFT trip, including social and family engagement, connection with nature, relaxation and novelty, which all increase well-being during holidays. A DFT experience can even reinforce bonds with loved ones, without the constant need to send social media notifications, and enhance active engagement [ 7 ]. Family and social commitment can also represent a barrier to holiday enjoyment that must be negotiated and addressed personally by the potential tourist, as technology wields extensive power in the experience [ 28 ]. Nature connectedness Other research suggests that social and family engagement can be enhanced by an immersive experience in the natural environment [ 7 ]. Nature connectedness has been defined as the subjective feeling of association with the environment that implies meaningful participation in something greater than oneself and that can be related to scales of natural, emotional, social and psychological well-being [ 46 , 47 ]. Researchers have found that immersion in a natural environment creates positive communication bonds, enhances the development of personal skills, reinforces attachment and interpersonal harmony, and increases sociability [ 15 , 46 , 48 ]. When choosing an experience of well-being and relaxation with family and friends, DFT in nature gives the tourist a chance to try new activities that favour full enjoyment of the environment [ 26 ] and strengthen social and family bonds [ 38 ]. Nature can enhance self-expression and self-control and contribute to a healthy experience [ 11 ]. DFT limits the constant presence of IT through activities in environments that allow tourists to enter the natural environment [ 49 ] and engage with family, friends and fellow travellers in activities that improve interpersonal relationships without the need to rely on mobile devices. Thus, bonds of union are reinforced without the constant obligation to send emails, upload photos to social networks or publish videos on the Internet [ 38 , 40 ]. Hence, the following hypothesis is proposed for research: H1. Social and family engagement positively influences nature connectedness . Health and relaxation One of the consequences of a world with digital communication without limits is the increase in stress levels [ 21 , 50 ]. Some studies conclude that DFT is a way for tourists to reduce technostress, a subtype of stress characterized by a loss of control due to being connected to the Internet with devices such as smartphones, causing frustration, anxiety and an absence of privacy [ 51 ]. DFT can allow tourists to escape from their usual work routines and disconnect in the middle of nature with limited use of IT [ 29 , 52 ]. This increases the feeling of well-being and relaxation [ 6 , 15 , 53 , 54 ].
It also improves the participants’ health by avoiding compulsive use of the Internet in daily online activities, such as posting on social networks, instant messaging, sending and receiving emails or watching online videos [ 15 , 21 , 38 ]. Hence, the following research hypothesis is proposed: H2. Health relaxation positively influences nature connectedness . Behavioural intention Behavioural intention is the subjective probability that a person will act in a certain way and display a particular behaviour [ 41 ]. In the tourism sector, conceptual models have tried to investigate which factors influence the behaviour of a tourist when choosing a type of experience and how these affect the tourist’s intention to book a trip [ 50 ]. DFT reduces the negative impact of technology and the Internet during leisure activities and holidays by limiting the use of digital devices that cause distractions and pathologies [ 14 ]. Excessive use of technology causes technostress, depression, low self-esteem, anxiety and other new disorders associated with technological addiction, such as nomophobia, FOMO disorder, and phubbing [ 55 , 56 ]. This study builds on previous research, but the model offers several new contributions. It is similar to the model of [ 7 ], whose authors present a digital-free tourism holiday model as a new approach to tourism well-being. Additionally, previous studies such as Zhuang et al. [ 57 ] and Jiang and Balaji [ 15 ] have shown different models and relations among variables, such as the positive relation between ‘Use digital technologies during holidays’ and ‘Tourist self-control during holidays’. Egger et al. [ 32 ] and Dickinson et al. [ 26 ] presented the negative influence of ‘Use digital technologies during holidays’ on ‘Technology dark traits in holidays’. On the other hand, ‘Technology dark traits in holidays’ has a positive influence on DFT [ 58 ]. Finally, Jackson [ 59 ] and Fong et al. [ 60 ] established the influence of ‘Tourist attribution’ on DFT. Several investigations have concluded that certain factors, such as social and family engagement, nature connectedness and health relaxation, favour the intention to participate in a DFT experience and positively affect tourists’ behavioural intention [ 7 , 15 , 29 ]. Social and family engagement can influence tourists’ intention to choose a DFT experience, and an increasing number of friends, family members and private circles recommend enjoying DFT trips [ 49 ]. Based on the above literature, the following research hypothesis was proposed: H3. Social and family engagement positively influences behavioural intention . As seen above, an immersive trip in nature can motivate a person to escape from a hyperconnected world [ 18 , 26 ]. This increases tourists’ enjoyment of the trip [ 29 ]. This approach has been supported by other studies researching digital disconnection experiences at destinations surrounded by nature, such as campsites [ 26 ], detox retreats [ 61 ] or mountain huts [ 49 ]. All of these factors provoke positive and authentic emotions in tourists, who consider them decisive elements when making a DFT trip with full immersion in nature [ 6 , 62 ]. The contributions of this type of trip to well-being and health mean that behavioural intention is positive when connecting with nature on a DFT experience [ 7 , 15 , 63 ]. Hence, the following research hypothesis is proposed: H4. Nature connectedness influences behavioural intention .
In addition to social and family engagement and nature connectedness, the desire for relaxation and health is also an element that can condition the decision to choose a DFT destination [ 20 , 50 ]. Numerous studies address the negative impacts of technology addiction and its harmful effects on health [ 64 ]. There is high demand for DFT from users who want to mitigate the negative effects of hyperconnection and find enjoyment, pleasure and spirituality [ 7 , 26 , 29 ]. Suppliers in the tourism sector have tried to channel this intention to meet the demand for the well-being of their customers [ 18 , 35 , 65 ]. Based on the above, the following research hypothesis is proposed: H5. Health relaxation positively influences behavioural intention . Economic sustainability and sustainable tourism The revolution and transformation of tourism caused by IT plays a fundamental role in world economies [ 7 ]. The United Nations World Tourism Organization predicts that there will be over 1,800 million tourists by 2030 [ 1 ]. This will generate income, create new jobs and promote economic opportunities that can increase the sustainability and profitability of the tourism industry [ 66 ]. Technical, social, environmental, economic and political challenges all affect demand and sustainability in many countries that already promote tourism in nature [ 67 ]. The economic sustainability of tourism should allow for viable long-term economic projects that produce socioeconomic benefits for all stakeholders, including alleviating poverty, income-generating opportunities, stable employment, and social services for host communities [ 1 ]. Therefore, sustainability must satisfy the different stakeholders so that there are positive feelings regarding social commitments, the defence of natural resources and improvements in the tourist experience [ 34 ]. DFT can be relevant for the sustainability and profitability of tourist destinations and is important for their economies [ 4 ]. In addition, DFT aims to maintain tourist satisfaction and ensure that tourists live a meaningful experience that will make them aware of sustainability issues and sustainable tourism. Existing studies indicate that tourists are increasingly attracted to new sustainable experiences that are completely different from saturated mass tourism and that focus on well-being and authenticity at a DFT destination [ 49 , 68 ]. A DFT tourist seeks a balance between good infrastructure, safety, healthy activities, new experiences, personalized offerings and respect for the environment [ 9 , 17 , 36 ], and an experience that includes quality services that protect nature and the environment and offer more efficient, sustainable services without noise or light pollution [ 7 , 54 ]. All these elements are an integral part of sustainable tourism for economic development, society and the environment [ 69 ]. The opportunity that DFT offers for business growth and job creation [ 35 , 70 ], as a new market niche for companies and new entrepreneurs, can have an impact on the decision of the DFT tourist and condition their behavioural intention for a trip [ 24 , 34 ]. This means that tourist destinations must promote and specialize in these types of experiences [ 7 , 16 , 50 ]. The last research hypothesis is proposed based on these studies: H6. Behavioural intention positively influences economic sustainability . The relationships between the distinct factors are shown in Fig. 1 . Electronic supplementary material Below is the link to the electronic supplementary material.
Acknowledgements The authors would like to thank all who responded to the questionnaire for their participation. We would also like to thank the reviewers, experts and colleagues who commented on the drafts. Author contributions All authors made substantial contributions to the literature review and the analysis and interpretation of the data in producing the combined framework. All authors reviewed, revised, and approved the final manuscript. This research confirms that all methods used (an anonymous questionnaire) were carried out in accordance with all relevant guidelines and regulations. The questionnaire on tourism destinations, entrepreneurship, mindfulness, relaxation and meditation was advertised on social media in Spain with the permission and rights of the respondent. The official publication is available at https://doe.juntaex.es/pdfs/doe/2017/740o/17060728.pdf . The UEX regulations are transparent and guarantee the right of access to the public information of the University of Extremadura, presenting to citizens the most relevant information on its governance, processes, procedures and accountability. The datasets are offered in international standard formats so that they are easily reusable by software applications that wish to use these data and represent the information as open linked data, with the maximum level of reusability of 5 stars, recommended by the W3C. If you need any datasets, or queries combining information on existing datasets, please send an email to [email protected]. Funding Not applicable. Data availability All data generated or analysed during this study are included in this published article. Declarations Ethics approval and consent to participate This research confirms that all methods used (an anonymous questionnaire) were carried out in accordance with all relevant guidelines and regulations. Participants in this questionnaire were informed that all data provided were anonymous. All participants were informed that the research was for academic purposes for a thesis at the University of Extremadura. This article does not report the results of a health intervention in human participants. Consent for publication Not applicable. Competing interests The authors declare no competing interests. Abbreviations AI: Artificial Intelligence; BI: Behavioural intentions; DFT: Digital Free Tourism; HTMT: Heterotrait-monotrait criterion; IT: Information Technology
CC BY
no
2024-01-15 23:43:46
BMC Public Health. 2024 Jan 13; 24:176
oa_package/0d/ea/PMC10787973.tar.gz
PMC10787974
38218787
Introduction Acute pancreatitis (AP) is a common gastroenterological condition, with approximately 80% of patients developing mild to moderately severe disease (no organ failure > 48 h) and the rest progressing into severe acute pancreatitis (SAP) [ 1 ]. The death rate of SAP is as high as 20%; therefore, early assessment of severity in AP is crucial. Despite the large number of studies exploring early prediction of AP severity [ 2 , 3 ], no ideal multifactorial scoring system and/or biochemical markers have been identified for early assessment of AP severity [ 4 ]. Therefore, early identification of the development of severe AP remains a great challenge. In clinical studies, the components of metabolic syndrome have been found to be associated with the occurrence and deterioration of AP [ 5 , 6 ]. In particular, obesity is an independent risk factor for AP morbidity and mortality [ 7 – 10 ]. Depending on its location, adipose tissue can be divided into subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT). SAT, accounting for approximately 80% of all adipose tissue, acts as a reservoir for excess lipids. However, SAT can only accommodate a limited number of adipocytes with limited expandability; once its storage capacity is exceeded, fat begins to accumulate in areas outside the SAT, such as the liver, heart, skeletal muscles, and other sites [ 11 , 12 ]. Numerous studies have shown that VAT, which is associated with the occurrence and development of AP [ 13 – 15 ], is a key site of inflammation and responsible for driving the systemic inflammatory response and exacerbating AP [ 16 , 17 ], thus serving as an important prognostic indicator of AP severity. Being highly metabolically active, VAT can continuously release adipokines such as resistin, leptin, adiponectin, and visfatin into the portal circulation [ 18 ]; these adipokines may be involved in the development and progression of AP by modulating oxidative stress and inflammatory responses, thereby influencing the severity of AP. Furthermore, resistin, leptin, adiponectin, and visfatin are well-known biomarkers for Nonalcoholic Fatty Liver Disease (NAFLD), which is a strong risk factor for AP and SAP. Resistin has been found to increase the production of pro-inflammatory cytokines such as TNF-α, IL-1β, and IL-6 in mononuclear cells and macrophages [ 19 , 20 ]. Additionally, it stimulates the production of cell adhesion molecules, including vascular cellular adhesion molecule-1 (VCAM-1), intercellular adhesion molecule-1 (ICAM-1), and monocyte chemoattractant protein-1 (MCP-1), as well as chemokine (C-C motif) ligand 2 (CCL2), which contribute to chemotaxis and leukocyte recruitment to sites of inflammation [ 21 , 22 ]. Leptin, which is mainly secreted by adipocytes, is a potent chemoattractant for immune cells, causing monocytes and macrophages to accumulate in adipose tissue, and promoting increased expression of the inflammatory cytokines IL-6 and tumor necrosis factor (TNF) as well as toll-like receptor 4 (TLR4) [ 23 ]. At the same time, leptin is required for T-cell development and promotes the production of pro-inflammatory cytokines in CD4+ T cells [ 24 – 26 ]. Adiponectin, a hormone mainly produced by white adipose tissue, can inhibit M1 macrophage activation [ 27 , 28 ], exert anti-inflammatory effects by regulating JmjC family histone demethylase 3, which contributes to M2 polarization [ 29 ], and inhibit macrophage infiltration [ 30 ]. 
In animal studies, adiponectin-deficient mice exhibited more severe AP than wild-type mice, and adiponectin overexpression reduced the severity of AP [ 31 ]. Administration of exogenous recombinant adiponectin to AP mice significantly reduced NF-kB activity, cytokine levels, and tissue damage [ 32 ]. Visfatin has nicotinamide phosphoribosyltransferase (Nampt) activity; Nampt is the rate-limiting enzyme of the nicotinamide adenine dinucleotide (NAD) salvage synthesis pathway, and macrophages rely on the NAD salvage pathway to meet their energy requirements and maintain their pro-inflammatory phenotype. Visfatin also promotes the release of the pro-inflammatory cytokines IL-1β, IL-6, and TNF-α from peripheral monocytes [ 33 – 35 ]. Although many studies have explored the relationship between adipokines and SAP, the findings have been inconsistent. Furthermore, even though a meta-analysis of the relationship between adipokines and SAP has recently been published [ 36 ], it only examined the statistical correlation between resistin and SAP, without addressing the correlation between other adipokines and SAP. Therefore, we performed this meta-analysis, involving the adipokines resistin, leptin, adiponectin, and visfatin, to explore their correlation with SAP.
Method This study was performed in accordance with the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) guidelines. Search strategy We conducted a systematic literature search of Embase, the Cochrane Library, PubMed and Web of Science, using the following keywords: (“adipokines”, “resistin”, “leptin”, “visfatin” or “adiponectin”) AND (“acute pancreatitis”), as well as MeSH/Emtree terms (Table S1 ). The search covered literature published up to July 20, 2023. In addition, we checked the references of the screened literature to identify any additional relevant studies. Study selection Inclusion criteria: (1) study subjects with a confirmed diagnosis of AP were included; (2) the severity of the AP was assessed; (3) the concentration of resistin, leptin, visfatin or adiponectin in peripheral blood was measured; (4) complete data were available, including the mean concentrations of resistin, leptin, visfatin or adiponectin with corresponding standard deviations (SD) or 95% confidence intervals (CI); (5) for studies republished with additional data on the same topic, the most recent study data were used. Exclusion criteria: (1) duplicate articles; (2) reviews, meta-analyses, editorials, and letters; (3) animal studies or in vitro experiments; (4) articles whose data were unavailable; (5) studies that were subgroup analyses of included multicenter studies. Both the study selection and exclusion procedures described above were conducted by two independent investigators (Xuehua Yu and Ning Zhang). When disagreements occurred, a third independent reviewer (Jing Wu) was invited to make the final decision. Data extraction and quality assessment Data were extracted and cross-checked independently by two authors (Xuehua Yu and Ning Zhang) using a pre-developed data extraction form; in case of disagreement, items were referred to a third investigator (Yunhong Zhao) for verification. Extracted items included: first author, year of publication, country, types of adipokines, the time of the blood test, assay method, AP diagnostic criteria, sample size, sample characteristics, etiology, adipokine concentration (mean, SD), and funding. To evaluate the risk of bias and quality of all included studies, we used the Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS) [ 26 ], which was adapted to the studies included in this meta-analysis. All assessments were performed by two independent investigators (Xuehua Yu and Ning Zhang), and any disputes were resolved through consultation or discussion with a third party (Chengjiang Liu). Statistical analysis Continuous outcomes measured on the same scale were expressed as mean and standard deviation and were analyzed using the standardized mean difference (SMD). Statistical analyses of heterogeneity were conducted using the chi-squared Q test and the I² statistic. P < 0.10 and I² > 50% were considered thresholds for statistically significant heterogeneity. Calculation of the pooled SMD was performed using a random effects model. Moreover, subgroup and sensitivity analyses were used to further explore the sources of heterogeneity. All P-values were 2-tailed, and P < 0.05 (except for tests of heterogeneity) was considered statistically significant. Publication bias was assessed by Egger’s test and Begg’s test.
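The pooling approach described above can be illustrated with a short base-R sketch (an illustration only, not the authors' analysis code; the per-study means, SDs and group sizes below are placeholders): it computes a bias-corrected SMD (Hedges' g) and its variance for each study, estimates between-study variance with the DerSimonian-Laird method, and returns the pooled random-effects SMD together with Q and I².

```r
# Minimal sketch of the SMD pooling described above (illustrative values only).
# Each row: mean/SD/n for the SAP and MAP groups of one hypothetical study.
studies <- data.frame(
  m1 = c(22.1, 18.5, 30.2), sd1 = c(8.0, 6.5, 12.1), n1 = c(20, 35, 15),  # SAP
  m2 = c(15.3, 16.9, 21.0), sd2 = c(6.2, 7.1, 9.8),  n2 = c(45, 60, 30)   # MAP
)

smd_meta <- function(d) {
  sp <- sqrt(((d$n1 - 1) * d$sd1^2 + (d$n2 - 1) * d$sd2^2) / (d$n1 + d$n2 - 2))
  g  <- (d$m1 - d$m2) / sp * (1 - 3 / (4 * (d$n1 + d$n2) - 9))    # Hedges' g
  v  <- (d$n1 + d$n2) / (d$n1 * d$n2) + g^2 / (2 * (d$n1 + d$n2)) # approx. variance of g
  w  <- 1 / v                                                     # fixed-effect weights
  Q  <- sum(w * (g - sum(w * g) / sum(w))^2)                      # Cochran's Q
  df <- length(g) - 1
  I2 <- max(0, (Q - df) / Q) * 100                                # I-squared (%)
  tau2 <- max(0, (Q - df) / (sum(w) - sum(w^2) / sum(w)))         # DerSimonian-Laird tau^2
  wr <- 1 / (v + tau2)                                            # random-effects weights
  est <- sum(wr * g) / sum(wr)
  se  <- sqrt(1 / sum(wr))
  c(SMD = est, lower = est - 1.96 * se, upper = est + 1.96 * se, Q = Q, I2 = I2)
}

round(smd_meta(studies), 3)
```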
Results Literature search and research characteristics According to a predefined search strategy, we searched PubMed, EMBASE, Web of Science, and the Cochrane Library, generating 1266 articles. By strictly following the inclusion and exclusion criteria, 20 articles were finally included; the specific screening process is shown in Fig. 1 . The main characteristics of the included studies are summarized in Table 1 , Table S2 , S3 and S4 . A total of 1332 AP patients were evaluated in studies conducted in 11 countries (4 in Turkey, 3 in the United States, 3 in China, 2 in the Czech Republic, 2 in Germany, 1 in India, 1 in México, 1 in Poland, 1 in Finland, 1 in Saudi Arabia, and 1 in Lithuania). Among the 20 studies, 10 evaluated the predictive effect of resistin on SAP, 8 focused on the predictive effect of leptin on SAP, 7 evaluated the predictive effect of adiponectin on SAP, and 3 investigated the predictive effect of visfatin on SAP. The detailed statistics of each adipocytokine are shown in Table 2 . The quality assessment of all included studies, which applied the QUADAS risk of bias assessment tool, is shown in Table S5 . Relationship between adipokines and SAP A total of 7 of 10 studies showed significantly increased levels of resistin in patients with SAP relative to patients with mild acute pancreatitis (MAP). A total of 275 SAP patients and 541 MAP patients were included in the summary analysis, as shown in Fig. 2 A. The pooled analysis showed significantly higher resistin levels in SAP patients as compared to MAP patients (SMD = 0.78, 95% CI: 0.37 to 1.19, z = 3.75, P = 0.000). However, statistically significant heterogeneity was observed in these studies ( P = 0.000, I² = 83.9%). For leptin, 3 out of 8 studies reported significantly higher levels in patients with SAP. A total of 160 SAP patients and 310 MAP patients were analyzed. Leptin levels were not significantly higher in SAP patients than in MAP patients (SMD = 0.30, 95% CI: -0.08 to 0.68, z = 1.53, P = 0.127) (Fig. 2 B). Again, significant heterogeneity was observed in these studies ( P = 0.004, I² = 66.2%). A total of 1 out of 7 studies showed significantly lower adiponectin levels in patients with SAP as compared to those with MAP. Pooled analysis showed no significant difference in adiponectin levels between 131 SAP patients and 308 MAP patients (SMD = 0.11, 95% CI: -0.17 to 0.40, z = 0.80, P = 0.425) (Fig. 2 C). No significant heterogeneity was found among these 7 studies ( P = 0.190, I² = 31.2%). Only 3 studies have examined blood visfatin levels in SAP patients and MAP patients. A total of 91 patients with SAP and 126 patients with MAP were analyzed. Visfatin levels were not significantly higher in patients with SAP than in those with MAP (SMD = 1.20, 95% CI: -0.48 to 2.88, z = 1.40, P = 0.162) (Fig. 2 D). Again, significant heterogeneity was observed in these studies ( P = 0.000, I² = 95.2%). Subgroup analysis According to year of publication, sample size, mean age of patients, and definition of the SAP and MAP groups (Table S4 ), subgroup analysis was performed to explore the impact of these factors on outcomes as well as to identify potential sources of resistin and leptin heterogeneity. As shown in Fig S1 A, pooled results from the literature published before 2014 and in 2014 and after showed that resistin was predictive of SAP. Pooled results from studies in which the mean age of patients with AP was < 50 years versus age ≥ 50 years also indicated that resistin was a predictor of SAP (Fig S1 B). 
Studies with a sample size of < 100 patients showed significantly higher resistin levels in SAP patients than in MAP patients (SMD = 0.83, 95% CI: 0.42 to 1.24, z = 3.98, P = 0.000, I² = 64.1%, Fig S1 C), but studies with a sample size of ≥ 100 patients showed no statistically significant difference in resistin levels between the two groups (SMD = 0.72, 95% CI: -0.08 to 1.52, z = 1.77, P = 0.076, I² = 92.6%, Fig S1 C). In addition, SAP was defined as persistent organ failure (> 48 h) in 6 studies that tested resistin levels, which showed a significant difference between the MAP group and the SAP group (SMD = 0.80, 95% CI: 0.23 to 1.37, z = 2.73, P = 0.006, I² = 88.7%, Fig S1 D). Regarding leptin, as shown in Fig S2 , different publication years, ages, and definitions of SAP and MAP showed no statistically significant difference in leptin levels between the two groups. However, the pooled results of the 7 studies with sample sizes < 100 showed that leptin levels were higher in the SAP group than in the MAP group, and the difference was statistically significant (SMD = 0.40, 95% CI: 0.02 to 0.77, z = 2.07, P = 0.038, I² = 57.2%, Fig S2 C). There were no statistically significant differences in adiponectin levels between the two groups for different publication years, sample sizes, ages, and definitions of SAP and MAP, as shown in Figure S3 . Sensitivity analysis Sensitivity analysis was performed whereby each study was excluded in turn to assess the stability of the results, and the impact of each study on the pooled SMD was also determined (Fig. 3 ). It can be seen from Fig. 3 A that the studies by Kibar YI et al., Singh AK et al. and Langmead C et al. had the greatest influence on the results regarding resistin. Even after these 3 studies were removed, SAP patients still showed significantly higher resistin levels than MAP patients (SMD = 0.66, 95% CI: 0.45 to 0.87, z = 6.21, P = 0.000, I² = 0.0%, Fig S4 ). As shown in Fig. 3 B, the study by Türkoğlu A et al. had the greatest impact on the results regarding leptin; after removal of this study, leptin levels still showed no significant increase in SAP patients compared to MAP patients (SMD = 0.13, 95% CI: -0.15 to 0.41, z = 0.88, P = 0.379, I² = 23.9%, Fig S5 ). Publication bias For resistin, leptin and adiponectin, symmetry was observed in Begg’s funnel plots (Fig S6 ), and Egger’s test results (P = 0.444, P = 0.869 and P = 0.920, respectively, Fig S7 ) suggested no publication bias.
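Egger's test, used above to assess publication bias, can be sketched in a few lines of base R (an illustration only; the study-level effect sizes and standard errors below are placeholders): the standardized effect is regressed on precision, and a non-zero intercept indicates funnel-plot asymmetry.

```r
# Sketch of Egger's regression test for funnel-plot asymmetry (illustrative).
# yi: study SMDs; sei: their standard errors (placeholder values).
yi  <- c(0.9, 0.4, 1.1, 0.2, 0.7, 0.5)
sei <- c(0.35, 0.20, 0.40, 0.18, 0.30, 0.25)

egger_test <- function(yi, sei) {
  fit <- lm(I(yi / sei) ~ I(1 / sei))   # standardized effect regressed on precision
  ct  <- summary(fit)$coefficients
  # The t-test on the intercept is Egger's test; a large p-value suggests no asymmetry.
  c(intercept = ct[1, 1], p.value = ct[1, 4])
}

egger_test(yi, sei)
```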
Discussion The results of this meta-analysis showed that increased resistin levels were associated with SAP, whereas leptin and adiponectin levels were not linked to SAP. Only three of the studies included visfatin, which is not enough to draw any conclusions. Resistin is a small cysteine-rich protein with a molecular weight of either 11 or 12.5 kDa. It was first identified in mice in 2001 as a signalling molecule produced by adipocytes, and was named resistin because it was thought to be involved in the development of insulin resistance [ 37 ]. Resistin belongs to the resistin-like molecule (RELM) family, which includes RELM-α, RELM-β, and RELM-γ [ 38 ]. Unlike in mice, where resistin is produced by adipocytes, humans mainly express resistin in monocytes and macrophages [ 39 ]. Despite sharing only 59% amino acid identity [ 40 ], human and rodent resistin function similarly, even though they are produced by different cell types. Resistin has been identified as a molecule that promotes inflammation and regulates various chronic inflammatory, metabolic, and infectious diseases in humans [ 41 – 44 ]. It modulates many cellular responses in the host, such as recruiting and activating immune cells, promoting the release of pro-inflammatory cytokines, enhancing interferon (IFN) expression, and promoting the formation of neutrophil extracellular traps (NETs) [ 45 – 47 ]. The role of resistin in regulating inflammatory pathways has been demonstrated in the context of AP. Resistin increases the levels of calcium in pancreatic acinar cells, as well as the activity of NADPH oxidase, leading to an increase in the production of reactive oxygen species (ROS) within the cells. Additionally, resistin activates the NF-κB pathway, resulting in the expression of pro-inflammatory cytokines such as TNF-α and IL-6 [ 48 , 49 ]. Jiang et al. demonstrated in a laboratory model of AP induced by cerulein that resistin increases the production of the pro-inflammatory cytokines TNF-α and IL-6 via an NF-κB-dependent pathway. However, the increased mRNA expression levels of TNF-α and IL-6 induced by resistin can be significantly reduced by using an NF-κB inhibitor [ 50 ]. Furthermore, Wang et al. discovered that the severity of SAP lung injury was positively associated with RELMα levels. Moreover, overexpression of RELMα exacerbated the release of inflammatory cytokines such as interleukin (IL)-1β, IL-6, IL-8 and tumor necrosis factor-α, as well as serum C-reactive protein. This led to an increase in the expression of inflammatory mediators such as phosphorylated (p)-AKT, p-P65, p-P38 mitogen-activated protein kinase, p-extracellular regulated kinase, and intercellular adhesion molecule-1, ultimately resulting in lung injury. On the other hand, knocking down RELMα had the opposite effect: it improved the expression of proliferating cell nuclear antigen, Bcl-2, zonula occludens-1, and Claudin-1 in the lung tissue of SAP rats [ 51 ]. Furthermore, numerous studies have confirmed the correlation between resistin and the severity of AP. This suggests that resistin may serve as a valuable marker and potential therapeutic target for SAP [ 52 ]. Leptin is mainly secreted by fat cells and plays a crucial role in the immune response as an immune modulator [ 53 , 54 ]. Monocytes treated with leptin increase the production of type 1 cytokines, including IL-1β, IL-6, TNF, and resistin [ 55 , 56 ]. 
Adiponectin can inhibit the ROS/NF-κB/NLRP3 inflammatory pathway [ 57 ], activate the anti-inflammatory cytokine interleukin-10 (IL-10), and reduce pro-inflammatory cytokines such as interferon-gamma (IFN-γ), IL-6 and TNF-α in human macrophages [ 58 ]. The results of this meta-analysis showed that leptin and adiponectin levels were not linked to SAP. However, it is still unclear whether leptin and adiponectin have different effects at different stages of inflammation, whether an imbalance among leptin, adiponectin and other adipokines may inhibit their regulation of the immune response, or whether there are other possible mechanisms; these questions need to be addressed by further studies. Although most studies show that visfatin appears to have pro-inflammatory effects [ 33 – 35 , 59 – 62 ], some studies show the opposite [ 29 , 30 , 63 ]. In response to this seemingly contradictory result, the study by Sayers et al. may give us some insight. They found a possible bimodal effect of the extracellular Nampt (eNampt) monomer on the stimulation of insulin secretion by β-cells [ 64 ]. Whether this bimodal effect is equally reflected in the stimulatory effect of visfatin on inflammatory factors and the modulation of the inflammatory response, and whether it is this bimodal effect that leads to the unstable prediction of SAP by visfatin, remain to be further explored. Heterogeneity was observed in our pooled analysis. The resistin results were strongly influenced by a small number of studies, while the leptin results were mainly affected by one study. Several factors such as regions, research samples, and detection reagents can affect the outcomes. Small sample sizes can also lead to chance findings, making heterogeneity between studies inevitable. However, the stability of the results was confirmed even after removing the heterogeneous studies. Furthermore, sample size and mean age of the patients may be associated with resistin heterogeneity. It has been shown that the adverse effects of obesity appear to be reduced in older populations [ 65 ]. Khatua et al. suggested that different visceral triglyceride saturation statuses could have varying effects on AP severity, which may explain the obesity paradox [ 66 ]. Based on the results of the subgroup analysis in this meta-analysis, it appears that the mean age of patients has an effect on adipokines and consequently on AP severity, which may provide a new perspective on the obesity paradox. There are some limitations to this meta-analysis. Firstly, all studies included were case-control studies with inherent selection, information and confounding biases. Secondly, the sample size was moderate for the included studies, and a few of the eligible studies had small sample sizes. Thirdly, changes in testing methods and diagnostic criteria over time may have contributed to the different pooled results between publication years in the subgroup analysis. In conclusion, the results of this meta-analysis suggest that high resistin levels are associated with an increased risk of SAP, indicating that resistin may be a potential biomarker. Moreover, serum or plasma samples can be easily obtained for resistin detection, and the assay is uncomplicated and can be performed in many laboratories. Since it is often challenging for a single indicator to accurately predict the severity of AP, it may be possible in the future to predict SAP by testing for resistin levels in conjunction with other indicators or by incorporating resistin into a scoring system.
Background Severe acute pancreatitis (SAP) is a dangerous condition with a high mortality rate. Many studies have found an association between adipokines and the development of SAP, but the results are controversial. Therefore, we performed a meta-analysis of the association of inflammatory adipokines with SAP. Methods We screened PubMed, EMBASE, Web of Science and the Cochrane Library for articles on adipokines and SAP published before July 20, 2023. The quality of the literature was assessed using QUADAS criteria. Standardized mean differences (SMD) with 95% confidence intervals (CI) were calculated to assess the combined effect. Subgroup analysis, sensitivity analysis and publication bias tests were also performed on the information obtained. Results Twenty eligible studies, including 1332 patients with acute pancreatitis (AP), were analyzed. Pooled analysis showed that patients with SAP had significantly higher serum levels of resistin (SMD = 0.78, 95% CI: 0.37 to 1.19, z = 3.75, P = 0.000). The differences in leptin and adiponectin levels between SAP and mild acute pancreatitis (MAP) patients were not significant (SMD = 0.30, 95% CI: -0.08 to 0.68, z = 1.53, P = 0.127 and SMD = 0.11, 95% CI: -0.17 to 0.40, z = 0.80, P = 0.425, respectively). In patients with SAP, visfatin levels were not significantly different from those in patients with MAP (SMD = 1.20, 95% CI: -0.48 to 2.88, z = 1.40, P = 0.162). Conclusion Elevated levels of resistin are associated with the development of SAP. Resistin may serve as a biomarker for SAP and has promise as a therapeutic target. Supplementary Information The online version contains supplementary material available at 10.1186/s12876-024-03126-w. Keywords
Electronic supplementary material Below is the link to the electronic supplementary material.
Acknowledgements We thank both the Department of Gastroenterology, Hebei Provincial People’s Hospital, and the Graduate School of Hebei North University. Author contributions Jing Wu, Xuehua Yu, and Ning Zhang participated in literature collection. Yunhong Zhao, Xuehua Yu, and Ning Zhang participated in data extraction. Chengjiang Liu, Xuehua Yu, and Ning Zhang were involved in article quality assessment. Xuehua Yu wrote the manuscript, and Gaifang Liu revised the article critically for important intellectual content. Funding Special Project for the Construction of Academician Workstation of Hebei Provincial People’s Hospital (Project No. 199A7745H). Clinical significance and mechanism of leukocyte elevation in the third condition of severe acute pancreatitis (Project No. 20200747). Molecular mechanism of NNMT/CCL8/VEGF-C signaling axis regulating lymph node metastasis in gastric cancer (Project No. H2022307040). Data availability All data generated or analyzed during this study are included in this published article and its supplementary information files. Declarations Ethics approval and consent to participate Not applicable (this paper is based on research in global databases). Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:46
BMC Gastroenterol. 2024 Jan 13; 24:32
oa_package/c3/22/PMC10787974.tar.gz
PMC10787975
38218799
Introduction Typically, the main aim of a phase I dose-finding trial is to identify the maximum tolerated dose (MTD) of the treatment being investigated. The MTD is usually determined under the monotonicity assumption, which states that as dose increases so does the probability of toxicity. With model-based designs such as the continual reassessment method (CRM), escalation proceeds to identify the dose whose associated probability of toxicity matches a pre-defined target. The investigation of multiple-agent treatments, for which the monotonicity assumption in relation to the dose-toxicity model may not hold, is increasingly common in early phase dose-finding trials. Finding the MTD for combinations of treatments, compared to single agents, presents methodological challenges. Each drug individually may obey the monotonicity assumption; we refer to this as the doses being fully ordered. However, when multiple treatments are combined, the ordering of doses in terms of toxicity may not be fully apparent, or the doses may only be partially ordered. An order may be identified for a subset of the doses, which would result in a partial order. Without a fully understood ordering, it is uncertain which dose should be chosen in decisions of escalation and de-escalation, and ultimately as the MTD. This issue is not exclusive to trials with multiple agents. The monotonicity assumption may not hold for certain drugs in single-agent studies, leading to partial orders of dose toxicity, for example when dose and frequency of administration vary between dose levels. Monotonicity is a very strong assumption. It requires that the probability of toxicity is always increasing; staying the same is not enough. At high enough doses, this assumption is almost surely violated for all interventions when the event probability reaches its maximum. Thus, even when total ordering is possible, the monotonicity assumption could be violated [ 1 ]. This can occur in scenarios where multiple parameters of the treatment schedule are altered for each dose level. For example, two dose levels could prescribe the same overall total dose but over different treatment durations, and hence have higher and lower daily doses. In this situation, it could be unclear whether prolonged exposure to a lower daily dose is more toxic than short exposure to a higher daily dose, which implies a partial ordering of toxicity probabilities. This is the case for the proposed dose levels in the ADePT-DDR trial. Worldwide there are approximately 600,000 new cases of Head and Neck Squamous Cell Carcinoma (HNSCC) each year [ 2 ]. Of these, 12,000 occur in the UK, with the most common forms of treatment being surgery, radiotherapy and/or chemotherapy. Radiotherapy is essential for the treatment of cancer. It has been estimated that more than 40% of patients will receive radiotherapy at some point in their treatment [ 3 ]. However, despite recent advancements in radiation techniques and the use of concomitant chemoradiotherapy, patients with solid tumours such as head and neck cancer have suboptimal cure rates [ 4 ]. For those with advanced HNSCC, primary radiotherapy with concurrent chemotherapy is often offered, but it has not been shown to improve survival in patients aged over 70 compared to radiotherapy alone [ 5 ]. Therefore, any strategy to improve the efficacy of radiotherapy without increasing toxicity would have a significant impact on patient outcomes. 
DNA damage repair (DDR) inhibition is a potential technique which could be utilised, as it potentiates the therapeutic effects of ionising radiation in cancer cells [ 6 ]. Combining radiotherapy with DDR inhibition could improve clinical outcomes for these patients [ 7 ]. The ADePT-DDR trial is a platform trial which aims to evaluate the safety and efficacy of different DDR agents, or different immunotherapy agents and/or DDR and immunotherapy combinations, together with radiotherapy in patients with HNSCC. The initial component of this trial is a single-arm dose-finding trial investigating the ataxia telangiectasia and Rad3-related (ATR) inhibitor AZD6738 in combination with radiotherapy. ATR inhibitors not only stop DNA repair but also impair the mechanism that allows repairs to take place. Preclinical models have shown this double blocking to be effective in killing cancer cells [ 8 ]. The aim of this trial is to determine a maximum tolerated dose of AZD6738 in combination with radiotherapy. Further methodological challenges revolve around the issue of late-onset toxicities. Typically, early phase trials implement a short window to observe dose-limiting toxicities (DLTs). This works well in situations where toxicities are likely to occur rapidly after treatment. However, it is not optimal for treatments that could cause late-onset toxicities, such as radiotherapy. The aim with ADePT-DDR is to incorporate a larger observation window to account for potential late-onset toxicities from radiotherapy whilst also minimising the trial duration. Due to the historical use of rule-based designs, much of the terminology used to describe them, and the ambiguity it raises, has been inherited by modern designs such as the CRM. The MTD in the context of a CRM is not the ‘maximum’ dose patients could tolerate but rather a dose at which there would be an acceptable target probability of a DLT occurring. For example, if the target is set at 25%, the MTD would be the dose at which there is a 25% probability of experiencing a DLT. Rather than using the term MTD, the dose to be found will be referred to as the target dose (TD%%, where the %’s are replaced by the target probability), i.e. TD25 would be the dose expected to be toxic in 25% of patients. We will use this terminology throughout the paper. The continual reassessment method for partial orders (PO-CRM) developed by Wages et al. [ 9 ] extends the CRM design by relaxing the assumption of monotonicity and by modelling different potential orders. Wages et al. [ 9 , 10 ] further developed their work on the PO-CRM to deal with late-onset toxicities by implementing a time-to-event (TITE) component. This trial design, referred to as the time-to-event continual reassessment method in the presence of partial orders (PO-TITE-CRM) by the authors, was chosen to be used in ADePT-DDR. We aim to provide insight into the methodology of the PO-TITE-CRM through application in a real-world scenario.
Methods The PO-TITE-CRM design Wages et al. [ 10 ] introduced the PO-TITE-CRM design, which builds directly upon the PO-CRM design by incorporating a TITE component into the dose-toxicity model. The aim is to determine the target dose for combinations of drugs where the monotonicity assumption does not hold, in a setting where late-onset toxicities are possible. Using the notation of Wages et al. [ 9 , 10 ], let M denote the number of possible orders and Y be an indicator of a DLT event. Then for a trial investigating k combinations, $d_1, \ldots, d_k$, the dose for the j th patient, $x_j \in \{d_1, \ldots, d_k\}$, $j = 1, \ldots, n$, can be thought of as random. For a specific ordering m, the toxicity probability is modelled by $\Pr(Y_j = 1 \mid x_j) \approx \psi_m(x_j, a)$ for a weighted dose-response model, where $a$ is the model parameter of the working dose-toxicity model. The weight, w, as defined by Cheung and Chappell [ 11 ], is a function of the time-to-event of each patient and is incorporated linearly within the dose-toxicity model, so that the weighted DLT probability is $w \, \psi_m(x_j, a)$. Each patient is followed for a fixed amount of time T. Let $u_j$ represent the time-to-toxicity of patient j. Then, for $0 \le u_j \le T$, a simple linear choice is $w(u_j; T) = u_j / T$, with the weight set to 1 once the full observation period T has been completed or a DLT has occurred. For simplicity we will refer to the weight function w(u; T) as w. The weight function will have to be decided upon by the trials team, dependent on the scenario; a simple linear function or a more complex adaptive weight function could be utilised. There are also several working dose-toxicity models which could be used for $\psi_m$. Wages et al. [ 9 , 10 ] present their design with the power parameter model given by $\psi_m(d_i, a) = \alpha_{m,i}^{\exp(a)}$. Here $\alpha_{m,1}, \ldots, \alpha_{m,k}$ are the prior estimates of the DLT probabilities, or skeleton, arranged according to each potential ordering. Furthermore, prior probabilities are assigned to each order m to account for any prior information regarding the plausibility of each model, such that $p(m) \ge 0$ and $\sum_{m=1}^{M} p(m) = 1$. When all orders are equally likely, or there is no prior information available on possible orderings, the prior is discretely uniform and would be $p(m) = 1/M$. A Bayesian framework is used and a prior probability distribution $g(a)$ is assigned to the parameter $a$. The ordering with the largest prior probability is selected as the starting ordering; in the scenario where all priors are equal, an ordering is selected at random and subsequently a starting dose is also chosen. After j patients have been entered into the trial, data are collected in the form of $\Omega_j = \{(x_1, y_1, w_1), \ldots, (x_j, y_j, w_j)\}$. A weighted likelihood for the parameter $a$ is used to establish running probabilities of toxicity for each treatment combination. The weighted likelihood under ordering m is given by $L_m(a \mid \Omega_j) = \prod_{l=1}^{j} \{w_l \, \psi_m(x_l, a)\}^{y_l} \{1 - w_l \, \psi_m(x_l, a)\}^{1 - y_l}$, which can be used to generate a summary value for each ordering. With the likelihood and the data $\Omega_j$, the posterior density for $a$ can be calculated using $f_m(a \mid \Omega_j) = L_m(a \mid \Omega_j) \, g(a) \big/ \int L_m(u \mid \Omega_j) \, g(u) \, du$. This can then be used to establish posterior probabilities of the orderings given the data as $\pi(m \mid \Omega_j) = p(m) \int L_m(a \mid \Omega_j) \, g(a) \, da \big/ \sum_{m'=1}^{M} p(m') \int L_{m'}(a \mid \Omega_j) \, g(a) \, da$. We select the single ordering, h, with the largest posterior probability, along with its associated working model, and generate toxicity probabilities for each dose level. Once the j th patient has been included, the posterior probability of DLT at each dose $d_i$ can be calculated as $\hat{\pi}(d_i) = \int \psi_h(d_i, a) \, f_h(a \mid \Omega_j) \, da$. In turn, the dose level assigned to the (j + 1)th patient is the dose, $x_{j+1}$, which minimises $|\hat{\pi}(d_i) - \theta|$, where $\theta$ is the target DLT rate. Similarly, once all patients have been recruited and observed and the trial ends, the target dose (TD) is the dose which minimises this same criterion. PO-TITE-CRM in ADePT-DDR The intended use of this design is for dose-finding in combinations of therapies, as this is the main source of the partial ordering issue. 
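Before turning to the ADePT-DDR specifics, the updating steps just described can be illustrated with a short base-R sketch (an illustration under assumed values, not the trial's analysis code). It assumes a normal prior on a, N(0, 1.34), a common default in CRM applications, and uses made-up patient data and skeletons; the weighted likelihood is integrated over the prior to obtain posterior ordering probabilities, and the modal ordering is then used to estimate per-dose DLT probabilities and pick the dose closest to the target.

```r
# Illustrative sketch of the PO-TITE-CRM update (assumed inputs, not trial code).
skeletons <- list(m1 = c(0.01, 0.04, 0.08, 0.16, 0.25, 0.35),   # ordering 1
                  m2 = c(0.01, 0.04, 0.08, 0.25, 0.16, 0.35))   # ordering 2 (2a/2b swapped)
prior_m <- c(0.5, 0.5)          # prior plausibility of each ordering
theta   <- 0.25                 # target DLT rate
# Hypothetical data so far: dose index, DLT indicator and TITE weight per patient.
x <- c(2, 2, 2, 3, 3, 3); y <- c(0, 0, 0, 1, 0, 0); w <- c(1, 1, 1, 1, 0.8, 0.6)

psi  <- function(a, skel, x) skel[x]^exp(a)                      # power model
lik  <- function(a, skel) prod((w * psi(a, skel, x))^y *
                               (1 - w * psi(a, skel, x))^(1 - y))
marg <- function(skel)                                           # integral of L_m(a) g(a) da
  integrate(Vectorize(function(a) lik(a, skel) * dnorm(a, 0, sqrt(1.34))),
            -Inf, Inf)$value

evidence <- sapply(skeletons, marg)
post_m   <- prior_m * evidence / sum(prior_m * evidence)         # P(ordering | data)
h        <- which.max(post_m)                                    # modal ordering

# Posterior mean DLT probability at each dose under ordering h.
skel_h <- skeletons[[h]]
post_a <- function(a) lik(a, skel_h) * dnorm(a, 0, sqrt(1.34)) / evidence[h]
p_hat  <- sapply(seq_along(skel_h), function(d)
  integrate(Vectorize(function(a) psi(a, skel_h, d) * post_a(a)), -Inf, Inf)$value)

next_dose <- which.min(abs(p_hat - theta))                       # dose closest to target
list(post_ordering = post_m, p_hat = round(p_hat, 3), next_dose = next_dose)
```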
ADePT-DDR, however, is a unique implementation of the design: even though it involves a combination of therapies (radiotherapy and AZD6738), the dose of radiotherapy is fixed and dose-finding is only planned for AZD6738. The PO-TITE-CRM is still applicable in this case, as the design includes combinations of dose and duration for AZD6738 which are partially ordered. A summary of the proposed dose levels can be found in Table 1 . A two-stage PO-TITE-CRM will be used to find the TD25 of AZD6738. This will be determined by DLTs evaluated using the Common Terminology Criteria for Adverse Events (CTCAE) v5.0 and the Radiation Therapy Oncology Group (RTOG) late toxicity score. The binary DLT events are pre-defined by a variety of grade 3-4 adverse events, notably haematological, cardiovascular and gastrointestinal/hepatic toxicities, as well as significant non-haematological events and specific treatment-related toxicities. DLTs will be monitored for the duration of treatment (seven weeks) and throughout the follow-up period. The total follow-up period post treatment is 52 weeks, so patients will spend a total of 59 weeks in the trial. A maximum of 60 patients will be recruited for the dose-finding aspect of this trial and up to 20 patients as controls. Controls will be utilised to make comparisons for secondary outcomes such as survival and efficacy. Control patients will only receive radiotherapy, the dose of which is fixed at 70 Gy/35F (control patients will not be included in any of the dose-finding aspects of the trial). Controls will be recruited in the interim period between the recruitment of the third patient in a cohort and the completion of the minimum follow-up period. Additionally, patients can also be recruited to the control dose if they do not wish to receive AZD6738 whilst the dose-finding cohort is actively recruiting. The first cohort of patients will be allocated to dose level 0. The first stage of the design will follow an initial escalation scheme, escalating cohorts of three patients to dose levels 1, 2a, 2b and then 3 if no DLTs occur. If a DLT occurs, stage I of the design ends and stage II begins. In stage II, cohorts of three patients are assigned to dose levels chosen by the PO-TITE-CRM. Each patient entered into ADePT-DDR will receive fixed-dose radiation, totalling 70 Gy in 35 fractions over seven weeks. For the dose-finding aspect we investigate six doses of AZD6738, detailed in Table 1 . The treatment dose and duration to be selected for dose level 3 will be determined based on a combination of the data observed, adverse events and compliance. The issue of partial ordering is illustrated in Fig. 1 , inspired by plots from Wages et al. [ 10 ]. The doses to be used in this trial are detailed in their appropriate boxes. Additionally, each dot represents a potential dose combination which theoretically could be investigated. The combinations are colour coordinated to indicate where partial ordering exists in this dose combination space. Doses of the same colour (each diagonal) cannot be distinguished from each other in terms of probability of toxicity. However, this forms a hierarchy in which doses of one colour can be thought of as less/more toxic than doses of another colour, i.e. the red dose levels would have a higher probability of toxicity than the yellow dose levels. It is clear that dose levels 2a and 2b would be considered more toxic than dose level 1 due to the increase in treatment duration and treatment dose respectively. 
However, when comparing 2a and 2b it is unknown whether the increase in dose or the increase in duration will be more toxic. Hence there are two possible orderings for ADePT-DDR. Traditionally, dose-finding trials for combinations would select dose levels to form a ‘path’ through the dose combination space such that each subsequent dose level was logically more toxic. This avoids the issue of partial ordering but means doses of interest or effective dose combinations may be missed or not investigated. Specifically, for ADePT-DDR this allows two ‘paths’ from dose level 1, extending to 2a and 2b. In terms of dose level 3, only one of the doses in that tier will be investigated; it was unclear which dose would be best due to a lack of historical data. The choice of dosing for this dose level will be determined based on data observed throughout the trial. Even though dose level 3 is not yet specified, in terms of modelling and simulations it was treated as a single dose. This was done because clinicians thought it would be unlikely that we would reach these doses and that the probability of toxicity between them would be similar. Preliminary designs of the trial included only five dose levels and planned to use dose level 0 as the starting dose. During the trial design phase it was decided that a new lower dose (dose level -1) would be introduced to allow for de-escalation if the initial dose was found to be too toxic. Dose escalation/de-escalation for subsequent cohorts would be determined from the two-stage PO-TITE-CRM. A two-stage design allows for escalation according to a pre-defined escalation scheme similar to a ‘3+3’ design. The first stage dictates that if no DLTs are observed in the current cohort, the dose allocated to the next cohort is the following dose in the escalation scheme. Dose levels continue to be incremented in this fashion until the first DLT is observed. In stage two, dose levels are determined by the PO-TITE-CRM. Typically, CRM designs begin by testing the first patient, or cohort, at the prior guess of the target dose or at a lower dose to be safe. However, clinicians may have safety concerns about beginning the trial at higher dose levels, as well as about escalating to higher dose levels without testing lower ones. Investigators in ADePT-DDR expressed similar concerns; as such, a two-stage design was adopted. The escalation scheme used in stage one of ADePT-DDR will follow that of the first ordering. If patients in the first cohort (assigned to dose level 0) do not experience a DLT, the next cohort will be allocated to dose level 1; if no DLTs are observed again, the third cohort will be allocated to dose level 2a, and so on. The dose escalation scheme was determined based on the prior probabilities of toxicity generated for each dose level. Information elicited from the investigators helped generate these prior probabilities of toxicity. They believed that dose level 2b would be the TD25, with 2a being less toxic. This was used in conjunction with the getprior function from the dfcrm R package [ 12 ], which yielded priors of 0.01, 0.04, 0.08, 0.16, 0.25 and 0.35 for dose levels -1, 0, 1, 2a, 2b and 3 respectively. The half-width of the indifference interval was set at 0.05. The indifference interval is an interval in which the toxicity probability of the selected dose will eventually fall.
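The skeleton quoted above can be reproduced, at least approximately, with the getprior calibration function from the dfcrm package; the call below is a sketch of that step using the stated inputs (target 0.25, half-width 0.05, six dose levels, prior guess of the TD25 at the fifth level), and its exact output should be checked against the package documentation.

```r
# Sketch of the skeleton calibration step using dfcrm (inputs as described above).
library(dfcrm)

skeleton <- getprior(halfwidth = 0.05,  # half-width of the indifference interval
                     target    = 0.25,  # target DLT rate (TD25)
                     nu        = 5,     # prior guess of the TD25 position (dose 2b)
                     nlevel    = 6)     # number of dose levels (-1, 0, 1, 2a, 2b, 3)
round(skeleton, 2)
# Expected to be close to the values used in the trial:
# 0.01, 0.04, 0.08, 0.16, 0.25 and 0.35 for dose levels -1 to 3.
```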
Prior probabilities are also required for the plausibility of each ordering; even though the clinicians think that 2b will be more toxic than 2a, there is no clear evidence and still a lot of uncertainty. As such, it is sensible to assume a plausibility probability of 0.5 for each ordering, implying both orders are equally likely to be the true ordering of these dose levels. The TITE component The observation window for this trial will be up to a year post-treatment, as the combination of radiotherapy with AZD6738 is anticipated to cause late-onset toxicity. The acute DLT observation period is 12 weeks (84 days) from the end of radiotherapy, with a minimum of 8 weeks (56 days) for the last patient of each cohort. However, patients will be continuously monitored for the occurrence of DLT for at least 12 weeks (84 days) from the end of radiotherapy. The full window will last for 52 weeks (365 days) post-treatment. The TITE component incorporates a weighting contribution for each patient dependent on how long that patient has been evaluable in the study. This allows a patient to be evaluated once they have been observed for the minimum DLT period of 8 weeks (56 days). The weighting at this point is 60%, rising to 80% at 12 weeks (84 days). A patient will not contribute fully to the model until they have completed 52 weeks (365 days) of follow-up (or have experienced a DLT at any stage, in which case they will be weighted as a whole contribution). Linear weighting functions will be employed for any patient with a length of follow-up between these three time points: one weight function to calculate weights between 8 and 12 weeks, and another for weights between 12 and 52 weeks. For the weighting function, let u be the follow-up time (or time-to-toxicity) of patient j in weeks post-treatment, with key time points at 8, 12 and 52 weeks. Then $w(u) = 0.60 + 0.05(u - 8)$ for $8 \le u < 12$, $w(u) = 0.80 + 0.005(u - 12)$ for $12 \le u < 52$, and $w(u) = 1$ for $u \ge 52$ or whenever a DLT has occurred. All patients will have a minimum weight of 60%, as that is the weighting prescribed to the minimum follow-up period before dose escalation/de-escalation decisions can be made. For each additional week the patient is observed, without a DLT occurring, between weeks 8 and 12, their weighting increases by 5%. Similarly, for each week between 12 and 52 weeks, without a DLT, the weighting increases by 0.5%. Figure 2 illustrates the weight function and how the weight changes for patients dependent on how long they have been followed up. The dotted lines represent key time points in the trial: the first is the end of treatment (7 weeks), the second is the minimum follow-up period at 8 weeks post-treatment (15 weeks into the trial) and the third is 12 weeks post-treatment (19 weeks into the trial). The TITE-CRM originally presented by Cheung and Chappell [ 11 ] did not incorporate a minimum follow-up period, and their design allowed for the continual recruitment of patients whenever they became available. There are some practical considerations which make this infeasible in ADePT-DDR. The model would need to be run each time a new patient entered the study, which requires statistical input; hence the introduction of cohorts. Clinicians may also have safety concerns if we see rapid recruitment at the start of the trial and the model keeps escalating, so we impose a minimum follow-up period. Initially this was set at 12 weeks (at 80% weighting); however, this would have meant that dose escalation/de-escalation decisions would have to take place 19 weeks (7 weeks of treatment and 12 weeks of follow-up) after recruitment of the third patient in the cohort. 
Dependent on the recruitment rate, this could extend the duration of the trial and negate the benefits of using a TITE design. Consultation with the trial clinicians and the Trial Management Group (TMG) indicated that the trial duration would be too lengthy, and it was decided to lower this period to 8 weeks (at 60% weighting) whilst also retaining the original 12-week weighting of 80%. Stopping rules A practical modification was included to allow for early stopping of the trial if there is sufficient evidence that the TD25 has been reached. Sufficient evidence is achieved once 15 patients (five cohorts) have been treated at the same dose level and the model allocates that dose level again to a sixth cohort. This rule evolved from the original designs of the trial, which involved 30 patients with a dose expansion cohort to ensure at least 15 patients were treated at the TD25. Initial simulations highlighted the inadequacy of these design parameters, as operating characteristics for various scenarios were poor, specifically in terms of correct TD25 selection. Clinicians explained that the inclusion of the dose expansion cohort was to ensure the dose-finding aspect of the trial did not take a large amount of time whilst also allowing safety to be assessed at the TD25. In order to ensure that a reasonable number of patients would be treated at the TD25, that the trial would not take longer than necessary and that operating characteristics improved, the sample size was increased and this rule was introduced. A rule was also implemented to allow for early termination of the trial in the case of excess toxicity at the lowest dose. If the probability of DLT at the lowest dose is higher than 0.35 with a probability of 80%, and that dose has been tested, the trial's safety committee will be alerted and will recommend whether the trial should be stopped. As the trial starts at dose level 0, which is not the lowest dose, it is hypothetically possible for the trial to recommend terminating without ever allocating patients to the lowest dose level. As such, it was decided that early termination would only occur once at least 3 patients (1 cohort) have been allocated to dose level -1. An approximate estimate of the variance was calculated using the methodology presented by O'Quigley and Shen [ 13 ]. The observed information is obtained by taking the second derivative of the logarithm of the weighted likelihood, which is then used to calculate the variance of the estimate of the model parameter a; this approximation becomes more accurate with larger sample sizes. After each cohort, we sample many times from a normal distribution with parameters based on the estimate of a and its variance. These samples are then plugged into our dose-toxicity model to ascertain the probability of toxicity at the lowest dose. The trial will be recommended to stop if the rule based on the criteria above is breached.
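A short base-R sketch of these trial-specific components is given below (an illustration only; the parameter estimate, its variance and the skeleton value are placeholder or assumed inputs): the piecewise-linear TITE weight with anchors of 60%, 80% and 100% at 8, 12 and 52 weeks of post-treatment follow-up, and the Monte Carlo check of whether the DLT probability at the lowest dose exceeds 0.35 with probability greater than 80%.

```r
# Piecewise-linear TITE weight for ADePT-DDR follow-up (weeks post radiotherapy).
# A patient with a DLT is weighted 1 regardless of follow-up time; patients with
# less than the 8-week minimum follow-up are treated here as not yet evaluable.
tite_weight <- function(u, dlt = FALSE) {
  if (dlt)    return(1)
  if (u < 8)  return(NA)                            # not yet evaluable
  if (u < 12) return(0.60 + 0.05  * (u - 8))        # 60% rising to 80%
  if (u < 52) return(0.80 + 0.005 * (u - 12))       # 80% rising to 100%
  1
}
sapply(c(8, 10, 12, 32, 52), tite_weight)           # 0.60 0.70 0.80 0.90 1.00

# Monte Carlo check of the excess-toxicity rule at the lowest dose (dose level -1).
# a_hat and var_a stand in for the likelihood estimate and its variance.
p_lowest_too_toxic <- function(a_hat, var_a, skel_lowest = 0.01, n_draw = 1e5) {
  a_draw <- rnorm(n_draw, mean = a_hat, sd = sqrt(var_a))
  p_draw <- skel_lowest^exp(a_draw)                 # power-model DLT probability
  mean(p_draw > 0.35)                               # Pr(DLT prob at lowest dose > 0.35)
}
stop_flag <- p_lowest_too_toxic(a_hat = -1.2, var_a = 0.4) > 0.80
stop_flag
```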
Results Simulations were repeatedly utilised during the design process of the trial to assess how various changes to design features impact the overall performance. Changes to design features such as the sample size, weight function and stopping rules helped inform the decisions which led to this design. Functions from the pocrm package in R were modified in order to perform simulations. These modified functions will also be used for analysis during the conduct of the trial. The majority of the work involved integrating the TITE component and the stopping rules into the code. In standard CRM designs, a binary outcome for toxicity is generated for each patient based on a pre-specified true DLT rate for the dose they are assigned. Adding the TITE component means the time at which the toxicity occurs also has to be generated; the simulation must track this time and incorporate this information into the PO-TITE-CRM model when it needs to make dose allocation decisions for the next cohort. We defined multiple scenarios to reflect various real-life possibilities in order to assess the design's performance. Simulations presented here were based on the design specified in the previous section, which included six dose levels (-1, 0, 1, 2a, 2b and 3) with dose level 3 treated as a single dose. Standard scenarios include adjusting the true DLT rates to reflect each dose being the TD25. For each of these we calculate the probability of selecting each dose as the TD25. It would be expected that the dose whose true DLT rate is set at 25%, matching the target rate, has the highest probability of being selected. A high probability of selecting the correct dose implies the design works well in the specified scenario. Additional characteristics such as the average number of patients at each dose level and how many receive the ideal dose were also investigated. This can be used to look at how many patients may potentially be allocated to a toxic dose. It is also necessary to consider performance when all doses are too toxic, in which case we would want the design to recommend stopping early. Usually the true DLT rates used to define these scenarios abide by the monotonicity assumption. Due to the partial ordering, we consider scenarios in which the true DLT rates follow each of the two orders. For trials with a large number of orders it may be infeasible to run so many simulations. However, as ADePT-DDR only has two orders, we explored all scenarios for each ordering. We simulated 10000 trials for each scenario using the design detailed in the Methods section. It is recommended by Morris et al. [ 14 ] to report the Monte Carlo standard error in order to quantify the simulation uncertainty. The Monte Carlo standard error for probabilities estimated from 10000 simulations is at most $\sqrt{0.5 \times 0.5 / 10000} = 0.005$. This implies that any differences in selection probabilities greater than 1% are due to more than simulation error. Simulations were based on the assumption that the trial would recruit one patient per month. The occurrence of DLTs was randomly generated for patients in each cohort using a Bernoulli distribution with the probability set at the true DLT rate for that cohort's assigned dose level in the specific scenario. For patients who had a DLT, the time at which the DLT occurred was randomly generated using a uniform distribution which spanned the start of treatment to the end of follow-up. Table 2 details simulations for eight scenarios to test the performance of the PO-TITE-CRM design using true DLT rates which reflect the first ordering. 
We analyse scenarios where each dose is the TD25 (scenarios 1-6) and where all doses are too toxic (scenario 8). Additionally, we investigate performance under conditions where the probability of DLT is fairly similar between doses (scenario 7). This is a notoriously difficult circumstance for CRM designs to deal with, as the limited number of patients and events at each dose makes it hard to accurately estimate toxicity probabilities when they are similar. Simulation results for the second ordering are shown in Table 3 , where dose level 2a is considered more toxic than 2b. This is achieved by altering the true DLT rates so that 2b has a lower probability of DLT compared to 2a. In scenarios 1-6 (Table 2 ), this design correctly selects the TD25 with probabilities between 43% and 78%, under the assumption that 2b is more toxic than 2a. Likewise, for the ordering where 2a is more toxic than 2b, scenarios 9-14 (Table 3 ) have probabilities between 43% and 78% of correctly selecting the TD25. Correct selection probabilities are generally higher when the TD25 is at the first and last dose levels compared to dose levels 2a and 2b. However, these dose levels are still chosen with the highest probability as the TD25 in their given scenarios. For scenarios 7 and 15, the probabilities of toxicity are equally spaced, approximately 5% apart. This is a relatively difficult scenario for dose-finding studies to handle. The probability of selecting the TD25 is 28% and 32% for orderings 1 and 2 respectively; even though performance is poorer, the correct dose is still the most likely to be selected. In scenarios 8 and 16, where all the doses are too toxic, the design very seldom allocates patients higher than the first three doses and there is a high chance (74% and 73% respectively) that the trial will recommend early stopping. Additionally, we assess designs based on the distribution of patients across doses. Designs may correctly select the TD25; however, this could be undesirable and unethical if the majority of patients are overdosed at the more toxic dose levels. The average number and the percentage of patients at each dose level, for each scenario, are recorded in Tables 2 and 3 . The percentage of patients treated at the TD25 ranges between 23% and 43% for each scenario under both orderings. The design also allocates the most patients on average to the TD25, apart from in scenario 7. In this case more patients were allocated to the next lowest dose; we have already discussed the difficulties of this scenario, so this characteristic is not too concerning. The mean number of patients recruited for scenarios 1-6 is 26, 30, 32, 33, 34 and 31 respectively. Similarly, for scenarios 9-14 it is 26, 30, 32, 34, 33 and 31. Even though we allow for up to 60 patients, the majority of trials terminate early based on the pre-defined rules for selecting the TD25. This information is presented in Table 4 , which also shows how often the maximum sample size is reached across the 10000 trials for each scenario. We can see that in all scenarios, except those where all doses are too toxic, the maximum sample size is reached in only a small number of simulations. This is largest for scenario 1, where 21 of the 10000 trials (0.21%) needed the full sample size of 60 patients. Overall, the simulation results show that this specification of the design performs relatively well in a number of scenarios. We have shown there is a high probability of the trial stopping early if all dose levels are too toxic. 
We have also shown that the design behaves in an appropriate manner when there is a lack of disparity between dose levels in terms of toxicity. Finally, we have demonstrated that, regardless of the true ordering, the PO-TITE-CRM has a high probability of selecting the correct dose. There are a number of limitations to the operating characteristics presented here, which stem from the specification of the simulations and the trial design.
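As an illustration of how these operating characteristics can be tabulated from the simulation output, the short R sketch below assumes a data frame `sims` with one row per simulated trial; the object and column names are hypothetical, not those of the trial code.

```r
# Hypothetical summary of 10,000 simulated trials; `sims` is assumed to hold,
# per trial, the recommended dose ("stopped" if the trial stopped early),
# the number of patients at each dose level and the total sample size.
prop.table(table(sims$recommended))        # probability each dose (or early stopping) is selected
colMeans(sims[, paste0("n_dose_", 1:6)])   # average number of patients per dose level
mean(sims$n_total == 60)                   # proportion of trials reaching the maximum sample size
```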
Discussion The PO-CRM and PO-TITE-CRM designs offer solutions to the issue of partial ordering, where the order of the doses of treatments is only partially known. The original methodology details that this issue commonly arises in trials of multiple agents, where each drug individually may follow the monotonicity assumption but, when combined at certain dose levels, this may not hold. This issue is typically dealt with by fixing the dose of one of the agents and escalating the other, or by escalating both agents simultaneously. This means certain drug combinations that are clinically relevant may not be investigated or even considered. Here we have shown that these issues can also arise in other situations. Even though the ADePT-DDR trial uses multiple agents, the issue of partial ordering occurs due to the varying treatment dose and schedule for one of its agents, AZD6738. Implementing the PO-TITE-CRM design allowed us to deal with this issue effectively. There may be other factors or variables in single-agent dose-finding trials that would lead to the issue of partial ordering and would warrant the use of either PO-CRM or PO-TITE-CRM. A limited literature review highlighted that this may be the first instance of the PO-TITE-CRM design being applied. It is important to note that although this methodology takes into account all the various orderings, the main aim is to identify the target dose (here the TD25); it does not attempt to identify which ordering is more likely to be correct. Compared to other CRM-based designs, only a few additional pieces of information are required to implement the PO-CRM design, specifically the number of toxicity orderings and prior probabilities for the orders. Depending on how many dose combinations are available, it may not be feasible to investigate all combinations and all orderings. Careful thought and consideration should be given to the combinations and orderings selected, which requires input from all relevant investigators (TMG, clinical investigators and other relevant stakeholders). In terms of priors for orderings, if no prior information is available all orders should be treated as equally likely to occur. Extending this design to the PO-TITE-CRM requires a fit-for-purpose weight function and is applied in a similar way to the TITE-CRM methodology. There is an R package (pocrm) available with functions that can be used to run and simulate a PO-CRM trial. These functions were extended to include weighted dose-toxicity models, as described here, to implement the PO-TITE-CRM in ADePT-DDR. The lack of available software for the PO-TITE-CRM specifically may be one of the reasons for its lack of use. In terms of the ADePT-DDR trial, dose combinations were decided upon by the clinical investigators. The issue of partial ordering was due to dose levels 2a and 2b, and as such this methodology was employed to deal with that scenario. This is a very simple example of partial ordering, as we only have two possible orderings and six dose levels. We discussed whether implementing this methodology was necessary, or whether simply altering the dose levels would have been an easier solution. Ultimately, the dose levels selected by the clinicians were deemed the most relevant, with the TD25 likely to be one of these doses. Our design used the power model as the working dose-toxicity model. Alternative models such as the one- and two-parameter logistic models could also be implemented. 
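As a sketch of the modelling machinery discussed above, the code below evaluates a weighted one-parameter power model under two candidate orderings, in the spirit of the PO-TITE-CRM. The skeletons, prior variance and interim data are illustrative assumptions only and do not reproduce the extended pocrm functions used in ADePT-DDR.

```r
# Illustrative weighted (TITE) one-parameter power model under two orderings.
# Skeletons, prior variance and interim data are placeholders, not trial values.
weighted_loglik <- function(a, skeleton, dose_idx, y, w) {
  p <- skeleton[dose_idx]^exp(a)                      # power-model DLT probability at each assigned dose
  sum(y * log(w * p) + (1 - y) * log(1 - w * p))      # weighted binomial log-likelihood
}

posterior_dlt_estimates <- function(skeleton, dose_idx, y, w, prior_sd = sqrt(1.34)) {
  lik <- Vectorize(function(a)
    exp(weighted_loglik(a, skeleton, dose_idx, y, w)) * dnorm(a, 0, prior_sd))
  a_hat <- integrate(function(a) a * lik(a), -10, 10)$value /
           integrate(lik, -10, 10)$value              # posterior mean of the model parameter
  skeleton^exp(a_hat)                                 # updated DLT probability at every dose
}

# Two candidate orderings of the six dose levels (2a and 2b swapped)
skeletons <- list(ord1 = c(0.05, 0.10, 0.17, 0.25, 0.34, 0.44),
                  ord2 = c(0.05, 0.10, 0.17, 0.34, 0.25, 0.44))

# Hypothetical interim data: assigned dose index, DLT indicator, follow-up weight
dose_idx <- c(1, 1, 2, 2, 3)
y        <- c(0, 0, 0, 1, 0)
w        <- c(1, 1, 1, 0.6, 0.3)

lapply(skeletons, posterior_dlt_estimates, dose_idx = dose_idx, y = y, w = w)
```

In the full design, the posterior probability of each ordering would also be updated from its prior and the data, and dose allocation would be based on the ordering currently best supported.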
Whilst a two-parameter model may better estimate the dose-toxicity relationship, it is unclear if this still applies in the presence of partial orders. Therefore, for the purposes of this trial, which aims to identify a TD25, a one-parameter model was used. As the original authors of the methodology utilised the power model, we felt this would be appropriate to use in this trial as well. Further work could be done via simulations to investigate how other models would perform with this design. Similarly, alternative weight functions, such as a polynomial function, could also be explored. Our selection of weight function was motivated to a large extent by clinical input. We chose a two-piece piecewise-linear function because of its simplicity of interpretation and because of the lack of data and certainty around how the weights should actually change over time. Simulations to generate operating characteristics were the main tool used to assess the design's performance as well as to help understand the impact of the sample size and stopping rules. This was an iterative process that involved running multiple rounds of simulations under various scenarios until the design was finalised. A key point is that the simulation scenarios should account for each of the possible orderings. ADePT-DDR only has two orderings and we ran scenarios for both. For a trial with a greater number of orderings this may be unfeasible, but at least some scenarios should be assessed to ensure the design is behaving as expected. Overall, the design's operating characteristics were reasonably good even in difficult scenarios. One limitation of the simulations is how the time-to-event data are generated. The time of a DLT is sampled from a uniform distribution U(0, 413), so the DLT can occur at any time between the patient beginning treatment and the end of follow-up (413 days). Using this uniform distribution implies that a DLT has an equal probability of occurring at any time point in the observation window. This may not be an accurate representation of what happens in the actual trial. Similar comments can be made about the accrual rate used in the simulations. Here we specified the recruitment of one patient per month, which is in no way guaranteed in the actual trial. Wages et al. [ 10 ], when presenting this methodology, investigated four different applications of the PO-TITE-CRM which used different models to enrol patients and allocate DLTs. Results across these four applications were comparable and we therefore assume similar conclusions hold for this study. The simulations are also able to instantaneously determine dose levels for incoming cohorts using all available information. This does not fully reflect the process by which dose-escalation decisions would be made during the actual running of the trial. The analysis would require a data snapshot, and time would have to be spent cleaning the data and determining the next dose level, meaning any data accrued after the snapshot would not be included in dose escalation/de-escalation decisions.
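A minimal sketch of a two-piece piecewise-linear weight function of the kind described above is given below; the change point, the weight reached at the change point and the 413-day window are placeholders, not the values used in ADePT-DDR.

```r
# Illustrative two-piece piecewise-linear TITE weight function. The change
# point (assumed day 42) and the weight reached there (assumed 0.5) are
# placeholders; the actual ADePT-DDR values are not reproduced here.
tite_weight <- function(t, change_point = 42, w_at_change = 0.5, followup = 413) {
  t <- pmin(pmax(t, 0), followup)                      # clamp to the observation window
  ifelse(t <= change_point,
         w_at_change * t / change_point,               # first linear piece
         w_at_change + (1 - w_at_change) * (t - change_point) / (followup - change_point))
}

curve(tite_weight(x), from = 0, to = 413,
      xlab = "Days since start of treatment", ylab = "Weight")
```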
Conclusion We detail the issue of partial ordering and how we implemented the trial design, in what we believe is the first real-world application of this design. A large amount of simulation work is required to assess the performance of the design. We recommend running several varied scenarios for each potential ordering that will be investigated. This is often an iterative process to refine decisions that were made and often requires input from both clinical and statistical investigators to ensure that the trial design is fit for purpose.
Background In this article we describe the methodology of the time-to-event continual reassessment method in the presence of partial orders (PO-TITE-CRM) and the process of implementing this trial design into a phase I trial in head and neck cancer called ADePT-DDR. The ADePT-DDR trial aims to find the maximum tolerated dose of an ATR inhibitor given in conjunction with radiotherapy in patients with head and neck squamous cell carcinoma. Methods The PO-TITE-CRM is a phase I trial design that builds upon the time-to-event continual reassessment method (TITE-CRM) to allow for the presence of partial ordering of doses. Partial orders occur in the case where the monotonicity assumption does not hold and the ordering of doses in terms of toxicity is not fully known. Results We arrived at a parameterisation of the design which performed well over a range of scenarios. Results from simulations were used iteratively to determine the best parameterisation of the design and we present the final set of simulations. We provide details on the methodology as well as insight into how it is applied to the trial. Conclusions Whilst being a very efficient design we highlight some of the difficulties and challenges that come with implementing such a design. As the issue of partial ordering may become more frequent due to the increasing investigations of combination therapies we believe this account will be beneficial to those wishing to implement a design with partial orders. Trial registration ADePT-DDR was added to the European Clinical Trials Database (EudraCT number: 2020-001034-35) on 2020-08-07. Keywords
Acknowledgements We would like to thank the members of the ADePT-DDR Trial Management Group and Trial Safety Committee for their contributions to the trial. We also thank V.Homer for her help validating the code used to conduct simulations. Finally, thank you to the editor and reviewers whose comments helped improve the manuscript. Authors’ contributions AP wrote the main manuscript text, prepared figures and conducted the simulations. AP and PG designed the trial and worked on the implementation of the design with DS. LB and KB contributed as statistical methodology reviewers. AK and HM are Co Chief Investigators for the trial and CG is the trial management lead. All authors reviewed the manuscript. Funding The ADePT-DDR trial is funded by AstraZeneca. This research was conducted with support from AstraZeneca UK Limited. Availability of data and materials All data presented in this manuscript is simulated data. The results presented here are summaries of the simulations. Declarations Professor Mehanna is a National Institute for Health Research (NIHR) Senior Investigator. The views expressed in this article are those of the author(s) and not necessarily those of the NIHR, or the Department of Health and Social Care. Ethics approval and consent to participate The ADePT-DDR trial has been approved by the South Central - Berkshire B Research Ethics Committee. The trial continues to be conducted in accordance with the protocol, Good Clinical Practice guidelines, and the Declaration of Helsinki. All patients provide written informed consent. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:46
BMC Med Res Methodol. 2024 Jan 13; 24:11
oa_package/21/da/PMC10787975.tar.gz
PMC10787976
38218929
Introduction Informed consent among patients undergoing a surgical procedure is the process of shared decision-making made by the client or his/her surrogates after being fully informed about what he/she is consenting to [ 1 ]. It is a voluntary agreement by a competent individual, given after adequate information regarding the procedure to be performed, its potential benefits and risks, and alternative management options, to make decisions without coercion [ 2 ]. One of the medical practices associated with high risk that requires informed consent is invasive surgical procedures [ 3 ]. The patient has the right to obtain an appropriate explanation of all risks and benefits, the type of procedure, treatment options, and consequences, with scientific justification and evidence [ 4 ]. One of the fundamental pillars of surgical treatment is the patient's informed consent [ 5 ]. Informed consent is a globally recognized safeguard for clients undergoing invasive procedures [ 6 ]. The requirements for informed consent are: (a) patient autonomy, the ability of the client to self-determination regarding the procedure that will be done on his/her body; it is self-rule and choice regarding the treatment options physicians propose [ 2 , 7 ]; (b) patient comprehension, the ability of the client to understand what is explained by health care providers [ 7 ]; (c) adequate information, meaning the health care provider discloses in sufficient detail the diagnosis, prognosis, treatment options, potential risks, and benefits, using understandable language, to support his/her decision [ 2 , 8 ]; (d) competency, the capacity of the client to understand the information; (e) voluntariness, a decision to consent based on the information rather than coercion; (f) consent, agreement between the patient and treating clinician on the proposed treatment procedure with full understanding; and (g) the consent form, a written document signed by the client before the surgical procedure [ 9 – 11 ]. Informed consent safeguards the patient in medical practice for different purposes, such as ethical, legal, and administrative ones [ 2 , 6 , 12 ]. The informed consent document builds trust between patients and physicians and enhances the shared decision-making of the client in the surgical procedure. All surgeons check the informed consent document before entering the operating room. Any invasive procedure without signed consent is illegal as well as unethical [ 12 ]. Knowledge and perception of the client towards informed consent were assessed in the primary studies as composite variables. Knowledge of informed consent was measured by whether patients knew the reason why they had surgery, the options for alternative treatment, the type of surgery, anesthesia-related risks, postoperative care, the complications of surgery, the legal requirement of informed consent, the right to change their mind after signing, and who it protects [ 13 , 14 ]. Different literature indicates that patient knowledge of informed consent is low. Research conducted in Benin indicated that one-third of clients (32.3%) had good knowledge regarding informed consent [ 2 ]. Another similar study in Sudan revealed that 46% of clients had good knowledge of informed consent [ 15 , 16 ]. In Rwanda, only 5% of patients had a high level of knowledge, 12% had moderate knowledge, and the remaining 83% of patients had a low level of knowledge of informed consent [ 17 ]. 
A study done in Kenya revealed that knowledge regarding informed consent is limited: 46% of the patients stated that the purpose of informed consent is hospital protection and 41% of them stated that it is to express their wishes [ 17 ]. In Ethiopia, the magnitude of good knowledge of informed consent among surgical patients is low, ranging from 10.5% [ 13 ] to 46.9% [ 18 ]. Client perception of informed consent includes perception of the importance and function of consent forms, the legal and ethical status of consent, and the scope of consent [ 18 – 21 ]. Research in different countries indicates that the perception of clients towards informed consent is low. A study done in Saudi Arabia indicated that 23.7% of clients had poor perceptions of informed consent [ 18 , 22 ]. In Rwanda, 23% of patients had poor perception, while 50% and 31% of clients had moderate and high levels of perception of surgical informed consent, respectively [ 17 , 18 ]. The magnitude of client perception of informed consent among post-operative patients in Ethiopia is low, ranging from 13.7 to 66.8% [ 16 , 18 ]. Factors reported to affect patient knowledge and perception of informed consent in surgical procedures include level of education, residence, age, history of signing consent before, type of surgery, marital status, and occupation [ 2 , 13 , 17 ]. Many patients around the world, particularly in developing countries, undergo surgery without knowing the reason for the surgery, the type of surgery, or the identity of the surgeon [ 13 , 23 ]. The consequences of poor knowledge and perception of informed consent are patient dissatisfaction, a feeling of low power over their own determination, low control, patient anxiety, and a lack of accountability in management [ 18 , 20 , 21 , 24 ]. Although patient knowledge and perception of informed consent are among the priority concerns in surgical procedures, the problem still exists in Ethiopia. In addition, the available small-scale studies report inconsistent and inconclusive findings on the knowledge, perception, and determinants of informed consent. Therefore, the purpose of this systematic review and meta-analysis was to determine the pooled prevalence of patient knowledge and perception of informed consent and their determinants among surgical patients in Ethiopia. The findings of this nationwide study will generate evidence to help physicians, health facility managers, and policymakers establish guidelines for informed consent practice.
Methods Study design and protocol registration A systematic review and meta-analysis (SRMA) was conducted to quantify the pooled level of patient knowledge and perception of informed consent and its determinants among surgical patients in Ethiopia. A preliminary assessment was done through Prospero, Epistemonikos, Semantic Scholar, and PubMed to check whether a similar study had already been performed, and there was no similar study. We prepared this systematic review and meta-analysis according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA-2020) flow diagram ( S1 Table 1 ). The protocol was registered at Prospero with number CRD42023445409 and is available from: https://www.crd.york.ac.uk/PROSPERO/#myprospero . Search strategies We searched major databases such as PubMed, Hinari, MEDLINE, Cochrane Library, EMBASE, Scopus, African Journal Online (AJO), Semantic Scholar, Google Scholar, Google, and reference lists. Besides this, university databases in the country were also searched from August 20, 2023 until September 30, 2023. Studies conducted between January 01, 2015 and September 30, 2023 were included. This systematic review and meta-analysis used PECO (Population, Exposure, Comparison, and Outcome) to identify eligible studies. The study population (P) was surgical patients, the exposure (E) associated factors, the comparison (C) the reference category of the factors, and the outcome (O) the level of knowledge and perception of informed consent. The Boolean operators "OR" and "AND" were used to combine search terms. Keywords used in the search included knowledge, perception, patient, client, "informed consent", consent, factors, determinants, predictors, "surgical patient", "post operated patient", "after surgery", and Ethiopia. Studies obtained by the reviewers' search strategy were exported into EndNote for management. All duplicated studies obtained from the different database searches were excluded. Study eligibility was assessed first from the title, then the abstract, and finally through a full-text review. Eligibility criteria All observational studies (cross-sectional, case-control, and cohort) on patient knowledge and perception of informed consent among surgical patients conducted in Ethiopia were included. Both published and unpublished studies that reported the prevalence of patient knowledge and perception of informed consent and its associated factors were included. All studies reported in English were included. Studies conducted between January 01, 2015 and September 30, 2023 were included. Articles whose full text could not be accessed after failing to contact the primary authors were excluded. Outcome measurement This systematic review and meta-analysis measured three main outcomes. The first outcome of the study was the pooled level of appropriate knowledge of informed consent. The second outcome was the pooled level of perception of informed consent. The third outcome was the factors associated with knowledge of informed consent among surgical patients. The level of knowledge of informed consent was measured by 12 items and the level of perception by 8 items. Patients who scored less than the mean on the knowledge and perception questions were classified as having poor knowledge and poor perception, respectively. Data extraction The selection of studies from all the searched databases was conducted by three authors (YT, NK, and FDB) independently. 
The primary author, study year, year of publication, region where the study was done, study design, sample size, prevalence, response rate, method of outcome measurement, and, for all associated factors, the odds ratio, relative risk, and lower and upper confidence interval limits were extracted using a Microsoft Excel format. The corresponding author provided clarification on the inclusion criteria. Disagreements among data extractors were resolved by consensus. Quality assessment and risk of bias Three reviewers (MMM, NK, and YT) independently screened the articles that fulfilled the inclusion criteria to avoid the risk of bias. The Newcastle-Ottawa Scale (NOS) checklist was used to appraise the quality of the studies. The tool includes three parts: the first part covers methodology (five items, rated with five stars), the second part comparability (two items, rated with two stars), and the third part outcome and statistical tests (three items, rated with three stars) ( S2 Table). Three authors (MMM, NK, and YT) independently assessed the quality of the studies. Disagreements among reviewers were resolved by consensus and a third party (FDB). Data processing and analysis Data were extracted using a Microsoft Excel format and imported into STATA version 17 for processing and analysis. The pooled prevalence of patient knowledge and perception of informed consent was estimated with a random-effects meta-analysis model. The heterogeneity of the studies was assessed by inspecting the p-value and the I² statistic. Factors associated with patient knowledge of informed consent were estimated by the log odds ratio at 95% CI. Potential sources of heterogeneity were explored by subgroup analysis. In addition, Egger's test statistics and a funnel plot were used to identify potential publication bias among the included studies. The results of this meta-analysis are presented as tables, funnel plots, forest plots, and narrative summaries.
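The authors pooled the proportions in STATA 17; as an illustration only, an equivalent random-effects analysis of logit-transformed proportions can be run in R with the metafor package. The event counts and sample sizes below are placeholders, not the extracted study data.

```r
# Random-effects pooling of a proportion in R with the metafor package (the
# authors used STATA 17). Event counts and sample sizes are placeholders.
library(metafor)

dat <- data.frame(study  = paste("Study", 1:4),
                  events = c(40, 120, 95, 180),    # patients with good knowledge
                  n      = c(310, 360, 405, 423))  # study sample sizes

dat <- escalc(measure = "PLO", xi = events, ni = n, data = dat)  # logit-transformed proportions
res <- rma(yi, vi, data = dat, method = "REML")                  # random-effects model

predict(res, transf = transf.ilogit)  # pooled prevalence and 95% CI on the proportion scale
res$I2                                # I-squared heterogeneity statistic
```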
Results A total of 1635 studies were retrieved using the search strategy for this systematic review and meta-analysis. Among those, 452 articles were excluded due to redundancy. From the remaining 1115 articles, 1148 studies were excluded on review of the abstract and title because they did not report the level of patient knowledge or perception and its determinants. Of the remaining studies, 28 articles were excluded because the study location was outside Ethiopia. Finally, seven studies that met the minimum eligibility criteria were included in this systematic review and meta-analysis (Fig. 1 ). Of those articles, seven studies were used to estimate the pooled level of knowledge [ 13 , 15 , 18 , 20 , 25 – 27 ], and four studies to estimate the prevalence of perception [ 16 , 18 , 20 , 28 ]. Characteristics of the included studies From all seven included articles, 2,690 study participants were used to estimate the pooled level of patient knowledge of informed consent among surgical patients in Ethiopia. The maximum sample size was 423 [ 16 ] and the minimum sample size was 302 [ 11 ]. All included studies had a cross-sectional design. The prevalence of patient knowledge of informed consent ranged from 10.5% [ 13 ] to 46.9% [ 18 ] (Table 1 ). Prevalence of patient knowledge and perception of informed consent among surgical patients in Ethiopia We observed variation in the prevalence of patient knowledge and perception of informed consent among surgical patients in Ethiopia. In a random-effects meta-analysis model of seven studies, the pooled prevalence of patient knowledge of informed consent was 32% (95% CI: 21, 43) with I² = 97.87% and p-value < 0.001 (Fig. 2 ). Similarly, across four studies the pooled prevalence of patient perception of informed consent was 40% (95% CI: 16, 65) with I² = 99.21% and p-value < 0.001 (Fig. 3 ). To assess potential publication bias among the included studies, Egger's test statistics and a funnel plot were used. The funnel plot indicated an asymmetric distribution of the included studies, and Egger's test statistics indicated evidence of publication bias (p = 0.009) with a standard error of 7.39. Besides this, we performed a sensitivity analysis to identify any outlier causing heterogeneity in the estimate of the pooled prevalence of patient knowledge of informed consent among surgical patients in Ethiopia. The finding indicated that there was one outlier study lying far apart from the confidence intervals of the rest of the included studies. As a result, we were confident that, in this systematic review and meta-analysis, a single study affected the overall pooled prevalence of patient knowledge of informed consent among surgical patients in Ethiopia (Fig. 4 ). Accordingly, we omitted the single study that lay outside the confidence interval and fitted the random-effects meta-analysis model to the remaining six studies. The pooled level of patient knowledge of informed consent after removing this study changed from 32 to 36% (95% CI: 27, 44) with I² = 95.33% and p < 0.001 (Fig. 5 ). In addition, the funnel plot became approximately symmetrical, and Egger's test statistics revealed no evidence of publication bias (p = 0.17) with a standard error of 20.56. 
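Continuing the illustrative metafor sketch above, the publication-bias and leave-one-out checks described in this paragraph could be reproduced as follows, again on placeholder data and with the omitted study index chosen arbitrarily.

```r
# Publication-bias and sensitivity checks, continuing the placeholder metafor
# model `res` from the previous sketch.
regtest(res)    # Egger-type regression test for funnel-plot asymmetry
funnel(res)     # funnel plot
leave1out(res)  # leave-one-out estimates to flag an outlying/influential study

# Refit after omitting an outlying study (index 3 is an arbitrary placeholder)
res_trim <- rma(yi, vi, data = dat[-3, ], method = "REML")
predict(res_trim, transf = transf.ilogit)
```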
Subgroup analysis Subgroup analysis was performed by sample size, study period, and region of the study to identify potential sources of heterogeneity. Studies conducted after 2020 were a possible source of heterogeneity, with a higher pooled prevalence estimate of 44% (95% CI: 40, 48). Besides this, studies conducted in the Oromia region were another source of heterogeneity, with a lower pooled prevalence of 23% (95% CI: 20, 26) (Table 2 ). Factors affecting patient knowledge of informed consent among surgical patients in Ethiopia The pooled effects of residence, formal education, history of having signed informed consent before, and type of surgery on patient knowledge of informed consent were investigated. The association between formal education and patient knowledge of informed consent was examined using three studies, of which one reported no association [ 26 ] and the remaining two reported positive associations with patient knowledge of informed consent [ 13 , 16 ]. Hence, there was a positive relationship between formal education and patient knowledge of informed consent: formally educated patients were nearly three times more likely to have appropriate knowledge of informed consent than their counterparts, with a pooled odds ratio of 2.69 (95% CI: 1.18, 6.15) (Table 3 ). Similarly, we examined the association between a history of having signed informed consent before and patient knowledge of informed consent using three studies [ 16 , 20 , 27 ]. Accordingly, there was a statistically significant positive relationship between a history of signing before and patient knowledge of informed consent: patients who had signed informed consent before were more than three times more likely to have appropriate knowledge than those with no history of signing before, with a pooled odds ratio of 3.65 (95% CI: 1.02, 13.11) (Table 3 ). In this meta-analysis, the pooled effect of residence on patient knowledge of informed consent was examined using four studies, of which two found no effect of urban residence [ 16 , 26 ] and two found a positive relationship with patient knowledge of informed consent [ 13 , 20 ]. As a result, there was no statistically significant pooled effect of residence on patient knowledge of informed consent: 1.06 (95% CI: 0.26, 3.87) (Table 3 ). Finally, the pooled effect of the type of surgery on patient knowledge of informed consent was assessed using two studies [ 13 , 26 ]. The results of these two studies indicated that there was no statistically significant pooled effect of type of surgery on patient knowledge of informed consent among surgical patients: 0.81 (95% CI: 0.16, 4.21) (Table 3 ).
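For the pooled odds ratios, study-level estimates are combined on the log scale; the sketch below shows the general approach, again assuming the metafor package and using placeholder odds ratios and confidence limits rather than the extracted values.

```r
# Pooling study-level odds ratios (e.g. formal education vs. none) on the log
# scale; the odds ratios and confidence limits below are placeholders.
or      <- c(2.1, 1.0, 4.8)
ci_low  <- c(1.2, 0.5, 2.0)
ci_high <- c(3.7, 2.0, 11.5)

yi  <- log(or)
sei <- (log(ci_high) - log(ci_low)) / (2 * 1.96)  # SE recovered from the 95% CI

res_or <- rma(yi = yi, sei = sei, method = "REML")
predict(res_or, transf = exp)                     # pooled OR with 95% CI
```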
Discussion Patient knowledge and perception of informed consent are important for increasing client satisfaction and achieving better health outcomes for surgical patients. Evidence on patient knowledge, perception, and their determinants is crucial for physicians, health managers, and policymakers. Therefore, this systematic review and meta-analysis was performed using the available primary studies in Ethiopia. The findings revealed that the pooled prevalence of appropriate patient perception of informed consent among surgical patients in Ethiopia was 40% (95% CI: 16%, 65%). This finding was congruent with studies conducted in Egypt (27.3%) [ 3 ] and South Africa, where 27% of patients perceived that they signed consent with understanding [ 29 ]. However, this finding was lower than a study conducted in Nigeria, where 97% of patients were satisfied with the explanation of informed consent [ 30 ], and a University of Colorado study of repeat-back and no-repeat-back participants, in which favorable perception of informed consent was 88% [ 31 ]. A possible justification for this variation might be the different study methods: the sample size in Nigeria was 398, whereas this study incorporated 2690 participants from the primary studies. The pooled prevalence of appropriate patient knowledge of informed consent among surgical patients in Ethiopia was 32% (95% CI: 21, 43). This finding was consistent with a study in Germany in which 32.6% of patients correctly answered the knowledge questions [ 32 ]. However, it is higher than a study done in Rwanda, where 5% of the participants had a high level of knowledge, 12% moderate, and the remaining 83% a low level of knowledge of informed consent [ 17 ]. A possible reason for this discrepancy might be the difference in sample size: the Rwandan study included 147 participants and was conducted in a single military hospital. However, this finding was lower than a systematic review done in Pakistan (50%) [ 33 ], a study in India in which 68% understood the type and consequences of the study [ 34 ], a Portuguese study (44.7%), and a Croatian study in which the level of knowledge was average and 60% had partial knowledge [ 35 ]. These variations might be due to differences in the educational status of study participants, differences in economic status, and the value given to informed consent during the surgical procedure. The culture and behavior of physicians with respect to informed consent may also vary. Developed countries have a high level of concern for patient rights and informed consent, whereas in developing countries, including Ethiopia, the focus on patient rights is limited. Subgroup analysis was performed by study setting, sample size, and study period. In this regard, studies conducted after 2020 were a source of heterogeneity, at 44% (95% CI: 40, 48), compared to studies conducted before or in 2020. This implies that patient knowledge of informed consent increases as the study period becomes more recent. This variation might be explained by patients in more recent studies having access to more information about informed consent. It might also be due to the increasing number of health professionals over time, who have more room to explain informed consent. In addition, studies with a sample size greater than or equal to 385 were another source of heterogeneity, at 36% (95% CI: 16, 55), compared with a sample size of less than 385. This difference might be because a larger sample size increases the representativeness of the findings. 
Formally educated patients were 2.69 times more likely to have appropriate knowledge of informed consent than their counterparts (Table 3 ). This finding is in line with studies in South Africa [ 29 ], Pakistan [ 36 ], and India [ 24 ]. A possible explanation is that educated patients can more easily understand the physician's explanation of informed consent [ 37 ]; there may also be a language barrier to understanding the consent forms. For patients who had signed informed consent before, the pooled odds of appropriate knowledge of informed consent were 3.65 times higher than for those who had not signed before. This finding is consistent with a systematic review of client comprehension, in which such patients demonstrated the highest understanding of informed consent [ 38 ]. The implication of this finding is that once a patient has been exposed to signing informed consent, he or she has a better understanding. Besides this, those patients had more knowledge of the diagnosis, treatment, and possible outcomes of treatment. This meta-analysis revealed no statistically significant pooled effect of residence on patient knowledge of informed consent in Ethiopia. In addition, the type of surgery had no statistically significant pooled effect on patient knowledge of informed consent. A limitation of this study is that the primary studies included in this meta-analysis were conducted in Southern Ethiopia, Amhara, Oromia, and Addis Ababa city, leaving other regions of the country under-represented. In addition, only a limited number of primary studies are available in Ethiopia. Besides this, only a few systematic review and meta-analysis studies on patient knowledge and perception of informed consent were available with which to compare the findings.
Conclusion The appropriate patient knowledge and perception of informed consent in Ethiopia are low. Formal education and a history of having signed informed consent were positive factors for the level of patient knowledge of informed consent in Ethiopia. Physicians, policymakers, and health facility managers should focus on patients without prior experience of signing informed consent and without formal education to improve patient knowledge of informed consent. Physicians should provide clear information regarding the content of informed consent to patients who have no formal education or prior experience, to increase their knowledge of informed consent.
Background Informed consent is one of the safeguards of the patient in medical practice for different purposes, such as ethical, legal, and administrative ones. Patient knowledge and perception of informed consent are among the priority concerns in surgical procedures. Good patient knowledge and perception of informed consent increase patient satisfaction, a feeling of power over their own determination, and accountability for management, and facilitate positive treatment outcomes. Despite this, in Ethiopia there are only small-scale primary studies with inconsistent and inconclusive findings. Therefore, this systematic review and meta-analysis estimated the pooled prevalence of patient knowledge and perception of informed consent and its determinants in Ethiopia. Methods We searched major databases such as PubMed, Hinari, MEDLINE, Cochrane Library, EMBASE, Scopus, African Journal Online (AJO), Semantic Scholar, Google Scholar, Google, and reference lists. Besides this, university databases in the country were also searched from August 20, 2023 until September 30, 2023. All published and unpublished studies that reported the prevalence of patient knowledge and perception of informed consent and its associated factors were included. All studies reported in English were included. Studies conducted between January 01, 2015 and September 30, 2023 were included. There were three outcome measures: the pooled level of patient knowledge of informed consent, the pooled level of patient perception of informed consent, and the pooled effects of factors affecting patient knowledge of informed consent. Three reviewers (MMM, NK, and YT) independently screened the articles that fulfilled the inclusion criteria to avoid the risk of bias. The studies' quality was appraised using a modified version of the Newcastle-Ottawa Scale (NOS). Results The pooled prevalence of appropriate patient knowledge and perception of informed consent was 32% (95% CI: 21, 43) and 40% (95% CI: 16, 65), respectively. Having formal education (2.69; 95% CI: 1.18, 6.15) and having a history of signed informed consent before (3.65; 95% CI: 1.02, 13.11) had a statistically significant association with good patient knowledge of informed consent. Conclusion The appropriate patient knowledge and perception of informed consent in Ethiopia are low. Formal education and a history of signed informed consent were positive factors for appropriate patient knowledge of informed consent in Ethiopia. Physicians, policymakers, and health facility managers should focus on patients without prior experience of signing informed consent and without formal education to improve patient knowledge of informed consent. The protocol was registered at Prospero with number CRD42023445409 and is available from: https://www.crd.york.ac.uk/PROSPERO/#myprospero . Supplementary Information The online version contains supplementary material available at 10.1186/s13037-023-00386-5. Keywords
Electronic supplementary material Below is the link to the electronic supplementary material.
Acknowledgements We have special thanks to all authors, data collectors, and supervisors of the primary studies included in this systematic review and meta-analysis. Author contributions MMM designed the study, performed analysis, interpreted the data, and prepared the manuscript. EB assisted in the design, participated in data analysis, approved the article with revisions, and prepared the manuscript. YT assisted in the design, approved the article with revisions, and revised the subsequent write-up of the paper. KA participated in data analysis, approved the article with revisions, and prepared the manuscript. FDB assisted in the design, participated in data analysis, approved the article with revisions, and prepared the manuscript. LA contributed to the methodology, performed analysis, interpreted the data, and prepared the manuscript. AE performed analysis, interpreted the data, and prepared the manuscript. SDK participated in data analysis and approved the article with revisions. MA contributed to the methodology, performed analysis, interpreted the data, and prepared the manuscript. NK performed analysis, interpreted the data, and prepared the manuscript. All authors reviewed and approved the manuscript. Funding Not applicable. Data availability The datasets used and analyzed during the current study are available from the first author. Declarations Ethical approval and consent to participate Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:46
Patient Saf Surg. 2024 Jan 13; 18:2
oa_package/d9/07/PMC10787976.tar.gz
PMC10787977
38221641
Introduction Uncontrolled asthma can affect sleep quality, as increased nocturnal symptoms are synonymous with uncontrolled disease. Conversely, short or excessive sleep duration and poor sleep quality are risk factors for asthma exacerbations and healthcare usage, poorer quality of life and mortality [ 1 ]. Accelerometers provide a novel opportunity to evaluate sleep parameters relative to asthma severity. Accelerometry has been validated against polysomnography for measurement of sleep-related variables in asthma, and sleep measures can be obtained using a validated algorithm for wrist-worn accelerometers without the use of accompanying sleep diaries [ 2 ]. These tri-axial devices measure acceleration, allowing estimates of physical activity, sedentary time and sleep. Accelerometers are less cumbersome than sleep diaries, encouraging adherence, provide additional data such as sleep onset and efficiency, and are a cost-effective option compared to polysomnography. We hypothesised that sleep patterns differ between mild and difficult-to-treat asthma populations. We performed a cross-sectional, proof-of-concept analysis comparing sleep parameters from participants with mild and difficult-to-treat asthma utilising accelerometer technology.
Materials and methods Data for this analysis was retrieved from two recent local trials approved by the West of Scotland Regional Ethics Committee (references 16/WS/0200 and 18/WS/0216) and undertaken between 2017 and 2021: one of pulmonary rehabilitation in difficult-to-treat asthma associated with raised body mass index (BMI) alongside a sub-study of activity levels in mild asthma, and a second trial studying weight loss in difficult-to-treat asthma and obesity (trial identifiers: NCT03630432, NCT03858608). Full trial protocols are described elsewhere [ 3 , 4 ]. Both trials were funded by an NHS Greater Glasgow and Clyde Endowment Fund, and none of the contributors to the fund had any input in trial design, results or interpretation, nor any input into this retrospective analysis. All participants provided written consent for data use in future studies. Briefly, difficult-to-treat asthma was defined as per SIGN/BTS and GINA guidelines [ 5 , 6 ], including presence of characteristic symptoms, reversibility (≥ 12% and 200mls increase in FEV 1 post-bronchodilator) or bronchial hyper-reactivity on bronchial challenge testing; asthma treatment with high-dose inhaled corticosteroid (ICS); poor asthma control (Asthma Control Questionnaire score > 1.5) or ≥ 2 exacerbations requiring oral corticosteroids (OCS) or ≥ 1 asthma exacerbation requiring hospitalisation in the preceding 12 months. Patients with mild active asthma (asthma treatment within the preceding 12 months) were recruited from primary care. Mild disease was categorised by maximum preventer treatment with moderate-dose ICS/long-acting β-agonist combination, ACQ ≤ 1.5, < 2 exacerbations requiring OCS treatment and no hospital admissions with asthma in the preceding 12 months. As part of the trial assessments, participants wore an ActiGraph wGT3X-BT accelerometer (ActiGraph, Pensacola, USA) on their non-dominant wrist continually for 7 days (excluding bathing). Devices were initialised to capture data at 30 Hz. Raw data was downloaded using ActiLife software (v.6.14.3; ActiGraph) and saved as .gt3x files and converted to .csv files. Data was exported into R v4.1.2 (R Foundation for Statistical Computing, Vienna, Austria) for subsequent processing using the GGIR package (v2.6.0). Among the variables extracted were number of nights devices were worn; mean sleep window time (time from initial sleep-onset to waking); mean sleep time (accumulated sustained inactivity sojourns overnight); sleep efficiency (sleep time: sleep window); sleep-onset time and wake time. Time variables were described as hours and minutes or 24-hour clock where appropriate. Variables were non-parametric and so summarised as median (IQR) and compared between mild and difficult-to-treat asthma groups using the Mann-Whitney U test. Data was analysed using IBM SPSS Statistics (version 28.0) and significance was set at 0.05.
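As an illustration of the analysis pipeline, the R sketch below reads a person-level sleep summary of the kind produced by GGIR's part-4 output and compares the two asthma groups with the Mann-Whitney U test; the group comparison itself was run in SPSS in the original analysis. The file path and column names are assumptions for illustration, not the actual trial outputs.

```r
# Sketch of the group comparison. The file name follows GGIR's part-4 output
# convention and the column names are assumed for illustration; the original
# comparison was run in SPSS.
sleep <- read.csv("output_accel/results/part4_summary_sleep_cleaned.csv")  # assumed path
sleep$group <- factor(sleep$group, levels = c("mild", "difficult"))        # assumed grouping column

# Median (IQR) sleep duration by group
tapply(sleep$sleep_time_hours, sleep$group,
       function(x) quantile(x, c(0.25, 0.5, 0.75), na.rm = TRUE))

# Mann-Whitney U (Wilcoxon rank-sum) test for sleep-onset time
wilcox.test(sleep_onset_hours ~ group, data = sleep)
```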
Results Of the 133 patient data-sets available, nine were excluded due to lack of data (defined as ≤ 3 nights of use), leaving 124 participants (44 with mild asthma, 80 with difficult-to-treat asthma). Of the 124, 56% were female, median (IQR) age was 57 (47, 64) years and the majority were never and ex-smokers (56% and 38% respectively). Baseline characteristics (Table 1 ) showed differences between mild and difficult-to-treat participants in atopy, weight, BMI, asthma control and quality of life, long-acting β-agonist (LABA) use and number of annual exacerbations. Higher baseline fractional exhaled nitric oxide (FeNO) and peripheral eosinophils were observed in the difficult-to-treat asthma group compared to mild asthma. Table 2 summarises the sleep-metric findings. Overall, the median number of nights of accelerometry available was 6 (6, 6). Median sleep time was 6hrs35mins (5hrs2mins, 7hrs45mins), with a median sleep window time of 7hrs49mins (6hrs29mins, 8hrs56mins) and median sleep efficiency of 85% (81, 90). Median time of sleep-onset was 00:08 (23:02, 01:23) and wake time 07:54 (06:48, 09:22). No differences were observed in sleep time, sleep window, sleep efficiency or wake time between the mild and difficult-to-treat groups, though sleep-onset time was later in the difficult-to-treat asthma group (00:24; 23:16, 02:02) compared to mild asthma (23:41; 22:52, 00:45; p = 0.019). In the overall dataset (i.e., mild and difficult-to-treat groups together), Spearman's rank showed no correlation between sleep-onset time and ACQ (a marker of asthma control); rho = 0.049, p = 0.589. Additionally, both unadjusted and adjusted (correcting for weight) linear regression using sleep-onset time as the dependent variable and ACQ as the independent variable showed no relationship between asthma control and sleep-onset time: unadjusted F(1,122) = 0.28, p = 0.866; adjusted for weight F(2,121) = 0.160, p = 0.852.
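The correlation and regression checks reported above could be expressed as follows in R (the original analysis used SPSS); `dat` is an assumed analysis data frame with hypothetical column names.

```r
# Correlation and regression checks (run in SPSS in the original analysis);
# `dat` is an assumed data frame with hypothetical columns: sleep_onset
# (hours past midnight), acq (ACQ score) and weight (kg).
cor.test(~ sleep_onset + acq, data = dat, method = "spearman")  # Spearman's rho

summary(lm(sleep_onset ~ acq, data = dat))            # unadjusted
summary(lm(sleep_onset ~ acq + weight, data = dat))   # adjusted for weight
```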
Discussion We observed no differences in sleep duration or efficiency between the mild and difficult-to-treat groups; whilst there was no difference in wake time, there was a later time of sleep-onset in the difficult-to-treat group, which may reflect greater difficulty in sleep initiation in this cohort. The clinical significance of this difference (~ 40 min) is uncertain; however, interestingly, correlation and regression analyses suggest this difference is not related to asthma control, even when adjusted for weight, a key factor in sleep health. There was a significant between-group difference in the proportion of participants with regular LABA use and it is feasible that β-agonist-mediated stimulation could be related to the delay in sleep initiation in the difficult-to-treat group. Compared to the recommended sleep duration, patients from our cohort appear to be on the lower side (6.59 h; 5.04, 7.75), suggesting poorer sleep health despite good sleep efficiency. Factors associated with delayed sleep initiation and reduced sleep duration in difficult-to-treat asthma therefore remain to be elucidated and require further study. Our results are similar to a study performed in 56 healthy adults (mean age 24.5 ± 4.5 years), also using ActiGraph devices (non-dominant wrist) without sleep logs, that showed (mean ± SD) sleep time (6 h 56 min ± 49 min), sleep window (7 h 59 min ± 51 min) and sleep efficiency (87% ± 4), as well as similar sleep-onset (00:05 ± 90 min) and wake times (08:20 ± 84 min) [ 7 ]. A small study of 10 patients with mild-to-moderate asthma [ 2 ] showed a reduced sleep time of 5 h 54 min ± 74 min with a similar mean sleep window time of 7 h 34 min ± 40 min. However, this study is clearly limited by the small sample size. Our retrospective analysis has potential limitations. Firstly, groups were not equally weighted, with more patients with difficult-to-treat asthma than mild asthma. Secondly, the initial trials' data did not include objective assessments of daytime or nocturnal sleep (e.g., Epworth sleep score, Pittsburgh sleep quality index), nor any sleep logs. Thirdly, this analysis was not powered to assess sleep outcomes. Finally, this analysis did not account for factors such as sleep-disordered breathing that may influence outcomes, which should be addressed in future studies. Despite this, key strengths of our study are the sample size, higher than in previous studies, and the observed excellent tolerance of accelerometer use (93%). To our knowledge this is the first comparison of mild and difficult-to-treat asthma sleep outcomes using accelerometry, and we highlight a difference in sleep initiation between groups unrelated to asthma control and weight. Further study is warranted to explore the relationship between asthma severity and sleep-metrics and whether interventions targeting sleep health can improve asthma outcomes. In summary, patients with difficult-to-treat asthma may have delayed initiation of sleep compared to mild asthma, though this observation appears to be independent of asthma control and obesity. Other sleep parameters are broadly comparable to the general population. Accelerometers are well tolerated, offer a more pragmatic option than polysomnography and can be used to assess sleep outcomes in asthma, but dedicated trials are needed before any definitive conclusions can be drawn.
Introduction Poor sleep health is associated with increased asthma morbidity and mortality. Accelerometers have been validated to assess sleep parameters though studies using this method in patients with asthma are sparse and none have compared mild to difficult-to-treat asthma populations. Methods We performed a retrospective analysis from two recent in-house trials comparing sleep metrics between patients with mild and difficult-to-treat asthma. Participants wore accelerometers for 24-hours/day for seven days. Results Of 124 participants (44 mild, 80 difficult-to-treat), no between-group differences were observed in sleep-window, sleep-time, sleep efficiency or wake time. Sleep-onset time was ~ 40 min later in the difficult-to-treat group ( p = 0.019). Discussion Broadly, we observed no difference in accelerometer-derived sleep-metrics between mild and difficult-to-treat asthma. This is the largest analysis of accelerometer-derived sleep parameters in asthma and the first comparing groups by asthma severity. Sleep-onset initiation may be delayed in difficult-to-treat asthma but a dedicated study is needed to confirm. Keywords
Acknowledgements The authors are grateful to all participants from the two trials. Author contributions VS aided with study design and data collection and performed data analysis and manuscript preparation. HCR, FS and AG aided with data collection and review of the manuscript. DSB aided with analysis of data, manuscript preparation and review of the manuscript. DCC aided with study design and manuscript review. Funding None. Data availability Data is available upon reasonable request. Declarations Ethics approval and consent to participate All participants provided written consent and ethical approval was granted for both trials from which these data were taken by the West of Scotland Regional Ethics Committee (references 16/WS/0200 and 18/WS/0216). Consent for publication All trial participants consented to publication of data for the initial trials and any subsequent analyses. Competing interests The authors report there are no competing interests to declare.
CC BY
no
2024-01-15 23:43:46
Allergy Asthma Clin Immunol. 2024 Jan 14; 20:5
oa_package/6f/29/PMC10787977.tar.gz
PMC10787978
38218803
Introduction Lung cancer is the second most common malignancy in China, which accounted for up to 39.8% of the 2.2 million new cases diagnosed worldwide in 2020 [ 1 , 2 ]. Only 17.3% of lung cancer patients are diagnosed at stage I; the others are found at an advanced stage [ 3 ]. Given the large number of patients with lung cancer and the poor prognosis [ 4 ], lung cancer contributes prominently to the cancer burden in China, with substantial economic and societal impacts in the future [ 5 ]. To achieve effective cancer prevention, there is a growing focus on improving cancer control through screening and early diagnosis. Several organizations and medical societies worldwide, including the National Cancer Center of China, recommend annual low-dose CT (LDCT) screening for people at high risk of developing lung cancer [ 6 – 9 ]. As a result, millions of participants are diagnosed with lung nodules through LDCT screening every year [ 10 ]. However, the false positive rate (FPR) of the LDCT test was reported as 96.4% and 56.5% in the National Lung Screening Trial (NLST) and the Dutch-Belgian Randomized Lung Cancer Screening Trial (NELSON), respectively [ 11 , 12 ]. Consequently, a substantial proportion of subjects undergo unnecessary clinical examinations following a false-positive screening result, which leads to extra radiation exposure and over-diagnosis. To make existing cancer screening programs target more efficiently, polygenic risk scores (PRSs) have been introduced. PRSs have the potential to identify individuals at risk of different types of cancer, optimize treatment, and predict survival outcomes [ 13 ], though translation of PRSs into clinically relevant prediction models remains a challenge [ 14 , 15 ]. A recent case–control cohort study suggested that PRSs could significantly improve discrimination in high-risk populations compared to clinical risk factors (e.g. age, sex, smoking history, cancer histology, etc.) alone [ 16 ]. A large-scale prospective cohort study identified 19 susceptibility loci significantly associated with non-small cell lung cancer risk at p ≤ 5.0 × 10−8, and confirmed that the PRS was an effective risk stratification indicator independent of age and smoking pack-years in Chinese populations, making the PRS a potential candidate for realizing precision screening [ 17 ]. Although promising, none of the candidate PRSs are regularly used in clinical practice, despite studies reporting benefits from using PRSs to assess eligibility for several types of cancer screening programs (i.e. breast, prostate and colorectal cancer) [ 18 ]. As the PRS could be used as an indicator to guide risk stratification, we propose using the PRS, on top of the former risk assessment criteria, to further assess eligibility for lung cancer screening, which might be one potential approach to realizing its utility in population-based cancer screening programs. Few results have been reported to date on using these PRSs in screening practice; thus, the health outcomes associated with adjunctive strategies with LDCT, as well as their cost-effectiveness, remain unclear. Here, we assessed the impact of the current PRS introduced in conjunction with LDCT screening on the effectiveness and cost-effectiveness of lung cancer screening from a societal perspective. Using a Markov model, we evaluated the long-term benefits and harms of lung cancer screening with and without a PRS in Chinese populations.
Methods Study design and model description In this modelling study, the Markov model of lung cancer screening developed in our previous work was used and adapted for the purpose of assessing the potential impact of LDCT screening with and without a PRS from a societal perspective. Important assumptions and the overall structure of the model have been thoroughly described before and in the supplementary material [ 19 , 20 ]. As recommended by the China guideline for the screening and early detection of lung cancer (2021, Beijing) [ 21 ], 3 hypothetical cohorts of 10,000 current and former smokers aged 50–74 years old were simulated until death or age 79 years (the mean life expectancy in China), named the non-screening cohort, the LDCT screening cohort and the LDCT&PRS screening cohort. Unlike the normal LDCT screening modality, individuals who entered the LDCT&PRS cohort were assumed to have received PRS assessment and to belong to the top 5% of risk based on the PRS. All the simulated individuals from the two screening cohorts underwent annual screening until the simulation ended. We further superimposed screening and diagnostic follow-up interventions onto the natural history model for lung cancer and obtained population-level outcomes. Data sources, main outcomes, and the full research design are shown in Fig. 1 . The model was run with a cycle length of 1 year and a discount rate of 5% was applied to both costs and effectiveness. The model construction and all the simulations were conducted using TreeAge Pro, version 2021 (TreeAge Software). The study was performed according to the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) and was approved by the ethics committee of the Jiangsu Province Hospital of Chinese Medicine; informed consent was not applicable because this was a modeling study. Model input parameters For this modelling analysis, we used Chinese age-stratified data for lung cancer incidence and integrated the effect of the smoking rate to model incidence rates for the initial probability of lung cancer for those in the non-screening or LDCT-screening-alone cohorts [ 22 – 24 ]. According to the 3 PRS-defined quantiles (i.e., the top 5%, 5%–95%, and the bottom 5%), we then calculated the relative risk (RR) of the PRS for lung cancer based on the published estimates of the standardized rates of lung cancer events in the three groups of heavy smokers with diverse genetic risk in the China Kadoorie Biobank (CKB) cohort [ 17 ]. The proportion of clinical stages for lung cancer detected by LDCT was derived from the screening results of the Wenling Lung Cancer Screening Program, which was initiated in 2018 to conduct annual LDCT screening for local populations at high risk of lung cancer with follow-up for 3 years. A total of 20,130 asymptomatic individuals had been screened by the program by the end of December 2022, and 287 patients were diagnosed with lung cancer; details of the proportions by cancer stage are presented in Table 1 . Annual screening followed the same screening protocol as in the Cancer Screening Program in Urban China, which determined positive findings by the morphologic features and size of the nodule [ 25 ]. For those diagnosed by normal clinical pathways, the probability of being diagnosed clinically is detailed by stage in Table 1, based on a hospital-based multi-center lung cancer retrospective clinical epidemiological survey in China (LuCCRES) [ 26 ]. The probability of transition from health to all-cause death was estimated as the all-cause mortality for smokers by age [ 24 , 27 ]. 
The probability of lung cancer-specific death was derived from a study by Zhang et al. [28] and was adjusted for smoking status [29, 30]. The probability that a cancerous state progressed to a more advanced state or to a maintenance state is detailed by cancer stage in Table 1, following Haaf's work [31]. The sensitivity and specificity of LDCT were based on a study that enrolled 9,522 person-times over five screening rounds from 2014 to 2018 in Sichuan, China [32]. Perfect attendance at screening was assumed for the base-case analysis, and uptake rates for the different screening modalities were incorporated in the scenario analysis [33]. The total estimated cost of the lung cancer screening program consisted of two parts, the direct screening cost and the indirect screening cost. Screening-related cost data were surveyed by the work team of a local lung cancer screening program and covered the expenses for public advertising, screening invitation management, staff salaries, and depreciation of screening machinery. For the indirect screening cost, we conducted a survey to estimate participants' expenses for transportation and wages lost through missed work. We estimated the treatment cost of lung cancer by stage from the database of the local medical insurance bureau, which included 4,947 patients and 107,248 relevant records. Given the potential diversity in treatment costs across the nation, we adjusted the stage-specific treatment costs using published metrics from the China Health Statistics Yearbook 2020 [38]. The cost of maintenance by stage was calculated using the standard follow-up process and the unit price of each test per the price list of medical services in public medical institutions. All costs in this study are expressed in CNY and are discounted to the 2022 price level at a discount rate of 5%. For quality-of-life adjustment, we used utility values for the lung cancer states by stage based on an EQ-5D-3L survey of 2,586 lung cancer patients in 8 provinces and 12 cities in China through the Cancer Screening Program in Urban China (CanSPUC). In addition, we derived the utility value for the CIS (carcinoma in situ) stage from a global systematic review by Sturza et al. [36]. The utility value for the maintenance state of each stage was derived from a domestic thesis published in 2016 [37]. Evaluated strategies We compared 15 alternative strategies, as shown in Table 2. The first 5 strategies involved non-screening for all heavy smokers as a blank control. The remaining 10 strategies were defined by combinations of risk stratification approaches (smoking pack-years or PRS) and initial screening ages from 50 to 70 years in 5-year bands. These strategies are described in Table 2. Outcome measures The primary outcomes in this study were life years (LYs), quality-adjusted life years (QALYs), and the costs of the different strategies. With the #0 Non-screening strategy as the reference, a strategy was deemed cost-effective if the incremental cost-effectiveness ratio (ICER), namely the difference between the overall costs of the two strategies divided by the difference in total QALYs gained, was below the cost-effectiveness threshold of 1–3 times the Gross Domestic Product (GDP) per capita per QALY gained (CNY 85,698–257,094) [39]. Sensitivity analysis and scenario analysis The robustness of the outcomes to uncertainties in the parameter estimates was examined through a series of univariate sensitivity analyses.
The screening cost, treatment cost, maintenance cost, and consumer price index (CPI) rate were set to vary by 30% around their base-case values. The discount rate was set to range from 0 to 8%. The RR of lung cancer associated with the PRS was set to range from 2.64 to 5.99. The sensitivity and specificity of the LDCT test were set to range from (0.632, 0.648) to (0.948, 0.972). Furthermore, probabilistic sensitivity analysis (PSA) was performed with 10,000 iterations to assess the joint uncertainty in the values of the input parameters. Input parameters were randomly drawn from beta, lognormal, or gamma distributions (see Table 1). For the scenario analysis, we evaluated the health benefits and harms of a lung cancer screening program that incorporated the uptake rates of the different screening modalities among the Chinese population at high risk of lung cancer. Software Modelling was performed in TreeAge Pro 2021 Version R2.1 (TreeAge Software, Williamstown, Massachusetts). IRB approval This project was approved by the Ethics Committee of the Taizhou Cancer Hospital (code: IRB-[2020]NO.6). Role of the funding source No specific funding was received for this analysis.
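As a companion to the probabilistic sensitivity analysis described above, the sketch below illustrates the usual mechanics of drawing parameters from beta, gamma, and lognormal distributions and summarizing the iterations with the net monetary benefit. The distribution parameters and the toy evaluate_model function are placeholders only; the actual PSA used the 10,000-iteration machinery built into TreeAge Pro with the distributions listed in Table 1.

```python
import numpy as np

rng = np.random.default_rng(2021)

def draw_parameters():
    """One PSA draw: probabilities ~ beta, costs ~ gamma, relative risks ~ lognormal.
    Shape/scale values are illustrative, not the published inputs."""
    return {
        "sensitivity_ldct": rng.beta(80, 10),                            # bounded on (0, 1)
        "specificity_ldct": rng.beta(90, 5),
        "cost_treatment_stage4": rng.gamma(shape=25.0, scale=4000.0),    # right-skewed, positive
        "rr_prs_top5": rng.lognormal(mean=np.log(3.9), sigma=0.2),       # RR centred near 3.9
    }

def evaluate_model(params):
    """Placeholder for a full model run; returns (incremental cost, incremental QALYs)."""
    d_cost = 2000.0 + 0.02 * params["cost_treatment_stage4"]
    d_qaly = 0.01 * params["sensitivity_ldct"] * params["rr_prs_top5"]
    return d_cost, d_qaly

wtp = 257094.0                                            # 3x GDP per capita (CNY) per QALY
results = [evaluate_model(draw_parameters()) for _ in range(10000)]
nmb = np.array([wtp * dq - dc for dc, dq in results])     # net monetary benefit per iteration
print(f"Probability cost-effective at CNY {wtp:,.0f}/QALY: {np.mean(nmb > 0):.1%}")
```

The fraction of iterations with positive net monetary benefit at a given willingness-to-pay is what populates a cost-effectiveness acceptability table such as Table 4.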
Results Base-case analysis In the absence of screening, the total number of lung cancer deaths per 100,000 heavy smokers aged 50–79 years was estimated to range from 4,434 to 10,586. The introduction of a screening program reduced lung cancer deaths, with the reduction in lung cancer deaths ranging from 0.31% to 15.80% across the set of screening strategies. About 95% of false-positive cases could be averted by incorporating PRS into the screening program relative to LDCT screening alone. The LYs and QALYs gained across the screening strategies compared with non-screening ranged from 60.26 to 134.93 and from 59.83 to 134.27, respectively. Specifically, screening strategies using PRS as an additional eligibility criterion obtained fewer LYs and QALYs than LDCT screening alone (see Table 3). Compared with non-screening, the #1 LDCT strategies cost between CNY 104,998.56 and CNY 176,565.66 per LY gained, and the #2 PRS&LDCT strategies cost between CNY 191,110.06 and CNY 260,918.20 per LY gained. When adjusted to QALYs, the #1 LDCT strategies cost between CNY 80,880.85 and CNY 150,050.15 per QALY gained, and the #2 PRS&LDCT strategies cost between CNY 156,691.93 and CNY 221,741.84 per QALY gained. All strategies showed an ICER below 3 times GDP per capita (CNY 257,094) per QALY gained. Assuming a cost-effectiveness threshold of 1 time GDP per capita (CNY 85,698) per QALY gained for the Chinese healthcare system, only annual LDCT screening with screening age ranges of 65–74 and 70–74 years was cost-effective, yielding ICERs of CNY 85,332.16 and CNY 80,880.85 per QALY gained, respectively, compared with non-screening. Table 3 provides the outcomes of the model simulation. Sensitivity analysis and scenario analysis The results of the sensitivity analyses are shown in Fig. 2 and Fig. 3. The most influential factors on the ICER were the specificity and sensitivity of LDCT, as well as the discount rate. The results were robust to changes in the important values from the base-case analysis, with no variation exceeding 3 times GDP per capita (CNY 257,094) per QALY gained, although they generally exceeded 1 time GDP per capita (CNY 85,698) (Fig. 3). Notably, the #1 LDCT screening strategy compared with the #0 Non-screening strategy had a better than 90% likelihood of being cost-effective at a willingness-to-pay threshold of 3 times GDP per capita (CNY 257,094) when the start age was older than 55 years. Meanwhile, the probability that the #2 PRS&LDCT screening strategy was cost-effective ranged from 33.77% to 79.68%, varying with the start age. When 1 time GDP per capita (CNY 85,698) was used as the threshold for being absolutely cost-effective, the acceptability at the willingness-to-pay threshold ranged from 1.44% to 34.18% for the #1 LDCT screening strategy and from 0.26% to 2.54% for the #2 PRS&LDCT screening strategy (Table 4). The tornado diagram illustrates the change in the incremental cost-effectiveness ratio (ICER), defined as the cost of the PRS&LDCT screening strategy minus the cost of the LDCT screening strategy divided by the difference in quality-adjusted life-years between the two strategies, when important input parameters were varied (one strategy at a time) by 10%–30% above or below their base-case values (as described in Sect. 2.5, Sensitivity analysis and scenario analysis). The vertical axis (dotted dark line) on the left shows the estimated ICER for the base-case analysis, and the vertical axis on the right shows the willingness-to-pay threshold.
In the tornado diagram, the black columns show the impact on the results when the input parameters decrease, and the grey columns show the impact when the input parameters increase. Abbreviations: LDCT, low-dose computed tomography; PRS, polygenic risk score; LC, lung cancer; CIS, carcinoma in situ; CPI, consumer price index. The dashed circle is the 95% confidence interval, which indicates the robustness of the model. The dashed lines show the cost-effectiveness thresholds of 1 time GDP per capita (CNY 85,698) and 3 times GDP per capita (CNY 257,094) per QALY gained, respectively; dots above the dashed line are cost-effective. Abbreviations: LDCT, low-dose computed tomography; PRS, polygenic risk score; WTP, willingness-to-pay threshold. In a previous study, a discrete choice experiment was used to create scenarios for several possible modalities of implementing lung cancer screening in the Chinese context [38], with the uptake rates of the different screening modalities estimated by a mixed-logit model. The uptake rate of screening by blood test was 0.08 lower than that of the baseline modality, i.e., LDCT screening. The compliance rate of LDCT screening in CanSPUC from 2013 to 2018 was 34.41%, 37.25%, and 48.21% in urban areas of Shanxi, Henan, and Zhejiang Provinces, respectively [5–7]. However, we found a substantially higher compliance rate for LDCT (91%) in the Wenling lung cancer screening program than those reported by CanSPUC. CanSPUC was a national cancer screening program targeting five cancer types (lung cancer, female breast cancer, liver cancer, upper gastrointestinal cancer, and colorectal cancer) using a combined screening modality, and the effect of a combined modality for five cancer types on compliance might differ from that of separate screening for each cancer type; we therefore used the compliance rate of LDCT screening from the Wenling lung cancer screening program in this study. The compliance rate of the PRS test was then estimated as 83.72% (approximately the LDCT compliance rate minus the 0.08 decrement for blood-based testing) for the scenario analysis. When we analysed the impact of the compliance rates of the LDCT and PRS tests, we observed patterns similar to those obtained in our base-case analysis with perfect attendance, despite some differences in the absolute effects due to discrepancies in the compliance rates of the two cohorts (Supplementary Table S6).
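The scenario analysis above scales the benefit of each screening arm by its expected uptake. A minimal way to express that adjustment is sketched below; the LDCT compliance and the 0.08 decrement for a blood-based PRS test are taken from the text, while the way they enter the per-round detection probability (multiplicative, with the LDCT sensitivity as a placeholder) is a simplifying assumption of this sketch rather than the exact TreeAge implementation.

```python
# Simplified compliance adjustment for the scenario analysis (assumed functional form).
uptake_ldct = 0.91      # reported LDCT compliance in the Wenling program (~91%)
uptake_prs  = 0.8372    # PRS-test compliance as estimated in the text (roughly uptake_ldct - 0.08)
sens_ldct   = 0.90      # placeholder LDCT sensitivity; the base-case value is given in Table 1

# Probability that a prevalent cancer is actually screen-detected in a given round:
p_detect_ldct     = uptake_ldct * sens_ldct
p_detect_ldct_prs = uptake_prs * uptake_ldct * sens_ldct   # must attend PRS testing AND LDCT
print(round(p_detect_ldct, 3), round(p_detect_ldct_prs, 3))
```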
Discussion We assessed the effectiveness and cost-effectiveness of lung cancer screening per the NCC recommendation when a PRS is introduced to further assess eligibility for lung cancer screening on top of the current definition of the high-risk population for lung cancer in China. The results showed that lung cancer screening programs incorporating a PRS of current performance would be cost-effective with start ages of 50–74 years, using a willingness-to-pay threshold of 3 times GDP per capita (CNY 257,094) per QALY gained. We also showed that when the compliance rate of the screening test decreased by 10%–20% (i.e., a more realistic, real-world scenario), the start age had to be postponed to 55 years for the screening program to remain cost-effective. However, when the willingness-to-pay threshold of 1 time GDP per capita (CNY 85,698) per QALY gained was applied, none of the screening strategies incorporating PRS remained cost-effective. Note that the #1 LDCT screening strategy was generally more cost-effective than the #2 PRS&LDCT screening strategy using the existing PRS tool, yielding more LYs or QALYs at lower cost. These results were sensitive to the sensitivity and specificity of LDCT, as well as to the discount rate. The results were robust when real-world compliance rates of the LDCT and PRS tests were used in place of perfect attendance. Overall, our results suggest that we should be more conservative in considering LDCT screening with PRS for lung cancer unless an optimized PRS with better performance emerges. In a modelling study, Huntley et al. modelled the application of PRS stratification using UK metrics and estimated that the PRS-defined high-risk quintile (20%) of the UK population would capture 26% of lung cancer cases [18]. However, lung cancer was not presented as one of the most plausible use cases for PRS stratification, on account of the current predictiveness of the PRS and the availability of established screening tools, compared with other cancer types such as breast, prostate, or colorectal cancer [18]. Furthermore, rather than considering age and PRS as mutually exclusive options, it is more rational to consider stratification based on a combination of age, PRS, and other risk factors (notably, for lung cancer, smoking pack-years and family history) [40]. Nevertheless, a limitation of our study is that the modelled strategies cover only one possible high-risk group, i.e., the top 5% based on the PRS in the CKB cohort. Because of the crucial effect of smoking status on lung cancer incidence, we were not able to reliably estimate the actual ability of the PRS alone to capture lung cancer cases using the area under the receiver operating characteristic curve, nor to assess the effectiveness and cost-effectiveness of scenarios incorporating other PRS-defined high-risk quantiles. Hence, there is still a need to assess alternative strategies by generating empirical evidence on the utility of risk stratification in population-based screening programs in the future. Furthermore, as histologic type is also a determinant of the long-term outcomes of lung cancer patients, the use of average transition probabilities between cancerous states might affect the analytical precision of this work. Further research may benefit from incorporating histology data in the construction of the natural history model for lung cancer.
With the introduction of a new PRS-stratified screening tool, its application in cancer screening can be considered from several perspectives. For population-based mass screening, Huntley et al. focused on providing additional screening to the PRS-defined high-risk group [18], whereas this study explored a modality that adds the PRS to the existing high-risk criteria to assess eligibility for lung cancer screening. Conversely, using a PRS-stratified screening tool to provide less intensive screening to low-risk individuals could also help reduce the unnecessary harms (i.e., radiation exposure or invasive biopsy) and costs of over-screening. Moreover, several studies have shown that risk-stratified screening programs [41, 42] and personalized screening randomised trials for breast cancer [43, 44] are ongoing in Europe and the United States. A risk-tailored screening modality that determines the screening age range, frequency, and method for each risk group according to the PRS might be a potential solution for lung cancer screening programs as well. Research into new applications of PRS in screening programs has typically involved breast cancer [45, 46], prostate cancer [47, 48], and colorectal cancer [49, 50]. The current findings can inform researchers in the field of cancer epidemiology who are considering early adoption of PRS in screening programs or trials for lung cancer, given that they provide extensive information on expected costs, effects, and cost-effectiveness under current conditions. According to our findings, the field of cancer screening and early detection could move in a direction where the PRS becomes cost-effective as a molecular diagnostic test in participants at high risk of lung cancer. Although the #1 LDCT screening strategy was generally more cost-effective than the #2 PRS&LDCT screening strategy using the existing PRS tool, the data obtained could potentially be used for better stratification, allowing more participants to receive better screening services. By the time real-world data relevant to the modelled scenarios become available, a more comprehensive and precise cost-effectiveness analysis should be performed for validation purposes. In light of the uncertainties and the insufficient performance of the current modality, it seems advisable to accompany adoption with further research to optimize performance through risk assessment and tailoring of the screening frequency and age range for lung cancer. Our findings suggest that lung cancer screening programs incorporating a PRS of current performance would hardly be cost-effective at a willingness-to-pay threshold of 1 time GDP per capita, and that the optimal screening strategy for lung cancer remains LDCT screening alone for now. Further optimization of the screening modality would make early adoption of PRS worth considering, in order to identify the best ways to implement lung cancer screening programs that improve the benefit–harm trade-offs and cost-effectiveness of implementation.
Introduction Several studies have shown that the polygenic risk score (PRS) is a potential candidate for realizing precision screening. Low-dose computed tomography (LDCT) screening for lung cancer has been shown to reduce lung cancer-specific and overall mortality, but the cost-effectiveness of diverse screening strategies remains unclear. Methods This comparative cost-effectiveness analysis used a Markov state-transition model to assess the potential effects and costs of screening strategies with and without PRS. A hypothetical cohort of 300,000 heavy smokers entered the study at age 50–74 years and was followed until death or age 79 years. The model was run with a cycle length of 1 year. All transition probabilities were validated, and the performance of the PRS was extracted from published literature. A societal perspective was adopted, and cost parameters were derived from the databases of the local medical insurance bureau. Sensitivity analyses and scenario analyses were conducted. Results The strategies incorporating PRS were estimated to yield ICERs of CNY 156,691.93 to CNY 221,741.84 per QALY gained compared with non-screening, across start ages of 50–74 years. The strategy of annual screening with LDCT alone from 70–74 years yielded an ICER of CNY 80,880.85 per QALY gained, which made it the most cost-effective strategy. The introduction of PRS as an additional eligibility criterion lowered the costs of the strategies but also reduced the LYs gained compared with LDCT screening alone. Conclusion The PRS-based conjunctive screening strategy for lung cancer screening in China was not cost-effective at a willingness-to-pay threshold of 1 time the Gross Domestic Product (GDP) per capita, and the optimal screening strategy for lung cancer remains LDCT screening for now. Further optimization of the screening modality may make adoption of PRS worth considering, and prospective evaluation remains a research priority. Supplementary Information The online version contains supplementary material available at 10.1186/s12885-023-11800-7. Keywords
Summary Evidence before this study China, which accounts for about one third of the world's smoking population, carries a substantial cancer burden, and lung cancer remains the leading cause of cancer-related death. The effectiveness of lung cancer screening programs in reducing mortality has been well confirmed by several trials (e.g., the National Lung Screening Trial), and the main challenge for lung cancer screening now appears to be the high false-positive rate of low-dose computed tomography (LDCT). To target existing cancer screening programs more efficiently, polygenic risk scores (PRSs) have been introduced. PRSs have the potential to identify individuals at risk of different types of cancer, to optimize treatment, and to predict survival outcomes. We searched PubMed, EMBASE, and Web of Science between January 1, 2000, and July 30, 2023, with no language restrictions, using the terms “China” or “Chinese”, “lung cancer”, “polygenic risk score” or “PRS” or “genetic test”, and “cost-effectiveness”, to identify published economic evaluations of PRS-based strategies for lung cancer screening in China. We found no previous studies describing the cost-effectiveness of PRS-based lung cancer screening in China. Only one previous study evaluated the effect of PRS-based screening in a model using UK metrics. Added value of this study This comparative cost-effectiveness analysis used a Markov state-transition model to assess the potential effects and costs of screening strategies with and without PRS. We found that the screening strategies incorporating PRS were estimated to be cost-effective compared with non-screening at a threshold of 3 times GDP per capita, with ICERs of CNY 156,691.93 to CNY 221,741.84 per QALY gained across start ages of 50–74 years. The strategy of annual screening with LDCT alone from 70–74 years yielded an ICER of CNY 80,880.85 per QALY gained, which made it the most cost-effective strategy. The introduction of PRS as an additional eligibility criterion lowered the costs of the strategies but also reduced the LYs gained compared with LDCT screening alone. Implications of all the available evidence Our findings suggest that lung cancer screening programs incorporating a PRS of existing performance would hardly be cost-effective at a willingness-to-pay threshold of 1 time GDP per capita, and that the optimal screening strategy for lung cancer remains LDCT screening alone for now; accordingly, we should be more conservative in considering LDCT screening with PRS for lung cancer. Supplementary Information
Acknowledgements The authors would like to thank all participants who took part in the survey. Data sharing statement The datasets used during the current study are available from the corresponding author on reasonable request. Authors’ contributions Concept and design: Zixuan Zhao; Acquisition of data: Lingbin Du, Shuyan Gu; Analysis and interpretation of data: Yi Yang, Weijia Wu; Drafting of the manuscript: Zixuan Zhao, Yi Yang; Critical revision of the paper for important intellectual content: Hengjin Dong, Shuyan Gu; Statistical analysis: Yi Yang, Weijia Wu; Obtaining funding: Zixuan Zhao, Lingbin Du; Supervision: Gaoling Wang, Hengjin Dong. Funding This work was supported by the Scientific Research Foundation of Nanjing University of Chinese Medicine (Grant No. 013038029001). Declarations Ethics approval and consent to participate The study was conducted according to the CHEERS reporting guidelines and approved by the Ethics Committee of the Taizhou Cancer Hospital (code: IRB-[2020]NO.6). Written informed consent was obtained from study participants before their enrollment into the study. Consent for publication Not applicable. Competing interests All authors declare that they have no competing interests.
CC BY
no
2024-01-15 23:43:46
BMC Cancer. 2024 Jan 13; 24:73
oa_package/ca/5c/PMC10787978.tar.gz
PMC10787979
38218793
Background Poor oral health is a major global public health problem [1]. Around 3.5 billion people worldwide are affected by oral diseases, predominantly untreated dental caries (tooth decay), severe periodontal disease, and tooth loss [2]. These oral conditions impact not only the health of the teeth and mouth but also systemic health [3]. Periodontal disease has been associated with various systemic diseases, such as diabetes, cardiovascular disease, and cancer [3, 4]. Observational studies have repeatedly shown associations between tooth loss, often resulting from periodontal disease, and several cancer types, particularly cancers of the upper gastrointestinal tract [5, 6]. Regular oral hygiene practices, namely toothbrushing, have been associated with a decreased risk of developing certain cancers [7, 8]. These associations between poor oral health and systemic diseases, including cancer, are suspected to share a common pathway mediated by the oral microbiome [9]. The mechanism of these associations may involve carcinogenic bacterial metabolites (e.g., acetaldehyde produced by ethanol-metabolizing oral microbes [10], and nitrosamines formed from nitrate reduced to nitrite by nitrate-reducing oral microbes [11, 12]), chronic systemic inflammation triggered by the oral microbiome, or specific periodontal pathogens and their interplay with the host immune response [9]. Several prospective studies have previously reported adverse associations between poor oral health, as measured by tooth loss and/or periodontal disease, and lung cancer incidence or mortality [13–15]. However, the relationship between oral health and lung cancer risk remains inconclusive, particularly since smoking may modify associations. Some studies found that smokers may have a greater risk of lung cancer if they have poor oral health [14, 16, 17], and other studies found no significant associations between oral health and lung cancer in never smokers [14, 17–21]. In addition, many of the existing studies have been from the United States [17, 20–22], where smoking is a common exposure that may have altered the relationship between oral health and lung cancer. The current evidence lacks studies from diverse populations, particularly studies from prospective cohorts outside of the US with adjustment for smoking and other major confounders of lung cancer associations. Here, we examined the association between poor dental health and lung cancer incidence and mortality in the Golestan Cohort Study, a large-scale, population-based prospective study with more than 50,000 participants in Golestan Province, located in northeastern Iran. We used multiple dental health measures, including tooth loss; the sum of decayed, missing, or filled teeth (DMFT score); and frequency of toothbrushing, to investigate the impact of poor dental health on lung cancer risk.
Methods Study population and questionnaire data As described in detail previously [ 23 ], the Golestan Cohort Study is a prospective, population-based cohort of 50,045 individuals between ages 40 and 75 years at baseline in Golestan Province, Iran. Participants were recruited from January 2004 to June 2008 and continue to be followed up. Written informed consent was obtained from all study participants at the time of enrollment. The Golestan Cohort Study was approved by the Institutional Review Boards of the Digestive Disease Research Institute of Tehran University of Medical Sciences, the International Agency for Research on Cancer, and the United States National Cancer Institute. At baseline, participants were interviewed in-person by trained staff using a structured questionnaire to collect sociodemographic and lifestyle information, including age, sex, ethnicity, place of residence, education, and detailed information on the use of cigarettes, nass (a local chewing tobacco product), and opium (e.g., age at initiation and cessation and amount of use per day). Opium consumption is a known carcinogen [ 24 ] and risk factor for different cancers including lung cancer [ 25 ]. Individuals who use opium are exposed to most of the carcinogens present in tobacco smoke [ 26 ]. Fruit and vegetable intake were assessed at baseline using a food frequency questionnaire. Socioeconomic status (SES) was estimated based on a composite wealth score determined by ownership of vehicles, property, and household appliances [ 27 ]. The high reliability and validity of self-reported cigarette smoking and opium use in this population have been demonstrated previously [ 28 , 29 ]. Dental health assessment As part of the baseline interview, trained medical staff counted each participant’s total number of teeth and the number of decayed, missing, or filled teeth, the sum of which constitutes the DMFT score. Participants were also asked about toothbrushing habits, and toothbrushing frequency was categorized as never, non-daily, and daily. The reliability of tooth counts and self-reported brushing frequency have both been shown to be high in this population [ 8 , 30 ]. Specifically, a pilot study was previously conducted for the Golestan Cohort Study where the reliability of teeth counts was evaluated based on repeated examinations of 130 participants occurring two months apart [ 30 ]. These results showed that the reliability of the teeth counts was high, with 88.3% agreement and a kappa statistic of 0.86. Similarly, the reliability of self-reported toothbrushing frequency has been evaluated based on a subset of the cohort (11,418 randomly selected participants) who completed a repeat questionnaire approximately 5 years after the baseline interview where participants were asked how often they brush their teeth [ 8 ]. The self-reported toothbrushing frequency at baseline and from the repeated assessment showed excellent agreement with 77.9% concordance ( p < 0.001). The maximum number of teeth and DMFT score were coded as 32 to represent the total number of adult teeth including third molars because these are not routinely extracted in this population. Case ascertainment All study participants were followed annually through telephone surveys or home visits, and provincial death and cancer registry data were reviewed monthly to identify all incident cancers and deaths due to any cause. 
In the case of death, a validated verbal autopsy was performed where the closest relative of the deceased was interviewed by a trained physician to obtain information about the cause of death [ 31 ]. Cancer diagnoses and deaths were confirmed by linking to the Golestan population-based cancer registry [ 32 ]. Primary lung cancer was defined using International Classification of Diseases, Tenth Revision (ICD-10) codes C34.0-C34.9. Six subjects diagnosed with nonepithelial malignancies (i.e., 4 subjects with lymphoma and 2 subjects with neuroendocrine carcinoma) of the lung were excluded from the present analysis. Statistical analysis Of the 50,045 cohort participants, 9 subjects missing dental status variables and 83 subjects with other missing covariates were excluded, in addition to the 6 subjects with nonepithelial lung cancer diagnoses, leaving a total of 49,947 subjects remaining in the analysis. We used age-dependent exposure metrics to account for the strong correlation between oral variables and age and sex [ 33 ]. Specifically, a loess model was fit to estimate the predicted number of lost teeth or DMFT score at each integer year of age, stratified by sex. The loess smoothing parameter was selected based on the bias-corrected Akaike information criterion. Excess numbers of lost teeth and DMFT score were calculated for each participant by taking the difference between the loess predicted age- and sex-specific number of lost teeth/DMFT score and the observed number of lost teeth/DMFT score. Those with a difference of 0 or fewer than the expected number were categorized into the reference group, and the remaining subjects with excess tooth loss/DMFT were categorized into tertiles. Cox proportional hazards regression models were used to estimate hazard ratios (HRs) and 95% CIs for the association between oral health variables (i.e., tooth loss, DMFT, and toothbrushing frequency), other potential risk factors (described below) and lung cancer incidence and mortality. The entry time was defined as the date of enrollment into the Golestan Cohort Study. Follow-up ended on the date of lung cancer or other cancer diagnosis (for lung cancer incidence analysis only), death, or last follow-up through March 31, 2021, whichever came first. A total of 518 participants (1.04%) were lost to follow-up during the study period. Cox models were run separately for each dental health variable, including the following sociodemographic and lifestyle variables: age, sex, SES (in quartiles) [ 27 ], ethnicity (Turkmen or non-Turkmen), residence (urban or rural), education (illiterate or literate), nass use (never or ever), cigarette use, and opium use. For cigarette smoking, participants were categorized as never smokers or in tertile categories of their cumulative pack-years of smoked cigarettes, with separate analyses run for former and current smokers. Cumulative pack-years of cigarette smoking was calculated as the number of packs (20 cigarettes in each pack) smoked per day multiplied by the number of years of smoking. For opium use, participants were categorized as never users or in tertile categories of their number of years of consumption. We further performed analyses stratified by cigarette smoking and opium use (never smoker/opium user or ever smoker/opium user) and tested for interactions between oral health variables and smoking/opium use (coded as a binary variable of never or ever smoker/opium user) using the likelihood ratio test. 
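A minimal Python sketch of the age- and sex-specific "excess" exposure construction described above is shown below. The published analysis was done in R with a loess model whose span was chosen by bias-corrected AIC; the statsmodels lowess call here uses a fixed span and synthetic data, and is meant only to illustrate the logic. Column names such as age, sex, and dmft are assumptions for the example.

```python
import numpy as np
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess

def excess_and_tertiles(df, value_col="dmft", frac=0.5):
    """Compute observed minus smoothed (age- and sex-specific) values, then assign the
    reference group (excess <= 0) or tertiles of positive excess."""
    df = df.copy()
    df["expected"] = np.nan
    for sex, grp in df.groupby("sex"):
        fit = lowess(grp[value_col], grp["age"], frac=frac)     # sorted (age, fitted) pairs
        xs, idx = np.unique(fit[:, 0], return_index=True)       # de-duplicate ages
        df.loc[grp.index, "expected"] = np.interp(grp["age"], xs, fit[idx, 1])
    df["excess"] = df[value_col] - df["expected"]
    df["category"] = "reference"                                 # excess <= 0
    pos = df["excess"] > 0
    df.loc[pos, "category"] = pd.qcut(df.loc[pos, "excess"], 3,
                                      labels=["tertile1", "tertile2", "tertile3"]).astype(str)
    return df

# Example with synthetic data (for illustration only):
rng = np.random.default_rng(0)
demo = pd.DataFrame({"age": rng.integers(40, 76, 500),
                     "sex": rng.choice(["F", "M"], 500),
                     "dmft": rng.integers(0, 33, 500)})
print(excess_and_tertiles(demo)["category"].value_counts())
```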
Dental health variables were tested for a linear trend by assigning ordinal numbers to each category, and the Wald test was used for testing for a global trend. Deviations from the proportional hazard assumption were not detected in any of the models based on the Schoenfeld residuals test. All statistical tests were two-sided with a significance level of 0.05. The R programming environment [ 34 ] (version 4.2.2) was used for all statistical analyses.
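The analyses above were run in R; for readers who prefer Python, the following is a rough equivalent using the lifelines package (assumed to be installed), fitting a Cox model for one dental exposure with a few covariates and checking the proportional hazards assumption via the scaled Schoenfeld residual test. The data are simulated stand-ins, and the variable names and coding are hypothetical rather than the cohort's actual analysis file.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in data; in the real analysis each row is a cohort participant.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "time": rng.exponential(14, n).clip(0.1, 17),   # years of follow-up
    "lung_cancer": rng.binomial(1, 0.05, n),        # event indicator (rate inflated for the toy example)
    "dmft_tertile": rng.integers(0, 4, n),          # 0 = reference, 1-3 = excess tertiles (treated as ordinal)
    "age": rng.integers(40, 76, n),
    "male": rng.binomial(1, 0.4, n),
    "pack_years_cat": rng.integers(0, 4, n),
    "opium_years_cat": rng.integers(0, 4, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="lung_cancer")
cph.print_summary()                                 # hazard ratios = exp(coef), with 95% CIs

# Proportional hazards check based on scaled Schoenfeld residuals (analogous to R's cox.zph()):
cph.check_assumptions(df, p_value_threshold=0.05)
```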
Results Table 1 shows the baseline characteristics of the cohort participants, overall and by DMFT category. The majority of cohort participants had never smoked cigarettes (82.8%) or used nass (92.3%) or opium (83.1%). Overall, the mean cigarette smoking pack-years was 16.9 (SD 18.6) for ever smokers (mean smoking pack-years was 16.3 [SD 21.0] and 17.3 [SD 16.9] for former and current smokers, respectively), and the mean duration of opium use was 12.2 (SD 10.7) years for ever opium users. The mean number of missing teeth and the mean DMFT score were 18.3 (SD 9.55) and 23.4 (SD 8.73), respectively, and more than half of the cohort participants (55.7%) reported never brushing their teeth. Relative to subjects with the expected DMFT score or lower, a larger proportion of individuals in the highest tertile of DMFT were male, lived in rural areas, smoked cigarettes, and used opium or nass (Table 1 ). During a median 14 years of follow-up there were 119 incident lung cancer cases (crude incidence rate of 17.9 cases per 100,000 person-years), and 98 of these people died of lung cancer. Of the 119 lung cancer cases, 53 (44.5%) were never cigarette smokers, 66 (55.5%) were never opium users, and 45 (37.8%) used neither. We first examined associations between cigarette smoking, opium use, and nass use and lung cancer incidence (Fig. 1 , Table S1 ). Age, cigarette smoking, and opium use were significantly associated with an increased risk of lung cancer, whereas sex, SES, ethnicity, area of residence, education, and nass use did not have a significant association with lung cancer risk, with mutual adjustment for all potential risk factors including the dental health variables. Compared with never smokers, former smokers with over 20 pack-years of smoked cigarettes had a higher risk of lung cancer (HR 2.78 [95% CI: 1.14, 6.80] in a model that included DMFT), but former smokers with 20 pack-years or less did not. All current smokers had higher lung cancer risk compared with never smokers regardless of the number of pack-years. Current smokers with 5.5 pack-years or less and current smokers with 5.5–20 pack-years had HRs of 4.05 (95% CI: 1.87, 8.75) and 4.27 (95% CI: 2.09, 8.71), respectively. Lung cancer risk was further increased for current smokers with over 20 pack-years with HR of 7.98 (95% CI: 4.39, 14.5). Ever using opium for over 5 years was also significantly associated with an increased lung cancer risk with a HR of around 2.2 compared with never users. Ever use of nass was not significantly associated with an increased risk of lung cancer compared with never use. Poor dental status was associated with an increased risk of incident lung cancer (Fig. 1 , Table S1 ) in models adjusted for known and suspected lung cancer risk factors. Specifically, there was an increasing trend in lung cancer risk across the DMFT tertiles (linear trend, p = 0.011; global trend, p = 0.011). Relative to individuals with the expected DMFT score or less, the HR increased from 1.27 (95% CI: 0.73, 2.22) to 2.15 (95% CI: 1.34, 3.43) across the first two tertiles of DMFT but dropped to 1.52 (95% CI: 0.81, 2.84) for the highest tertile (Fig. 1 , Table S1 ). The highest tertile of tooth loss was also associated with an increased lung cancer risk with a HR of 1.68 (95% CI: 1.04, 2.70) compared with subjects with the expected number of lost teeth or fewer, but no associations were found for the first two tertiles of tooth loss (linear trend, p = 0.043; global trend, p = 0.19) (Fig. 1 , Table S1 ). 
There were no significant associations between toothbrushing frequency and lung cancer risk (Fig. 1 , Table S1 ). We further examined associations between dental status, other potential risk factors, and lung cancer incidence, stratified by cigarette smoking and opium use, important risk factors in this population. Subjects were stratified into binary groups of never ( n = 37,358; 45 cases) and ever ( n = 12,589; 74 cases) users of cigarettes or opium. For the non-oral health related risk factors (i.e., age, sex, SES, ethnicity, area of residence, education, former and current smoking pack-years, and opium and nass use), the results did not change upon stratification (Table S1 ). For DMFT, the results were similar among never and ever cigarette/opium users, with significant associations for the second tertile of DMFT (Fig. 1 , Table S2 ). For never smoker/opium users, HRs were 1.59 (95% CI: 0.71, 3.60), 2.02 (95% CI: 0.94, 4.33), and 1.77 (95% CI: 0.55, 5.66) from the first to the third tertile of DMFT. For ever smoker/opium users, HRs were 1.06 (95% CI: 0.49, 2.30), 2.23 (95% CI: 1.21, 4.09), and 1.42 (95% CI: 0.66, 3.03) from the first to the third tertile of DMFT. Strata-specific HRs were similar to the overall unstratified HRs (2.15 for the second DMFT tertile; Fig. 1 , Table S1 ). Stratification also did not change the results for tooth loss or toothbrushing frequency (Fig. 1 , Table S2 ). We found no evidence of a statistical interaction between smoking/opium use and any of the dental status variables ( p > 0.49). For lung cancer mortality, associations with dental health variables were similar to those for incidence but had slightly elevated risk estimates for DMFT (Table S3 , Fig. S1 ). The second tertile of DMFT was significantly associated with an increased risk of lung cancer mortality, with a HR of 2.55 (95% CI: 1.50, 4.33), and mortality risk significantly increased with higher DMFT tertiles (linear trend, p = 0.0038; global trend, p = 0.0046). For tooth loss, the highest tertile of tooth loss had a HR of 1.71 (95% CI: 1.01, 2.92), and there was a marginally significant linear trend across the tertiles of tooth loss ( p = 0.049). Associations with toothbrushing frequency remained null for lung cancer mortality. Sensitivity analyses excluding the first two years of follow-up did not meaningfully change the Cox regression analysis results for either lung cancer incidence or mortality (Table S4 , Fig. S2 ). Excluding subjects with no teeth (8,709 subjects with no teeth, including 34 incident lung cancer cases) did not change the results for associations between DMFT and lung cancer incidence, but associations with tooth loss and toothbrushing frequency were null (Table S5 , Fig. S3 ). Adjusting for daily fruit and vegetable intake also did not substantially change associations with lung cancer incidence (Table S6 , Fig. S3 ).
Discussion In this large, prospective cohort study, more than half of the cohort members reported never brushing their teeth, and the participants had on average 23.4 decayed, missing, or filled teeth. Higher DMFT scores were associated with a progressively higher risk of both lung cancer incidence and mortality, and the second tertile of individuals with higher-than-expected DMFT score had more than a two-fold risk of lung cancer compared with subjects who had the expected DMFT score or less. Similarly, there was a ~ 1.7-fold increased risk of lung cancer for subjects in the highest tertile of increased tooth loss compared with those with the expected number of lost teeth or fewer. These dental health variables were significantly associated with lung cancer risk after simultaneous adjustment for other risk factors, including age, cigarette smoking, and opium use. We found no associations between toothbrushing frequency and lung cancer risk. Our results from the Golestan Cohort Study show that poor dentition (i.e. higher numbers of tooth loss or higher DMFT score) is independently associated with lung cancer risk, and it is unlikely that these results can be explained by residual confounding by tobacco or opium use. This is in line with previous studies of tooth loss and lung cancer, with a recent meta-analysis including seven studies showing a relative risk of 1.64 (95% CI: 1.44, 1.86) comparing the highest and lowest category of tooth loss for incident lung cancer [ 14 ]. Tooth loss often results from periodontal disease, which has also been shown to be associated with an increased risk of lung cancer in multiple prospective cohort studies (meta-analyzed HR of 1.40 (95% CI: 1.25, 1.58) [ 35 ]). Also similar to our results, a cohort study in Japan found that higher numbers of teeth lost were associated with an increased risk of lung cancer mortality (0–9 teeth remaining vs. 20 or more teeth remaining, HR 1.75; 95% CI: 1.08, 2.83), with adjustment for smoking and other potential confounders [ 36 ]. However, there have also been other studies, such as the prospective cohort analysis of the Sister Study cohort in the US, that did not find a significant association of periodontal disease or tooth loss with lung cancer mortality [ 37 ]. Almost half of the lung cancer cases in our study were never smokers. In addition, more than half of the cases had never used opium, which is another known lung cancer risk factor that is relevant in this population [ 25 ], and 37.8% used neither cigarettes nor opium. We furthermore showed that associations with dental status remained largely unchanged upon stratification by smoking status and opium use. In previous cohort studies, some found no significant associations between poor oral health (tooth loss and/or periodontal disease) and lung cancer incidence [ 14 , 17 , 19 , 20 ] or mortality [ 21 ] in never smokers but found poor oral health to increase risk for current [ 14 ] or former [ 17 ] smokers. It is possible that smoking may modify associations between poor oral health and lung cancer risk, but more studies are needed to clarify this. The mechanism for the association between oral health and lung cancer likely involves the oral microbiome. Oral microbes produce various metabolites that have been linked to carcinogenesis, such as acetaldehyde [ 10 ], nitrosamines [ 38 ], and reactive oxygen species [ 9 ]. 
Some authors have suggested that edentulism and the healing of gum tissue may ameliorate the negative effects of tooth loss by shifting the oral microbiome away from the overgrowth of bacterial species that produce carcinogenic metabolites [39], but we did not find strong evidence to support this hypothesis when we excluded subjects with no teeth from the analysis (Table S5). The oral microbiome can also impact cancer risk at distant sites through systemic inflammation, which is a key component of both periodontal disease and carcinogenesis [40, 41]. Recently, a few studies have found potential links between the oral microbiome and lung cancer. A case-cohort study of three US cohorts found that greater diversity in the oral microbiome was associated with a lower risk of developing lung cancer and that the relative abundance or presence of certain genera was associated with risk; for example, a higher relative abundance of Streptococcus was associated with increased lung cancer risk [42]. In addition, two nested case-control studies (one from a low-income population in the southeastern US [43] and another among never smokers in China [44]) found different specific taxa to be associated with increased or decreased lung cancer risk. Another recent nested case-control study conducted in the US found that serum antibodies to 13 periodontal bacteria were mostly inversely associated with lung cancer risk, possibly indicating immunity against certain bacteria that may help reduce cancer risk [45]. Additional types of evidence beyond observational studies are warranted to understand the exact mechanism of the association between poor oral health, the oral microbiome, and lung cancer. Our study has several strengths and limitations. The major strengths of this study include its prospective design and low loss to follow-up. We used multiple measures to evaluate dental status, which were assessed by trained interviewers. However, our study did not examine the participants’ periodontal status, so we could not evaluate the effect of this component of poor oral health. We carefully adjusted for (and, when necessary, stratified by) multiple potential confounders, including cigarette smoking, opium use, and SES, but, as with all observational epidemiologic studies, our findings may have been impacted by unmeasured confounders or residual confounding. We also accrued a limited number of lung cancer cases, which precluded analysis by histology and restricted statistical power. Finally, all dental health measures were ascertained at a single time point, and accounting for changes in dental status over the follow-up period might have led to a different exposure ranking of cohort members.
Conclusion We found evidence in this cohort that poor dental status, as indicated by higher DMFT scores and greater tooth loss, was associated with an increased risk of lung cancer incidence and mortality after controlling for other important risk factors such as cigarette smoking and opium use. These results persisted even when the analysis was restricted to never users of cigarettes or opium. We did not find significant associations for toothbrushing frequency. While known risk factors such as smoking and opium use remain important, our results indicate that poor oral health may also contribute to lung cancer risk.
Background Poor oral health has been linked to various systemic diseases, including multiple cancer types, but studies of its association with lung cancer have been inconclusive. Methods We examined the relationship between dental status and lung cancer incidence and mortality in the Golestan Cohort Study, a large, prospective cohort of 50,045 adults in northeastern Iran. Cox proportional hazards models were used to estimate hazard ratios (HRs) and 95% confidence intervals (CIs) for associations between three dental health measures (i.e., number of missing teeth; the sum of decayed, missing, or filled teeth (DMFT); and toothbrushing frequency) and lung cancer incidence or mortality with adjustment for multiple potential confounders, including cigarette smoking and opium use. We created tertiles of the number of lost teeth/DMFT score in excess of the loess adjusted, age- and sex-specific predicted numbers, with subjects with the expected number of lost teeth/DMFT or fewer as the reference group. Results During a median follow-up of 14 years, there were 119 incident lung cancer cases and 98 lung cancer deaths. Higher DMFT scores were associated with a progressively increased risk of lung cancer (linear trend, p = 0.011). Compared with individuals with the expected DMFT score or less, the HRs were 1.27 (95% CI: 0.73, 2.22), 2.15 (95% CI: 1.34, 3.43), and 1.52 (95% CI: 0.81, 2.84) for the first to the third tertiles of DMFT, respectively. The highest tertile of tooth loss also had an increased risk of lung cancer, with a HR of 1.68 (95% CI: 1.04, 2.70) compared with subjects with the expected number of lost teeth or fewer (linear trend, p = 0.043). The results were similar for lung cancer mortality and did not change substantially when the analysis was restricted to never users of cigarettes or opium. We found no associations between toothbrushing frequency and lung cancer incidence or mortality. Conclusion Poor dental health indicated by tooth loss or DMFT, but not lack of toothbrushing, was associated with increased lung cancer incidence and mortality in this rural Middle Eastern population. Supplementary Information The online version contains supplementary material available at 10.1186/s12885-024-11850-5. Keywords
Electronic supplementary material Below is the link to the electronic supplementary material.
Author contributions Y.Y.: Conceptualization, data curation, software, formal analysis, methodology, writing–original draft. C.C.A.: Conceptualization, formal analysis, methodology, writing–original draft, project administration. G.R.: Data curation, investigation, writing–review and editing. A.G.: Formal analysis, writing–review and editing. H.P.: Resources, investigation, writing–review and editing. M.K.: Investigation, writing–review and editing. A.P.: Investigation, writing–review and editing. F.K.: Methodology, project administration, writing–review and editing. P. Bo.: Project administration, writing–review and editing. P. Br.: Project administration, writing–review and editing. S.M.D.: Project administration, writing–review and editing. E.V.: Data curation, writing–review and editing. R.M.: Resources, supervision, investigation, project administration, writing–review and editing. A.E.: Conceptualization, data curation, software, formal analysis, supervision, investigation, methodology, writing–original draft, project administration. Funding The Golestan Cohort Study was supported by Tehran University of Medical Sciences (grant no: 81/15), Cancer Research UK (grant no: C20/ A5860), the Intramural Research Program of the National Cancer Institute, National Institutes of Health, and various collaborative research agreements with the International Agency for Research on Cancer. Data availability The data that support the findings in this study are available from the corresponding authors upon request. Declarations Ethics approval and consent to participate Written informed consent was obtained from all study participants at the time of enrollment. The Golestan Cohort Study was approved by the Institutional Review Boards of the Digestive Disease Research Institute of Tehran University of Medical Sciences, the International Agency for Research on Cancer, and the United States National Cancer Institute. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:46
BMC Cancer. 2024 Jan 13; 24:74
oa_package/70/d7/PMC10787979.tar.gz
PMC10787980
38218769
Introduction Vitamin D deficiency is one of the most ignored and under-diagnosed conditions in the general population [1, 2]. Past studies among various communities and locations in the Indian population have reported a prevalence of vitamin D deficiency ranging from 50 to 94%, which is indicative of the magnitude of the problem in the country; these deficiencies have also been reported in individuals with diagnosed systemic illness [3]. The major causes of vitamin D deficiency in the Indian population can be attributed to low dietary vitamin D intake, an increasingly indoor lifestyle, decreased exposure to sunlight, and increased air pollution, which in turn hampers the synthesis of vitamin D by the skin after absorption of UV rays [4, 5]. Past research has shown that it is imperative to address vitamin D deficiency and correlate it with systemic health issues, since vitamin D deficiency is implicated in autoimmune diseases, infectious diseases, cancer, skeletal manifestations, and depression [6]. Research has also shown that vitamin D deficiency leads to problems in oral health, viz. poor tooth formation, development, and calcification in younger adults, poor periodontal health, and malignant oral lesions [7]. Vitamin D also plays a vital role in innate immune responses by promoting immune cell differentiation and maturation. Active vitamin D binds to the vitamin D receptors (VDR) on immune cells, which in turn promotes gene expression and the regulation of protective peptides [8]. This non-classical action of the VDR and CYP27B1 expressed in various cells and tissues is not associated with calcium homeostasis but instead depends on pathogen detection and cytokine production via interleukin production. The circulating active form, 1,25-dihydroxyvitamin D, forms complexes with the retinoid X receptor and vitamin D binding protein (RXR + VDBP) to attach to the VDRs on various target cells and tissues. Finally, this complex binds to vitamin D response elements in the promoter regions of target genes to stimulate the production of the antimicrobial peptide LL-37 [9, 10]. Vitamin D regulates the innate immune response via the production of these peptides at various concentrations in body fluids [11] (illustrated in Fig. 1). The cytokines released during this immune response include pro-inflammatory cytokines that stimulate the cells, which in turn mount an immunomodulatory response. Vitamin D also contributes to the regulation of T-helper cells, thereby exerting anti-inflammatory effects. IL-17A is produced by Th17 cells and regulates NF-κB and other mitogen-activated protein kinases, which in turn regulate IL-6 expression. IL-6 is an important interleukin for host defence and the immune response, especially in the oral cavity, where an inadequate immune response leads to destruction of tooth structure and causes caries [12]. With this in mind, the present study evaluated the association of salivary vitamin D levels and the levels of LL-37, IL-6, and IL-17A with the severity of dental caries.
Materials and methods The necessary approvals for the present study were obtained from the Central Ethics Committee, Nitte (Deemed to be) University; approvals NU/CEC/2020/0339 and NU/CEC/2022/291 were obtained prior to the initiation of the study and upon renewal. Informed consent was obtained from each individual patient after they were provided with the patient information sheet. The experimental protocols were approved by the scientific committee prior to the commencement of the study. A total of 377 patients who visited the outpatient department of Conservative Dentistry and Endodontics at the A.B. Shetty Memorial Institute of Dental Sciences, Deralakatte, Mangalore, were included in the study. Of these, 272 patients were designated as caries active and 105 as caries free. The sample size (N) was calculated for the difference between two proportions using the standard formula, where P1 was the proportion in the first group (39%), P2 was the proportion in the second group (24%), α was the significance level (5%), and β was the probability of a type II error (20%, corresponding to 80% power). All laboratory work was conducted at the Central Research Laboratory, K.S. Hegde Medical Academy, Deralakatte, Mangalore. The study period was from August 2018 to January 2022 (4 years). During the study, all patients were evaluated, and informed consent was obtained from them using an information sheet. Designation as caries free or caries active was based on defined inclusion and exclusion criteria. The inclusion criteria comprised individuals aged 18–40 years without symptoms of any systemic and/or local illness that could potentially hamper salivary flow. Individuals who were following a restricted diet, exhibiting symptoms of generalized gingivitis or periodontitis, undergoing long-term medication, having poor oral hygiene habits, who were chronic smokers and/or alcoholics, and individuals consuming specific nutritional supplements were excluded from the study. For evaluating the presence of caries, patients were seated in a dental chair under ideal illumination and examined using a mouth mirror and straight probe. The DMFT index (Decayed, Missing, and Filled Teeth) was recorded using the WHO Oral Health Survey format, Annexure 1 [13]. Individuals were then divided into two groups based on the prevalence of caries and their DMFT score: the Caries Free group had a DMFT score of 0, while the Caries Active group had DMFT scores ranging from 1 to 10. The individuals in the Caries Active group were further subdivided into Decay Group 1 (1–3 carious teeth), Decay Group 2 (4–10 carious teeth), and Decay Group 3 (> 10 carious teeth). General information such as age, sex, dietary habits (frequency of food intake and vegetarian or non-vegetarian diet), and brushing habits was also recorded, in addition to the medical history. Following the generation of the DMFT index, a PUFA index (Pulpal involvement, Ulceration, Fistula, and Abscess) [14] was recorded to delineate the oral conditions of the caries-active individuals. A score was assigned and recorded based on visible root pulp, ulceration of the oral mucosa by root fragments, and the presence of a fistula or abscess. Pulpal involvement (P/p) was recorded when individuals had a visible opening of the pulp chamber of a tooth due to caries, leaving only the roots and root fragments.
Ulceration (U/u) was recorded in individuals who exhibited significant sharp-object trauma from either a broken/dislocated tooth or root fragments as a result of caries. Fistula (F/f) was recorded in those individuals in whom pulpal involvement was accompanied by a pus-releasing sinus tract. Finally, Abscess (A/a) was recorded in those individuals who exhibited a pus-containing swelling as a result of pulpal involvement. In order to collect saliva from the individuals, the Navazesh protocol was used [ 15 ]. Individuals were instructed to abstain from eating or drinking, brushing their teeth, using mouthwash, or smoking for two hours prior to salivary sample collection. Samples were collected between 10 and 11 a.m. In order to maintain a stress-free atmosphere and not hinder salivary flow, the individuals were seated in regular chairs. A Tarsons saliva collection tube was used to collect 5 ml of saliva that had gathered on the floor of the mouth of the individuals. The collected saliva was then centrifuged, and the supernatant was stored at -20 °C until further analysis. Analysis of salivary vitamin D levels The analysis of salivary Vitamin D levels was carried out using the 25OH Vitamin D Total ELISA Kit microtiter plates (Epitope Diagnostics). 20 μL of sample, calibrators, and controls were added to the wells of the plate along with 100 μL of the Vitamin D assay buffer. The plates were covered with aluminium foil and static incubated at room temperature for 30 min. Following this, 25 μL of biotinylated Vitamin D analog was added to each well and static incubated at room temperature for 1 h. Each well was then washed 5 times with 350 μL of the wash solution. This was followed by the addition of 100 μL of streptavidin-horseradish peroxidase (HRP) and static incubation at room temperature for 30 min to form the Vitamin D antibody–Vitamin D–biotinylated analog–HRP-conjugated streptavidin complex. The unbound complexes were removed from the wells by washing them five times with 350 μL of buffer solution, and 100 μL of tetramethylbenzidine (TMB) was added. The plates were static incubated one final time for 20 min, after which 100 μL of the stop solution was added. Finally, the absorbance of the reaction mixture was measured spectrophotometrically at 450 nm within a maximum of 10 min. Analysis of salivary cathelicidin levels For the analysis of salivary cathelicidin levels, a pre-coated micro-ELISA plate containing the human LL-37-specific antibody (Sincere Biotech) was used. Controls and samples were loaded into the wells of the ELISA plate along with the specific antibody and incubated at room temperature. Following this, the biotinylated detection antibody (specific to human LL-37) and avidin-conjugated HRP were added to the wells and incubated once again under static conditions. This was followed by a washing step to remove the unconjugated complexes, after which a substrate was added to each well. Those wells in which complexation occurred turned blue. A final stop solution was added to halt the reaction, and the optical density was measured spectrophotometrically at 450 ± 2 nm. Analysis of salivary IL-6 and IL-17A levels The analysis of IL-17A and IL-6 was done using commercially available ELISA kits (Booster Biologicals). For IL-17A, the principle used was the solid-phase sandwich ELISA. The samples and standards were added to the wells of the ELISA microtiter plates, facilitating the binding of IL-17A to the immobilized antibodies.
Following a washing step, HRP-conjugated anti-IL-17A antibody solution was added to the wells, creating an antibody–antigen–antibody sandwich in the process. TMB substrate solution was added to the ‘sandwich’ and incubated, followed by stopping the reaction using a stop solution. Finally, the absorbance was measured spectrophotometrically at 620 nm. Similar to IL-17A, a sandwich ELISA approach was also used to measure the IL-6 levels. The microtiter ELISA plates contained immobilized rat monoclonal antibodies, to which standards and samples were added to facilitate the binding of IL-6. Following this, an anti-IL-6 antibody was added to create the antibody–antigen–antibody sandwich. After a period of incubation, HRP-conjugated streptavidin was added to the wells and incubated. Following a wash step to remove unconjugated elements, TMB was added to the wells followed by a stop solution. The final absorbance was measured spectrophotometrically at 450 nm, and the absorbance of the samples was compared to that of the standards. Statistical analysis Descriptive statistical analysis was performed on the collected data (frequency, percentage, mean, and standard deviation). A chi-square test was performed for comparing the salivary parameters between the Caries Active and Caries Free groups. Analysis of variance (ANOVA) and the t-test were also performed for the two groups. Receiver operating characteristic (ROC) analysis was performed in order to obtain the optimum cut-off levels of sensitivity and specificity for salivary Vitamin D, LL-37, IL-17A and IL-6. SPSS (version 23; IBM SPSS Corp, Armonk, NY, USA) software was used to perform the statistical comparisons, and all P values were two-sided. The significance level was set at P ≤ 0.05.
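As an illustration only of how an optimal cut-off with its sensitivity, specificity and AUC can be derived from ROC analysis of the kind described above, the sketch below uses scikit-learn and the Youden index on simulated data; the study itself used SPSS, and none of the values here are the study's measurements.

```python
# Sketch of ROC analysis with a Youden-index cut-off, analogous to the SPSS
# analysis described above. Data below are simulated placeholders, not the
# study's measurements.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# 1 = caries active, 0 = caries free (placeholder labels and vitamin D values)
y = np.concatenate([np.ones(272), np.zeros(105)])
vit_d = np.concatenate([rng.normal(20.9, 6.0, 272), rng.normal(28.6, 6.0, 105)])

# Lower vitamin D is associated with caries, so score by the negative value.
fpr, tpr, thresholds = roc_curve(y, -vit_d)
auc = roc_auc_score(y, -vit_d)
best = np.argmax(tpr - fpr)          # Youden's J = sensitivity + specificity - 1
cutoff = -thresholds[best]           # flip the sign back to pg/ml

print(f"AUC = {auc:.3f}, cut-off ≈ {cutoff:.2f} pg/ml, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```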
Results Demographic characteristics Of the 272 Caries Active and 105 Caries Free individuals, 239 were females and 138 were males (Tables 1 and 2 ). Data highlighted in Tables 10 and 11 exhibit the comparison between the demographic data and the decay groups. Based on the demographics, a significant proportion of individuals were from urban areas. When the decay groups were compared with demographic data such as age group, sex, and location, individuals from the urban population showed a significant association with Decay Group 1 (i.e., 1–3 caries), with a p value of 0.011. From the data in Table 2 , it can be observed that individuals in the age group 18–25 years were associated with Decay Group 1 (1–3 caries). It can also be observed that the 26–35 years age group was closely associated with Decay Group 2 (4–10 caries) and Decay Group 3 (> 10 caries). For the individuals hailing from urban areas, the PUFA score was observed to be 0 ( p = 0.0) (data highlighted in Table 3 ). Evaluation of salivary antimicrobial peptide LL-37, Vitamin D, IL-6, and IL-17A levels in dental caries Among the individuals classified in the Caries Active group, the mean salivary vitamin D level was observed to be 20.85 pg/ml, in comparison to the significantly higher ( p < 0.001) 28.56 pg/ml for the individuals in the Caries Free group (Table 4 ). It was also observed that the mean salivary Vitamin D decreased with increasing severity of caries in the individuals. In the different subgroups of the Caries Active group, a mean salivary vitamin D level of 16.31 pg/ml was observed in Decay Groups 2 and 3, whereas Decay Group 1 had a mean salivary vitamin D level of 28.77 pg/ml, which was significantly higher ( p = 0.00). When the PUFA index scores were correlated with the salivary vitamin D levels, it was observed that salivary vitamin D was significantly lower (13.46 pg/ml, p = 0.026) in individuals with a PUFA score of 2–5 when compared to individuals with a PUFA score of 1 (21.13 pg/ml) (data highlighted in Table 5 ). Logistic regression performed to establish the odds ratio yielded an odds ratio of 0.939 for the effect of salivary vitamin D on the Caries Active group; a 1-unit decrease in salivary vitamin D levels meant that an individual had 1.064-fold odds of being classified as caries active (Table 6 ). The ROC analysis, which was performed since the data were significant in the univariate analysis, indicated that the optimal cut-off value for salivary Vitamin D was 28.33 pg/ml, with a sensitivity of 71%, a specificity of 57%, and an AUC of 0.694. The LL-37 assay results exhibited 7.07 ng/μl of salivary LL-37 in the Caries Free individuals, in comparison to 7.05 ng/μl for the Caries Active individuals (data not statistically significant, highlighted in Table 7 ). It was also observed that the salivary LL-37 levels did not vary significantly with the severity of caries across the decay groups and were not significantly associated with the PUFA scores (Table 8 ). Logistic regression yielded an odds ratio of 1.309 for the effect of salivary LL-37 on the Caries Active group. ROC analysis was performed, and the optimal cut-off for LL-37 was 6.81 ng/μl, with low sensitivity and specificity and an area under the curve of 0.506. However, the results were not statistically significant (Table 9 ). The salivary levels of IL-17A among the individuals in the Caries Active and Caries Free groups were observed to be 155.01 ng/ml and 174.20 ng/ml, respectively. However, the data were deemed statistically insignificant upon further analyses.
It was also observed that the salivary IL-17A levels did not vary with the severity of caries in individuals, nor did they vary significantly with the PUFA scores (Tables 10 and 11 ). Logistic regression was performed, and the odds ratio for the effect of salivary IL-17A on the Caries Active group was 0.999. The ROC analysis that was performed exhibited an optimal cut-off of 189.9 ng/ml for IL-17A, with low sensitivity and specificity and an area under the curve of 0.556. The data were not statistically significant (data highlighted in Table 13 ). The IL-6 levels in the individuals of the Caries Active and Caries Free groups were observed to be 31.15 ng/ml and 28.33 ng/ml, respectively, with the data not considered statistically significant. It was also observed that the salivary IL-6 levels did not vary with the severity of caries or the associated PUFA scores (data highlighted in Tables 10 and 12 ). Logistic regression was performed, and the odds ratio for the effect of salivary IL-6 on the Caries Active group was 1.006. The ROC analysis that was performed exhibited an optimal cut-off of 17.60 ng/ml for IL-6, with low sensitivity and specificity and an area under the curve of 0.521. However, the data were not statistically significant (highlighted in Table 13 ).
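The odds ratios reported above come from univariate logistic regression; a minimal sketch of how such an odds ratio, and its approximate reciprocal for a one-unit decrease (e.g. 1/0.939 ≈ 1.065), is obtained is shown below with statsmodels on simulated placeholder data, not the study's dataset.

```python
# Sketch of a univariate logistic regression and the odds-ratio interpretation
# used above. Data are simulated placeholders, not the study's measurements.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
vit_d = np.concatenate([rng.normal(20.9, 6.0, 272), rng.normal(28.6, 6.0, 105)])
caries = np.concatenate([np.ones(272), np.zeros(105)])   # 1 = caries active

X = sm.add_constant(vit_d)
fit = sm.Logit(caries, X).fit(disp=0)

or_per_unit_increase = np.exp(fit.params[1])
or_per_unit_decrease = 1.0 / or_per_unit_increase         # e.g. 1 / 0.939 ≈ 1.065

print(f"OR per 1-unit increase: {or_per_unit_increase:.3f}; "
      f"per 1-unit decrease: {or_per_unit_decrease:.3f}")
```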
Discussion Past research has shown that genetic factors play a vital role in the risk of dental caries, which in turn is due to the multifaceted nature of caries itself [ 16 ]. Vitamin D has been shown to control calcium homeostasis, which in turn significantly influences immune responses and anti-inflammatory activity [ 17 ]. Studies have also noted that bone phenotype, hormonal balance, food, and sun exposure may all play a vital role in the variation of vitamin D receptor gene polymorphism that is observed among different races and age groups [ 18 – 20 ]. In the current study, the practices of individuals in both the Caries Free and Caries Active groups were similar, e.g., brushing teeth once a day, no significant food intake between meals, and low intake of sugary or sticky food items. Therefore, these factors were not considered for the study. Research has shown that environmental and underlying genetic factors are associated with various other factors that cause the development of dental caries [ 21 ]. It has also been shown that susceptibility to caries may differ from individual to individual, even though an individual may be considered at high risk of developing caries [ 22 ]. In the current study, salivary vitamin D levels were observed to be significantly higher in the Caries Free group as compared to the Caries Active group. This can be attributed to the production of protective peptides (LL-37/cathelicidins) following their activation via the TLR2-vitamin D-LL-37 mechanism, where the production of these peptides occurs as a result of the binding of 1,25(OH)2 D to the Vitamin D receptor. LL-37 has been noted to possess the potential for increasing the antimicrobial capacity of inflammatory cells such as neutrophils [ 23 ]. In the current study, the elevated levels of salivary vitamin D in the Caries Free group exhibit the effectiveness of vitamin D in playing an antibacterial role by regulating the production of these naturally occurring peptides. Vitamin D has also been noted to upregulate numerous proteins such as enamelin, dentin sialoproteins, amelogenins, and dentin phosphoproteins, while also limiting the demineralization and disintegration of the tooth surface and preserving the appropriate surface proteins [ 24 ]. In the current study, the salivary vitamin D levels among participants in both groups could be linked to normal to average sun exposure and to a variety of dietary sources. A past study by Gyll et al. evaluated the association of dental caries and salivary vitamin D levels post vitamin D supplementation and noted high vitamin D levels in individuals without caries [ 25 ]. A similar study conducted by Chhonkar et al. showed that vitamin D was an important factor in preventing dental caries. Studies have also shown that the absence of caries can be attributed to the role of vitamin D in the production of LL-37 peptides via the TLR2-Vitamin D pathway [ 26 , 27 ], which is in accordance with the results of the present study, in which we evaluated the protective role of vitamin D levels in dental caries progression and prevalence. The antimicrobial peptide LL-37 was evaluated in the current study for both Caries Active and Caries Free individuals, with the results exhibiting that the levels of LL-37 were higher in individuals without caries as compared to those having caries (not statistically significant).
LL-37 has been shown to reduce biofilm formation on the tooth surface, reduce the thickness of existing biofilms, and decrease the adherence of microbes to the tooth surface, thereby decreasing the production of inflammatory markers [ 28 ]. Similarly, another study evaluated LL-37 levels in children, wherein it was noted that lower levels were associated with higher caries activity, albeit the association was statistically insignificant. The same study also noted that LL-37 had the potential to be a prognostic marker against caries in children, adolescents, and adults [ 29 , 30 ]. In the carpet, toroidal-pore, and barrel-stave models of membrane disruption, LL-37 has been observed to have potency against Streptococcus mutans by preventing growth and colonization [ 28 ]. Another past study showed that the direct effect of the LL-37 peptide was to cause enzyme-mediated destruction of bacteria, while the indirect effect was to regulate inflammatory markers [ 29 ]. In another study, the production and biochemical levels of cathelicidins were noted to be directly affected by the levels of inflammation and vitamin D [ 31 ]. The current study also evaluated the IL-6 levels among the individuals and noted an increase in the Caries Active group in comparison to the Caries Free group (data statistically insignificant). This could be a result of the pro-inflammatory function of the IL-6 interleukin. Studies have shown that IL-6 is a key factor in the inflammatory response since it activates neutrophil proliferation at inflammatory sites. One study also noted that IL-6 plays a vital role in the pathology of diseases due to its pleiotropy, role in immunosenescence, and role in caries formation [ 32 ]; therefore, IL-6 may be a potential indicator of inflammation in the oral cavity, although this needs to be validated with more supporting studies. Apart from IL-6, IL-17A levels were also analyzed in the individuals participating in the current study, and it was observed that IL-17A levels were higher in those individuals without caries (data statistically insignificant). This could have been due to the levels of LL-37 maintaining an inflammatory balance to promote repair in individuals with caries. Past studies have shown that immune processes, both innate and adaptive, affect dental biofilm formation, which in turn affects caries formation. However, those studies were limited to evaluating a single nucleotide polymorphism in the vitamin D receptor, and polymorphisms in the CAMP gene, another factor affecting LL-37 levels in saliva, were excluded from their scope [ 33 – 35 ].
Conclusions The present study focused on evaluating the levels of Vitamin D, IL-17A, IL-6, and LL-37 in the saliva of individuals with and without caries. Salivary vitamin D was higher in caries-free individuals as compared to those with caries. This could be because vitamin D plays an important role in preventing caries by activating enzymes which convert 25-hydroxyvitamin D to 1,25-dihydroxyvitamin D. This metabolite in turn binds to vitamin D binding protein to form a complex that activates the LL-37 peptides via binding to vitamin D receptors. The current study noted that LL-37 was higher in caries-free individuals, although not statistically significantly so in comparison to individuals with caries, possibly due to LL-37’s role in preventing and neutralizing biofilms and bacterial colonization to hinder caries formation. IL-17A was likewise higher in caries-free individuals, while IL-6 was higher in the Caries Active group; neither difference was statistically significant. This could be attributed to the regulating role of LL-37 in IL-17A production to promote repair and to the pro-inflammatory activity of IL-6, respectively. Therefore, it can be said that these salivary biochemical markers could be used as prognostic markers to predict the incidence of caries in individuals.
Introduction Vitamin D performs various functions as a hormone by promoting calcium absorption, but it also plays a major role in innate immunity, cell differentiation, and cell maturation through its genomic effects via the vitamin D receptor. The immune response also plays a major role in the destruction of the tooth surface and supporting structures and is a major factor in high caries formation. The cytokines released include pro-inflammatory cytokines that stimulate cells in the disease process. Therefore, in the present study we evaluated the association of salivary vitamin D, LL-37, and interleukins 6 and 17A with various levels of severity of dental caries. Method Ethical approval was obtained (NU/CEC/2020/0339). A total of 377 individuals reporting to the Department of Conservative Dentistry and Endodontics, A.B. Shetty Memorial Institute of Dental Sciences, were included based on the inclusion criteria. The individuals were divided into caries free ( N = 105) and caries active ( N = 272) groups based on their caries prevalence. Salivary samples were collected and evaluated for vitamin D, LL-37, IL-17A and IL-6. Results were statistically analysed with SPSS version 22 (IBM Corp, USA). Normally distributed data were expressed as mean ± SD. Skewed data were expressed as median and interquartile range. To compare mean outcome measures between the two groups, the unpaired independent t-test was applied, and for values expressed as median (IQR), the Mann-Whitney U test was used. All statistical tests for P values were two-sided, and significance was set at P ≤ 0.05. Results The study showed that salivary vitamin D decreased significantly with increasing severity of caries, indicating that vitamin D plays an important role in the prevention of caries. The antimicrobial peptide LL-37 was higher in the caries free group, but the difference was not statistically significant; the salivary IL-6 level was higher in the caries active group, but the intergroup comparison did not show a significant difference. Salivary IL-17A did not show a statistically significant difference between the caries active and caries free groups. Conclusion The salivary level of vitamin D may play a vital role in the prevalence and severity of dental caries and may be an underlying contributing factor in the presence of other etiological factors. Keywords
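A minimal sketch of the two-group comparisons named in the abstract (independent t-test for normally distributed measures, Mann-Whitney U test for skewed ones), written with SciPy rather than SPSS and run on simulated placeholder values, is given below.

```python
# Sketch of the two-group comparisons described above (independent t-test for
# normally distributed measures, Mann-Whitney U for skewed ones), using SciPy
# rather than SPSS. Values are simulated placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
vit_d_active = rng.normal(20.9, 6.0, 272)   # placeholder salivary vitamin D, caries active
vit_d_free   = rng.normal(28.6, 6.0, 105)   # placeholder salivary vitamin D, caries free

t_stat, p_t = stats.ttest_ind(vit_d_active, vit_d_free, equal_var=False)
u_stat, p_u = stats.mannwhitneyu(vit_d_active, vit_d_free, alternative="two-sided")

print(f"t-test p = {p_t:.4f}; Mann-Whitney U p = {p_u:.4f}")
```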
Acknowledgements We thank the Central Research Laboratory for its immense support in conducting the analysis. Authors’ contributions N wrote and planned the study; Hegde MN and Kumari SN reviewed the manuscript and helped conduct the study. Funding The study was funded by the Vision Group on Science and Technology (VGST/RGS-F/GRD-895/2019–20/2020–21/198). Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate Approvals were obtained from the Central Ethics Committee, Nitte (Deemed to be) University. Approvals NU/CEC/2020/0339 and NU/CEC/2022/291 were obtained prior to the initiation of the study, and renewal was subsequently granted. Informed consent was obtained from each individual patient after they were provided with the patient information sheet. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:47
BMC Oral Health. 2024 Jan 13; 24:79
oa_package/7b/26/PMC10787980.tar.gz
PMC10787981
38218822
Introduction Preparation of dental hard tissues using high-speed rotary instruments generates heat; therefore, adequate cooling of the preparation area must be provided to prevent collateral thermal damage to the surrounding tissues [ 1 ]. Heat generation and cooling have been widely investigated for dental implant site preparation, including the use of navigation guides. However, circumstances during navigated endodontic drilling in dentine differ significantly from navigated implant site preparation in human bone. The most common reason for artificial root canal preparation is a narrow and/or calcified root canal; the drill therefore encounters high resistance, leading to greater heat generation than in implant site preparation in human bone, which is generally softer than dentine [ 2 ]. Although bone is not a particularly well-vascularized tissue, its blood flow may decrease collateral thermal damage, in contrast to dentine, which has no blood supply. In the case of bone, the thermally affected tissue is at the site of the preparation, while in the case of root preparation, the entire root membrane must be protected from the heat generated during drilling procedures [ 3 ]. The working length of endodontic drills is generally longer than that of implant drills. The efficiency of cooling decreases with a longer distance between the working end of the instrument and the cooling source and with longer preparation depths (effective working length) [ 4 ]. Cooling efficiency may be further decreased by the use of navigation guides. To overcome this negative effect, a gap between the drill guide sleeve and the gingiva is often maintained during the fabrication of dental implant surgical guides to ensure the access of the coolant to the drill [ 5 ]. However, due to the flexibility of the narrower and longer drills used in endodontics, this is rarely possible during navigated endodontic drilling. Another disadvantage of drills thinner than 1.5 mm is that they lack a heat-retaining mass, so their temperature increases faster during drilling. Due to these circumstances, clinicians may expect more heat generation during guided endodontic drilling than during guided implant site preparation. Although guided endodontic drilling is a cutting-edge technology [ 6 ], there are a limited number of reports in the scientific literature on temperature changes during guided endodontic drilling, and the effect of different drilling parameters has not been investigated in detail [ 7 ]. The aim of our study was to determine the temperature changes of root surfaces during guided endodontic drilling with various parameters. Due to the anatomical differences between natural teeth and the varying amounts of calcified dentine embedded in teeth, a large variance of results is expected.
Materials and methods Sample preparation In this study, seventy-two teeth with presumably narrow root canals were used. Navigated endodontic drilling enables straight preparation due to the relative rigidity of the drills compared with conventional endodontic instruments; therefore, only teeth bearing a straight root were selected. Inclusion criteria were: tooth extracted from a patient older than 50 years of age; tooth extracted due to poor periodontal prognosis; and a straight root. Exclusion criteria were: prior endodontic treatment of the tooth; and the presence of any of the following conditions: crown restoration, caries, periapical lesions, root resorption and/or root fracture. Root length was not standardized; however, the same effective working length was used during preparations. Variances in root canal morphology were evenly distributed among the test groups. Roots of teeth were embedded in a stable support made of plaster and acrylic resin. Each support contained twelve teeth. A channel for the thermocouple electrode was created in the support for each tooth, leading to the middle of the root (Fig. 1 ). A CBCT scan of each support was performed utilizing the Planmeca ProMax 3D imaging system (Planmeca, Helsinki, Finland) with a resolution of 200 microns and an FOV size of 8 × 8 mm (Fig. 2 ). The image set was uploaded to navigated surgical planning software (coDiagnostiX - Dental Wings Inc., Montréal, Canada) (Figs. 3 and 4 ). The type of endodontic drill (1 mm diameter spiral drill - Steco-System-Technik GmbH & Co. KG, Hamburg, Germany) and the corresponding guide sleeve were selected based on the recommendation of the software manufacturer. In the design software, sleeves were positioned as close as possible to the tooth surface to minimize the effective working length. The body of the guide holding the sleeves was generated automatically by the software and 3D printed (Form2, Formlabs Inc., Somerville, USA) using clear resin (Clear Resin, Formlabs Inc., Somerville, USA). The thermocouple channel in the support was filled with PK-Zero thermal compound (Prolimatech, Taiwan), and the thermocouple was fed into the channel up to the root surface. The other end of the thermocouple was connected to a digital thermometer (EL-EnviroPad-TC, Lascar Electronics Ltd., Salisbury, UK) (Fig. 5 ). A marking was made on the tooth through the guide sleeve; enamel was removed from all teeth using a diamond bur, and dentine was removed for certain sets of teeth, creating an access cavity (AC) (see group descriptions). Access cavities were prepared with the same sized round diamond burs parallel to the long axis of the tooth. Access cavity width was set by the diameter of this bur. Cavities were prepared until the pulp chamber was reached or, in the case of calcified pulp chambers, preparation was continued until the depth of the cementoenamel junction was reached. Drilling protocol Endodontic preparation through the guide was performed by the same operator, with over five years of experience in guided implantology and endodontics (A.M.). The drill feed rate was standardized using a digital scale. The same micromotor (Bien-Air Chiropro 980, Bien-Air Surgery SA, Le Noirmont, Switzerland) with a 6:1 endodontic handpiece (VDW, München, Germany) was used for the preparation of all teeth. Study groups Four parameters affecting temperature change were investigated in the study: (a) access cavity preparation prior to endodontic drilling, (b) drill speed, (c) cooling and (d) cooling fluid temperature.
Twelve teeth were allocated to each of the following test groups: Group 1: guided drilling without access cavity preparation (w/o AC) at 800 RPM without cooling (w/o C). Group 2: guided drilling without access cavity preparation (w/o AC) at 1000 RPM without cooling (w/o C). Group 3: access cavity (w/AC) preparation prior to endodontic drilling, guided drilling at 1000 RPM without cooling (w/o C). Group 4: access cavity (w/AC) preparation prior to endodontic drilling, guided drilling at 800 RPM without cooling (w/o C). Group 5: access cavity (w/AC) preparation prior to endodontic drilling, guided drilling at 1000 RPM, cooling (w/C) with a room temperature (21 °C) coolant. Group 6: access cavity (w/AC) preparation prior to endodontic drilling, guided drilling at 1000 RPM, cooling (w/C) with a chilled (4–6 °C) coolant. Statistical analysis Sample size was calculated in G*Power version 3.1.9.7. Considering 80% power, a 5% alpha error and an effect size of 0.5, a minimum of ten samples per group was required. Since the size of the support enabled the fit of more teeth, we analyzed twelve samples per group. This sample size was in accordance with previous studies regarding the subject [ 7 ]. The statistical analyses were performed with SPSS v. 25.0 (SPSS, Chicago, IL). The Kolmogorov-Smirnov test was applied to test the normality of the distribution of the data. The changes in temperature were compared between guided endodontic root canal preparation groups with one-way ANOVA, followed by Tukey’s HSD post hoc test. P values below 0.05 were considered significant.
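An equivalent of the analysis pipeline described above (normality check, one-way ANOVA, Tukey's HSD post hoc test) can be sketched in Python with SciPy and statsmodels; the group means and spreads below are placeholders loosely based on the reported results, not the raw measurements, and the study itself used SPSS.

```python
# Sketch of the one-way ANOVA + Tukey HSD comparison described above,
# using SciPy/statsmodels instead of SPSS. Values are placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
groups = {
    "G1_800rpm_noAC_noC": rng.normal(14.6, 0.6, 12),
    "G4_800rpm_AC_noC":   rng.normal(8.9, 0.5, 12),
    "G6_1000rpm_AC_cold": rng.normal(1.6, 1.2, 12),
}

# Normality check per group (Kolmogorov-Smirnov against a fitted normal)
for name, vals in groups.items():
    stat, p = stats.kstest(vals, "norm", args=(vals.mean(), vals.std(ddof=1)))
    print(f"{name}: KS p = {p:.3f}")

# One-way ANOVA followed by Tukey's HSD post hoc test
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```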
Results No prior recommendation for drill speed in guided endodontic drilling was found in the published scientific literature; therefore, we conducted a preliminary study to determine optimal drill speeds. In this preliminary experiment (data not shown), it was found that rotary speeds of 1200 RPM and above did not improve drilling efficiency; however, rapid heating of the drill was observed and drill breakage often occurred. Therefore, 1000 RPM was chosen for the cooling efficiency test. On the other end of the spectrum, speeds below 800 RPM were associated with drastically reduced drilling efficiency and with a prolonged temperature rise, resulting in higher peak temperatures than speeds of 800 RPM and above. Mean temperature elevations are shown in Table 1 . The highest mean temperatures were observed for drilling without prior access cavity preparation. In this setup, a drill speed of 800 RPM (Group 1) resulted in higher mean temperatures (14.62 °C ± 0.63) than a drill speed of 1000 RPM (Group 2) (13.76 °C ± 1.24). The difference between these two groups was not statistically significant (p = 0.243); however, both groups showed significantly higher (p < 0.01) temperatures than any of the access cavity groups (Groups 3–6). In the groups in which access cavity preparation was applied (Groups 3 and 4), significantly lower (p < 0.01) mean temperature values (10.09 °C ± 1.32 and 8.90 °C ± 0.50, respectively) were measured in comparison to the no access cavity groups (Groups 1 and 2). However, both Groups 3 and 4 showed significantly higher mean temperatures than the groups in which cooling was used (Groups 5 and 6; p < 0.01). In this setup (access cavity prepared, no cooling applied), the drill speed had a significant effect: 1000 RPM resulted in significantly higher mean temperatures than 800 RPM (p < 0.05). Cooling significantly decreased (p < 0.01) the mean temperature increase in both Group 5 (4.01 °C ± 0.22) and Group 6 (1.60 °C ± 1.17) compared to any of the uncooled groups (Groups 1–4). The temperature of the cooling liquid had a significant effect (p < 0.01), and the application of a chilled cooling liquid (Group 6) proved more beneficial than using a room temperature liquid (Group 5) at the same drill speed (1000 RPM). The results of the intergroup comparisons are shown in Fig. 6 .
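Each mean elevation reported above is, in essence, a per-tooth reduction of the thermocouple log to its peak reading minus the pre-drilling baseline; a minimal sketch of that step, with invented readings rather than measured data, is shown below.

```python
# Sketch of reducing a thermocouple log to the temperature elevation used in
# the analysis: peak root-surface temperature minus the pre-drilling baseline.
# Readings below are invented placeholders, not measured data.
readings_c = [21.3, 21.3, 21.4, 23.0, 27.9, 33.1, 35.6, 34.2, 30.8, 26.5]

baseline_c = readings_c[0]                 # temperature before drilling starts
peak_c = max(readings_c)                   # highest temperature during drilling
elevation_c = peak_c - baseline_c

print(f"Temperature elevation: {elevation_c:.2f} °C")   # 14.30 °C in this example
```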
Discussion Guided root canal drilling leads to heat generation at the drill-dentine interface. Excessive heat generation may lead to collateral thermal damage to the tissues of the periodontal ligament surrounding the root [ 8 ]. According to Sauk et al. [ 9 ], hyperthermia at 43 °C can lead to decreased protein synthesis, thus altering the functions of periodontal ligament cells. Eriksson and Albrektsson [ 10 ] found that a temperature of 47 °C for at least 1 min is necessary for bone damage visible by light microscopy. Kniha et al., in their systematic review, discovered a wide range of published threshold values and concluded that, due to the heterogeneity of experimental setups, no exact temperature for bone necrosis can be determined [ 11 ]. Cunha et al., in their systematic review, demonstrated how many factors may contribute to postoperative pain and discomfort in patients who underwent endodontic treatment [ 12 ]. It can be assumed that temperature elevations even below the necrotic threshold values may also contribute to postoperative pain; therefore, any temperature elevation should be avoided during endodontic treatments, if possible. Most of the studies conducted on thermal bone damage focus on direct heat transfer to the bone when examining critical temperatures. During guided endodontic drilling, heat is first transferred to the nonvital structure of dentine and only secondarily to bone. In this regard, preparation in the root canal is more similar to broken abutment screw removal from dental implants [ 13 ]. However, conclusions derived from these studies cannot be directly applied to guided endodontics for two main reasons. One is that titanium features better heat conductivity than dentine, and the other is that blood flow in the periodontal ligament has an attenuating effect upon heat transfer from the nonvital structure to the bone. Although various anatomical factors, including the length of the root, the width of the remaining root canal and the calcified tissue inside the root canal, may contribute to heat generation, they are difficult to control. Procedural factors, such as the type of drill used, the presence of a properly prepared access cavity, drill speed, cooling and the temperature of the coolant, may also contribute to heat generation. However, the importance and effect of these procedural factors have not yet been fully investigated in the published scientific literature. The results show that all four tested drilling parameters affected heat generation in this in vitro investigation. The lack of access cavity preparation prior to guided endodontic drilling evidently has a detrimental effect, increasing root surface temperature by more than 10 °C regardless of the drilling speed applied. Our data imply that drilling speed also has a major effect on heat generation when the access cavity is prepared prior to guided drilling. Seemingly, a lower speed (800 RPM) results in less heat generation than higher speed (1000 RPM) drilling. The temperature values were also more consistent with lower speed preparations. This may indicate that lower speed preparations are less sensitive to different root canal anatomies. Additionally, cooling of the drill as well as the temperature of the cooling liquid had major effects on heat generation even when higher drill speeds were used. The highest measured temperature elevation with cooling was still lower than the lowest temperature elevation without cooling.
In two cases in which the refrigerated cooling liquid was used, no temperature elevation was observed during the entire drilling process. Therefore, it can be assumed that cooling the drill is the most predictable method to reduce collateral thermal damage. The mean temperature data (4.01 °C ± 0.22) of Group 5 of our study (access cavity preparation followed by guided drilling at 1000 RPM and cooling with room temperature coolant) were consistent with the mean temperature data (5.07 °C) of the guided endodontic drilling group (access cavity preparation followed by drilling at 800 RPM for 120 s without cooling) of the study published by Zhang et al. [ 7 ]. It must be noted that these data refer only to the one specific drill type used in this study. Bur material, diameter, shape and blade configuration may also contribute to accuracy and heat generation; however, investigation of these parameters was beyond the scope of our study [ 14 , 15 ].
Conclusion There is a growing need for the development of technical recommendations and protocols as the technique of guided root canal drilling becomes increasingly more accessible to dental practitioners. With cautious evaluation of the non-modifiable anatomical factors of the tooth and with a thorough understanding of the influential procedural factors, the risk of collateral thermal damage during guided endodontic drilling can be minimized. Based on the results of our study, guided endodontic drilling at drill speeds not exceeding 1000 RPM following access cavity preparation, with constant cooling using a fluid cooler than room temperature, provides the best results in avoiding collateral thermal damage.
Background Navigated endodontics is a cutting-edge technology becoming increasingly more accessible for dental practitioners. Therefore, it is necessary to clarify the ideal technical parameters of this procedure to prevent collateral damage to the surrounding tissues. There is a limited number of studies available in the published scientific literature referencing the possible collateral thermal damage due to high-speed rotary instruments used in guided endodontic drilling. The aim of our study was to investigate different drilling parameters and their effect upon the temperature elevations measured on the outer surface of teeth during guided endodontic drilling. Methods In our in vitro study, 72 teeth with presumably narrow root canals were prepared using a guided endodontic approach through a 3D-printed guide. Teeth were randomly allocated into six different test groups consisting of 12 teeth each, in which four parameters affecting temperature change were investigated: (a) access cavity preparation prior to endodontic drilling, (b) drill speed, (c) cooling, and (d) cooling fluid temperature. Temperature changes were recorded using a contact thermocouple electrode connected to a digital thermometer. Results The highest temperature elevations (14.62 °C ± 0.60 at 800 rpm and 13.76 °C ± 1.24 at 1000 rpm) were recorded in the groups in which drilling was performed without prior access cavity preparation, with no significant difference between the two drill speeds (p = 0.243). Access cavity preparation significantly decreased temperature elevations (p < 0.01), while drilling at 800 rpm (8.90 °C ± 0.50) produced significantly less heating of the root surface (p < 0.05) than drilling at 1000 rpm (10.09 °C ± 1.32). Cooling significantly decreased (p < 0.01) temperature elevations at a drill speed of 1000 rpm, and cooling liquid temperatures of 4–6 °C proved significantly (p < 0.01) more beneficial in decreasing temperature elevations (1.60 °C ± 1.17) than room temperature (21 °C) liquids (4.01 °C ± 0.22). Conclusions Based on the results of our study, guided endodontic drilling at drill speeds not exceeding 1000 rpm following access cavity preparation, with constant cooling using a fluid cooler than room temperature, provides the best results in avoiding collateral thermal damage during navigated endodontic drilling of root canals. Keywords
Acknowledgements None. Author contributions Zs. R.: Formal analysis, Investigation, Writing - Original Draft. I. M.: Data Curation, Validation, Review & Editing. Á. N.: Funding acquisition, Writing - Review & Editing. K. T.: Project administration, Validation. A. M.: Conceptualization, Methodology, Investigation, Writing - Review & Editing. Gy. M.: Conceptualization, Investigation, Writing - Review & Editing. Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Data availability The datasets used and/or analyzed during the current in vitro study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate Ethical approval was obtained from the Regional Research Ethics Committee of the Medical Center, Pécs. Written informed consent was acquired from all participants for study participation. Consent for publication Not Applicable (NA). Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:47
BMC Oral Health. 2024 Jan 13; 24:76
oa_package/96/fd/PMC10787981.tar.gz
PMC10787982
38218860
Background The global technical strategy for malaria 2016–2030 of the World Health Organization (WHO) recommends strengthening malaria surveillance as a fundamental activity to inform programme planning and implementation for improved outbreak detection in malaria-endemic countries [ 1 ]. According to the World Malaria Report of 2022, Uganda is ranked as the third-highest contributor to malaria burden globally, with 95% of the country being highly endemic and 5% prone to malaria epidemics [ 2 , 3 ]. A malaria outbreak is characterized as an increase in case counts above the threshold for the normal seasonal pattern of malaria in an area. This threshold is usually calculated based on historical routine data at the district level for a minimum of 5 years [ 4 , 5 ]. The WHO recommends various methods to calculate thresholds, including the 75th percentile, mean ± 2 standard deviations (SD), cumulative sum (C-SUM), and constant case counts [ 4 ]. The 75th percentile method considers the threshold as the 75th percentile of the average number of cases for a specific epidemiological week in that district over the past 5 years. The mean + 2SD method takes the mean number of cases for that week over the last 5 years and adds 2SD to establish the threshold. The C-SUM method involves a running average of cases for the current epi week, the previous week, and the following week over the past 5 years [ 4 ]. To accommodate seasonal malaria peaks that are not necessarily epidemics, modifications to these methods have been proposed, including raising the 75th percentile to the 85th percentile, and increasing the C-SUM method threshold by adding two standard deviations (C-SUM + 2SD) [ 4 ]. These adaptations are meant to improve the ability to distinguish between true outbreaks and regular seasonal variations. The threshold calculation method that is recommended depends on the extent of malaria transmission in a given area. The WHO defines high transmission as an annual parasite index (API) > 450/1000, medium transmission as 251–450/1000 API, low transmission as 101–250/1000 API, and very low transmission as ≤ 100/1000 API [ 4 ]. The C-SUM method is recommended for areas with very low to low transmission; however, it is considered too sensitive for outbreak detection in medium- to high-transmission areas [ 4 ]. In the medium- to high-transmission areas, the 75th percentile method and mean + 2SD methods are both recommended by the WHO; however, they are considered too insensitive to accurately detect outbreaks in low-transmission areas [ 4 ]. For any method used, a malaria epidemic is declared when the malaria cases are above the threshold for > 2 weeks consecutively. Uganda’s malaria epidemic preparedness and response plan for 2019 suggests using the 75th percentile method at the national level and for all districts [ 6 ]. However, some districts use the mean + 2SD and others use the 75th percentile methods, based on the WHO recommendation for similar settings. From 2019 to 2022, Uganda’s health information system reported a rise in confirmed malaria cases [ 7 ]. During the first half of 2022, more than half of the districts in Uganda were in outbreak mode for at least 10 weeks, according to the 75th percentile method used [ 8 ]. 
While every outbreak should be investigated and responded to by the national rapid response team, limited resources for logistics and human resources forced the national malaria control programme to restrict its response to only a few districts, using the number of complicated malaria presentations and malaria deaths as the prioritization measure. With the rate of progress slowing in terms of malaria control, not only in Uganda but also in other sub-Saharan African countries [ 9 , 10 ], there will be a need to ensure that appropriate methods are being used to identify malaria outbreaks and that prioritization methods are available when sufficient resources are not. The three threshold approaches were evaluated to compare their outbreak-signaling outputs in Uganda for improved malaria epidemic detection and response.
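The threshold approaches compared in this study (75th percentile, mean + 2SD, C-SUM, and the 85th percentile and C-SUM + 2SD adjustments) reduce to simple week-wise computations over the historical data; the sketch below assumes a 5 × 52 array of weekly case counts and is an illustration of the definitions quoted above, not the national programme's implementation.

```python
# Sketch of the WHO threshold approaches described above, computed week by week
# from five years of historical counts. `hist` is assumed to be a 5 x 52 array
# (years x epidemiological weeks); this is an illustration only.
import numpy as np

def percentile_threshold(hist: np.ndarray, q: float = 75) -> np.ndarray:
    """q-th percentile of the same epi week across the historical years."""
    return np.percentile(hist, q, axis=0)

def mean_2sd_threshold(hist: np.ndarray) -> np.ndarray:
    """Mean of the same epi week across years plus two standard deviations."""
    return hist.mean(axis=0) + 2 * hist.std(axis=0, ddof=1)

def c_sum_threshold(hist: np.ndarray, add_2sd: bool = False) -> np.ndarray:
    """Running average of the previous, current and next epi week across years
    (weeks 1 and 52 are wrapped around here, which is an assumption)."""
    padded = np.concatenate([hist[:, -1:], hist, hist[:, :1]], axis=1)
    windows = np.stack([padded[:, i:i + 3] for i in range(hist.shape[1])], axis=1)
    c_sum = windows.mean(axis=(0, 2))
    if add_2sd:
        c_sum = c_sum + 2 * windows.std(axis=(0, 2), ddof=1)
    return c_sum

rng = np.random.default_rng(3)
hist = rng.poisson(lam=400, size=(5, 52))     # placeholder 2017-2021 weekly cases

thr_75 = percentile_threshold(hist, 75)
thr_85 = percentile_threshold(hist, 85)       # adjusted percentile approach
thr_mean2sd = mean_2sd_threshold(hist)
thr_csum = c_sum_threshold(hist)
thr_csum2sd = c_sum_threshold(hist, add_2sd=True)
```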
Methods Study setting Uganda comprises 15 health regions, of which 2 (West Nile and Acholi Regions) are considered areas with high annual malaria transmission rates. Five (Lango, Karamoja, Teso, Bukedi, and Busoga Regions) are considered medium malaria transmission areas and seven (South Central, North Central, Kampala, Ankole, Tooro, Bugisu and Bunyoro Regions) are considered low malaria transmission areas. Kigezi Region is considered to have very low malaria transmission and is targeted for malaria elimination in the Uganda National Malaria Strategic Plan 2025 [ 4 , 7 , 11 ].
Results Characteristics of the study data Varying malaria incidence levels were identified for districts in the same malaria transmission region (Table 1 ). Overall, 8 of the 16 districts were recategorized based on the use of district data rather than regional data. These included one district (Nwoya) reassigned from ‘high’ to ‘medium’, two districts (Butambala and Bundibugyo) recategorized from ‘low’ to ‘medium’, one district (Kanungu) recategorized from ‘very low’ to ‘low’, two districts (Alebtong and Kibuku) recategorized from ‘medium’ to ‘low’, and two districts (Ntoroko and Bukwo) recategorized from ‘low’ to ‘very low’. Due to this identified granularity in actual transmission levels, districts were recategorized by transmission level using district-level data, and these assignments were used in the rest of the analysis (Table 1 ).
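The district-level re-categorization above rests on the annual parasite index (confirmed cases per 1,000 population) mapped onto the WHO transmission bands quoted in the Background; a minimal sketch of that step with placeholder figures, not actual district data, is given below.

```python
# Sketch of the district-level API calculation and WHO transmission-band
# assignment described above. District figures are placeholders.
def annual_parasite_index(cases: int, population: int) -> float:
    """Confirmed malaria cases per 1,000 population per year."""
    return cases / population * 1000

def transmission_level(api: float) -> str:
    """WHO bands quoted in the Background: >450 high, 251-450 medium,
    101-250 low, <=100 very low (per 1,000 population)."""
    if api > 450:
        return "high"
    if api > 250:
        return "medium"
    if api > 100:
        return "low"
    return "very low"

# Placeholder example, not actual district data
api = annual_parasite_index(cases=95_000, population=310_000)
print(f"API = {api:.0f}/1000 -> {transmission_level(api)} transmission")
```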
Discussion Identifying the appropriate situations to respond to an apparent increase in cases of a disease in an endemic setting is challenging. The use of transmission intensity-specific thresholds, based on historical data, is meant to facilitate the identification of malaria outbreaks and distinguish true increases from seasonal upsurges in endemic areas. Using real examples from Uganda, major differences between threshold calculation approaches in terms of the number of weeks above the threshold detected as well as the number of outbreaks that would require epidemic response were identified. Specifically, two approaches that are both meant to be acceptable for outbreak detection in medium- to high-transmission areas (mean + 2SD and 75th percentile) yielded large differences in the number of outbreak weeks detected across all levels of transmission. The 75th percentile method yielded outbreak weeks more similar to those identified by the very sensitive C-SUM method across all transmission levels. In addition, the true transmission levels in districts were often not reflective of the region to which they were assigned. Both the 75th percentile and mean + 2SD methods have been recommended for malaria outbreak detection in medium- to high-transmission areas, suggesting their comparability and possible interchangeability. However, significant differences in the number of weeks exceeding the outbreak threshold between these two methods were identified, with the mean + 2SD method identifying significantly fewer outbreak weeks. A Kenyan study in three different regions similarly found that the 75th percentile method identified approximately 3 times as many months as being ‘epidemic’ as the mean + 2SD method [ 12 ]. Clear guidance on the application of these methods for specific transmission areas is required for improved malaria outbreak surveillance and detection. While only the C-SUM method is recommended for low- or very low-transmission areas, no significant difference in the number of weeks above the threshold detected by the 75th percentile and C-SUM methods in these districts was observed. Existing guidance discourages the use of the 75th percentile method in low- and very low-transmission areas due to the potential for missing outbreaks [ 4 , 13 , 14 ]. In this evaluation, outbreaks were not missed. However, in medium- and high-transmission areas, the C-SUM method detected significantly more outbreak weeks than the 75th percentile method. This supports not using the C-SUM method in medium- and high-transmission areas to avoid false alarms, as it does not account for seasonal peaks [ 4 ]. Studies conducted in Sudan and Ethiopia for early malaria epidemic detection have suggested the use of both the 75th percentile and C-SUM methods as pre-malaria-outbreak warnings in areas with medium to high malaria transmission [ 15 , 16 ]. The comparable sensitivity of the 75th percentile method and the C-SUM method in very low- and low-transmission areas and the significant differences observed in medium- to high-transmission areas suggest that the 75th percentile method could be applicable across all transmission levels. Since one objective of surveillance is the timely detection of outbreaks, the sensitivity of the 75th percentile method would provide timely detection of malaria epidemics, especially in medium- and high-transmission areas. However, the use of this approach yielded more outbreaks than were feasible to respond to in Uganda during 2022.
Thus, it may be useful to consider whether an alternate, less sensitive approach, such as the mean + 2SD method, could be applied for epidemic response prioritization when the 75th percentile yields more outbreak districts than can be adequately addressed with existing resources. On adjustment of the 75th percentile to the 85th percentile, no statistically significant difference was observed in the number of outbreak weeks for low- and medium-transmission areas. Other studies have proposed adjusting the 75th percentile to the 90th percentile instead of the 85th to better accommodate malaria seasonal peaks and improve outbreak detection [ 4 , 17 – 19 ]. However, the small differences in outbreak weeks detected between the 75th percentile and the 85th percentile might not suffice to recommend this adjustment for better accommodation of seasonal peaks. It may be useful to consider other modified approaches, such as modifying the 75th percentile to the 90th percentile, to better accommodate seasonal peaks in some situations. On adjustment of the C-SUM method to the C-SUM + 2SD method, there was a significant decrease in the number of outbreak weeks detected, but no difference from the number of outbreak weeks detected by the mean + 2SD method. This similarity can be attributed to both methods using averages, with the main difference lying in their respective methodologies (the mean + 2SD method takes the mean number of cases for that week over the last five years and adds 2SD to establish the threshold, whereas the C-SUM + 2SD method takes the running average of cases for the current epi week, the previous week, and the week after over the past 5 years and adds 2SD). Similar findings were observed in Madagascar in a study analysing trends and forecasting malaria epidemics using a sentinel surveillance network, which indicated improved specificity when the 2SD is added to the C-SUM [ 17 ]. Consideration of C-SUM + 2SD for epidemic detection in medium- to high-transmission districts could therefore provide an alternative to the mean + 2SD method. In Uganda, transmission levels, on which threshold approaches are meant to be based, are assessed using regional (larger; n = 15 in Uganda) data rather than district (smaller; n = 146 in Uganda) data. Granularity in the actual malaria transmission levels, differing from the regional transmission levels, was identified for the districts evaluated. The study revealed notable differences between the malaria transmission of the evaluated districts and their nationally allocated regional malaria transmission levels. Districts in high-transmission regions were found to have medium- or low-transmission levels, while some districts in low- or very low-transmission regions had medium-transmission levels. These findings highlight the need for stratification of the malaria burden at district level rather than regional level. Stratification at district level could be helpful for instances when prioritization for epidemic response is required, as such prioritization applies only to medium- and high-transmission areas. This could also support appropriate allocation of resources for improved malaria epidemic surveillance and response at district level. Limitations The study's limitations include the absence of a definitive gold standard approach for identifying outbreaks; however, this is inherent to a highly endemic setting for any disease.
Additionally, methods were evaluated in only 16 out of 146 districts in Uganda due to under-reporting by most districts. However, the selected districts were distributed around the country and across all transmission levels, which may enhance the generalizability of the study findings.
Conclusion Our study demonstrated notable differences between district malaria transmission levels and the assigned regional malaria transmission levels. Among the districts evaluated, the 75th percentile approach proved most applicable for all transmission areas. However, the number of epidemic weeks detected for medium- and high-transmission areas was significantly higher than with the mean + 2SD method. This would challenge response in resource-limited settings, which constitute the majority of Africa, where the malaria burden is high. We recommend use of the 75th percentile method for epidemic detection in all malaria transmission areas and the use of the mean + 2SD method for prioritization of districts for response in situations of low resources. Furthermore, stratification of areas to the smallest geographical unit possible would ensure detection of localized malaria outbreaks. Additionally, re-calculation of malaria transmission levels at district level and re-categorization of districts rather than regions would ensure appropriate malaria outbreak surveillance and detection to guide response.
Background Malaria outbreaks are detected by applying the World Health Organization (WHO)-recommended thresholds (the less sensitive 75th percentile or mean + 2 standard deviations [2SD] for medium- to high-transmission areas, and the more sensitive cumulative sum [C-SUM] method for low- and very low-transmission areas). During 2022, > 50% of districts in Uganda were in an epidemic mode according to the 75th percentile method used, resulting in a need to restrict the national response to districts with the highest rates of complicated malaria. The three threshold approaches were evaluated to compare their outbreak-signaling outputs and help identify prioritization approaches and method appropriateness across Uganda. Methods The three methods, as well as adjusted approaches (85th percentile and C-SUM + 2SD), were applied for all weeks in 2022 for 16 districts with good reporting rates (≥ 80%). Districts were selected from regions originally categorized as very low, low, medium, and high transmission; district thresholds were calculated based on 2017–2021 data, and districts were re-categorized for this analysis. Results Using district-level data to categorize transmission levels resulted in re-categorization of 8/16 districts from their original transmission level categories. In all districts, more outbreak weeks were detected by the 75th percentile than the mean + 2SD method (p < 0.001). For all 9 very low- or low-transmission districts, the number of outbreak weeks detected by C-SUM was similar to that detected by the 75th percentile. On adjustment of the 75th percentile method to the 85th percentile, there was no significant difference in the number of outbreak weeks detected for medium- and low-transmission districts. The number of outbreak weeks detected by C-SUM + 2SD was similar to that detected by the mean + 2SD method for all districts across all transmission intensities. Conclusion District data may be more appropriate than regional data for categorizing malaria transmission and choosing epidemic threshold approaches. The 75th percentile method, meant for medium- to high-transmission areas, was as sensitive as C-SUM for low- and very low-transmission areas. For medium- and high-transmission areas, more outbreak weeks were detected with the 75th percentile than the mean + 2SD method. Using the 75th percentile method for outbreak detection in all areas and the mean + 2SD for prioritization of medium- and high-transmission areas in response may be helpful. Keywords
Data source Historic weekly malaria surveillance data from the District Health Information System version 2 (DHIS2) during 2017–2021 were used for the calculation of thresholds. The health facility malaria data are routinely generated at health facilities in outpatient registers. The data are aggregated weekly into health facility weekly surveillance reports, which are submitted to the DHIS2 using a short message system (SMS). This captures information for all health facilities in the districts. The weekly reporting rates for the districts can also be calculated based on data from this system using submitted reports (numerator) divided by expected reports (denominator). Districts with reporting rates of < 80% are considered to have incomplete data submitted. Study variables, data abstraction, and analysis Pivot tables were used to filter secondary data on weekly confirmed malaria cases by both rapid diagnostic test (RDT) and microscopy from the health information management system weekly disease surveillance reports (HMIS 033b report) from 2017 to 2022 available in the DHIS2. Additionally, data on weekly reporting rates for all districts were extracted. Data were extracted for each year for each district. The Ministry of Health (MoH) considers a reporting rate of ≥ 80% as the minimal level for usable data. Sixteen out of 146 districts were selected for the evaluation based on having reporting rates ≥ 80% over the 5-year period and based on their stated regional malaria transmission intensity (four each in the high, medium, low, and very low transmission regions). District API was calculated using malaria cases (numerator) and the total population (denominator) obtained from Uganda Bureau of Statistics census data for the selected districts. Malaria transmission levels by district were re-calculated using district data to enable us to evaluate the accuracy of regional-level assignment of transmission levels and to evaluate the different threshold approaches accurately. Using 2022 as the year of review, thresholds were calculated using historic data from 2017 to 2021 for the selected districts. Thresholds were calculated using the three recommended approaches: mean + 2SD, 75th percentile, and C-SUM, to establish their outbreak detection sensitivity, using the highly sensitive C-SUM method as the reference. Case counts were not considered since Uganda is highly endemic for malaria and they are not recommended for such settings [ 4 ]. Malaria cases for 2022 were plotted together with the thresholds and displayed using line graphs. The 85th percentile and C-SUM + 2SD adjusted approaches were also evaluated to see how outbreak week detection changed from the original approaches. The differences in malaria outbreak weeks detected by the various methods were compared for significance using the chi-square test in STATA software version 14. Finally, the number of outbreak weeks detected by the method used during 2022 and the recommended threshold method were compared, based on the district transmission level. The level of significance was considered at p < 0.05. For graphical presentation in this report, one district was picked randomly from each transmission level category (Fig. 1 ). Outbreak weeks detected per threshold approach and the difference in weeks detected for specific threshold approaches The number of ‘outbreak weeks’ varied by method used across the different transmission levels.
For all transmission levels, the difference in malaria outbreak weeks detected by the 75th percentile method and the mean + 2SD was statistically significant, with the 75th percentile method detecting ~ 1.5 to 30 times the number of outbreak weeks as the mean + 2SD method (p < 0.001). In low- and very low-transmission areas, the more sensitive C-SUM method usually detected similar numbers of malaria outbreak weeks as the 75th percentile method. As transmission levels increased, there was a tendency for greater differences between the C-SUM method and the 75th percentile method, with the C-SUM method detecting more outbreak weeks (Table 2 ). On adjustment of the 75th percentile method to the 85th percentile, there was no difference in the number of outbreak weeks detected for low and medium transmission levels. The adjustment of C-SUM to C-SUM + 2SD reduced its sensitivity to make it equivalent to the mean + 2SD method (Table 2 ). Graphical presentation of malaria outbreak detection in a high-transmission district The 75th percentile and mean + 2SD methods are both meant to be used for medium- to high-transmission districts. Using Yumbe District (high-transmission district) data, malaria cases using the 75th percentile method exceeded the threshold in 31 weeks compared to 2 (non-sequential) weeks detected by the mean + 2SD method (p-value < 0.001). Since a malaria outbreak is declared with 2 or more sequential outbreak weeks, with mean + 2SD, no malaria outbreak would be detected for Yumbe District. The 75th percentile method classified epidemics from weeks 1–15 and weeks 21–24 (Fig. 2 ). Graphical presentation of malaria outbreak detection in a medium transmission district Bundibugyo District, a medium-transmission district, showed 36 weeks exceeding the threshold using the 75th percentile method and 26 weeks using the mean + 2SD method. This would have resulted in the district having a malaria outbreak requiring epidemiologic investigation from weeks 5 to 25 using the mean + 2SD method, and weeks 4–25, 29–36, and 41–43 using the 75th percentile method (Fig. 3 ). Graphical presentation of malaria outbreak detection in a low malaria transmission district Alebtong District, a low-transmission district, showed 50 weeks exceeding the threshold using the 75th percentile method and 52 weeks using the C-SUM method. The district would have had a malaria outbreak requiring epidemic investigation for 49 weeks in 2022 using the 75th percentile method, and 52 epidemic weeks using the C-SUM method (Fig. 4 ). Graphical presentation of malaria outbreak detection in a very-low malaria transmission district For Kisoro District, Kigezi Region, an area of very low transmission also targeted for malaria elimination in the 2020–2025 Malaria Strategic Plan, the 75th percentile method detected 34 weeks above the threshold while the recommended C-SUM detected 26 weeks. This would have resulted in the district having a malaria outbreak requiring epidemic investigation from weeks 3–6 and 21–33 in 2022 using the C-SUM method, and weeks 3–6, 21–22, and 26–43 using the 75th percentile method (Fig. 5 ).
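The outbreak-week counts and chi-square comparisons reported above can likewise be sketched in a few lines. The rule of at least two sequential above-threshold weeks follows the definition given for Yumbe District; the 2 × 2 contingency layout for the chi-square test is an assumption about how the week counts might be compared, and the printed example reuses the Yumbe figures (31 vs. 2 weeks) purely for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency

def outbreak_episodes(cases, threshold, min_run=2):
    """Return (start_week, end_week) pairs for runs of at least `min_run`
    consecutive weeks in which cases exceed the weekly threshold."""
    exceed = np.asarray(cases) > np.asarray(threshold)
    episodes, start = [], None
    for w, above in enumerate(exceed):
        if above and start is None:
            start = w
        elif not above and start is not None:
            if w - start >= min_run:
                episodes.append((start + 1, w))      # 1-based week numbers
            start = None
    if start is not None and len(exceed) - start >= min_run:
        episodes.append((start + 1, len(exceed)))
    return episodes

def compare_outbreak_weeks(weeks_method_a, weeks_method_b, n_weeks=52):
    """Chi-square test on the counts of outbreak weeks flagged by two methods."""
    table = [[weeks_method_a, n_weeks - weeks_method_a],
             [weeks_method_b, n_weeks - weeks_method_b]]
    chi2, p, _, _ = chi2_contingency(table)
    return chi2, p

# Yumbe District illustration: 31 weeks (75th percentile) vs. 2 (mean + 2SD).
print(compare_outbreak_weeks(31, 2))
```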
Abbreviations API: Annual parasite incidence; DHIS2: District Health Information System, version 2; HF: Health facility; SD: Standard deviation; C-SUM: Cumulative sum; HMIS: Health Management Information System Acknowledgements We appreciate the National Malaria Control Division and other national malaria stakeholders for raising the questions that initiated this analysis. Author contributions GMZ conducted data extraction, analysis, and interpretation of the data under the technical guidance and supervision of JRH, ARA, DK, RM, BK and LB. GMZ drafted the manuscript. GMZ, JFZ, LB, RM, BBA, MKM, DK, BK, JO, ARA, and JRH critically reviewed the manuscript for intellectual content. All co-authors read and approved the final manuscript. GMZ is the guarantor of the paper. Funding This project was supported by the President’s Emergency Plan for AIDS Relief (PEPFAR) through the US Centers for Disease Control and Prevention Cooperative Agreement number GH001353 through Makerere University School of Public Health to the Uganda Public Health Fellowship Program, Uganda National Institute of Public Health. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the US Centers for Disease Control and Prevention, the Department of Health and Human Services, Makerere University School of Public Health, or the MoH. The staff of the funding body provided technical guidance in the design of the study, ethical clearance and collection, analysis, and interpretation of data, and in writing the manuscript. Availability of data and materials The datasets upon which our findings are based belong to the Uganda Ministry of Health. However, the datasets can be availed upon reasonable request from the corresponding author and with permission from the Uganda Public Health Fellowship Program. Declarations Ethics approval and consent to participate We used routinely collected malaria surveillance data in the national health information management system DHIS2 that is publicly available for analysis and use to inform public health intervention. The data is aggregated with no individual identifiers. This activity was reviewed by the CDC and was conducted consistent with applicable federal law and CDC policy.§ §See e.g., 45 C.F.R. part 46, 21 C.F.R. part 56; 42 U.S.C. §241(d); 5 U.S.C. §552a; 44 U.S.C. §3501 et seq. This determination was made because the project aimed to address a public health problem and had the primary intent of public health practice. Additional clearance was obtained from the National Malaria Control Division (NMCD) and the Division of Health Information (DHI). Consent for publication Not applicable. Competing interests The authors declare no conflicts of interest.
CC BY
no
2024-01-15 23:43:47
Malar J. 2024 Jan 13; 23:18
oa_package/4e/f1/PMC10787982.tar.gz
PMC10787983
38218826
Introduction As a common malignancy threatening women's health, endometrial cancer (EC) is characterized by a continuously increasing incidence over the past decade. At present, EC has become the second leading cause of gynecological cancer-related death in women [ 1 ]. For a long time, there was no clear standard method for the molecular classification of EC. In 2013, the Cancer Genome Atlas (TCGA) research network broke through the limitations of the conventional EC classification by integrating molecular characterization. However, this classification is limited by its considerable complexity and impracticality in clinical practice [ 2 , 3 ]. According to the classification by the World Health Organization (WHO) in 2014, endometrial hyperplasia is divided into hyperplasia without atypia and atypical hyperplasia. Among patients with atypical hyperplasia, 32 to 37% of cases may develop EC, with a high risk of progression up to 25% [ 4 ]. It was reported that EC patients diagnosed in the early stage have a good prognosis, with a 5-year survival rate of ≥ 95%. However, the survival rate is significantly reduced in patients diagnosed with advanced or recurrent EC, and their 5-year survival rate is less than 20% even after combination therapy [ 5 ]. Chemotherapy and hormone therapy are the main management strategies for patients with advanced EC. The main reason for unsatisfactory treatment outcomes is the emergence of drug-resistant tumor cells during treatment. Long-term use of chemotherapeutic drugs makes tumor cells that initially respond to anticancer agents become insensitive or even resistant [ 6 ]. Doxorubicin, sorafenib, cisplatin and paclitaxel (also known as taxol) are all commonly used drugs for the treatment of EC, and although their mechanisms differ, they can all induce the death of cancer cells. However, tumor cells resistant to most chemotherapeutic agents, including EC cells, have been found clinically [ 7 ]. Previous studies have revealed the effect of salinomycin on the mRNA and miRNA expression of drug-resistance genes in Ishikawa EC cell lines by microarray analysis and RT-qPCR. According to the analysis results, the expression of TUFT1, MTMR11 and SLC30A5 differed most significantly; in addition, the influence probability between TUFT1 and hsa-miR-3188 (FC + 2.48), MTMR11 and hsa-miR-16 (FC − 1.74), and SLC30A5 and hsa-miR-30d (FC − 2.01) was the highest. These results indicated changes in mRNA and miRNA activity involved in drug resistance, and these characteristic changes were expected as a result of anticancer therapy [ 8 ]. The underlying causes of drug resistance in malignancies are complex. Vermij L et al. demonstrated that resistance to paclitaxel in EC was attributable to expression of P-glycoprotein (P-gp) by the multidrug resistance 1 (MDR-1) gene and to point mutations in the tubulin binding sites that interact with paclitaxel [ 9 ]. To improve the treatment of EC, the molecular mechanisms of resistance to anticancer agents need further investigation. Fanconi anemia complementation group D2 (Fancd2) is a nuclear protein involved in DNA damage repair. Fancd2 was initially identified in the context of Fanconi anemia [ 10 ], but subsequent studies have revealed its association with cancer development. For example, Houghtaling et al. showed that mice lacking Fancd2 were prone to cancers, including acute myeloid leukemia and squamous cell carcinoma [ 11 ]. Lisa et al. 
observed that high expression of Fancd2 promoted excessive proliferation and metastasis of esophageal squamous cell carcinoma cells [ 12 ]. Sonali et al. suggested a correlation between the subcellular localization of Fancd2 and ovarian cancer survival; Fancd2 localized in the nucleus was associated with reduced patient survival [ 13 ]. Collectively, the role of Fancd2 in cancers is evident. Moreover, Fancd2 has recently been reported to be associated with chemoresistance in cancer cells. As early as 2005, a study by Chirnomas D et al. pointed out that inhibition of the Fanconi anemia pathway was effective in restoring cisplatin sensitivity in ovarian and breast tumor cell lines [ 14 ]. Alex et al. also showed that reducing Fancd2 expression could not only restore the sensitivity of the human breast epithelial cell line MCF10A to mitomycin C, but also inhibit the repopulation ability of the cancer cells [ 15 ]. In addition, the association between Fancd2 and drug resistance has been reported in vitro in multiple myeloma, ovarian cancer, non-small cell lung cancer, and head and neck cancer [ 15 ]. Building on these previous studies, we explored the correlation of Fancd2 with EC development and chemoresistance, with the aim of proposing new approaches to overcome drug resistance in EC.
Materials and methods Tissue specimens Tissue specimens were collected from 20 patients pathologically diagnosed as EC in Meizhou People’s Hospital, Meizhou Academy of Medical Sciences between January 2016 and May 2019. These patients did not receive any chemotherapy or radiotherapy before surgery, and tumor tissue (EC group) and adjacent normal tissue (Normal group) were obtained after surgery. The tissue specimen collection was approved by the ethical committee of Meizhou People’s Hospital, Meizhou Academy of Medical Sciences (Ethical No.: 2022-C-83) and written informed consent was acquired from all patients. The clinical information of patients was shown in Table 1 . Cell culture Human endometrial epithelial cells (hEECs, XY-XB-1546) and EC cells (Ishikawa, SNL-171) were purchased from the American Type Culture Institute. Cell culture was achieved in a RPMI-1640 medium supplemented with 10% fetal bovine serum (FBS) and 1% penicillin/streptomycin, with incubation conditions of 37 °C and 5% CO 2 . The Ishikawa cells were exposed to different concentrations of paclitaxel to obtain paclitaxel-resistant cells (Ishikawa/TAX). Cell transfection Ishikawa cells were transfected with negative pcDNA3.1 (Vector group), over-expression plasmid pcDNA3.1-Fancd2 (Fancd2 group), negative siRNA (siNC group), and Fancd2 siRNA (si-Fancd2 group) by lipo3000 transfection kit (LMRNA001, Invitrogen, California, USA). Ishikawa/TAX cells were transfected with negative siRNA (siNC group), Fancd2 siRNA (si-Fancd2 group), and treated with Ferrostatin-1 (Fer-1, 20 μM). Real-time quantitative PCR (RT-qPCR) The tissue specimens were added with 1 mL TRizol (12183555, Invitrogen, California, USA), and then subjected to a thorough homogenization in a homogenizer and centrifugation at 12,000 rpm for 10 min. The obtained supernatant was centrifuged with 200 uL chloroform to acquire new supernatant, followed by addition of 500 uL isopropanol and centrifugation to allow precipitation of RNA. The acquired RNA was dissolved with nuclease-free water. The same procedures were followed to extract RNA from the cells. After that, 1 μg RNA was reversely transcribed into cDNA using M-MLV reverse transcriptase (28025013, Invitrogen, California, USA). Subsequently, real-time quantitative PCR (RT-qPCR) assay was performed using the SYBR Green PCR kit (4344463, Invitrogen, California, USA) on the Quant Studio 6 Flex system (Applied Biosystems, USA), and the cycle threshold (Ct) of each gene was recorded. The relative expression of the target gene was calculated using the 2 −ΔΔCt method, with glyceraldehyde-3-phosphate dehydrogenase (GAPDH) as the internal reference gene. Real-time PCR cycles included: 95 °C for 10 min, 40 cycles (95 °C for 15 s, 67 °C for 30 s, 72 °C for 30 s), and 72 °C for 5 min. Primers used were shown in Table 2 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2-H-tetrazolium bromide (MTT) assay Ishikawa or Ishikawa/TAX cells were seeded in 96-well plates at 5 × 10 3 cells/well. Upon completion of transfection with/without Fer-1 treatment, the cells were incubated with different concentrations of paclitaxel, cisplatin, doxorubicin, and sorafenib for 24 h. Then, 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2-H-tetrazolium bromide (MTT, C0009S, Beyotime Biotechnology, Shanghai, China) solution (20 μL) was added to each well. Following 4 h of incubation, the reaction was stopped with 150 μL DMSO (ST1276, Beyotime Biotechnology, Shanghai, China) and the cells were shaken for 10 min at room temperature. 
Subsequently, the absorbance was measured at 490 nm by a microplate reader (Model: 550, BIO-RAD, China) and the viability of cells was calculated. Detection of reactive oxygen species level Ishikawa or Ishikawa/TAX cells were seeded in 12-well plates at 5 × 10 5 cells/well. Firstly, the cells were subject to transfection with/without Fer-1 treatment. Then, 10 μM carboxy-H2DCFDA (C400, ThermoFisher Scientific, California, USA) was added for 30 min of cell incubation at 37 °C in the dark. Subsequently, the cells were washed twice with PBS, resuspended with trypsin, and then collected. Fluorescence values were measured by flow cytometry (emission 495 nm and absorption 525 nm). Apoptosis In strict accordance with the corresponding instructions, Annexin V-FITC (fluorescein Isothiocyanate)/PI (propidium iodide) apoptosis detection kits (APOAF, Sigma-Aldrich, St. Louis, Missouri, USA) were used to detect the apoptosis level of Ishikawa or Ishikawa/TAX cells after different treatments. In short, the cells were washed with PBS, and the cell density was adjusted. Then, the cells were suspended in a 500 μL binding buffer and incubated with 5 μL Annexin V-FITC and 5 μL PI for 30 min at room temperature. After that, the cells were transferred into a flow tube and examined on the Accuri C6 flow cytometer (Tomy Digital Biology, CA, USA). Colony formation assay Ishikawa or Ishikawa/TAX cells were seeded in 6-well plates with 1 × 10 3 cells per well. After 2 weeks of culture, the cells were fixed with 4% paraformaldehyde and then stained with 0.5% crystal violet. The results were observed with a microscope (Nikon, Japan) and analyzed using the ImageJ software. Detection of malondialdehyde, glutathione, and Fe 2+ levels Ishikawa or Ishikawa/TAX cells were seeded in 6-well plates at 1 × 10 6 cells/well. Then, the cells were transfected to knock down or over-express Fancd2 and treated with Fer-1. After that, the cells were washed twice with PBS, followed by cell lysis and centrifugation at 12,000 rpm for 10 min. Next, the supernatant was collected for the detection of malondialdehyde (MDA, S0131S) activity, glutathione (GSH, S0052) level, and Fe 2+ level (S0116) according to the protocol of corresponding kits (Beyotime Biotechnology, Shanghai, China). Western blot Cells were solubilized in RIPA lysis buffer (P0013B, Beyotime Biotechnology, Shanghai, China), and the supernatant was collected after centrifugation at 12,000 rpm for 20 min. Total protein concentration was detected by BCA assay (P0009, Beyotime Biotechnology, Shanghai, China). Proteins were then separated using 12% SDS-PAGE (P0012A, Beyotime Biotechnology, Shanghai, China), and transferred onto polyvinylidene fluoride membranes (PVDF, 88585, ThermoFisher Scientific, California, USA). Upon completion of blocking step in 5% skimmed milk for 1 h, the membranes were incubated overnight at 4 °C with primary antibodies Fancd2 (1:1000, ab108928; Abcam, Cambridge, UK), solute farrier family 7 member 11 (SLC7A11, 1:1000, ab175186; Abcam, Cambridge, UK), and glutathione peroxidase 4 (GPX4, 1:1000, ab252833; Abcam, Cambridge, UK). On the next day, the membranes were incubated with HRP-conjugated secondary antibodies (1:5000, ab205719; Abcam, Cambridge, UK) for 2 h. After that, the protein bands were visualized using the ECL Western blotting Kit (32109, ThermoFisher Scientific, California, USA). Finally, the relative expression of proteins was calculated using GAPDH as an internal reference. 
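As a worked illustration of the 2^−ΔΔCt calculation used in the RT-qPCR section above, the sketch below computes relative expression normalized to GAPDH. The Ct values and the helper name are hypothetical and are not data from this study.

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt relative expression of a target gene versus a control group.

    ct_target / ct_ref           : Ct replicates in the test group (target, GAPDH)
    ct_target_ctrl / ct_ref_ctrl : Ct replicates in the reference group
    """
    d_ct_test = np.mean(ct_target) - np.mean(ct_ref)            # dCt, test
    d_ct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)  # dCt, control
    dd_ct = d_ct_test - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Hypothetical Ct triplicates, for illustration only:
fold = relative_expression([24.1, 24.3, 24.0], [18.2, 18.1, 18.3],
                           [26.5, 26.7, 26.4], [18.0, 18.2, 18.1])
print(f"Relative Fancd2 expression: {fold:.2f}-fold vs. control")
```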
Statistical analysis Data were analyzed by Statistical Package for the Social Sciences version 26.0. Differences between two groups were analyzed using paired t-tests, and comparisons among multiple groups were analyzed by one-way analysis of variance and Tukey's post hoc test. P < 0.05 was used as the criterion for a significant difference. All experiments were repeated three times.
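For the statistical workflow described above (paired t-tests between two groups, one-way ANOVA with Tukey's post hoc test among multiple groups, three replicates per experiment), a minimal sketch is shown below. SPSS 26.0 was used in the study itself, so this Python version is only a stand-in, and the group labels and values are placeholders.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Paired comparison, e.g. Fancd2 expression in EC tissue vs. matched
# adjacent normal tissue from the same patients (placeholder values).
ec_tissue = np.array([2.1, 1.8, 2.4, 1.9, 2.2])
normal_tissue = np.array([1.0, 0.9, 1.2, 1.1, 0.8])
t_stat, p_paired = stats.ttest_rel(ec_tissue, normal_tissue)

# Multi-group comparison, e.g. a readout in siNC, si-Fancd2,
# and si-Fancd2 + Fer-1 groups (three replicates each, placeholders).
values = np.array([1.0, 1.1, 0.9, 2.0, 2.2, 1.9, 1.2, 1.3, 1.1])
groups = ["siNC"] * 3 + ["si-Fancd2"] * 3 + ["si-Fancd2+Fer-1"] * 3
f_stat, p_anova = stats.f_oneway(values[:3], values[3:6], values[6:])
tukey = pairwise_tukeyhsd(values, groups, alpha=0.05)

print(p_paired, p_anova)
print(tukey.summary())
```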
Results Fancd2 was up-regulated in endometrial cancer and associated with chemoresistance Fancd2 expression in EC tissues and cells was measured to explore the role of Fancd2 in EC. RT-qPCR and western blot showed that Fancd2 expression was significantly increased in the EC group compared with the Normal group (Fig. 1 A, B), and was markedly higher in Ishikawa cells than in hEECs (Fig. 1 C, D). Further, to observe the effect of Fancd2 on EC chemoresistance, Ishikawa cells were transfected to inhibit or enhance Fancd2 expression. RT-qPCR and western blot confirmed the up-regulation of Fancd2 in the Fancd2 group compared with the Vector group and the down-regulation of Fancd2 in the si-Fancd2 group compared with the siNC group (Fig. 1 E, F). Subsequently, the effect of over-expression or knock-down of Fancd2 on drug resistance in Ishikawa cells was assessed by MTT assay. The assay results demonstrated that Ishikawa cells over-expressing Fancd2 presented with a significantly increased half-maximal inhibitory concentration (IC50) under paclitaxel, cisplatin, doxorubicin, and sorafenib treatment, while knock-down of Fancd2 showed the opposite outcome (Fig. 1 G–J). These results indicated that Fancd2 was up-regulated in EC and was associated with resistance to chemotherapy. Fancd2 was up-regulated in Ishikawa/TAX cells and associated with paclitaxel resistance Fancd2 expression in Ishikawa/TAX cells was measured to further examine the effect of Fancd2 on chemoresistance in Ishikawa cells. Based on the RT-qPCR and western blot results, the expression of Fancd2 was much higher in Ishikawa/TAX cells than in Ishikawa cells (Fig. 2 A, B). Subsequently, MTT assay revealed that, compared with Ishikawa cells, the IC50 was markedly increased in Ishikawa/TAX cells after paclitaxel treatment (Fig. 2 C). The colony formation assay showed that the number of cell clones formed by Ishikawa/TAX cells increased significantly compared with Ishikawa cells (Fig. 2 D). In addition, flow cytometry further revealed that, compared with Ishikawa cells, the level of apoptosis in Ishikawa/TAX cells was significantly reduced (Fig. 2 E). These results suggested that up-regulation of Fancd2 expression was possibly associated with cellular resistance to paclitaxel. Ferroptosis was decreased in Ishikawa/TAX cells Ferroptosis is a common mode of death in cancer cells [ 16 ]. Detection of ROS level, MDA activity, GSH, and Fe 2+ levels in Ishikawa/TAX cells allowed the determination of the ferroptosis level in these cells. Compared with Ishikawa cells, Ishikawa/TAX cells showed a significant decline in ROS level, MDA activity, GSH, and Fe 2+ levels (Fig. 3 A–D). Subsequently, the protein expression levels of SLC7A11 and GPX4 in the cells were detected by western blot. The western blot outcomes showed that the protein expression levels of SLC7A11 and GPX4 in the Ishikawa/TAX group were significantly higher than those in the Ishikawa group (Fig. 3 E). The above results indicated a decrease in ferroptosis levels in paclitaxel-resistant Ishikawa cells. Knock-down of Fancd2 improved paclitaxel sensitivity by promoting ferroptosis Subsequently, Fancd2 expression was knocked down by transfection of Fancd2 siRNA into Ishikawa/TAX cells. RT-qPCR showed that Fancd2 expression levels were significantly reduced in Ishikawa/TAX cells in the si-Fancd2 group compared with the siNC group (Fig. 4 A). Additionally, MTT assay results indicated that the IC50 of cells in the si-Fancd2 group was significantly lower than that in the siNC group (Fig. 4 B). 
Furthermore, the clone formation experiment showed that compared with the siNC group, the number of cell clones in the si-Fancd2 group significantly decreased, while the number of cell clones in the Fer-1 group significantly increased; relative to the si-Fancd2 group, the number of cell clones in the si-Fancd2 + Fer-1 group significantly increased; in contrast to the Fer-1 group, the number of cell clones in the si-Fancd2 + Fer-1 group was significantly reduced (Fig. 4 C). Flow cytometry showed that compared with the siNC group, the apoptosis level in the si-Fancd2 group was significantly increased, while that in the Fer-1 group was significantly decreased. In comparison with the si-Fancd2 group, the level of apoptosis in the si-Fancd2 + Fer-1 group was significantly reduced; while compared with the Fer-1 group, the apoptosis level in the si-Fancd2 + Fer-1 group was significantly increased (Fig. 4 D). To verify the relationship of Fancd2 over-expression with cellular resistance and ferroptosis, Fancd2 was knocked down, and the ferroptosis inhibitor Fer-1 was employed to treat cells. The results showed that knock-down of Fancd2 significantly increased the levels of ROS, GSH, and Fe 2+ and the activity of MDA in Ishikawa/TAX cells compared with the siNC group (Fig. 5 A–E). Western blot results also revealed that knock-down of Fancd2 significantly reduced the protein expression levels of SLC7A11 and GPX4 in Ishikawa/TAX cells (Fig. 5 F). However, the effect of Fancd2 knock-down on ferroptosis in Ishikawa/TAX cells could be significantly inhibited after Fer-1 treatment. Collectively, knock-down of Fancd2 improved the sensitivity of Ishikawa/TAX cells to paclitaxel by inducing ferroptosis.
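The MTT results above are reported as shifts in IC50. The study does not state how the IC50 values were derived, so the four-parameter logistic fit below is only one common approach, shown with hypothetical viability data.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve (viability vs. concentration)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical paclitaxel dose-response data, for illustration only.
conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])     # e.g. nM
viability = np.array([98.0, 95.0, 80.0, 45.0, 20.0, 8.0])  # % of untreated control

params, _ = curve_fit(four_pl, conc, viability, p0=[100, 0, 10, 1], maxfev=10000)
top, bottom, ic50, hill = params
print(f"Estimated IC50 ~ {ic50:.1f} nM")
```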
Discussion Although treatment strategies for EC have gradually improved, patients with advanced EC are prone to develop drug resistance during chemotherapy, seriously affecting the therapeutic effect [ 17 ]. Several recent studies have revealed genes associated with chemosensitivity, and proposed some novel and promising approaches to address resistance [ 18 ]. In this study, Fancd2 expression was up-regulated in EC tissues, and more importantly, Fancd2 was associated with sensitivity to chemotherapeutic agents in EC. Previous studies have linked Fancd2 to susceptibility to cancer. Fancd2 is a key player in the DNA repair pathway and is important for maintaining genomic stability in response to various gene damage [ 13 ]. Recently, there are studies that up-regulation of Fancd2 expression is positively correlated with tumor size and poor prognosis in ovarian cancer, nasopharyngeal carcinoma, glioblastoma, and EC [ 19 ]. Consistent with previous reports, we observed a significant up-regulation of Fancd2 expression in EC tissues and EC cell lines (Ishikawa); and interestingly, after knock-down of Fancd2 expression in Ishikawa, the cells showed a marked decrease in resistance to the above-mentioned chemotherapeutic agents. Yao C et al. demonstrated that Fancd2 was associated with doxorubicin resistance in leukemia [ 20 ]. Dai et al. found that cisplatin resistance in drug-resistant lung cancer cells could be effectively reversed by inhibiting the gene expression level of the Fancd2/BRCA pathway [ 21 ]. In addition, the results of this study showed that Fancd2 expression was significantly increased in Ishikawa/TAX cells compared with Ishikawa cells, and knocking down Fancd2 could restore the sensitivity of Ishikawa/TAX cells to paclitaxel. Also Xiao et al. reported that curcumin reversed the multidrug resistance of multiple myeloma cells MOLP-2/R by inhibiting the Fancd2 pathway [ 22 ]. Taken together, these findings suggest that high expression of Fancd2 may be associated with drug resistance. In the course of exploring the mechanism of Fancd2 in EC, we observed that Fancd2 expression was associated with ferroptosis. Different from apoptosis, necrosis, and autophagy, ferroptosis is an intracellular iron-dependent form of cell death [ 23 ]. Briefly, ferroptosis is characterized by an imbalance in the redox state and manifested as an increase in the ROS level [ 24 ]. Friedmann A J et al. pointed out that tumor cells could significantly enhance their defense against oxidative stress by negatively regulating ferroptosis [ 25 ]. In this study, we found a significant decline in ROS level, MDA activity, GSH level and Fe 2+ level, and a marked increase in SLC7A11 and GPX4 expression in Ishikawa/TAX cells. Such results indicated a low level of ferroptosis in Ishikawa/TAX cells. After knock-down of Fancd2 expression, the above-mentioned ferroptosis-related indicators showed a significant opposite change, indicating a marked increase in ferroptosis levels. Zhang C et al. revealed that Fancd2 was a protein associated with ferroptosis, and high levels of Fancd2 significantly inhibited cellular ferroptosis levels [ 23 ]. Song et al. demonstrated that abnormal expression of Fancd2 led to ferroptosis and was associated with temozolomide resistance in glioblastoma [ 26 ]. These findings yield a conclusion that knock-down of Fancd2 in Ishikawa/TAX cells induces cellular ferroptosis and increases drug sensitivity. 
This study preliminarily demonstrated in vitro that the chemotherapy resistance conferred by Fancd2 in Ishikawa cells was closely related to a reduction in ferroptosis levels. However, this study has several limitations. First, the results were not further verified through animal experiments. Second, in our in vitro studies we used only one cell line and did not perform similar experiments in multiple cell lines, including cell lines corresponding to grade 2 (G2) and grade 3 (G3) EC. These shortcomings need to be addressed in future research.
Conclusion Fancd2 expression was significantly up-regulated in EC. Besides, Fancd2 led to chemoresistance by decreasing ferroptosis levels in Ishikawa EC cell lines. Therefore, Fancd2 may serve as a biomarker and therapeutic target for chemoresistance in EC. This study provides a new approach to address multi-drug resistance in EC cells.
Background Resistance can develop during treatment of advanced endometrial cancer (EC), leading to unsatisfactory results. Fanconi anemia complementation group D2 (Fancd2) has been shown to be closely related to drug resistance in cancer cells. Therefore, this study was designed to explore the correlation of Fancd2 with EC resistance and the underlying mechanism. Methods Real-time quantitative PCR (RT-qPCR) was used to detect the expression of Fancd2 in EC tissues and cells. EC cells (Ishikawa) and paclitaxel-resistant EC cells (Ishikawa/TAX) were transfected to knock down Fancd2. In addition, the ferroptosis inhibitor Ferrostatin-1 was used to treat Ishikawa/TAX cells. The sensitivity of cancer cells to chemotherapeutic agents was observed via the 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2-H-tetrazolium bromide (MTT) assay, and the half-maximal inhibitory concentration (IC50) was calculated. Reactive oxygen species (ROS) levels were measured by flow cytometry, the activity of malondialdehyde (MDA) and the levels of glutathione (GSH) and Fe 2+ in cells were detected by corresponding kits, and protein expression of solute carrier family 7 member 11 (SLC7A11) and glutathione peroxidase 4 (GPX4) was determined by western blot. Results Compared with the normal tissues and endometrial epithelial cells, Fancd2 expression was significantly increased in EC tissues and Ishikawa cells, respectively. After knock-down of Fancd2, Ishikawa cells showed significantly increased sensitivity to chemotherapeutic agents. Besides, compared with Ishikawa cells, the levels of ROS, the activity of MDA, and the levels of GSH and Fe 2+ were significantly decreased in Ishikawa/TAX cells, while the expression levels of SLC7A11 and GPX4 were significantly increased. Knock-down of Fancd2 significantly increased the ferroptosis levels in Ishikawa/TAX cells, but this effect could be reversed by Ferrostatin-1. Conclusion Fancd2 increases drug resistance in EC cells by inhibiting the cellular ferroptosis pathway. Supplementary Information The online version contains supplementary material available at 10.1186/s12905-023-02857-4. Keywords
Supplementary Information
Acknowledgements Not applicable. Authors’ contributions Hai-Hong Lin and Wei-Hong Zeng designed and coordinated the study. Ru Pan and Nan-Xiang Lei collected and analyzed the data. All authors contributed to the interpretations and conclusions presented. Hai-Kun Yang and Li-Shan Huang wrote the manuscript. Funding This study is supported by Meizhou City Science and Technology Plan Project (No. 2022C0301003). Availability of data and materials The authors confirm that the data supporting the findings of this research are available within the article. Declarations Ethics approval and consent to participate The study was approved by the ethical committee of Meizhou People’s Hospital, Meizhou Academy of Medical Sciences (Ethical No.: 2022-C-83) and written informed consent was acquired from all patients. All methods were carried out in accordance with relevant guidelines and regulations. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:47
BMC Womens Health. 2024 Jan 13; 24:41
oa_package/1f/c9/PMC10787983.tar.gz
PMC10787984
38218933
Introduction Postoperative knee rehabilitation is paramount to maintaining joint motion function [ 1 – 4 ]. The incidence of knee stiffness is reported to be as high as 35% with no or inappropriate rehabilitation [ 2 , 4 – 6 ] and significantly affects patient quality of life and satisfaction. Clinically, two intervention strategies can be used to guide patients in postoperative rehabilitation: continuous passive motion (CPM) and traditional physical therapy. CPM was introduced in the 1970s and relies primarily on moving mechanical clips to improve joint mobility and thereby achieve improvement [ 7 – 9 ]. CPM has a positive biological effect on tissue healing, edema, and hematoma [ 10 – 12 ]. Vasileiadis et al. [ 13 ] confirmed the role of CPM in the maturation of heterotopic ossification by performing CPM rehabilitation in a 46-year-old male patient with right deviation. Stopping the progression and maintenance of heterotopic ossification became a useful aid in increasing joint mobility. Traditional physical therapy mainly includes dynamic floor exercises, suspension, gait training, closed chain exercises, open chain exercises, and pedal exercises, and the basic idea is an active activity. At present, both rehabilitation strategies are used to guide postoperative rehabilitation, but there is high controversy in the industry regarding the clinical application of both. Therefore, it is extremely important to conduct high-quality clinical evidence-based studies to explore reasonable rehabilitation strategies after knee surgery to guide clinical practice. Previous studies have reported that the use of CPM has advantages over physical therapy, including reduced swelling, faster return of joint mobility, and reduced analgesia [ 14 ]. However, there is still a great deal of controversy about whether it is beneficial for patients' postoperative recovery in the past two decades of research [ 4 , 15 – 18 ]. Many researchers support these benefits; on the contrary, many studies show that the advantages of CPM compared to physical therapy are not as clear [ 1 , 2 ]. Hence, based on previously presented evidence from high-quality randomized controlled trials, our evidence-based study aimed to determine the effectiveness of CPM compared to physical therapy in postoperative orthopedic rehabilitation, comparing key outcomes including knee range of motion (ROM), The Western Ontario and McMaster University Osteoarthritis Index (WOMAC) pain scores, length of stay, satisfaction of patients, postoperative complications, and medical costs.
Method This systematic review and meta-analysis followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement protocol. The study was registered in the International Prospective Register of Systematic Reviews (PROSPERO) (CRD42023410252). Search strategy and eligibility criteria The PubMed, Embase, and Web of Science databases were searched. We take PubMed as an example to demonstrate the search strategy for this study (Additional file 1 : Appendix 1). We developed specific search strategies for each database, and the references of the identified studies were checked for potential eligibility. Relevant clinical studies published between January 2000 and April 2023 were retrieved. The inclusion criteria for this meta-analysis were: (1) publications comparing the two strategies (physiotherapy interventions, including active ground exercise, suspension, gait training, closed chain exercise, open chain exercise, and pedal exercise, in both the control and experimental groups, with CPM in the experimental group); (2) randomized controlled trials and clinical studies; (3) a feasible sample size and scientifically sound statistical analysis; (4) primary selection of patients after knee arthroplasty; (5) literature published in English. Using a standardized data form, we extracted several data elements from the included studies. Two investigators (JZF and WDF) independently screened the literature and extracted the data according to the inclusion criteria and the data extraction form. Any disagreements were resolved by discussion or by validation by a third investigator (XC). Data abstraction We extracted general details and categories mainly including (1) demographics, (2) study characteristics, and (3) outcome and prognostic measures. Patient statistics included gender, age, and the total number of patients. Characteristics of the trials included author, publication date, study type, and CPM or physical therapy. Outcome measures for this study included ROM (active knee flexion/extension and passive knee flexion/extension), pain, function, complications, length of hospital stay, and patient satisfaction, and were cross-checked. Prognostic indicators such as postoperative pain were evaluated using the Western Ontario and McMaster University Osteoarthritis Index (WOMAC) score and the Visual Analog Scale (VAS) score. The WOMAC Osteoarthritis Index [ 19 ] was developed by Bellamy et al. and is one of the most commonly used patient-reported prognostic instruments for patients with lower extremity osteoarthritis. The WOMAC contains 24 items covering three dimensions: pain (5 items), stiffness (2 items), and function (17 items). The WOMAC has been extensively tested for validity, reliability, feasibility, and responsiveness over time. The VAS has been used since the 1920s to measure intangible indicators such as pain, quality of life, and anxiety, and in recent years the VAS has become a very popular tool for measuring pain [ 20 ]. Risk of bias The quality of the included studies was assessed independently by two reviewers. In this regard, the Jadad scale (four items: (1) randomization, (2) concealment, (3) blinding, and (4) withdrawals or drop-outs) for RCTs and the Cochrane Risk of Bias (ROB) tool for randomized controlled trials were used to assess the methodological quality of the included studies [ 13 ]. 
Each domain was rated as being at "low risk", "high risk", or "unclear risk" of bias, and the quality of the randomized studies was then determined according to institutional health research and quality standards. Statistical analysis We used Review Manager (RevMan) version 5.4 software for the meta-analysis. Weighted mean differences (WMDs) were used to represent the results for continuous data, and 95% CIs were used for interval estimation; p < 0.05 indicated that a difference was statistically significant. Meanwhile, a heterogeneity test was performed on the included literature: when p ≥ 0.10 and I 2 ≤ 50%, there was no significant heterogeneity and the fixed-effect model was used to combine the effect sizes; if p < 0.10 and I 2 > 50%, heterogeneity among the included studies was considered large, the sources of heterogeneity were further examined, and, after excluding obvious sources of heterogeneity, the random-effects model was applied [ 21 ]. If the heterogeneity between studies is significant, subgroup or sensitivity analyses are required to clarify its source. Because trials are subject to clinical and methodological differences, in this study a subgroup analysis of the available data according to follow-up time was performed to generate the final forest plots.
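As a companion to the pooling strategy described above (inverse-variance weighting of WMDs, with the fixed- or random-effects model chosen from the heterogeneity test), the sketch below shows one standard implementation. RevMan 5.4 was used in the study itself; the DerSimonian-Laird estimator of tau^2 used here is an assumption about the random-effects model.

```python
import numpy as np

def pooled_wmd(md, se):
    """Pool per-study mean differences by inverse-variance weighting.

    md : mean difference of each study (treatment minus control)
    se : standard error of each mean difference
    Returns fixed-effect and random-effects estimates with 95% CIs,
    Cochran's Q, I^2, and the DerSimonian-Laird tau^2.
    """
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se ** 2
    fixed = np.sum(w * md) / np.sum(w)
    q = np.sum(w * (md - fixed) ** 2)                  # Cochran's Q
    df = len(md) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (se ** 2 + tau2)
    random = np.sum(w_re * md) / np.sum(w_re)

    def ci(estimate, weights):
        half = 1.96 / np.sqrt(np.sum(weights))
        return estimate - half, estimate + half

    return {"fixed": (fixed, ci(fixed, w)),
            "random": (random, ci(random, w_re)),
            "Q": q, "I2_percent": i2, "tau2": tau2}
```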
Results A total of 1025 publications was retrieved according to the search method, and a total of 6 clinical articles were screened for inclusion in the analysis based on the minimum standard. Figure 1 represents the screening process. Six direct comparisons of 557 cases of CPM after TKA as well as other physiotherapy RCT were included in this meta-analysis [ 8 , 22 – 26 ]. The baseline information for these studies is listed in Table 1 . Range of motion A comparative analysis of passive knee flexion, passive knee extension, active knee flexion, and active knee extension included in the study was mainly conducted to compare the range of knee motion at different periods. Passive knee flexion For short-term postoperative recovery, CPM produced better results in the first three days of postoperative recovery compared to physical therapy, and six studies reported long-term (3-month postoperative follow-up) results for passive knee flexion, and we analyzed the results using a random-effects model in which WMD was similar between the experimental and control groups (WMD, − 0.17; 95% CI, − 0.98 to 0.64; p =0.68). There was no clear evidence of statistically significant heterogeneity throughout the analysis ( I 2 =28%; p = 0.23) (Fig. 2 A). Passive knee extension A total of five studies with 471 patients was analyzed, and we used a random-effects model to analyze the results. There were no significant differences between the two groups at long-term follow-up (WMD, − 0.28; 95% CI, − 1.47 to − 0.92; I 2 =65%, p = 0.65) (Fig. 2 B). Length of hospitalization A total of 74 patients were included in 2 studies for analysis of length of stay; CPM generates significantly higher in length of stay (WMD, 0.50; 95% CI, − 0.31 to 0.69; I 2 =3%, p < 0.00001) (Fig. 3 ). Pain evaluation Two studies were scored by WOMAC and analyzed by taking a random-effects model (WMD, 6.75; 95% CI, − 6.75 to 8.10; p = 0.86), with a large heterogeneity between the studies' results. The experimental group scored slightly higher on the WOMAC functional difficulty score, but no significant differences were found after two weeks or on any follow-up measures (Fig. 4 A). Moreover, two studies were scored by VAS and analyzed by taking a random-effects model (WMD, 9.41; 95% CI, 3.37–5.45; p = 0.002), with a large heterogeneity between the study results. The experimental group scored slightly higher on the VAS functional difficulty score. The VAS was performed, and there was a significant difference between the experimental and control groups (Fig. 4 B). Satisfaction with treatment For most patients, their status (perceived outcomes) was "better" compared to preoperative. Patients were generally satisfied with their treatment and outcomes in both the experimental and control groups. The CPM group also did not show a significant advantage in terms of patient-perceived outcomes [ 24 ]. Cost in hospital Compared with other physical treatments [ 24 ], CPM generates significantly higher treatment costs and incurs more care costs. Risks of bias All six included RCTs were unblinded. The six RCTs were relatively well designed with a Jadad score range from 4 to 6 points, which indicated that they were of high quality. The Jadad score is summarized in Table 2 . None of the included literature mentioned allocation concealment; the methodological assessment of the quality of the included literature is shown in Fig. 5 A, as CPM requires patient consent and signed informed consent, so such studies were unblinded and highly biased in the blinded method. 
Of the seven risk-of-bias domains, one (blinding of participants and personnel; performance bias) proved to have a high risk of bias. In the graph, "+" indicates attainment and "−" indicates non-attainment. Figure 5 B shows the quality assessment of each item of the methodological assessment.
Discussion The present study found that physical therapy combined with CPM did not significantly improve postoperative functional recovery compared to physical therapy alone. There was no difference between the two in terms of time to discharge and patient satisfaction. Overall, CPM did not show an advantage in postoperative patient recovery. Rather, it was associated with increased equipment costs and costs of care. Therefore, the current findings are insufficient to support the routine use of CPM to facilitate the recovery process after arthroplasty. In addition, the heterogeneity of the included studies was significant; we therefore performed a subgroup analysis of WMD to investigate the source of heterogeneity. The association between CPM and ROM was described at different times, i.e., baseline, day 3, or when the maximum value was reached, probably because these times are highly dependent on the time and angle set by the CPM device. Nonetheless, our subgroup analysis showed that, regardless of when ROM was measured, the addition of CPM still produced the same results as physical rehabilitation. Several previous studies have confirmed that CPM improves ROM only in the initial postoperative period and has little effect on long-term postoperative recovery, which is consistent with the results of our present meta-analysis. Yang et al. [ 27 ] found that CPM use was not frequently associated with improved knee ROM and functional outcomes from hospital discharge to final follow-up. In our study, an analysis of patient satisfaction was added, as well as the conclusion that CPM generates more inpatient spending and longer hospital stays. In actual clinical practice, however, the use of CPM devices remains the standard of care for rehabilitation in many institutions [ 28 ], although the provision of CPM to patients has now been shown to be associated with insignificant long-term benefits, and the short-term therapeutic role of the procedure remains controversial [ 16 , 29 ]. The primary goal of using a CPM device is to increase short-term postoperative knee ROM, as several studies have reported short-term efficacy of CPM in improving ROM [ 30 ]. Although most studies have shown nonsignificant results for CPM, it is still heavily used, which is related to subjective patient factors as well as recovery expectations; this should be validated by including a larger sample of patients for follow-up. Lee et al. compared new CPM machines with previous conventional CPM machines to form a clinical assessment of the usefulness and effectiveness of seated CPM machines in patients undergoing total knee arthroplasty, using more objective tools such as digital inclinometers and handheld dynamometers to measure ROM [ 29 ]. We clarified that the difference in the effect of CPM and PT on patients' motor function recovery was not significant. On this basis, patient satisfaction is important [ 14 ]. Several previous studies have shown no statistical difference between the two in terms of patient satisfaction. In Gatewood et al. 
[ 31 ] by analyzing the efficacy of the means of rehabilitation after knee surgery, it was noted that CPM did not improve in terms of patient satisfaction. Wirries et al. [ 32 ] prospectively randomized the analysis of patient satisfaction with CPM after TKA through 40 patients, using the WOMAC and the Knee Social Score (KSS), to assess patient satisfaction and knee function, ultimately concluding that there was no significant difference between the both. Our findings also show that CPM does not improve patient satisfaction, possibly because CPM does not show benefit in any of the outcome indicators assessed, provides additional costs, and requires additional training for implementation [ 33 ]. In the study conducted by Joshi et al [ 32 ], two patients in the CPM group had postoperative complications. One patient was discharged with an acute quadriceps tendon tear and the other had a deep hematoma. One patient in the no-CPM group had a very deep wound dehiscence after a fall. Mau-Moeller et al. [ 26 ] systematically evaluated the effectiveness of TKA's new active sling inpatient ROM exercise program; this physical therapy was easy to perform during hospitalization and was less expensive than CPM treatment. Musa Eymir et al. [ 34 ] held that AHSE (active heel gliding exercise) therapy provides more practical rehabilitation and leads to beneficial outcomes for patients with TKA. Therefore, their active exercise approach that encourages patients to participate in rehabilitation should be the first choice for acute postoperative rehabilitation after TKA rather than CPM. Postoperative knee rehabilitation is essential to maintain joint motor function and significantly affects the quality of life and satisfaction of patients. This study describes in detail the clinical applicability of CPM and PT through meta-analysis, which is of great significance for the selection of rehabilitation exercises and the development of the next rehabilitation program for patients in clinical practice. Meanwhile, our meta-analysis also has some limitations. Firstly, CPM protocols and follow-up periods were inconsistent across all studies, which may lead to the possibility of bias. The long-term impact of CPM should be further assessed. Furthermore, due to the nature of the CPM equipment, it was not possible to blind the subjects to CPM grouping. In addition, some patients had received TKA before this study and therefore knew that the use of CPM devices as standard, could lead to effects that could have uncontrolled patient implications. Therefore, an assessment of the risk of bias revealed a generally high risk of bias in allocation concealment (selection bias) and participant blindness (performance bias). In the case of CPM application, however, these situations are unavoidable. These inconsistent results may be due to inappropriate matching of the CPM machine to the patient as well as measurement errors in ROM between studies.
Conclusion Physical therapy combined with CPM did not significantly improve postoperative functional recovery relative to physical therapy alone. There was no difference between the two in terms of time to hospital discharge or patient satisfaction. Overall, CPM did not show superior benefits for postoperative patient recovery. On the contrary, it was associated with increased equipment costs and care expenses. Therefore, the results of the current study are insufficient to support the routine use of CPM to facilitate the recovery process after arthroplasty. We believe that, as CPM is used more widely in orthopedics, further optimization of outcome measurement and device innovation is needed for additional evaluation.
Background Continuous passive motion (CPM) is commonly used, along with physical therapy, as a postoperative treatment for knee rehabilitation. However, the comparative efficacy of the two in recovery after knee replacement is unclear. Purpose To compare the efficacy and safety of physical therapy combined with CPM versus physical therapy alone in postoperative rehabilitation after knee arthroplasty. Methods The PubMed, Embase, and Web of Science databases were used to retrieve clinical studies on the efficacy of CPM compared with physical therapy. Review Manager software was used for publication bias assessment and data analysis based on the inclusion criteria. Results A total of 6 articles covering 557 patients were included in the study. In terms of range of motion (ROM), passive knee flexion was similar between CPM and physical therapy (PT) (WMD, − 0.17; 95% CI, − 0.98 to 0.64; p = 0.68). At long-term follow-up, passive knee extension was similar between CPM and PT (WMD, − 0.28; 95% CI, − 1.47 to − 0.92; I 2 = 65%, p = 0.65). In addition, CPM resulted in a significantly longer length of stay (WMD, 0.50; 95% CI, − 0.31 to 0.69; I 2 = 3%, p < 0.001). CPM also generated significantly higher treatment costs and incurred more care costs relative to physical therapy. Conclusion Compared with PT alone, PT combined with CPM failed to significantly improve knee ROM or patient satisfaction. In addition, CPM treatment significantly increased the cost of hospitalization. Supplementary Information The online version contains supplementary material available at 10.1186/s13018-024-04536-y. Keywords
Supplementary Information
Abbreviations CPM: Continuous passive motion; PT: Physical therapy; ROM: Range of motion Acknowledgements We thank DaoFeng Wang for his outstanding statistical analysis guidance. Author contributions LWH performed conceptualization; JZF collected the data; JZF and ZWP analyzed the data; JZF wrote the original manuscript; ZWP and XC checked the language; GWL contributed to review and editing. All authors read the final manuscript and approved the publication. Funding This work was supported by the special fund of the National Clinical Research Center for Orthopedics, Sports Medicine and Rehabilitation. Availability of data and materials Relevant data can be made available by contacting the corresponding author. Declarations Ethics approval and consent to participate There are no ethical/legal conflicts involved in the article. Consent for publication All authors have read and approved the content, and agree to submit it for consideration for publication in the journal. Competing interests Each author certifies that there is no conflict of interest relevant to this article.
CC BY
no
2024-01-15 23:43:47
J Orthop Surg Res. 2024 Jan 13; 19:68
oa_package/38/2a/PMC10787984.tar.gz
PMC10787985
38218878
Introduction Lung malignancy stands as the foremost contributor to sickness and demise linked to neoplasms. Non–small-cell lung cancer (NSCLC) is the most common variant and exhibits a disheartening outlook; this is primarily due to the frequent occurrence of locally advanced or disseminated metastasis in the majority of patients upon initial diagnosis or after surgical intervention [ 1 ]. Classical chemotherapy exhibits restricted efficacy in the management of NSCLC, with a wide range of overall response rates varying from 6.7 to 10.8%, and a meager 5-year survival rate ranging from 7 to 14% [ 2 ]. However, there have been significant changes in the treatment landscape for NSCLC in recent years primarily as a result of the introduction of immunotherapy [ 3 ]. The field of immunotherapy has recently brought about a revolutionary shift in the treatment of NSCLC across diverse scenarios, thus playing a vital role in augmenting the well-being of these individuals [ 4 ]. Numerous clinical studies have consistently demonstrated the effectiveness of immune-checkpoint inhibitors (ICIs) in the treatment of diverse conditions. Evidence has corroborated the efficacy of anti-programmed death 1 (PD-1) antibodies, anti–PD-1 ligand (PD-L1) antibodies, and anti–cytotoxic T-lymphocyte–associated protein 4 (CTLA-4) antibodies [ 5 ]. Nonetheless, most patients with NSCLC do not note substantial advantages solely from immunotherapy [ 6 ]; therefore, it is essential to investigate the potential of combination therapies to enhance the efficacy of immunotherapy. Anlotinib, a multi-targeted anti-angiogenic agent, is a small-molecule compound that has been shown to have inhibitory effects on both tumor cells and angiogenesis [ 7 ]. New findings from recent scientific research have provided strong evidence suggesting that the combination of anlotinib and PD-1 inhibitors could boost results for individuals with advanced lung cancer, specifically improving both progression-free survival (PFS) and overall survival (OS) [ 8 ]. Despite positive results observed in patients with NSCLC with negative driver mutations, there are still some individuals who do not benefit from this treatment. The precise factors contributing to this lack of response have not yet been elucidated [ 9 , 10 ]. Lipids have a vital function as structural constituents of cellular membranes and as secondary messengers within cells. Emerging data have progressively underscored the noteworthy involvement of lipids in the development of diverse forms of cancer, such as lung cancer [ 11 – 13 ]. Long-chain fatty acyl-CoA synthetases (ACSLs), which have been discovered to have the potential to promote the upregulation of lipids, are recognized for their significant role in breast and colorectal cancer and their possible oncogenic properties. However, interestingly, they also exhibit potential tumor-suppressor properties in lung cancer [ 14 ]. Increased levels of lipids, including phospholipids, neutral lipids, and triglycerides, have been observed in lung cancer [ 15 ]. Furthermore, alterations in sphingolipid metabolism have also been identified in lung cancer. The presence of sphingosine kinase 2 (SPHK2) has been linked to unfavorable survival outcomes in NSCLC as well as resistance to gefitinib EGFR TKI therapy [ 16 ]. In general, the reprogramming of lipid metabolism has become a crucial contributor to the advancement and progression of lung cancer. 
As described in this article, our research revealed that patients with advanced NSCLC lacking driver mutations displayed distinct responses upon receiving combination therapy with a PD-1 inhibitor and anlotinib. Further, a comparative analysis of lipid composition was conducted in patients who underwent treatment with anlotinib in conjunction with PD-1/PD-L1 inhibitors. The findings demonstrated that, in the group showing partial response (PR), there were no notable alterations in lipids between before and after treatment. However, in the group with stable disease (SD), only one phosphatidylglycerol (PG) and three phosphatidylinositols (PIs) exhibited a significant increase after therapy. Conversely, among patients with progressive disease (PD), there was a substantial upregulation of two PGs and 17 PIs. These results suggest that maintaining a well-balanced lipid profile is crucial for effectively treating patients with advanced NSCLC lacking driver mutations with the combination of anlotinib and PD-1/PD-L1 inhibitors. Notably, an elevation in PG and, in particular, PI levels after treatment may be associated with an unfavorable treatment response. By broadening the comprehension of lipid metabolism in lung cancer, this investigation enhances the understanding of potential therapeutic strategies and facilitates the discovery of novel therapeutic biomarkers.
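The pre- versus post-treatment lipid comparisons summarized above can be framed as one paired test per lipid species followed by multiple-testing correction. The statistical procedure actually used is not described in this excerpt, so the paired t-test and Benjamini-Hochberg adjustment below are assumptions, and all names are placeholders.

```python
import numpy as np
from scipy import stats

def paired_lipid_changes(pre, post, lipid_names):
    """Per-lipid paired comparison of intensities before vs. after therapy.

    pre, post : arrays of shape (n_patients, n_lipids)
    Returns {lipid: (log2 fold change, p value, BH-adjusted p value)}.
    """
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    log2fc = np.log2(post.mean(axis=0) / pre.mean(axis=0))
    pvals = np.array([stats.ttest_rel(post[:, j], pre[:, j]).pvalue
                      for j in range(pre.shape[1])])

    # Benjamini-Hochberg adjustment
    order = np.argsort(pvals)
    scaled = pvals[order] * len(pvals) / (np.arange(len(pvals)) + 1)
    monotone = np.minimum.accumulate(scaled[::-1])[::-1]
    adj = np.empty_like(monotone)
    adj[order] = np.clip(monotone, 0, 1)

    return {name: (fc, p, q)
            for name, fc, p, q in zip(lipid_names, log2fc, pvals, adj)}
```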
Materials and methods Participants Clinical records were collected from patients with advanced NSCLC with negative driver mutations at Hunan Cancer Hospital between July 2018 and March 2022. For patients with adenocarcinoma, it was recommended to use tissue samples for next-generation sequencing (NGS). Patients without EGFR/ALK/ROS-1 driver mutations were included in the analysis. However, according to the 2022 Chinese Society of Clinical Oncology (CSCO) Guidelines for the Diagnosis and Treatment of Non-Small Cell Lung Cancer, genetic testing is not recommended for patients with advanced squamous cell carcinoma because of the extremely low EGFR/ALK/ROS-1 mutation rate. The guidelines suggest that patients with lung squamous cell carcinoma can be regarded as driver-mutation negative and should receive conventional anti-tumor therapy. In this study, 7 of the 15 recruited patients with lung squamous cell carcinoma voluntarily underwent genetic testing, and all tested negative for EGFR/ALK/ROS-1 mutations, further supporting the low mutation rate of these genes in lung squamous cell carcinoma. Therefore, the remaining 8 patients with advanced lung squamous cell carcinoma who did not undergo genetic testing were also considered driver-mutation negative based on the CSCO guidelines. The 30 enrolled participants underwent a treatment regimen consisting of chemotherapy in conjunction with ICIs, either as their initial or a subsequent therapeutic approach. Subsequently, when resistance to chemotherapy and ICIs emerged, the patients were given ICIs in combination with anlotinib. Blood samples were taken from enrolled patients before and after treatment involving ICIs and anlotinib. The clinical stage of each patient was determined using the eighth edition of the TNM classification. Prior to the administration of ICIs and anlotinib, the Eastern Cooperative Oncology Group (ECOG) guideline was used to evaluate performance status. To qualify for inclusion in the present investigation, individuals needed to: (i) exhibit an ECOG performance status ranging from 0 to 1 as well as a histologically confirmed NSCLC of clinical stage IIIb–IIIc or IV, (ii) have completed at least two courses of ICI plus anlotinib therapy, (iii) have evaluable disease, and (iv) not have any organ dysfunction. Participants with severe autoimmune diseases or those requiring systemic treatment with corticosteroids or other immunosuppressive medications were excluded from the study. The ethical approval document for the study (no. SBQLL-2021-092) was granted by the Hunan Cancer Hospital. All of the enrolled individuals provided informed consent by signing consent forms prior to their participation in the experiment. Therapy Patients with advanced NSCLC who experienced progression after receiving at least one round of chemotherapy as well as ICIs were subjected to a re-challenge involving the combination of ICIs and anlotinib. For a total of 14 days, patients were administered anlotinib orally at a dosage of 12 mg per day, followed by a one-week pause in the regimen. In each 21-day cycle, patients were administered a PD-1 inhibitor via intravenous injection on the first day, such as toripalimab (240 mg), camrelizumab (200 mg), sintilimab (200 mg), or pembrolizumab (200 mg).
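To make the dosing schedule just described concrete, the following minimal Python sketch encodes one 21-day cycle as stated in the text (oral anlotinib 12 mg daily on days 1 to 14 followed by a 7-day pause, and a PD-1 inhibitor infused on day 1). It is illustrative only and not part of the study protocol; the function and variable names are hypothetical.

# Illustrative sketch of the 21-day combination cycle described above.
# Cycle length, doses, and the 14-days-on / 7-days-off anlotinib pattern
# come from the text; function and variable names are hypothetical.

CYCLE_DAYS = 21
ANLOTINIB_DAYS_ON = 14          # oral anlotinib, 12 mg/day
PD1_INFUSION_DAY = 1            # e.g., sintilimab 200 mg IV on day 1

def day_plan(day_in_cycle):
    """Return the planned medication for a given day (1-21) of one cycle."""
    if not 1 <= day_in_cycle <= CYCLE_DAYS:
        raise ValueError("day_in_cycle must be between 1 and 21")
    return {
        "anlotinib_12mg_po": day_in_cycle <= ANLOTINIB_DAYS_ON,
        "pd1_inhibitor_iv": day_in_cycle == PD1_INFUSION_DAY,
    }

if __name__ == "__main__":
    for d in (1, 14, 15, 21):
        print(d, day_plan(d))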
The treatment was continued until any of the following conditions occurred: progressive disease or death, patient refusal, unacceptable toxicity, pregnancy, or treatment withdrawal for any other reason. The response was evaluated based on the Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1, using enhanced computed tomography (CT) scans recorded at two-month intervals [ 17 ]. Lipidomic analysis A revised approach inspired by the methodology outlined in the study by Xuan et al. was used to analyze the lipid samples [ 18 ]. In brief, venous blood was collected into tubes containing heparin to prevent coagulation. The blood was then centrifuged at 2000 g for 15 min at 4 °C to obtain the serum. Next, the serum was combined with 80 μL of methanol and 400 μL of MTBE (methyl tert-butyl ether), along with lipid standards. This mixture was vigorously vortexed for 30 s and then subjected to centrifugation to separate the upper phase. The separated phases were carefully collected and dried using vacuum evaporation. Finally, the desiccated samples were reconstituted in 100 μL of a 1:1 (v/v) blend of methanol and methylene chloride. For lipid analysis, a mass spectrometer (QTRAP 6500; Danaher Corporation, Toronto, Canada) coupled with a Shimadzu LC-30A (Shimadzu, Japan) system was used. To separate the lipid components, an ACQUITY UPLC® BEH C18 column (2.1 × 100 mm, 1.7 μm; Waters Corp., Milford, MA, USA) was employed. To ensure optimal chromatographic performance, the following conditions were established: the column oven temperature was maintained at 55 °C, the flow rate was set at 0.26 mL/min, and the injection volume was 5 μL. The mobile phase was composed of two solutions: solution A (a mixture of H2O and acetonitrile in a 40:60 ratio, v/v, containing 10 mM ammonium acetate) and solution B (a mixture of acetonitrile and isopropanol in a 10:90 ratio, v/v, containing 10 mM ammonium acetate). An elution gradient of solutions A and B was employed. The mobile-phase composition during the different time intervals was as follows: from 0 to 1.5 min, 68% solution A and 32% solution B; from 1.5 to 15.5 min, 15% solution A and 85% solution B; from 15.5 to 15.6 min, 3% solution A and 97% solution B; from 15.6 to 18 min, the same proportions as in the previous interval; from 18 to 18.1 min, the mobile phase reverted to 68% solution A and 32% solution B; and, from 18.1 to 20 min, these proportions were maintained. For electrospray ionization, specific parameters were set. The curtain gas pressure was maintained at 20 psi, while the atomizing gas pressure was set at 60 psi. The ion source voltage was alternated between −4500 and 5500 V, depending on the specific condition. The ion source temperature was carefully controlled and maintained at a constant 600 °C. In addition, an auxiliary gas pressure of 60 psi was applied. Multiple reaction monitoring was employed to monitor the reactions and obtain precise data.
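The mobile-phase program described above can be summarized as a timetable of (time, %B) points. The short Python sketch below encodes that timetable and interpolates the composition at an arbitrary time point. Whether each segment was run as a linear ramp or a step change is not stated in the text, so linear interpolation is an assumption made only for this illustration, and the function names are hypothetical.

# Illustrative encoding of the mobile-phase program described above.
# Linear interpolation between the listed time points is an assumption.

GRADIENT = [  # (time_min, percent_B)
    (0.0, 32), (1.5, 32), (15.5, 85), (15.6, 97),
    (18.0, 97), (18.1, 32), (20.0, 32),
]

def percent_b(t):
    """Percent of solution B at time t (minutes), by linear interpolation."""
    if t <= GRADIENT[0][0]:
        return GRADIENT[0][1]
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    return GRADIENT[-1][1]

if __name__ == "__main__":
    for t in (0, 1.5, 8.5, 15.6, 18.0, 19.0):
        print(f"{t:5.1f} min -> {percent_b(t):5.1f}% B, {100 - percent_b(t):5.1f}% A")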
Moreover, quality control samples were incorporated to ensure the accuracy and reliability of the liquid chromatography-mass spectrometry examination. These quality control samples were created by pooling samples under the same test conditions, and one was analyzed after every third sample in order to evaluate the overall quality of the data. Statistical analysis The abundance of lipids in this study was determined from the measured peak areas. The obtained data were then processed and normalized using the website https://www.metaboanalyst.ca/ . This online platform is primarily focused on processing raw spectra, conducting general statistical analysis, and performing functional analysis [ 19 , 20 ]. To evaluate the maximum covariance between lipidomic samples collected before and after treatment with a PD-1 inhibitor and anlotinib, partial least squares discriminant analysis (PLS-DA) was conducted. Correlations between lipid molecules were examined using correlation heatmaps. Data are presented as the mean ± standard error of the mean. A paired two-tailed Student's t test was carried out to compare the samples obtained before treatment with those obtained after treatment with anlotinib and a PD-1 inhibitor. P < 0.05 was considered statistically significant.
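As an illustration of this statistical workflow (PLS-DA on pre- versus post-treatment profiles and a paired two-tailed t test per lipid), the following Python sketch runs on synthetic data. The study itself used the MetaboAnalyst web platform, so the library choices here (scikit-learn, SciPy), the log transformation used as a stand-in for normalization, the group size, and all variable names are assumptions of this sketch rather than the study's actual code.

# Minimal sketch of PLS-DA plus per-lipid paired t tests on synthetic data.
# Library choices and the log-transform normalization are assumptions.

import numpy as np
from scipy.stats import ttest_rel
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_patients, n_lipids = 17, 460                 # e.g., the SD group size and lipid count
pre = rng.lognormal(mean=10, sigma=0.3, size=(n_patients, n_lipids))
post = pre * rng.lognormal(mean=0.05, sigma=0.2, size=(n_patients, n_lipids))

# PLS-DA: stack both conditions and code the class as 0 (pre) / 1 (post).
X = np.log(np.vstack([pre, post]))
y = np.r_[np.zeros(n_patients), np.ones(n_patients)]
scores = PLSRegression(n_components=2).fit(X, y).transform(X)

# Paired two-tailed t test for each lipid (pre vs. post in the same patient).
t_stat, p_val = ttest_rel(np.log(pre), np.log(post), axis=0)
print("lipids with P < 0.05:", int((p_val < 0.05).sum()))
print("first two PLS-DA scores of sample 0:", scores[0])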
Results Demographic information and therapeutic effect among patients A total of 30 advanced NSCLC patients lacking driver mutations were enrolled in this study. Tables 1 and 2 present the demographic details of these patients. As outlined in the methods section, the patients received the specified treatment regimen. Based on the combined effects of anlotinib and PD-1/PD-L1 inhibitors, patients were stratified into three groups. The first group ( n = 6) showed partial remission of the tumor after therapy (PR), the second group ( n = 17) had a tumor that remained stable after therapy (SD), and the third group ( n = 7) experienced tumor progression after therapy (PD). A lipid composition analysis was conducted on patients exhibiting advanced NSCLC who underwent a combination treatment involving PD-1/PD-L1 inhibitors and anlotinib We conducted an analysis of the lipid profiles to investigate the factors contributing to the varying treatment outcomes among patients. The lipid components of all three of the patient groups before and after therapy were examined. Through lipidomic analysis, a total of 460 lipids were identified, which can be classified into 18 subclasses. The different categories encompass various subclasses of lipids—namely, phosphatidylethanolamine, phosphatidic acid, phosphatidylcholine, PI, PG, phosphatidylserine, lysophosphatidylethanolamine, lysophosphatidic acid, lysophosphatidylcholines, lysophosphatidylinositol, lysophosphatidylglycerol, triacylglycerol, diacylglycerol, cholesteryl ester, fatty acid, ceramide, hexosylceramide, and sphingomyelin. The MetaboAnalyst R software package for conducting PLS-DA was employed to deploy a statistical analysis method relying on multivariate techniques in order to enhance the differentiation and detect unique metabolites among various groups. This analysis yielded evident disparities in the patients’ lipidomic profiles before (pink) and after (green) therapy, indicating a noticeable distinction among individuals from the PR, SD, and PD groups (Fig. 1 A, C and E). In addition, a Pearson correlation analysis to evaluate the resemblance among various types of lipids within three distinct groups was performed (Fig. 1 B, D and F). An examination of the patients’ lipidomic makeup across the three groups was conducted using lipid volume measurements. The findings revealed that there were no significant changes in lipids among the PR and SD groups between before and after treatment (Figs. 2 and 3 ). However, in the PD group, there was a notable increase in both PG and PI contents after treatment (Fig. 4 ). These results indicated that the maintenance of lipid equilibrium holds significant importance in the impressive efficacy of the conjunction of anlotinib and PD-1/PD-L1 inhibitors in treating advanced NSCLC. In addition, an imbalance in particular lipids, such as PG and PI, following treatment could suggest undesirable consequences. To investigate the lipid components closely associated with the therapeutic effect of PD-1/PD-L1 inhibitors in combination with anlotinib Specific constituents of PG and PI were systematically examined and analyzed in order to gain a deeper understanding of the lipid changes. According to the outcomes of this study, it was observed that the composition of PG exhibited a consistent upward trend. However, there were no significant changes observed in individuals of the PR group before and after treatment. 
Similarly, in the SD group consisting of patients with advanced NSCLC, the only notable change detected post-treatment was a significant up-regulation in the levels of PG36:1. In contrast, the PD group exhibited a substantial increase not only in PG36:1 but also in PG36:0 after treatment. Moreover, the observations revealed an overall upward trend in most PG components within the PD group following treatment, despite the lack of statistical significance in the observed differences (Fig. 5 ). After assessing the distinct elements of PI, it was observed that the composition of PI exhibited an increasing trend. However, there were no significant changes observed in individuals with advanced NSCLC from the PR group who were administered PD-1/PD-L1 inhibitors concurrently with anlotinib. A noteworthy upsurge in PI38:0, PI40:2, and PI44:4 concentrations was exhibited in advanced NSCLC patients in the SD group after therapy. Interestingly, we observed a significant change in PI levels following treatment in patients with advanced NSCLC in the PD group. After treatment, a significant up-regulation was observed in more than half of the PIs, including PI 34:0, PI 34:1, PI 34:2, PI 34:3, PI 36:0, PI 36:1, PI 36:2, PI 38:0, PI 38:1, PI 38:2, PI 38:3, PI 38:4, PI 38:6, PI 40:2, PI 40:3, PI 40:4, PI 40:5, and PI 40:6 (Fig. 6 ).
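The lipid annotations used throughout these results (e.g., PI 38:0, PG 36:1) encode the subclass abbreviation followed by the total acyl-chain composition, so grouping lipids into the 18 subclasses mentioned above amounts to collecting the abbreviation prefix. The short Python sketch below is purely illustrative; the example names and the helper function are hypothetical and do not reproduce the study's actual annotation list.

# Illustrative tally of annotated lipids into subclasses by abbreviation prefix.
from collections import Counter

def subclass(lipid_name):
    """Return the subclass abbreviation preceding the chain annotation."""
    return lipid_name.split()[0]

lipids = ["PI 38:4", "PI 40:2", "PG 36:1", "LPC 18:0", "TG 52:2", "SM 34:1"]
counts = Counter(subclass(name) for name in lipids)
print(counts)   # Counter({'PI': 2, 'PG': 1, 'LPC': 1, 'TG': 1, 'SM': 1})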
Discussion For individuals diagnosed with advanced NSCLC lacking driver mutations, the administration of anlotinib in conjunction with PD-1/PD-L1 inhibitors is considered a viable alternative treatment choice for later stages of the disease, and it has demonstrated noteworthy therapeutic efficacy [ 21 , 22 ]. However, there are still patients who do not benefit from this treatment approach. Therefore, a major challenge in clinical practice is to understand why individuals respond differently to drugs. One effective strategy to solve this problem is to identify and exploit potential targets that are triggered by and downstream of cancer-causing signaling pathways. Previous investigations have shown a significant increase in the de novo synthesis of endogenous lipids in numerous cancerous cells [ 23 – 25 ]. Studies have demonstrated that serum metabolomic profiling can reveal metabolic alterations associated with lung cancer, including amino acids, organic acids, and nitrogen compounds [ 26 , 27 ]. Additionally, lipid and lipid-like molecules have been identified as potential biomarkers for NSCLC. Lipids, as crucial components of cell membranes, undoubtedly influence the activity of proteins on the membrane [ 28 , 29 ]. Membrane proteins such as the epidermal growth factor receptor and tumor necrosis factor receptors play critical roles in tumor signaling pathways [ 30 ]. In this particular study, noteworthy disparities in lipid composition among individuals with advanced NSCLC who received a combination treatment of anlotinib and PD-1/PD-L1 inhibitors were observed. The objective of this investigation was to identify, from a lipidomics perspective, lipid components that may be associated with the therapeutic efficacy of the combination of anlotinib and PD-1/PD-L1 inhibitors in advanced NSCLC. The results derived from this research provide valuable insights into potential innovative therapeutic strategies. During this study, the lipid compositions of individuals in the PR, PD, and SD groups were analyzed, revealing remarkable variations in lipid composition across these groups. Further analysis identified 19 differential lipids, including two PGs and 17 PIs. PG and PI are two important classes of glycerophospholipids with diverse roles in cell signaling and lipid–protein interactions [ 31 ]. These molecules can be potential targets for novel drug development in the fight against cancer [ 32 , 33 ]. As a crucial structural lipid, PG acts as a precursor of cardiolipin, which is primarily found in mitochondrial membranes, and it plays a key role in mitochondrial functionality and membrane integrity [ 34 , 35 ]. Studies have shown that elevated levels of PG are present in renal cell and hepatocellular carcinomas [ 36 , 37 ]. During the investigation, a notable rise in the levels of two PGs was observed among patients with advanced NSCLC belonging to the PD group who were administered PD-1/PD-L1 inhibitors alongside anlotinib. Furthermore, one PG also exhibited a significant increase in patients with advanced NSCLC in the SD group. However, no PGs showed significant differences in patients with advanced NSCLC in the PR group. These findings suggest that the abnormal accumulation of PG after treatment could result in irreversible respiratory injury and hinder the use of alternative energy sources to glucose, ultimately leading to tumor progression [ 34 ].
PIs make up only a small portion of the phospholipid content found in cells, yet they exert a pivotal influence on the progress and development of cancer [ 38 ]. The findings in this paper indicated that more than half of the PIs showed a significant increase among individuals with advanced NSCLC in the PD group after treatment. However, only three PIs showed a significant increase in advanced NSCLC patients of the SD group after treatment, and no PIs showed significant changes in patients with advanced NSCLC in the PR group. Prior investigations established that PIs have the ability to function as building blocks for the creation of phosphatidylinositol 4,5-bisphosphate as well as phosphatidylinositol 3,4,5-trisphosphate. These substances are known to be essential in the PI3K-AKT pathway, which regulates cell survival, proliferation, invasion, and growth [ 39 ]. The observations propose that an abnormal elevation in PI concentrations following therapy could impede the advantageous effects of anlotinib combined with PD-1/PD-L1 inhibitors for patients. To summarize, adopting a lipidomics approach, an investigation was performed that aimed at analyzing the elements that contribute to the divergent response of patients with advanced NSCLC harboring negative driver mutations when subjected to a combined therapeutic regimen of anlotinib and PD-1/PD-L1 inhibitors. Based on the results, we propose a possible mechanism by which abnormal elevations of PG and PI hinder the beneficial effects of anlotinib combined with PD-1/PD-L1 inhibitors for patients (Supplemental Fig. 1 ). We observed a positive correlation between PG and PI in each response group (PR, SD, and PD) during the analysis of Pearson correlation. This suggests that an increase in PG content corresponds to an increase in PI content, and vice versa. We hypothesize that the elevation of PG/PI levels could activate the PI3K-AKT pathway. This is because PI can serve as a substrate for PIP2, which is phosphorylated by PI3K. Activation of the PI3K-AKT pathway promotes tumor growth, which may explain the observed tumor progression in patients in the PD group. In these patients, the levels of PG/PI were abnormally elevated after treatment with anlotinib and a PD-1 inhibitor. However, we did not detect the activity of proteins involved in the PI3K-AKT signaling pathway. This limitation prevents us from fully supporting the proposed working model of our investigation. To validate our hypothesis, further investigations should be conducted to determine phosphoinositide 3-kinase kinase activity, as well as the concentrations of phosphoinositide (4,5) bisphosphate and phosphoinositide (3,4,5) trisphosphate. The findings provide valuable insights into lipid metabolism in advanced NSCLC, which not only offers potential for novel therapeutic approaches but also aids in the identification of new therapeutic biomarkers. In addition, lipid metabolism status has the potential to function as a prognostic indicator for determining the eligibility of patients with advanced NSCLC, who may gain advantages from the combination of anlotinib and PD-1/PD-L1 inhibitors. Study strengths and limitations The main advantage of this research is to identify potential factors that may influence the therapeutic outcomes observed in advanced NSCLC patients with negative driver mutations when treated with a combination of anlotinib and PD-1/PD-L1 inhibitors. 
However, the investigation solely focused on changes in lipids, and the specific mechanisms through which these lipids affect advanced NSCLC treatment are still unclear. The enrolled patients included not only those with lung adenocarcinoma but also those with lung squamous cell carcinoma. It is worth mentioning that the sample size in the PD group was relatively small. Therefore, further evidence is required to comprehensively examine and understand these mechanisms.
Conclusions and clinical perspective The administration of anlotinib along with PD-1/PD-L1 inhibitors has demonstrated promise as a viable tactic for managing individuals with advanced NSCLC lacking driver mutations. The plausible rationale behind this lies in the capacity of the therapy to induce modifications in the lipidomics of patients with advanced NSCLC. Within this study, novel insights have been unveiled regarding the correlation between the combination of anlotinib with PD-1/PD-L1 inhibitors and distinct lipids in advanced NSCLC patients. Such discoveries propose that directing interventions toward these lipid modifications could present a hopeful and encouraging methodology for managing patients with advanced NSCLC.
Background Studies have shown that integrating anlotinib with programmed death 1 (PD-1)/programmed death-ligand 1 (PD-L1) inhibitors enhances survival rates among progressive non–small-cell lung cancer (NSCLC) patients lacking driver mutations. However, not all individuals experience clinical benefits from this therapy. As a result, it is critical to investigate the factors that contribute to the inconsistent response of patients. Recent investigations have emphasized the importance of lipid metabolic reprogramming in the development and progression of NSCLC. Methods The objective of this investigation was to examine the correlation between lipid variations and observed treatment outcomes in advanced NSCLC patients who were administered PD-1/PD-L1 inhibitors alongside anlotinib. A cohort composed of 30 individuals diagnosed with advanced NSCLC without any driver mutations was divided into three distinct groups based on the clinical response to the combination treatment, namely, a group exhibiting partial responses, a group manifesting progressive disease, and a group demonstrating stable disease. The lipid composition of patients in these groups was assessed both before and after treatment. Results Significant differences in lipid composition among the three groups were observed. Further analysis revealed 19 differential lipids, including 2 phosphatidylglycerols and 17 phosphoinositides. Conclusion This preliminary study aimed to explore the specific impact of anlotinib in combination with PD-1/PD-L1 inhibitors on lipid metabolism in patients with advanced NSCLC. By investigating the effects of using both anlotinib and PD-1/PD-L1 inhibitors, this study enhances our understanding of lipid metabolism in lung cancer treatment. The findings from this research provide valuable insights into potential therapeutic approaches and the identification of new therapeutic biomarkers. Supplementary Information The online version contains supplementary material available at 10.1186/s12944-023-01960-7. Keywords
Supplementary Information
Acknowledgements We would like to thank the Changsha SmallAnt Biotechnology Co., Ltd for providing the technical support. Authors' contributions LL and YT conceived and designed the experiments. LL, SZ, HYY performed the experiments. LL, CHZ and YX analyzed the data. LL, NY and YT wrote the article. The author(s) read and approved the final manuscript. Funding This work was supported by the science and technology innovation Program of Hunan Province (2021SK51101, 2021SK51103). Availability of data and materials All data in this study can be obtained from the corresponding author up on request. Declarations Ethics approval and consent to participate This work was approved by the ethics committee of Hunan Cancer Hospital (ethics: no. SBQLL-2021-092) All recruited subjects provided informed consent by signing consent forms prior to participating in the experiment. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:47
Lipids Health Dis. 2024 Jan 13; 23:16
oa_package/62/81/PMC10787985.tar.gz
PMC10787986
38218888
Background Medicinal plants refer to plants that have been recognized and utilized by people to treat human and animal diseases. Medicinal plants play a role in protecting the lives and health of ethnic groups living in the remote areas of developing countries [ 1 , 2 ]. Some of these practices have also been applied in developed areas [ 3 , 4 ]. At least 80% of developing countries rely mainly on local traditional medicine to prevent and treat various diseases in humans and animals [ 5 ]. Medicinal plants are an important basis for the emergence and development of Chinese medicine [ 6 – 8 ]. Ethnoveterinary medicines are generally defined as those used on the basis of folk expertise, beliefs, knowledge, practices, and methods related to animal health and to the cure of various ailments in ethnic group areas [ 9 ]. Ethnoveterinary medicine is not only an important part of traditional medicine but also an indispensable part of local animal health and the most basic veterinary services [ 10 , 11 ]. Ethnoveterinary medicine plants (EMPs) are the plants used to prevent and control animal diseases, especially in remote and undeveloped areas where access to medical care is limited or missing. EMPs have a long history of practice, especially in countries with more developed animal husbandry [ 12 – 17 ]. Over the last ten years, the topic of ethnoveterinary medicine has attracted great interest among researchers in China, as shown by many surveys of ethnoveterinary practices and EMPs [ 18 , 19 ]. Traditional low-cost methods for treating animal diseases, rather than synthetic drugs, are often desired. Yunnan is the ancestral home of the Bai people [ 20 ]. With a population of 2.09 million, the Bai ethnic group is the 15th largest in China [ 21 ], mainly distributed in Yunnan, Guizhou, Hunan, and other provinces [ 22 ]. Most of the Bai people in China reside in the Dali Bai Autonomous Prefecture of Yunnan Province. The Dali Prefecture is the place of origin and the main settlement area of the Bai people. The Bai nationality has its own language, which belongs to the Bai branch of the Tibetan-Burmese family of the Sino-Tibetan language family [ 20 ]. Based on these regional and national characteristics, bilingual and bicultural education in Bai and Chinese is provided for Bai students. Buddhism and “Benzhu” worship constitute an important part of the Bai religious culture. “Benzhu” worship is a unique religious belief of the Bai nationality; the “Benzhu” is generally a hero in local myths and legends, and the Bai people regard “Benzhu” as the local protection god [ 23 ]. Bai medicine has a long history; archaeological evidence indicates that it has been practiced since the Ming Dynasty. Bai ancestors generally used local herbs or traditional Chinese medicines to treat diseases [ 24 ]. Bai medicine is an accumulation and summary of the experiences of the Bai people in disease prevention and treatment over generations. Its diagnosis and treatment are characterized by “Medicine with God.” Treating diseases and praying to gods do not conflict with each other; “Medicine with God” means that doctors pray during the healing process. The emergence of “Medicine with God” is tied to primitive religious, cultural, and economic factors [ 25 ]. Local people have developed special diagnostic and treatment methods, such as spa, moxibustion, rolling egg (rolling a shelled, boiled egg on the area of discomfort for relief), steam, and medicinal dietary therapies, and these therapies integrate the medical theories and methods of the Han, Yi, Tibetan, and other ethnic groups.
They were also good at using single-experience prescriptions [ 26 ]. Finally, a medical culture with national and regional characteristics was formed, which contributed greatly to the reproduction of the Bai people, including the research on the medicinal culture of Yunnan Bai people [ 27 ], Yunnan Bai people medicine [ 28 ], habitual plant medicine of the Bai people [ 29 ], and illustrated guide of medicinal plants of the Bai people [ 30 ]. In China, traditional knowledge of ethnoveterinary medicine originates from the daily livestock management of indigenous people and the long history of these practices. The Yunlong County is located west of Dali Bai Autonomous Prefecture and is a collection of remote mountainous areas, ethnic groups, poor areas, and alpine areas [ 31 ]. There are five deeply poor townships in Dali Bai Autonomous Prefecture, and there are four in Yunlong County, accounting for 80% of the deeply poor townships in the prefecture. According to the poverty standard line of 2300 yuan (per capita annual net income) established in 2011, Yunlong County had a total of 151,900 poor people in that year (the total population was 207,117) [ 32 ]. It has a large population in both mountainous and semi-mountainous regions. The Bai people in Yunlong County have a long history of livestock and poultry farming. They are rice farmers with a long history of farming in the plateau region, although the rice-planting area is small in Yunlong County, and the local economy is mainly mountainous agriculture. There are more than ten ethnic groups, including the Bai, Han, Yi, Miao, Hui, Dai, Lisu, and Achang, with the Bai people, accounting for the largest proportion (72.7%) of the total population [ 24 ]. Differences in culture, religion, customs, language, dietary habits, and living environments between ethnic groups have prompted the generation of several traditional medicines with distinct regional characteristics [ 33 ]. According to the results of the seventh national census, the population living in cities and towns in Yunlong County was 51,298, accounting for 28.04% of the total population. The population living in rural areas was 131,679, accounting for 71.96% of the total population [ 34 ]. Most of the rural population is scattered on mountain tops or hillsides; mountain roads are muddy and potholed, and transportation is inconvenient. These areas are far from urban areas, and agriculture and animal husbandry are the main sources of income. Indigenous people have a long history of raising livestock and poultry, such as black goats, pigs, cattle, donkeys, chickens, and ducks, to meet their needs and as a source of family income. In addition, according to the Circular on the 13th Five-Year Plan for Poverty Alleviation issued by the State Council on November 23, 2016 [ 35 ], The Yunlong County established several farms in various townships based on the advantages and endowments of natural resources to revitalize the countryside (Fig. 1 ). The local agricultural department crossbred non-local beef cattle with local breeds to increase meat production. Additionally, it provided policy assistance and economic support to local residents. Veterinarians have used medicinal plants to treat animal diseases, forming a set of unique knowledge systems for local traditional veterinary medicine; however, to date, these have not been systematically studied. Detailed information on the use of traditional ethnoveterinary knowledge by the Bai people in Yunlong County is lacking. 
In this study, ethnobotanical methods were used to investigate and catalog traditional EMPs in Yunlong County, China. This investigation will contribute to the cataloging of medicinal plants for the treatment of livestock diseases and uncover relevant knowledge of traditional Bai medicine.
Methods Study area The Yunlong County, Dali Bai Autonomous Prefecture, is located in the western part of Yunnan Province, in the longitudinal valley of the Lancang River at the southern end of the Hengduan Mountain, between 98°52′–99°46′ E and 25°28′–26°23′ N. It is located at the junction of Dali Prefecture, Baoshan area, and Nujiang Prefecture, and the total area is 4400.95 km 2 , 90% of which is mountainous [ 36 ]. The distribution characteristics of the water systems in the territory are clear. The Lancang River and its tributaries run west and in the middle of Yunlong County from north to south, respectively. The riverbed has a large slope and is rich in hydraulic resources. The basic topography is high from east to west, low in the middle, gradually decreasing from north to south, and the elevation is approximately 2000–2500 m. The Yunlong County generally has a continental subtropical plateau monsoon climate with distinct dry and wet seasons, the same season of rain and heat, and the same period of dryness and cooling [ 37 ]. It is a complex and changeable “compound three-dimensional climate.” The annual average temperature in Yunlong County is 16.1°C, the hottest monthly average temperature is 22.3°C, and the coldest monthly average temperature is 8.4°C. The difference between the annual average temperature at the highest and lowest elevations is 17°C [ 38 ]. The local mountains are undulating, the forests are dense, the rivers are vertical and horizontal, the sunshine is sufficient, the rainfall is moderate, and the climate is suitable, which provides superior conditions for the growth and reproduction of all kinds of animals and plants, so it is rich in plant resources and is a natural medicinal resource bank. Yunlong County has jurisdiction over 11 townships, including Miaowei, Guanping, Baofeng, Nuodeng, Gongguoqiao, and Tuanjie. This survey area included six villages (Biaocun, Dalishu, Xiaomaidi, Nuodeng, Gongguoqiao, Tuanjie), three local herbal medicinal markets (Tenlong, Miaowei, Baofeng), three traditional animal breeding farms, and four herbal medicine planting bases (Yunlong County Yuanheng biotechnology development Co., Ltd, Songping, Longze, Yunlong county Canwen) from Yunlong County (Fig. 2 ). Data collection Ethnobotanical investigations were conducted from August 2021 to August 2022, which included structured interviews, participatory observations, semi-structured interviews, and key person interviews combined with field investigations to complete the cataloging of traditional EMPs in Yunlong County (Fig. 3 ). All interviews were conducted by Wei Huang in the local Bai language. A total of 68 local residents were interviewed, including 58 men and 10 women. These were farmers and herbal veterinarians with several years of experience in raising and treating livestock diseases. Local farmers with knowledge of veterinary medicine would collect some herbs (Fig. 4 a, b) and hang them at home for drying on rainy days (Fig. 4 c). Herbal veterinarians would prepare the medicine according to common local veterinary diseases and save it for later use (Fig. 4 d). The informants were apprised of our purpose before the interview to gain consent and trust so that we could communicate freely and openly with them. The primary content of the interviews consisted of “5W + H” questions (i.e., questions concerning what, when, where, who/whom, why, and how the informants utilized EMPs). The recorded information was shown again to the informant again to avoid errors and tampering. 
Plant specimens were collected by the first author and identified using the “Flora of China” and the China Digital Plant Museum ( https://www.cvh.ac.cn/ ). The Latin names of the plants were corrected and verified using The Plant List ( http://www.theplantlist.org/ ). The voucher specimens were stored in the plant specimen room of the Key Laboratory of Ethnic Medicine Resources Chemistry, Yunnan Minzu University, Kunming, China. Data analysis Use reports (URs) were recorded for each plant collected; a UR was defined as one report of a disease type treated with that plant [ 39 ]. To analyze the differences in the medicinal plant species used by different herbalists in the treatment of a certain type of disease [ 40 ], the diseases reported by the informants were divided into 10 categories. The informant consensus factor (FIC) was calculated as follows: FIC = (Nur − Nt)/(Nur − 1), where Nur represents the total number of use reports for a particular disease category (the sum of the numbers of plant species reported by all informants for that disease) and Nt is the number of plant species used by all informants to treat the disease. The FIC value ranges from 0 to 1; the higher the FIC value, the more concentrated the plant species used to treat a disease (i.e., the greater the agreement among informants), whereas a lower FIC value indicates that a wider variety of species is used with less agreement [ 40 ].
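For clarity, the FIC calculation can be written as a one-line function. The Python sketch below implements the formula given above with hypothetical example numbers; it is not taken from the study's analysis scripts.

# Informant consensus factor: FIC = (Nur - Nt) / (Nur - 1),
# where Nur is the number of use reports for a disease category and
# Nt is the number of plant species used for that category.

def informant_consensus_factor(n_ur, n_t):
    if n_ur <= 1:
        raise ValueError("FIC is undefined when Nur <= 1")
    return (n_ur - n_t) / (n_ur - 1)

print(informant_consensus_factor(n_ur=40, n_t=12))   # hypothetical values, approx. 0.718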
Results Informant characteristics A total of 68 informants were interviewed, including 58 men (85.3%) and 10 women (14.7%). The informants ranged in age from 30 to 79 years, with most being older than 50 years (50%), and the average age was 52 years (Table 1 ). These included farmers, herbalists, truck drivers, etc. They were localities with several years of experience treating livestock diseases. Most of them had low primary school education levels. Ethnoveterinary medicinal plant diversity A total of 90 plant species belonging to 51 families and 84 genera were recorded. The Asteraceae (12 spp.) family had the highest number of individual species used in ethnoveterinary practices, followed by Fabaceae (4 spp.) and Apiaceae (4 spp.). Figure 5 shows the EMPs in the Bai region, 7 families had 3 species, 8 families had 2 species, and 33 families had only 1 species. Common livestock diseases in Yunlong County As can be seen from Table 2 , fall injury is the most common disease of large livestock in Yunlong County, followed by gastrointestinal diseases, respiratory diseases, snake bites, and other diseases. The Yunlong County is characterized by high mountains and steep slopes. Local livestock mostly adopt modes of grazing and free-raising, which makes them prone to infectious diseases caused by traumatic wounds. Hot and humid climatic conditions may lead to wound inflammation and slow healing; moreover, gastrointestinal, respiratory, and parasitic diseases often occur in enclosures with poor sanitary conditions. In recent years, economic development has driven improvements in hygiene levels, and the importance of enclosures for healthy livestock growth is increasingly being recognized. In spite of its simplicity, a traditional breeding farm emphasizes regular cleaning of the enclosure. The sheepfold is usually designed with two layers, with a gap in the boards. Most of the feces fall through the gap to the lower layer, which helps maintain cleanliness of the pen while enabling easy cleaning (Fig. 6 A). The use of dry straw washers not only protects cattle and other large livestock from falls and injuries but also provides optimal composting conditions and increases nutrients for crops when using the compost of manure mixed with straw (Fig. 6 B, C ). Many local farmers maintain the traditional habit of livestock feeding, which greatly reduces the possibility of parasitic infection and diseases (Fig. 6 D). The Yunlong County has several retail investors, mostly living on hilltops or hillsides with lush vegetation in the front and back of houses. Livestock may be bitten by snakes during grazing and while in captivity. Reproductive diseases are common in patients with dystocia, persistent placenta, or postpartum weakness. Foot-and-mouth disease is an occasional infectious disease, the control of which is mainly based on prevention, adhering to the principle of “early detection and early treatment.” Once discovered, the infected animals are immediately killed and buried. Foot-and-mouth disease occurs in many areas of China. According to the announcement of the Ministry of Agriculture and Rural Affairs of the People’s Republic of China, for the relevant livestock breeds throughout the country, according to the local actual situation, the appropriate immune vaccines against foot-and-mouth disease type O and A are selected on the basis of scientific evaluation, and the import of epidemic products from abroad is prohibited [ 41 ]. 
It was unclear to the interviewers whether Radix Isatidis, Milkvetch Root, Palmatine, and other proprietary Chinese medicines had been added to livestock feed to prevent infectious diseases. The interviewers also could not determine the specific types of the diseases that were termed miscellaneous, which included fever, nose bleeding, edema, loss of appetite, and abnormal conditions related to various organ systems of the animal; this miscellaneous category reflects the limitations of local residents’ medical knowledge. Life forms and parts of plants used for ethnoveterinary purposes The survey found that among the 90 EMPs (Additional file 1 : Table 3), herbs accounted for the largest proportion with 70 species (77.78%), followed by eight shrubs (8.89%), seven trees (7.78%), and five lianas (5.56%), as shown in Fig. 7 . This distribution is closely related to local climatic conditions. Yunlong County has a continental subtropical plateau monsoon climate with abundant shrub, tree, and herbaceous plant types. Herbaceous plants have a short growth cycle and high biomass, which is sufficient to meet demand, and they are easy to harvest and process. The total number of medicinal parts recorded was 111 (some plants contained multiple medicinal parts). Roots were the most frequently used plant parts, constituting 40.54%, followed by whole plants (25.23%), leaves (9.01%), stems (7.20%), and mixed plant parts (18.02%) (Fig. 8 ). The more frequent use of root and whole-plant medicines may be the result of screening by local residents combined with local plant resources and traditional ethnoveterinary practices. Methods of ethnoveterinary medicine preparation and administration Different methods were used to prepare medicinal plants for treating livestock diseases. The most commonly used method for preparing medicinal plants was decoction (52.63%), followed by mashing (23.16%), grinding into a powder (11.58%), soaking in liquor (5.26%), and soaking in boiling water (3.16%). A few of the preparations used honey, sugar, or rapeseed oil (Table 3 ). Medicine administration involved two modes: oral administration (64, 71.11%) and external application (26, 28.89%). Local veterinarians used auxiliary tools for livestock that posed problems with orally administered medicines; the tools were borrowed from a local veterinary station if required. Most auxiliary tools were modified from common materials, and the common auxiliary tools included feeding tools, syringes, and steel needles (Fig. 9 ).
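The percentage breakdowns reported above (life forms, medicinal parts, preparation methods) are simple proportions of counted records. The following illustrative Python sketch reproduces the medicinal-part breakdown from counts back-calculated from the reported percentages (e.g., roughly 45 of the 111 part records were roots); those counts and the helper name are assumptions, so small rounding differences from the published figures are expected.

# Illustrative tally of use records into percentage breakdowns.
from collections import Counter

def percentages(counts):
    total = sum(counts.values())
    return {k: round(100 * v / total, 2) for k, v in counts.items()}

plant_parts = Counter({"root": 45, "whole plant": 28, "leaf": 10,
                       "stem": 8, "mixed parts": 20})   # 111 records in total
print(percentages(plant_parts))
# {'root': 40.54, 'whole plant': 25.23, 'leaf': 9.01, 'stem': 7.21, 'mixed parts': 18.02}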
Discussion Characteristics of informants and their sources of traditional ethnoveterinary knowledge Most local veterinarians and herbalists worked part-time, mainly in farming or other jobs, such as driver. They were localities with several years of experience in treating livestock diseases and most had low primary school-level education. Traditionally, in Bai culture, women are responsible for housework and men are the breadwinners of the family. Accordingly, men are responsible for feeding the livestock. Traditional ethnoveterinary practices are mainly passed on from older herbalists to the next male heir or protege. During our investigation, a 79-year-old local herbalist accepted a 40-year-old truck driver from the same village as an apprentice in Tuanjie Town. Truck drivers need to use veterinary knowledge to treat livestock in the process of transportation, and first aid experience enables truck drivers to further accumulate veterinary drug knowledge. The traditional medical knowledge of herbal veterinarians comes from self-study or the learning practices of older generations. They continue accumulating experience in treating diseases and learning about the pharmacological effects of plants in their lifetime, which are passed down from generation to generation. In general, nobility is a requirement of the Bai community to become a healer. Many local Bai healers treat their patients without expecting anything in return. In the past, localities who were generally poor did not charge for the treatment of other people’s livestock. This spirit of self-sacrifice in diagnosis and treatment is influenced by Buddhism and legendary tales, and the Legend of the Great Black God, the Legend of the King of Medicine, and the glazed Beast are all myths and legends about “self-sacrifice culture” [ 42 , 43 ]. The “self-sacrifice culture” forms the basis of humanistic care of Bai medicine and is evident throughout the medical history of the Bai people. Characteristics of Bai Ethnoveterinary medicinal plants in Yunlong County During the investigation, 90 species of medicinal plants belonging to 51 families and 84 genera were recorded and used to treat livestock diseases. Plants from the Asteraceae family were most widely used by local healers. This may be related to local hot and humid climatic conditions. Plants of the Asteraceae family, one of the largest families of seed plants worldwide, grew readily in local communities. The biomass and population size of Asteraceae e plants are typically extremely large. The Asteraceae medicinal plants are characterized by their heat-clearing and detoxifying, wind dispelling, and dehumidifying antimicrobial properties [ 44 – 46 ]. The medicinal plants of Asteraceae, Fabaceae, Apiaceae, Ranunculaceae, Euphorbiaceae, Gentianaceae, Lamiaceae, Polygonaceae, and Rosaceae were widely used by the localities, which may be owing to the abundance of wild plant resources in Yunlong County. This is consistent with the results of Yunfang’s investigative study on the diversity of medicinal plant resources and the dominant plant family in Yunlong County [ 47 ]. The Asteraceae and Fabaceae plants were used the most, and the results were similar to the survey results of many other research areas in southern China [ 48 , 49 ]. Among 33 plant families, only one medicinal plant species was eligible as an EMP. Medicinal plants are abundant in Bai village, and local residents collect diverse medicinal species. 
Most EMPs are collected from wild habitats; some are dug up from the nearby mountains and planted in the courtyard or in the front and back of the house. In our study, Solanum violaceum Ortega and Phedimus aizoon (L.) 't Hart were planted in the courtyard of a farmer’s house and, after boiling, were fed to the livestock to clear away heat and detoxify. The plants grown in courtyards are mostly herbs. These are regularly cared for until required during emergencies and also serve to protect endangered medicinal plants [ 50 ]. Our investigation indicated that herbal veterinarians usually went to various parts of the county to collect the necessary medicinal materials in August, thus avoiding the busy agricultural season and ensuring optimal plant growth. Most of the harvested medicinal plants were herbs. This is not only because herbs are the most frequently used medicinal plants but also because they are easy to procure [ 49 , 51 ]. The roots and rhizomes were the most commonly used parts for medicines, followed by whole plants, and this result is consistent with the choice of medicinal parts by other ethnic groups (Buyi, Yao, Zhuang, and Maonan) [ 52 – 56 ]. However, this traditional utilization method causes significant damage to the biodiversity of the medicinal plants. The selection of medicinal parts should be modified to ensure the sustainable utilization of medicinal plant resources; therefore, the resource utilization rate should be improved. The Bai have herbal medicine markets in various townships in Yunlong County. Raw herbs are used to prevent and treat various diseases. The local herbal medicinal market enriches the diversity of medicinal plants and is an important place for the exchange and dissemination of Bai medicinal culture [ 57 ]. As the education level of the older generation of herbal veterinarians is generally low, their traditional knowledge is derived from previous experience and daily practice. Local herbalists are open-minded and dare to accept and try new things. In our study, we encountered an old herbalist who grafted mistletoe ( Viscum coloratum ) onto a succulent plant ( Euphorbia royleana ) to improve its growth (Fig. 10 ). He was able to obtain the medicinal plant Viscum coloratum by grafting it onto the succulent plant near his home. Livestock breed management and treatment of livestock diseases Outbreaks of livestock diseases seriously affect the development of livestock farming and the economic income of residents [ 58 , 59 ]. Livestock breed selection is closely related to disease prevention and economic benefits. According to the interviews, veterinary staff are aware that the improvement in people’s living standards has increased the demand for meat, which the old local breed of beef cattle has been unable to meet. In 1988, the local animal husbandry and veterinary management departments began to cross-breed the old cattle breeds free of charge to increase the number of beef cattle. The hybrid cattle were strong, disease-resistant, and highly valued. Cross-breeding is now mostly performed by veterinary station staff, who charge 100–300 RMB each time. As the older yellow cattle breeds are small and rarely get sick, they are more suitable for free-range raising in the local mountainous environment; therefore, there is still a large stock in the Yunlong Bai region. Local veterinarians diagnose livestock diseases based on existing medical knowledge.
Common diagnostic methods include observation (e.g., observing the physical manifestations and disease symptoms), listening and smelling (e.g., listening to the sound and breath of animals and sniffing secretions and excreta), questioning (e.g., asking animal keepers about the appearance or history of the disease), and palpation (e.g., touching or pressing the animal’s body, feeling the pulse, and examining other viscera), which is similar to traditional Chinese medicine [ 60 , 61 ]. Tongue examination is not only an important part of traditional Bai medicine but also an important component of disease diagnosis [ 62 ]: a red tongue coating indicates wind-cold, a yellow coating indicates excess heat, a green coating indicates toxicosis, and a white coating indicates collapse. After diagnosing the disease, the local veterinarians prescribe the appropriate medicine for the case. Suitable medicinal plants can be used to prevent and treat diseases owing to their medicinal properties. Rodgersia sambucifolia is widely used by local veterinarians for the treatment of livestock respiratory diseases owing to the plant’s polyphenols, flavonoids, terpenoids, and volatile oils [ 63 , 64 ]. Farmers and herbal veterinarians use varied methods to treat their livestock. They often mash Selaginella moellendorffii and feed it to the animals as postnatal care to treat postpartum abdominal cold. Because of the hot and humid climate in Yunlong County, Asteraceae medicinal plants ( Chrysanthemum indicum L., Taraxacum mongolicum Hand.-Mazz., Aucklandia costus Falc., etc.) are often mashed or boiled and fed to livestock for heat-clearing and detoxification. In addition to using plants to prevent livestock diseases, local veterinarians have developed unique diagnostic and treatment methods. They use gunpowder to wipe the fur of livestock to treat depilation and apply gasoline to wounds to ward off maggots. A donkey that has turned mad is treated by pulling up the long mane on the top of its head and pricking it with a needle, supplemented by the internal administration of cat incense (wildcat secretion). Tripe flatulence is treated by inserting a steel needle directly into the rumen to release the gas; after this bloodletting and outgassing, Rodgersia sambucifolia Hemsl., Actaea cimicifuga L., and other medications are given internally. Prospects and challenges of traditional ethnoveterinary knowledge Although Chinese traditional medical theory is famous worldwide for its application in human health, it is rarely mentioned in countries other than China. Traditional Chinese medicine has been used in veterinary and human medical practice in China for thousands of years. In modern Chinese society, herbs used for the treatment of animal diseases or as animal feed are believed to leave fewer residues than synthetic drugs [ 65 ]; moreover, they are believed to reduce the bacterial drug resistance and food safety problems caused by modern veterinary drugs [ 66 ]. Notices numbers 194 and 246 of the Ministry of Agriculture and Rural Affairs of the People’s Republic of China have led to a ban on the addition of antibiotics to veterinary drugs. Conversely, the various standards of traditional Chinese medicine that allow feed additives for both growth promotion and disease prevention and control have been revised [ 67 , 68 ]. Therefore, EMPs will gradually be welcomed in the prevention and control of diseases and the health protection of livestock. In remote and poor areas, EMPs are the first choice for the local prevention and treatment of livestock diseases.
However, under the influence of the mainstream social economy, an increasing number of people choose to work in cities, which hinders the inheritance of the traditional medicine culture and decreases the traditional animal husbandry and the number of animals in rural areas. In this survey, a number of practicing veterinarians said that children in their families would rather sell tea or work in a factory than learn about veterinary medicine. Currently, most people with knowledge of traditional medicinal plants and their use are over 50 years of age. They mostly engage in agricultural labor or breeding and rely only on their spare time to acquire traditional veterinary medical knowledge. These results threaten the inheritance of EMPs and traditional medical knowledge.
Conclusion Traditional veterinary medicine is easy to master and perform and is inexpensive. It plays an important role in the development of local aquaculture and animal husbandry and is the first choice for the prevention and treatment of animal diseases in remote and poor areas. However, with the passing on of the older generation, traditional knowledge of EMPs may disappear. In this study, we collected and sorted traditional knowledge about medicinal plants used in veterinary practice in Yunlong County. We obtained information on 90 EMPs and their corresponding treatment types for livestock diseases and studied the life form, drug preparation, and mode of administration of EMPs. This study plays an important role in the protection and inheritance of Bai EMPs and their traditional knowledge in Yunlong County. Traditional knowledge of ethnoveterinary medicine is related to the local social–cultural characteristics of the Bai people and plays a pivotal role in livestock production. Plants are the carriers of traditional culture, and traditional culture nourishes plant culture. Cultural diversity and biodiversity depend on each other. The traditional community has extremely rich traditional knowledge related to the improvement of people’s health and environmental hygiene conditions.
Background The Bai people in Yunlong County, northwest Yunnan, China, have used medicinal plants and traditional remedies for ethnoveterinary practices. The Bai have mastered ethnoveterinary therapeutic methods in livestock breeding since ancient times. The Bai’s traditional ethnoveterinary knowledge is now facing extinction, and their unique ethnoveterinary practices have rarely been recorded. This study documented animal diseases, EMPs, and related traditional knowledge in Yunlong County, China. Methods Ethnobotanical fieldwork was conducted in six villages and townships of Yunlong County between 2021 and 2022. Data were obtained through semi-structured interviews, participatory observations, and keyperson interviews. A total of 68 informants were interviewed, and the informant consensus factor and use reports (URs) were used to evaluate the current ethnoveterinary practices among the local communities. Information on livestock diseases, medicinal plants, and traditional ethnoveterinary medicine knowledge were also obtained. Results A total of 90 plant species belong to 51 families, 84 genera were recorded as being used as EMPs by the Bai people, and Asteraceae plants are most frequently used. A total of 68 informants were interviewed, including 58 men (85.3%) and 10 women (14.7%). The most commonly used EMPs parts included the roots, whole plants, leaves, and stems, and the common livestock diseases identified in this field investigation included trauma and fracture, gastrointestinal disorders, respiratory disorders, parasitic diseases, miscellaneous, venomous snake bites, reproductive diseases, infectious diseases, skin disease, and urinary diseases. Most of the EMPs are herbs (77.78%). Courtyard is one of the habitats of medicinal plants in Yunlong County. Conclusion Traditional knowledge of ethnoveterinary medicine is related to the local sociocultural characteristics of the Bai. Plants are used in cultural traditions, which, in turn, nourish the plant culture. Cultural diversity and biodiversity are interdependent. This traditional knowledge is at risk of disappearance because of the increasing extension of Western veterinary medicine, lifestyle changes, and mainstream cultural influences. Therefore, it is important to continue research on ethnoveterinary practices. Supplementary Information The online version contains supplementary material available at 10.1186/s13002-023-00633-0. Keywords
Supplementary Information
Abbreviations ICF: The informant consensus factor; URs: Use reports; EMPs: Ethnoveterinary medicine plants. Acknowledgements Thanks go to the local Bai people in Yunlong County, Yunnan Province, who provided valuable information about traditional ethnoveterinary knowledge. Dr. Qingsong Yang assisted with the identification of plant specimens. Members of the School of Ethnic Medicine at Yunnan Minzu University, Hui Wang and Jingxian Sun, participated in the field surveys; we sincerely thank them for their help during the research. Author contributions HLG, WH, and CYZ performed the field work and collected data. HLG and XY organized the literature, analyzed the data, and drafted the manuscript. YX conceptualized the study, edited the final version, and obtained funding for the study. All authors have approved the final version of the manuscript for submission. Funding This work was financially supported by the National Natural Science Foundation of China (81760655) and the Open Project Fund of the Key Laboratory of Ethnic Medicine Resource Chemistry of the State Ethnic Affairs Commission and the Ministry of Education, Yunnan Minzu University (MZY2104). Availability of data and materials Not applicable. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Prior and informed consent for the publication of local people’s pictures was obtained. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-15 23:43:47
J Ethnobiol Ethnomed. 2024 Jan 13; 20:9
oa_package/6a/b8/PMC10787986.tar.gz
PMC10787987
38218808
Background Acute mesenteric ischemia (AMI) is a rare disease. A British study [ 1 ] suggested an incidence of 0.63/100,000/year, whereas a Swedish study [ 2 ] based on autopsy reports reported an incidence of 12.90/100,000/year. Despite its low incidence, the mortality rate of this disease is as high as 50–69% [ 3 ]. AMI is categorized into four subtypes according to its cause: mesenteric artery embolism (EAMI), mesenteric artery thrombosis (TAMI), nonocclusive mesenteric ischemia (NOMI) and mesenteric vein thrombosis (VAMI). EAMI accounts for 25% of cases, TAMI for approximately 40%, VAMI for approximately 15%, and NOMI for approximately 20% [ 4 ]. Acute occlusive mesenteric ischemia (AOMI) comprises EAMI, TAMI and VAMI. The reported incidence of complications ranges from 13.33 to 61.5% [ 5 – 10 ]; these include short bowel syndrome (SBS), electrolyte imbalance, intestinal obstruction, intestinal hemorrhage, renal or cardiac dysfunction, intestinal fistula and wound infection. Some complications are severe and may lead to death. In the Clavien‒Dindo classification, patients with complications of grade ≥ 2 require readmission to the hospital, and the associated costs are high. Do the different AOMI categories lead to different outcomes? Do early diagnosis and early management improve the outcomes? Do different operative methods decrease the complication rate? In this study, we aimed to identify factors that significantly affect the outcomes.
Methods Case selection Data from patients with AOMI admitted to the surgical emergency department or gastrointestinal surgery department of Beijing Tsinghua Changgung Hospital from May 2016 to May 2022 were reviewed retrospectively. All diagnoses were confirmed by computed tomography angiography (CTA). The inclusion criteria were a diagnosis of superior mesenteric artery (SMA) embolism (CTA showed emboli in the SMA located at least 3 cm from the origin of the SMA), SMA thrombosis (CTA showed thrombus in the SMA located within 3 cm of the origin of the SMA), or superior mesenteric vein (SMV) thrombosis (CTA showed thrombus in the SMV). The exclusion criteria were as follows: (1) age < 18 years; (2) refusal of further treatment after diagnosis; or (3) a diagnosis of NOMI (CTA showed bowel ischemia with no emboli or thrombus in the SMA or SMV). Data collection This study was approved by the Beijing Tsinghua Changgung Hospital Ethics Committee (22003-6-01). Informed consent was waived by the Beijing Tsinghua Changgung Hospital Ethics Committee. All methods were performed in accordance with the Declaration of Helsinki. All cases were divided into two groups according to whether complications (Clavien‒Dindo ≥ 2) occurred within 6 months of the first admission: cases without such complications were categorized into the normal group, and the remaining cases were categorized into the complication group. Complications (Clavien‒Dindo ≥ 2) included SBS, electrolyte imbalance, intestinal obstruction, intestinal hemorrhage, renal or cardiac dysfunction, intestinal fistula and death. The following clinical characteristics were examined: age, sex, diagnosis, transmural intestinal necrosis (confirmed by the pathology reports), duration from onset to diagnosis, duration from onset to treatment, abdominal pain, abdominal distension, nausea and vomiting, hematemesis and hematochezia, diarrhea, comorbidities (cardiac problems including a history of atrial fibrillation, recent myocardial infarction, cardiac thrombi, mitral valve disease, left ventricular aneurysm and endocarditis, as well as previous embolic disease, diffuse atherosclerotic disease, portal hypertension, history of venous thromboembolism, oral contraceptive or estrogen use, thrombophilia, and pancreatitis), peritonitis, fever, white blood cell count (WBC), C-reactive protein (CRP), hemoglobin (HGB), platelet count (PLT), percentage of neutrophils (N%), percentage of lymphocytes (L%), neutrophil-to-lymphocyte ratio (NLR), D-dimer, lactate dehydrogenase (LDH), creatine kinase (CK), creatine kinase isoenzyme (CKMB), myoglobin (MYO), cardiac troponin I (CTNI), lactate (LAC), pondus hydrogenii (PH), CTA findings (emboli or thrombus in the vessel, decreased intestinal wall enhancement, intestinal wall thickening, pneumatosis intestinalis and ascites), surgical approach (endovascular surgery, laparoscopic exploration, open embolectomy and enterostomy), length of necrotic small bowel, length of healthy small bowel, surgical time and intraoperative blood loss. Treatment and follow-up We recommended that AOMI cases follow our treatment algorithm (Fig. 1). The indications of intestinal necrosis in AOMI were as follows: (1) AOMI with decreased intestinal wall enhancement, intestinal wall thickening, pneumatosis intestinalis, or ascites on CTA; and (2) AOMI with peritonitis. VAMI cases without indications of intestinal necrosis underwent anticoagulation therapy.
EAMI or TAMI cases without indications of intestinal necrosis underwent endovascular procedures for recanalization, followed by laparoscopic exploration to confirm the absence of transmural intestinal necrosis. AOMI patients with one or two indications of intestinal necrosis underwent laparoscopic exploration first. Intestinal necrosis was judged intraoperatively by inspecting the color of the intestines and intestinal peristalsis. Once transmural intestinal necrosis was confirmed, we converted to an open operation and performed resection of the necrotic intestine with ostomy; otherwise, an endovascular procedure was performed for recanalization. Anticoagulation, antibiotic therapy, rehydration, and nutritional support were provided postoperatively. The criteria for discharge were as follows: (1) no remaining intestinal necrosis; (2) tolerance of total enteral nutrition; and (3) no need for intravenous antibiotics. All cases were followed up for 6 months. The primary endpoint was the occurrence of complications (Clavien‒Dindo ≥ 2) within 6 months of the first admission. Statistical analysis Factors were compared between groups, and the results were analyzed with SPSS 25.0 (IBM, USA). The t test was used to compare normally distributed continuous variables, and the Mann‒Whitney U test was used to compare non-normally distributed continuous variables. The chi-squared test was used to compare categorical data. Factors showing significant differences in these univariate tests were then entered into logistic regression (backward stepwise selection based on the likelihood ratio method) to identify factors associated with complications. Receiver operating characteristic (ROC) curves were established for CKMB and surgical time. A P value < 0.05 was considered statistically significant.
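As a rough illustration of this analytic pipeline (the authors used SPSS 25.0 with backward stepwise selection; the Python sketch below is only an assumed, simplified equivalent that fits the final two-predictor model and computes ROC AUCs, and the file and column names are hypothetical):

```python
# Illustrative sketch only -- not the authors' SPSS workflow.
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

df = pd.read_csv("aomi_cases.csv")                 # hypothetical export of the study table
y = df["complication"]                             # 1 = Clavien-Dindo >= 2 within 6 months
X = sm.add_constant(df[["ckmb", "surgical_time"]]) # the two retained predictors

model = sm.Logit(y, X).fit(disp=False)             # logistic regression
print(model.summary())                             # exponentiated coefficients give odds ratios

print("AUC, CKMB:", roc_auc_score(y, df["ckmb"]))
print("AUC, surgical time:", roc_auc_score(y, df["surgical_time"]))
```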
Results 59 patients were enrolled in this study: 23 had EAMI, 11 had TAMI, and 25 had VAMI. Not all cases strictly followed our treatment algorithm. Among the 34 patients with EAMI or TAMI, 11 had no indications of intestinal necrosis. One of these refused further surgery and underwent anticoagulation therapy alone; the other 10 underwent endovascular surgery, and recanalization was successful in all of them. Four of these 10 cases refused further laparoscopic exploration, and the other 6 underwent laparoscopic exploration; intestinal necrosis was found in 1 of the 6 cases, and open surgery with removal of the necrotic small bowel and ostomy was performed. Twenty-three cases with EAMI or TAMI had one or two indications of intestinal necrosis. Two of the 23 refused further surgery and underwent anticoagulation therapy alone. Five of the 23 underwent endovascular surgery for recanalization and refused further surgery after successful recanalization. Seven of the 23 underwent open surgery; intestinal necrosis was found in all of them, and removal of the necrotic small bowel with ostomy was performed. Nine of the 23 underwent laparoscopic exploration; intestinal necrosis was found in 5 of the 9 cases, and open surgery with removal of the necrotic small bowel and ostomy was performed. The other 4 of the 9 cases underwent endovascular surgery, and recanalization was successful in all of them. The management is shown in Fig. 2. Among the 25 patients with VAMI, 3 had no indications of intestinal necrosis and underwent endovascular surgery. Twenty-two cases had one or two indications of intestinal necrosis. Three of the 22 refused further surgery and underwent anticoagulation therapy alone. Four of the 22 underwent endovascular surgery for recanalization and refused further surgery after successful recanalization. Six of the 22 underwent open surgery; intestinal necrosis was found in all of them, and removal of the necrotic small bowel with ostomy was performed. Nine of the 22 underwent laparoscopic exploration; intestinal necrosis was found in 7 of the 9 cases, and open surgery with removal of the necrotic small bowel and ostomy was performed. The other 2 of the 9 cases, with no intestinal necrosis, refused further surgery and underwent anticoagulation therapy. The management is shown in Fig. 3. Severe complications within 6 months after the first admission occurred in 17 cases (12 males and 5 females) aged 67.94 ± 15.89 years, and these formed the complication group. Two patients died within 30 days after the first management, 4 experienced short bowel syndrome and electrolyte imbalance, 2 experienced electrolyte imbalance, and 10 experienced intestinal obstruction. The other 42 patients formed the normal group. Compared with the normal group, the following parameters differed significantly in univariate analysis: the ratio of transmural intestinal necrosis (82.4% vs. 28.6%, P < 0.01), peritonitis (64.7% vs. 28.6%, P = 0.01), laparoscopic exploration (64.7% vs. 31%, P = 0.02), open embolectomy (82.4% vs. 26.2%, P < 0.01), and enterostomy (82.4% vs. 21.4%, P < 0.01). WBC (14.87 × 10^9/L, 10.08 vs. 11.49 × 10^9/L, 7.70, P = 0.03), N% (87.6%, 12.02 vs. 82.6%, 14.7, P = 0.03), L% (5.65%, 8.13 vs. 9.95%, 12.33, P = 0.03), NLR (16.2, 22.54 vs. 8.31, 13.42, P = 0.03), LDH (254.5 IU/L, 79.5 vs. 230.5 IU/L, 36.5, P = 0.03), CKMB (4.38 ng/ml, 5.39 vs. 1.71 ng/ml, 2.27, P = 0.02), CTNI (0.25 ng/ml, 0.09 vs. 0.14 ng/ml, 0.01, P = 0.02), length of necrotic small bowel (75 cm, 187.5 vs.
0 cm, 15, P < 0.01), length of healthy small bowel (325 cm, 221.5 vs. 400 cm, 0, P < 0.01), surgical time (300 min, 94.25 vs. 105 min, 182.25, P < 0.01), and intraoperative blood loss (50 ml, 68.75 vs. 10 ml, 98, P = 0.01) also differed significantly. The results are shown in Table 1. Logistic regression (backward LR method) showed that CKMB (OR = 1.415, 95% CI = 1.060–1.888, P = 0.02) and surgical time (OR = 1.014, 95% CI = 1.001–1.026, P = 0.03) were independent risk factors associated with severe complications, as shown in Table 2. Additionally, most of the cases with elevated CKMB had cardiac problems; the proportion was much higher than in cases with normal CKMB (82.4% vs. 33.3%). Regarding the prediction of severe complications, ROC curves were drawn, and the area under the curve (AUC) values for CKMB and surgical time were 0.69 (95% CI = 0.533–0.848) and 0.814 (95% CI = 0.707–0.92), respectively, as shown in Figs. 4 and 5. At a CKMB cutoff of 2.22 ng/ml, the sensitivity and specificity were 82.4% and 66.7%, respectively; at a surgical time cutoff of 156 min, the sensitivity and specificity were 94.1% and 66.7%, respectively.
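The report gives the cutoffs and the corresponding sensitivity and specificity but does not state how the thresholds were chosen; maximizing Youden's J along the ROC curve is one common approach and is sketched below purely as an assumption (file and column names hypothetical):

```python
# Hedged sketch: derive a cutoff by maximizing Youden's J = sensitivity + specificity - 1.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve

def youden_cutoff(y_true, scores):
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    j = tpr - fpr                                       # Youden's J at each candidate threshold
    best = int(np.argmax(j))
    return thresholds[best], tpr[best], 1 - fpr[best]   # cutoff, sensitivity, specificity

df = pd.read_csv("aomi_cases.csv")                      # hypothetical
print(youden_cutoff(df["complication"], df["ckmb"]))
print(youden_cutoff(df["complication"], df["surgical_time"]))
```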
Discussion AMI is a rare but lethal disease. The reported mortality within 30 postoperative days ranged from 8.9 to 73.5% during the past decade [ 5 , 8 – 22 ], and the reported incidence of complications ranged from 13.33 to 61.5% [ 5 – 10 ]. The complications include SBS, electrolyte imbalance, intestinal obstruction, intestinal hemorrhage, renal or cardiac dysfunction, intestinal fistula and wound infection; some of them lead to readmission and high costs. Previous studies have investigated predictive factors of transmural intestinal necrosis in AMI, as well as predictive factors of in-hospital mortality. However, few studies have focused on prognostic factors for outcome. This study aimed to identify factors that may be associated with complications (Clavien‒Dindo ≥ 2) requiring readmission. When transmural intestinal necrosis occurs, exudation develops around the bowel and peritonitis appears. The WSES guidelines suggest that prompt laparotomy should be performed for AMI patients with overt peritonitis because of the high likelihood of bowel necrosis [ 23 ]. This study demonstrated that peritonitis was more frequent in the complication group; however, multivariate analysis showed that peritonitis was not an independent risk factor associated with complications. Transmural intestinal necrosis leads to bowel resection, and resection of a large amount of small bowel leads to inadequate absorption and electrolyte imbalance, and even SBS. A previous study reported that the incidence of SBS caused by AMI was 25–30% [ 24 ]. In our study, cases in the complication group had a higher ratio of transmural intestinal necrosis and a longer length of necrotic bowel; however, neither was a predictive factor of severe complications. This was mainly due to three points: the remaining healthy bowel was more than 100 cm, the colon was not involved, and bowel continuity was reconstructed at a later stage. Delayed diagnosis and management are associated with intestinal necrosis and in-hospital mortality [ 25 ]. Mikail Cakir et al. reported that irreversible intestinal mucosal necrosis occurred 4 h after occlusion of the superior mesenteric artery in a rat model [ 26 ]. The literature reports that the time from diagnosis to management ranged from 27 to 120 h [ 6 , 13 , 27 ], and Mateusz Jagielski et al. reported a mortality rate of 100% when the time from diagnosis to management exceeded 24 h [ 8 ]. In our study, there were no significant differences in the duration from symptom onset to diagnosis or the duration from onset to treatment. This might be because the criteria for grouping included not only 30-day post-management mortality but also other complications (Clavien‒Dindo ≥ 2); in addition, most of the cases in this study took more than 24 h from diagnosis to management, which might also have influenced this result. When patients are admitted to the emergency department with suspected AMI, most doctors routinely order a complete blood cell count, biochemistry, D-dimer, and arterial blood gas analysis. The sensitivity and specificity of these biomarkers remain debated. Some studies reported that WBC, CRP, NLR, red blood cell volume distribution width (RDW), total bilirubin, creatinine, lactate, pH and PLT were significantly different between the intestinal necrosis group and the short-term postoperative death group [ 12 , 22 ].
Other studies reported that a low pH level, low lymphocyte count, low platelet count, high platelet volume distribution width (PDW), high platelet-to-lymphocyte ratio and high creatinine level were risk factors associated with intestinal necrosis and short-term postoperative death [ 11 , 15 , 17 , 18 , 20 , 28 , 29 ]. In contrast, other reports found that routinely used laboratory tests could not predict intestinal necrosis or postoperative death [ 7 , 8 , 14 , 30 ]. Our study suggested that WBC, N%, L%, NLR, LDH, CKMB, and CTNI differed significantly in the complication group. Few studies have reported that CKMB and CTNI could predict mortality or poor outcomes; both are markers of cardiac injury. Most of the cases with elevated CKMB in our study had cardiac problems, and the proportion was much higher than in cases with normal CKMB (82.4% vs. 33.3%). Cardiac injury would reduce the patient’s tolerance to infection, surgery, and ischemia and increase the difficulty of postoperative recovery. Our results therefore suggest that AOMI with cardiac injury might lead to a poor outcome. Logistic regression showed that CKMB was an independent risk factor associated with complications (Clavien‒Dindo ≥ 2) in this study; at a cutoff of 2.22 ng/ml, the sensitivity and specificity were 82.4% and 66.7%, respectively. CTA has recently become the standard for identifying AMI. Prasaanthan Gopee-Ramanan et al. reported that the accuracy of CTA for identifying AMI was 92.9% [ 31 ]. Decreased intestinal wall enhancement, mesenteric stranding, dilated bowel, ascites and pneumatosis intestinalis have been reported on CTA in cases with intestinal necrosis [ 13 , 15 , 16 , 32 ]. Mothes, H. et al. reported that the specificities of decreased intestinal wall enhancement, pneumatosis intestinalis, and mesenteric stranding in predicting intestinal necrosis were 88.6%, 98.6% and 77.1%, respectively [ 13 ]. Wang, X. et al. reported that pneumatosis intestinalis (OR = 7.08) and ascites (OR = 9.49) were independent risk factors for intestinal necrosis [ 15 ]. In our study, none of the CTA findings differed between groups; this suggests that proper management can decrease the complication rate even when bowel necrosis is present. The surgical approach for AMI needs to achieve three goals: revascularization, resection of necrotic bowel, and preservation of as much viable bowel as possible. Compared with endovascular surgery, open surgery leads to a higher rate of complications [ 33 ]. We selected enterostomy after removal of the necrotic bowel because of the risk of intestinal fistula after one-stage anastomosis, which has been reported to occur in 23.4–27% of cases [ 34 , 35 ]. Enterostomy leads to electrolyte imbalance due to the loss of a large amount of digestive juice, especially in patients with improper home enteral nutrition; this is why our study revealed a higher ratio of enterostomy in the complication group. Although the healthy bowel length was shorter in the complication group, it was not a predictive factor associated with complications, because the length of the healthy bowel exceeded 100 cm and the colon was not involved. Unlike other reports, our study revealed that surgical time was an independent risk factor for complications (Clavien‒Dindo ≥ 2). Prolonged surgical time was usually caused by open surgery, which led to a higher rate of complications [ 33 ] in our study. Prolonged surgical time also means prolonged intestinal ischemia and prolonged severe infection, which ultimately lead to a poor outcome.
When the cutoff for surgical time was 156 min, the sensitivity and specificity were 94.1% and 66.7%, respectively. Limitations Since this was a retrospective study with a small sample size, the results need to be validated in studies with larger numbers of cases.
Conclusions In our study, AOMI patients with a CKMB level of more than 2.22 ng/mL or a surgical time of more than 156 min were more likely to experience complications (Clavien‒Dindo ≥ 2) within 6 months of the first admission.
Background Acute mesenteric ischemia is a rare but lethal disease. Acute occlusive mesenteric ischemia consists of mesenteric artery embolism, mesenteric artery thrombosis, and mesenteric vein thrombosis. This study aimed to investigate the factors that may affect the outcome of acute occlusive mesenteric ischemia. Methods Data from acute occlusive mesenteric ischemia patients admitted between May 2016 and May 2022 were reviewed retrospectively. Patients were divided into two groups according to whether complications (Clavien‒Dindo ≥ 2) occurred within 6 months of the first admission. Demographics, symptoms, signs, laboratory results, computed tomography angiography features, management and outcomes were analyzed. Results 59 patients were enrolled in this study. Complications (Clavien‒Dindo ≥ 2) occurred within 6 months of the first admission in 17 patients. Transmural intestinal necrosis, peritonitis, white blood cell count, percentage of neutrophils, percentage of lymphocytes, neutrophil-to-lymphocyte ratio, lactate dehydrogenase, creatine kinase isoenzyme, cardiac troponin I, laparoscopic exploration rate, open embolectomy rate, enterostomy rate, length of necrotic small bowel, length of healthy small bowel, surgical time and intraoperative blood loss differed significantly between groups. Creatine kinase isoenzyme (OR = 1.415, 95% CI: 1.060–1.888) and surgical time (OR = 1.014, 95% CI: 1.001–1.026) were independent risk factors associated with complications (Clavien‒Dindo ≥ 2). Conclusions Our analysis suggests that acute occlusive mesenteric ischemia patients with a creatine kinase isoenzyme level greater than 2.22 ng/mL or a surgical time longer than 156 min are more likely to experience complications (Clavien‒Dindo ≥ 2) within 6 months of the first admission. Keywords
Abbreviations AMI: Acute mesenteric ischemia; AOMI: Acute occlusive mesenteric ischemia; EAMI: Mesenteric artery embolism; TAMI: Mesenteric artery thrombosis; VAMI: Mesenteric vein thrombosis; NOMI: Nonocclusive mesenteric ischemia; CTA: Computed tomography angiography; WBC: White blood cell count; N%: Percentage of neutrophils; L%: Percentage of lymphocytes; NLR: Neutrophil-to-lymphocyte ratio; LDH: Lactate dehydrogenase; CKMB: Creatine kinase isoenzyme; CTNI: Cardiac troponin I; SBS: Short bowel syndrome; CRP: C reactive protein; HGB: Hemoglobin; PLT: Platelet; CK: Creatine kinase; MYO: Myoglobin; LAC: Lactate; PH: Pondus hydrogenii; SMA: Superior mesenteric artery; SMV: Superior mesenteric vein; RDW: Red blood cell volume distribution width; PDW: Platelet volume distribution width. Acknowledgements Not applicable. Author contributions The study design was contributed by PZ; data acquisition was performed by QZ and TM; statistical analysis was carried out by HZ and YL; manuscript writing was completed by QZ, TM, and PZ. The manuscript was reviewed by all the authors, and final approval was performed by PZ. All authors read and approved the final manuscript. Funding Self-funded. Data availability The data used and/or analyzed during the current study are available from the corresponding author upon reasonable request. Declarations Ethics approval and consent to participate This study was approved by the Beijing Tsinghua Changgung Hospital Ethics Committee (22003-6-01). Informed consent was waived by the Beijing Tsinghua Changgung Hospital Ethics Committee. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:47
BMC Surg. 2024 Jan 13; 24:21
oa_package/f5/d2/PMC10787987.tar.gz
PMC10787988
38218865
Introduction Early Childhood Caries (ECC) is a dental condition that affects young children worldwide. Untreated ECC causes dental pain, infections, nutritional impairments, developmental delays, reduced quality of life, and increased healthcare costs for individuals and societies [ 1 ]. ECC is defined as any carious lesion in the primary teeth of children under the age of 6 years, and its impact on wellness and wellbeing is particularly significant among socially disadvantaged populations, thereby exacerbating oral health inequalities [ 2 ]. With approximately 514 million affected children globally, ECC ranks among the most common childhood diseases [ 3 , 4 ]. As global health priorities continue to evolve, addressing ECC within the context of the United Nations’ Sustainable Development Goal 8 (SDG8) becomes crucial, as this goal aims to promote sustained, inclusive, and sustainable economic growth, full and productive employment, and decent work for all. SDG8 emphasizes the importance of labor rights, eradicating modern slavery and child labor, and ensuring equal access to the benefits of entrepreneurship and innovation. In addition, it reiterates the value of the reciprocal links between social, environmental, and economic policies, full employment, and decent work. Within the framework of SDG8, there is an opportunity to address the issue of untreated ECC using a human rights perspective [ 5 , 6 ]. The high prevalence of ECC among socially disadvantaged children highlights the need to promote ECC management through the lens of social justice, health equity, and human rights [ 7 , 8 ]. By linking macro-social development with meso- and micro-economic growth, we can potentially achieve a more equitable distribution of wealth and have a direct impact on health, including oral health [ 9 ]. SDG8 also encourages investments in health systems and infrastructure [ 10 ]. Incorporating oral health services into health systems and infrastructure can enhance preventive efforts and early intervention for ECC [ 11 , 12 ]. This integration can lead to a more comprehensive approach to oral health care, aligning with the principles of SDG8 to ensure well-being for all. SDG8 includes 12 targets, one of which is achieving full and productive employment, decent work for all, and equal pay for work of equal value (SDG8.5). Full and productive employment refers to the availability of quality job opportunities that enable individuals to earn a decent income and contribute to economic growth [ 5 ]. Decent work improves income stability and economic security, ultimately leading to greater household income and reduced income inequality [ 13 ]. Achieving equal pay for work of equal value is crucial for addressing gender discrimination in the labor market, which is particularly relevant for ECC since maternal socioeconomic status strongly influences the risk of ECC [ 14 , 15 ]. Accomplishing SDG8.5 can enable households to meet their basic needs, access better healthcare and education, and invest in their future [ 16 ]. It will also lead to improved living standards, reduced poverty rates, enhanced economic resilience, and the creation of a more inclusive society [ 17 , 18 ]. By using a rights-based approach, SDG8 aligns with the goal of achieving equitable access to health, including oral health, for all individuals.
Given that ECC is preventable through adequate, timely, and cost-effective preventive and prophylactic programs, and that early lesions can in some cases be reversed with early detection and available treatment options, it is essential to include the management of untreated ECC on the global disease elimination agenda [ 6 ]. Treating dental caries, particularly in young children, can be expensive and time-consuming, leading families to miss work to address their child’s oral health needs and consequently affecting their economic productivity [ 19 ]. ECC is more prevalent in disadvantaged and vulnerable populations who frequently consume sugar, have poor access to adequate dental care, and receive little education on oral hygiene practices [ 20 , 21 ]. This oral health disparity can contribute to the broader health and well-being inequalities that the goals of SDG8 try to address. Conversely, poor economic development and growth can negatively affect the prevalence and severity of ECC. Poor economic growth and development reduce expenditure on health [ 22 ], yet higher expenditure on health may be associated with a lower prevalence of ECC [ 23 ]. By prioritizing the elimination of untreated ECC within the SDG8 framework, we can strive for a more equitable distribution of resources and higher household income. We conceptualized the impact of interventions related to SDG8 on ECC using the Fisher-Owen et al. 2007 model [ 24 ] depicted in Fig. 1. We perceive that at least five targets of SDG8 could have direct or indirect community-level, family-level, and child-level influences on the risk of ECC: SDG8.1 (sustainable economic growth), SDG8.3 (promote policies to support job creation and growing enterprises), SDG8.5 (full employment and decent work with equal pay), SDG8.8 (protection of labor rights and promotion of safe working environments), and SDG8.A (universal access to banking, insurance and financial services). The outcomes of SDG8 can indirectly reduce the risk and global prevalence of ECC. Exploring the intersection between ECC and SDG8 can help identify opportunities to leverage economic growth and employment opportunities to strengthen oral health systems. Although very little is known about the links between SDG8 and ECC, ecological studies suggest that growth in per-capita gross national income was significantly associated with a higher prevalence of ECC in children aged 36 to 71 months [ 25 ]. This association was reversed for children with ECC in European member countries [ 26 ] and for children in Serbia, although the findings in Serbia were not statistically significant [ 27 ]. The aim of this scoping review was to map the evidence on the links between ECC and the targets of SDG8, and to identify research gaps to be filled to provide evidence on the link between SDG8 and ECC.
Methods We conducted this scoping review to explore the connections between ECC and the objectives of SDG8, which encompass economic growth and decent work. To ensure methodological rigor and transparency, we followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines [ 28 ] during the review process. Research questions The following question guided this review: What is the existing evidence on the association between decent work and economic growth (sustained economic growth, higher levels of productivity and technological innovation, entrepreneurship, job creation, and efforts to eradicate forced labor, slavery, and human trafficking) and ECC? Search strategy In January 2023, a search was conducted on three electronic databases: PubMed, Web of Science, and Scopus. The search utilized a combination of key terms as shown in Additional file 1 : Appendix 1. The search terms were tailored to meet the specific requirements of each database. The key terms used for the PubMed search were: (((((((((“Economic Development”[Mesh]) OR “Sustainable Growth”[Mesh]) OR “Right to Work”[Mesh]) OR “Unemployment”[Mesh]) OR “Small Business”[Mesh]) OR “Human Trafficking”[Mesh]) OR “Labor Unions”[Mesh]) OR “Working Poor”[Mesh]) OR “Resource Allocation”[Mesh]) OR “Banking, Personal”[Mesh]. Those for the Web of Science search were: (((((((((“Economic Development”[Mesh]) OR “Sustainable Growth”[Mesh]) OR “Right to Work”[Mesh]) OR “Unemployment”[Mesh]) OR “Small Business”[Mesh]) OR “Human Trafficking”[Mesh]) OR “Labor Unions”[Mesh]) OR “Working Poor”[Mesh]) OR “Resource Allocation”[Mesh]) OR “Banking, Personal”[Mesh] and ((((((“Dental Caries”[Mesh]) OR “Tooth Demineralization”[Mesh]) OR (caries[Text Word])) OR (dental decay[Text Word])) OR (dental cavities [Text Word])) OR (tooth cavities[Text Word])) OR (enamel demineralization[Text Word]). Screening of publications covered records from the inception of the databases up to 2023, and the search was completed in July 2023. Eligibility criteria and article selection Only English-language publications up to July 2023 were considered for inclusion in this review. The selected studies included cross-sectional, case-control, and cohort designs, and they reported findings on the association between decent work, economic growth, related factors, and ECC among children aged six years and below. To maintain the focus of this review on the association between decent work, economic growth-related factors, and ECC, studies that solely examined the prevalence and severity of ECC with no reference to the goals of SDG8 were excluded. Publications that were not primary studies, such as ecological studies and letters to the editor, were also excluded. The literature obtained from the database searches was exported to Zotero version 6, a reference management software. Duplicate publications were identified and removed using the “duplicate items” function in Zotero. Title and abstract screening were carried out by two independent reviewers (IA, AN) who followed the eligibility criteria established for this review. No attempts were made to contact authors or institutions for additional sources of information.
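The de-duplication step was done in Zotero; as an illustrative (not author-provided) alternative, the record counts could be reproduced with a short script over CSV exports from the three databases, assuming each export contains a Title column:

```python
# Illustrative de-duplication sketch; file and column names are assumed, not from the study.
import pandas as pd

frames = [pd.read_csv(f) for f in ("pubmed.csv", "web_of_science.csv", "scopus.csv")]
records = pd.concat(frames, ignore_index=True)
print("Records retrieved:", len(records))            # 761 in this review

records["key"] = records["Title"].str.lower().str.strip()
unique = records.drop_duplicates(subset="key")
print("Records after de-duplication:", len(unique))  # candidates for title/abstract screening
```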
Results The initial search across three databases, namely PubMed, Web of Science, and Scopus, using the predefined search terms resulted in a total of 761 articles. After removing duplicates and ineligible manuscripts, 84 unique articles remained for further screening. However, none of the identified studies provided data on the association between decent work, economic growth-related factors, and ECC. Figure 2 shows the details of the search findings.
Discussion Recognizing the potential impact of socioeconomic development on oral health is crucial, as it paves the way for a future where every child can access high-quality oral healthcare and enjoy a healthy and prosperous life. SDG8 has the potential to contribute to global health and well-being. However, despite the plausible arguments supporting a link between SDG8 and ECC, this scoping review identified no evidence derived from primary studies supporting this connection. The findings suggest a lacuna of primary-study evidence on the links between SDG8 and ECC. This study represents the first comprehensive analysis examining the potential association between ECC and SDG8, and it highlights the possibility of generating evidence to establish this link through further research. It is important to note that attributing the impact of economic development on ECC to SDG8 may be challenging because of links with other SDGs that can influence the prevalence, burden, and severity of ECC. Nevertheless, this challenge does not negate the potential for developing new methodologies to assess the impact of economic development on oral health in children. As more countries undertake nationally representative oral health surveys and adopt SDG8 measurements, future investigations of potential interactions will become possible. There are numerous studies on the links between human health, health expenditure, economic activity and growth, and SDG8 [ 22 , 29 ]. There are, however, fewer studies on the impact of oral health on economic activity and growth. One study suggests that poor oral health causes an indirect global loss worth $144 billion, while the direct annual cost of oral problems is about $298 billion [ 30 ]. There are no specific data on the impact of ECC and ECC expenditure on economic activity and growth despite the recognized economic toll ECC exerts [ 31 ]. The absence of specific data can significantly impair the ability of policymakers to establish relevant oral health programs, making it challenging to develop ECC-focused policies and to allocate resources effectively for children’s oral health. Concrete data on the economic toll of ECC are crucial for designing sustainable oral health programs and promoting oral health in vulnerable populations. There is a growing body of literature exploring the relationship between macroeconomic activities, economic growth, and population health [ 32 , 33 ]. Economic growth has the potential to positively influence population health by promoting the utilization of preventive health services, improving nutrition, and reducing the risk of health disorders caused by diseases. However, empirical evidence on the impact of economic growth on population health is diverse and lacks a clear consensus [ 34 ]. This is reflected in the findings of ecological studies on the impact of economic growth on the risk of ECC [ 23 , 25 – 27 ], which suggest differences between global and country-level findings on the impact of economic development on the risk of ECC. In addition, prior ecological studies add further caveats to the possible impact of economic development on ECC: the gross national income per capita for females was associated with lower ECC prevalence [ 35 ]; countries with more females living under 50% of the median income had a higher prevalence of ECC among 3- to 5-year-olds [ 36 ]; and the gross national income per capita for females had a strong effect on ECC prevalence [ 35 ].
These studies underscore the need for further research and collaborative efforts among experts to gain a comprehensive understanding of the complex relationship between ECC and SDG8 in order to promote population oral health in the context of economic growth. Without a concrete understanding of the relationship between economic growth and health, designing targeted and effective programs to address ECC becomes challenging. Moreover, the absence of empirical evidence concerning the effective and efficient allocation of additional resources to promote oral health, specifically to prevent untreated ECC, creates a critical gap that requires attention. Without this evidence, there is a risk of misallocating resources and efforts, leading to inefficiencies in oral health programs. Consequently, preventive measures targeting ECC may not receive sufficient support, allowing the condition to persist and worsen [ 37 ]. The lack of data-driven insights may result in missed opportunities to implement innovative and effective strategies for ECC prevention. Promising interventions may not undergo adequate investigation, and their potential impact on preventing ECC might not be fully realized, especially when competing with other health priorities. Consequently, ECC prevention efforts may not receive the attention and resources required to make a significant impact on children’s oral health [ 38 ]. Understanding this aspect will provide valuable insights for the development and implementation of oral health policies for children. Given the intricate relationship between SDG8 and health [ 39 ], as well as the close connection between oral health and overall health [ 40 ], it is reasonable to assume that SDG8 and oral health are intertwined. Therefore, empirical studies examining the link between economic development, decent workplaces, and the oral health of children are warranted. The SDG8 targets create an opportunity to explore the possible impact of a healthy workforce with decent work and economic growth. The provision of a decent, healthy, and safe working environment for the oral health workforce will help improve ECC outcomes. To quantify the contributory benefits of decent work and economic growth on ECC, indicators measuring this impact are needed, as such evidence can encourage investments in enhancing working conditions and safeguarding oral health workers to tackle ECC. In conclusion, although there are plausible links between SDG8 and ECC, there is currently no evidence derivable from primary studies demonstrating these links. Although the evidence on the associations between SDG8 and health is controversial, these findings further substantiate the possibility of generating evidence on the associations between SDG8 and ECC. Generating evidence on the links between SDG8 and oral health, inclusive of ECC, will help drive investments, policy formulation, and programs linking macrostructural factors to enhance the control of ECC globally.
Background Early Childhood Caries (ECC) is a prevalent chronic non-communicable disease that affects millions of young children globally, with profound implications for their well-being and oral health. This paper explores the associations between ECC and the targets of Sustainable Development Goal 8 (SDG8). Methods The scoping review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines. In July 2023, a search was conducted in PubMed, Web of Science, and Scopus using tailored search terms related to economic growth, decent work, sustained economic growth, higher levels of productivity and technological innovation, entrepreneurship, job creation, efforts to eradicate forced labor, slavery, and human trafficking, and ECC, all of which relate to the targets of SDG8. Only English-language publications with analytical designs were included. Studies that solely examined ECC prevalence without reference to SDG8 goals were excluded. Results The initial search yielded 761 articles. After removing duplicates and ineligible manuscripts, 84 were screened. However, none of the identified studies provided data on the association between decent work, economic growth-related factors, and ECC. Conclusions This scoping review found no English-language publication on the associations between SDG8 and ECC despite the plausibility of this link. This data gap can hinder policymaking and resource allocation for oral health programs. Further research should explore the complex relationship between economic growth, decent work and ECC to provide additional evidence for better policy formulation and ECC control globally. Supplementary Information The online version contains supplementary material available at 10.1186/s12903-023-03766-6. Keywords
Supplementary Information
Abbreviations ECC: Early Childhood Caries; PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews guidelines; SDG: Sustainable Development Goal. Authors’ contributions M.O.F conceived the study. The project was managed by M.O.F. Data curating was done by MET, RA, IA, and AN. Data analysis was conducted by MOF, RA and MET. MOF developed the first draft of the document. DD and IGS drew the conceptual framework. RA, AK, IGS, DD, IM, AN, JIV, RMS, AV, OAA_B, BG, TM, RJS and MET read the draft manuscript and made inputs prior to the final draft. All authors approved the final manuscript for submission. Funding Not applicable. Availability of data and materials The datasets used and/or analysed for the study are publicly accessible. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests Duangporn Duangthip and Jorma Virtanen are Associate Editors with BMC Oral Health. Morẹ́nikẹ́ Oluwátóyìn Foláyan and Maha El Tantawi are Senior Editorial Board members with BMC Oral Health. Arthur Kemoli is a member of the Editorial Board of BMC Oral Health. All other authors declare no conflict of interest.
CC BY
no
2024-01-15 23:43:47
BMC Oral Health. 2024 Jan 13; 24:77
oa_package/f7/3b/PMC10787988.tar.gz
PMC10787989
38218823
Background Triple-negative breast cancer (TNBC) accounts for around 15% of breast cancers. The prognosis of TNBC is unfavorable because of poor differentiation, strong invasiveness, and frequent recurrence, with a 5-year survival of less than 30% in the metastatic stage [ 1 ]. Only a fraction of metastatic TNBC (mTNBC) patients respond to immune checkpoint inhibitor (ICI) monotherapy or combined treatments; thus, selecting the subgroups that benefit from ICIs and improving the efficacy of ICIs in mTNBC remain challenging. High levels of PD-L1 and stromal tumor-infiltrating lymphocytes (TILs) reflect the potential benefit of ICIs in mTNBC [ 2 , 3 ]. Enrichment of stromal TILs contributed to reduced relapse and longer survival in TNBC [ 4 ]. Dieci et al. reported that the five-year overall survival (OS) of the high-TILs group after neoadjuvant chemotherapy was 91%, in contrast to 55% in the low-TILs group [ 5 ]. However, the distribution of TILs varies substantially depending on tumor heterogeneity [ 6 ], and access to TIL scoring is limited by test availability and high cost in hospitals. Peripheral blood indices have been reported to predict the effects of ICIs in non-small cell lung cancer and early-stage hepatocellular carcinoma, highlighting the need to develop circulating biomarkers to foresee recurrence risk in mTNBC [ 7 , 8 ]. Our study therefore examined the correlation between circulating blood cells and the therapeutic efficacy of ICIs in mTNBC.
Material and methods Patient population mTNBC patients treated with ICIs in the affiliated hospitals of Anhui Medical University from 2018 to 2023 were collected and screened. They were administered ICIs and/or chemotherapies. Tumor evaluation by CT (computed tomography) scanning was performed after every two cycles of treatment according to RECIST 1.1 (Response Evaluation Criteria in Solid Tumors version 1.1). Our study was approved by the Ethics Committee of Anhui Medical University (reference number PJ 2023-11-58). Treatment and data collection A total of 83 mTNBC patients were collected, and 50 patients treated with ICIs were included. The baseline features are listed in Table 1. The peripheral blood cell counts at baseline and prior to second-line treatments included white blood cell (WBC), absolute neutrophil (ANC), absolute lymphocyte (ALC), absolute monocyte (AMC) and blood platelet (PLT) counts. The NLR (neutrophil/lymphocyte ratio, ANC/ALC), MLR (monocyte/lymphocyte ratio, AMC/ALC), and PLR (platelet/lymphocyte ratio, PLT/ALC) were calculated. Tumor evaluation was performed after every two cycles of treatment; adverse events were assessed with the Immune-related Response Evaluation Criteria in Solid Tumors. Statistical analysis Patient features were described via descriptive statistics. Overall survival (OS) and progression-free survival (PFS) were collected and analyzed. Cox proportional hazards models were established with hazard ratios (HRs) and 95% confidence intervals (CIs). The multivariable death model was adjusted for age at initial diagnosis and number of therapeutic lines (0, 1, 2, 3 and higher). All statistical tests were two-sided with a significance threshold (alpha, α) of 0.05.
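A minimal sketch of how these indices and the survival model could be computed, assuming a tidy table of baseline counts and follow-up times (the file and column names are illustrative, not taken from the study database):

```python
# Illustrative sketch of the ratio indices and a Cox proportional-hazards fit.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("mtnbc_icis.csv")                 # hypothetical
df["NLR"] = df["anc"] / df["alc"]                  # absolute neutrophil / absolute lymphocyte
df["MLR"] = df["amc"] / df["alc"]                  # absolute monocyte / absolute lymphocyte
df["PLR"] = df["plt"] / df["alc"]                  # platelet / absolute lymphocyte

# Cox model for overall survival, adjusted for age and prior treatment lines
cph = CoxPHFitter()
cph.fit(df[["os_days", "os_event", "NLR", "age", "treatment_lines"]],
        duration_col="os_days", event_col="os_event")
cph.print_summary()                                # hazard ratios with 95% CIs
```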
Results Patient characteristics Among a total of 83 mTNBC patients, 50 cases that had received at least two cycles of ICIs were selected. The baseline features are listed in Table 1. The median age was 54 years, and over half ( n = 26, 52%) had received at least one line of palliative chemotherapy before immunotherapy. Low HER-2 expression was defined as 1 + to 2 + staining in the absence of HER-2 amplification by fluorescence in situ hybridization (FISH). 40% of the mTNBC cases were HER-2 low ( n = 20), and their disease control rate (DCR) with ICIs was 80%. The median OS (mOS) was 226 days and the median PFS (mPFS) was 145 days. The baseline and post-ICI peripheral blood biomarkers in mTNBC The mean baseline peripheral blood lymphocyte count (PBLC) of the ICI-responding mTNBC subgroup (SD, PR and CR after immunotherapy) was 1.242 × 10^9/L (95% CI: 1.125–1.359), significantly higher than that in the non-responding group, 0.925 × 10^9/L (95% CI: 0.0634–1.215) ( P = 0.021). After one cycle of ICIs, the mean PBLC values in the two groups were 1.258 × 10^9/L (95% CI: 1.137–1.380) versus 0.839 × 10^9/L (95% CI: 0.6014–1.077) ( P = 0.002). The NLR and MLR in the group benefiting from ICIs (2.30 [1.64–3.67] and 0.25 [0.17–0.32]) were also significantly lower than those in the ICI-failure group (4.78 [2.21–8.88] and 0.37 [0.27–0.62]) (NLR: P = 0.018; MLR: P = 0.023). The baseline monocyte counts and PLR were not significantly correlated with the response to ICIs. The correlation of peripheral blood biomarkers with immunotherapy outcomes Lymphocyte count reduction was defined as < 1.1 × 10^9/L [ 9 ]. High PBLC significantly improved OS and PFS in mTNBC, both in ICI-naïve cases and after ICIs (Fig. 1). After adjustment for treatment lines, age, liver metastasis and HER-2 expression, the baseline lymphocyte count in ICI-treated mTNBC was associated with OS (HR: 0.280; 95% CI: 0.095–0.823; p = 0.021). In the group with a baseline lymphocyte count over 1.10 × 10^9/L (LN-high group), mOS was 520 days (95% CI: 207.8-832.2), and the 12-month survival rate was 55.6%. In the group with a baseline lymphocyte count below 1.10 × 10^9/L (LN-low group), mOS was 155 days (95% CI: 117.4-192.6) (HR: 0.482; 95% CI: 0.233–0.999; p = 0.049), and the 12-month survival rate was 17.4% ( p = 0.06) (Fig. 2). The 6-month PFS rates in the two groups were 51.9% and 30.4%, without statistical significance ( p = 0.126). However, the 6-month PFS rate after one cycle of ICIs in the LN-high group (55.2%) significantly exceeded that in the LN-low group (23.8%) ( p = 0.027). The cutoff points of NLR and PLR were defined at the median values of the samples. Among NLR, PLR and MLR, only NLR was significantly associated with the survival of mTNBC when dichotomized at 2.75, with higher NLR predicting shorter survival (Fig. 1 and Supplementary Figures 1 and 2). Treatment lines, age, and HER-2 expression were adjusted for accordingly in the multivariable analysis. The baseline NLR was significantly associated with OS (HR: 1.150; 95% CI: 1.052–1.257; p = 0.002) and PFS (HR: 1.086; 95% CI: 1.002–1.177; p = 0.045) (Table 2). The cutoff point of NLR was 2.75. The mOS of the NLR-high group (≥ 2.75) and the NLR-low group was 143 days (95% CI: 92.4-193.6) and 520 days (95% CI: 110.8-929.2), respectively (HR: 2.575; 95% CI: 1.217–5.447; p = 0.013) (Fig. 2E). The mPFS in the two groups was 118 days (95% CI: 77.2-158.8) and 253 days (95% CI: 110.8-929.2), respectively (HR: 2.189; 95% CI: 1.085–4.414; p = 0.029) (Fig. 2F). The 12-month survival rates were 24% and 52% ( p = 0.041), and the 6-month PFS rates were 24% and 60% ( p = 0.01).
The baseline PLR also showed a positive correlation with survival time after immunotherapy ( p = 0.028) (Table 2); however, the inter-group differences were not significant (Fig. 2 and Supplementary Figures 1 and 2). The effects of HER-2 expression and anti-tumor therapeutic lines on the survival of ICI-treated mTNBC HER-2 low expression was defined as HER-2 1 + or 2 + immunohistochemistry without gene amplification. After adjustment for therapeutic lines, age, and liver metastasis, HER-2 expression in ICI-treated mTNBC was significantly associated with OS (HR: 3.253; 95% CI: 1.418–7.460; p = 0.005) and PFS (HR: 2.710; 95% CI: 1.226–5.992; p = 0.014) (see Supplementary Figures 3 and 5). The median OS and 12-month survival rate in the HER-2 low subgroup were 343 days and 50%, respectively. The median PFS and 6-month PFS rate in the HER-2 low group were 206 days and 55%, superior to the HER-2 zero group (mOS: 161 days; 12-month survival rate: 30%; mPFS: 127 days; 6-month rate: 33%). Metastatic TNBC patients who had received fewer than two lines of anti-tumor therapy prior to ICIs showed better OS and PFS than those who had received more (Table 2 and Supplementary Figures 7 and 8). Safety A total of 13 patients (26%) had grade 3 to 4 immune-related adverse events, leading to the cessation of immunotherapy. Baseline demographics did not differ significantly between mTNBC patients with and without adverse immune events (AEs). The most common grade 3 to 4 immune-related AEs (irAEs) included myositis/myocardial damage ( n = 7, 53.8%), pneumonia ( n = 3, 15.4%), myelosuppression ( n = 2, 15.4%), abnormal liver function ( n = 1, 7.7%), and skin reaction ( n = 1, 7.7%). ICIs were stopped immediately upon the occurrence of AEs, and glucocorticoids and/or immunoglobulins were administered accordingly. Four cases died of irAEs (three of cardiac dysfunction and one of myelosuppression). The baseline peripheral lymphocyte and monocyte counts were lower in the AE population (lymphocytes: p = 0.042, monocytes: p = 0.040).
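The group comparisons reported above (e.g., NLR dichotomized at 2.75) are of the kind that can be reproduced with Kaplan-Meier curves and a log-rank test; the sketch below uses the 2.75 cutoff from the text but assumed file and column names, so it is illustrative rather than the authors' code:

```python
# Illustrative Kaplan-Meier / log-rank sketch for the NLR >= 2.75 vs < 2.75 comparison.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("mtnbc_icis.csv")                 # hypothetical
df["NLR"] = df["anc"] / df["alc"]
high, low = df[df["NLR"] >= 2.75], df[df["NLR"] < 2.75]

kmf = KaplanMeierFitter()
kmf.fit(high["os_days"], event_observed=high["os_event"], label="NLR >= 2.75")
ax = kmf.plot_survival_function()
kmf.fit(low["os_days"], event_observed=low["os_event"], label="NLR < 2.75")
kmf.plot_survival_function(ax=ax)

res = logrank_test(high["os_days"], low["os_days"],
                   event_observed_A=high["os_event"], event_observed_B=low["os_event"])
print("log-rank p =", res.p_value)
```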
Discussion In our study, enrichment of absolute baseline lymphocytes improved the survival of mTNBC, and this effect was still reflected after one cycle of ICIs. In non-small cell lung cancer treated with ICIs, OS was prolonged with a higher absolute baseline lymphocyte count [ 10 ]. A relatively high lymphocyte count in peripheral blood is related to prolonged survival in gynecologic malignancies [ 11 ]. Anosheh et al. also reported a lower risk of death in early-stage TNBC with higher absolute lymphocyte counts [ 12 ]. ICIs reverse the effects of PD-1 on lymphocyte signal transduction via blockade of the PD-1–PD-L1 axis, which facilitates the production of effector T cells and memory cells, inhibits the differentiation of exhausted T (TEX) and regulatory T (Treg) cells, and strengthens anti-tumor T-cell activation [ 13 ]. It is difficult to induce anti-tumor effects in the absence of lymphocytes. Additionally, a higher PBLC contributes to better OS. Although baseline lymphocyte counts in ICI-naïve mTNBC are generally higher than in patients exposed to second- or third-line ICIs, the positive correlation between PBLC and OS remained significant after statistical adjustment. Mechanistically, advanced metastatic breast cancer shows stronger immune suppression with insufficient TILs in cancer tissues [ 14 , 15 ], and previous studies have demonstrated a correlation between TILs and the absolute lymphocyte count in breast cancer [ 16 ]. We also found that NLR correlated negatively with OS and PFS, in contrast to the positive correlation of PLR with OS. NLR and PLR are predictive of poor prognosis in many types of tumors [ 17 ]. In a recent bioinformatic analysis involving 2,994 patients, TNBC with a lower genetic NLR were enriched in several immunity-related gene sets, and TNBC carrying a lower NLR might benefit from ICIs [ 18 ]. Tumor-derived platelets recruit blood and immune cells for migration and establish an inflammatory tumor microenvironment at primary and metastatic sites [ 19 ]. An inflammatory environment reflected by a high NLR impairs the clinical efficacy of ICIs and chemotherapy. mTNBC patients with a baseline NLR of less than 3.16 showed prolonged OS and PFS after neoadjuvant chemotherapy [ 20 ]. In non-small cell lung cancer with a baseline NLR > 5.9, the therapeutic effect and long-term prognosis of anti-PD-1 inhibitors fell significantly [ 21 ]. In advanced gastric cancer and liver cancer, the NLR cutoff values were 3.23 and 3, respectively [ 22 ]. The median values of NLR and PLR in our study, used as cutoffs, were 2.75 and 157.28, respectively. There are no commonly recommended thresholds of NLR and related indices for predicting immunotherapeutic efficacy in cancer, partially because of varied tumor immune microenvironments. Studies focusing on ICI predictors have mainly been limited to non-small cell lung cancer and melanoma [ 23 , 24 ]. In pan-cancer studies, breast cancer patients with a high NLR benefited less from ICIs clinically. OS and PFS with a low NLR or high tumor mutation load increased partially after immunotherapy, but without statistical significance or pathological stratification [ 25 ]. Peripheral blood PLR and NLR in TNBC may be positively correlated with PD-L1 expression in immune cells, but available pathological evidence is lacking [ 26 ]. Thus, the correlation of NLR and PLR with prognosis in breast cancer treated with immunotherapy remains unclear. The prognostic effect of HER-2 low expression in breast cancer is still controversial. The HER-2 low group tends to have a smaller tumor size and a lower Ki67 index.
In a retrospective study of 3689 breast cancer patients, HER-2 low status was not associated with OS in TNBC without tumor proliferation genetic variation, and menopausal status, histological grade, Ki67 scores and the percentage of TILs showed no significant differences [ 27 ]. In a Chinese breast cancer population ( n = 772), no statistically significant difference in pCR rate was observed between the HER-2 low and HER-2 zero mTNBC groups; however, among non-pCR patients, prognosis was significantly improved in the HER-2 low group but not in the HER-2 zero group, consistent with our data ( n = 50) [ 28 ]. The prognostic value of HER-2 low expression should be validated in cohort studies with expanded sample sizes. In terms of the tumor microenvironment (TME), TILs were significantly lower in HER-2 low tumor tissue than in HER-2 zero samples, indicating that the HER-2 zero group might benefit more from ICIs. Therefore, TIL levels alone are not sufficient to explain the prognosis of the HER-2 low subgroup, and further investigations of the mapped oncogene networks in TNBC are needed. The prognostic value of HER-2 low expression may also be restricted to specific subtypes. T-cell exhaustion is assumed to occur after multiple lines of chemotherapy, partially contributing to the inferior efficacy of ICIs; our study also showed better survival when ICIs were initiated as the first or second line of anti-tumor therapy. Our study was limited by its small sample size and retrospective design, and the cohort was mainly of Han ethnicity. Moreover, different combinations of chemotherapies with immunotherapies might affect the exhaustion of bone marrow and PBLCs at baseline. mTNBC patients with low HER-2 expression showed longer survival with immunotherapy than the HER-2 zero subgroup; none had been treated with anti-HER-2 antibodies or antibody-drug conjugate (ADC) drugs.
Conclusions Our data suggest that baseline PBLC, NLR, and MLR, together with absolute lymphocyte counts after ICIs, clinically predict the efficacy of anti-PD-1 antibodies in mTNBC. Low HER-2 expression and early initiation of ICIs also improve the survival of mTNBC patients. Whole blood samples from TNBC patients are easy and convenient to obtain in clinical practice. These findings may assist ICI-related risk stratification and prevent unnecessary toxicities in those unlikely to benefit from ICIs. However, further investigation in large-scale, prospective studies is needed.
Background Immune checkpoint inhibitors (ICIs) can improve survival in metastatic triple-negative breast cancer (mTNBC); however, circulating blood biomarkers that predict the efficacy of ICIs are still being sought. Materials and methods In this study, we analyzed data from ICI-treated mTNBC patients collected at hospitals affiliated with Anhui Medical University from 2018 to 2023. The counts of lymphocytes, monocytes, and platelets and the ratio indexes (NLR, MLR, PLR) in peripheral blood were investigated via Kaplan-Meier curves and the Cox proportional-hazards model. Results A total of 50 mTNBC patients were treated with ICIs. A high level of peripheral lymphocytes and low levels of NLR and MLR, at baseline and after the first cycle of ICIs, were predictive of immunotherapy outcomes. Lymphocyte counts (HR = 0.280; 95% CI: 0.095–0.823; p = 0.021) and NLR (HR = 1.150; 95% CI: 1.052–1.257; p = 0.002) were significantly correlated with overall survival. A high NLR also increased the risk of disease progression (HR = 2.189; 95% CI: 1.085–4.414; p = 0.029). When baseline NLR was ≥ 2.75, the hazards of death (HR = 2.575; 95% CI: 1.217–5.447; p = 0.013) and disease progression (HR = 2.189; 95% CI: 1.085–4.414; p = 0.029) rose significantly. HER-2 expression and the line of anti-tumor therapy were statistically correlated with survival. Conclusions Before the initiation of ICIs, enriched peripheral lymphocytes and low neutrophil counts and NLR contribute to the prediction of survival. Supplementary Information The online version contains supplementary material available at 10.1186/s12905-023-02871-6. Keywords
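The abstract above summarizes a Kaplan-Meier and Cox proportional-hazards analysis of peripheral blood indices. The snippet below is a minimal, hypothetical sketch of that type of analysis in Python with the lifelines package; it is not the authors' code, and the file name and column names (os_months, death_event, nlr) are assumptions introduced only for illustration.

```python
# Minimal sketch of a Cox PH and Kaplan-Meier analysis of baseline blood indices,
# assuming a pandas DataFrame with hypothetical columns: os_months (follow-up
# time), death_event (1 = death, 0 = censored), and nlr (baseline NLR).
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

df = pd.read_csv("mtnbc_cohort.csv")  # hypothetical file name

# Cox proportional-hazards model for overall survival on baseline NLR
cph = CoxPHFitter()
cph.fit(df[["os_months", "death_event", "nlr"]],
        duration_col="os_months", event_col="death_event")
cph.print_summary()  # reports HR (exp(coef)), 95% CI, and p-value

# Kaplan-Meier curves stratified at the reported median NLR cutoff of 2.75
km = KaplanMeierFitter()
for above_cutoff, grp in df.groupby(df["nlr"] >= 2.75):
    km.fit(grp["os_months"], grp["death_event"],
           label="NLR >= 2.75" if above_cutoff else "NLR < 2.75")
    km.plot_survival_function()
```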
Supplementary Information
Acknowledgements The authors would like to thank Cheng Zhou for his academic assistance. Authors’ contributions X.Y.L., M.Y. and Y.D. collected and interpreted the clinical data. Y.Y.Z, C.Z. and W.T.X. completed the statistical analysis. D.A.S.M. and J.L.A.R. revised the manuscript. H.W. and X.L.H. wrote the manuscript and the graphical illustrations. All authors critically reviewed and approved the manuscript. Funding This work was supported by the grant from Anhui Natural Science Foundation Youth Program (2008085QH424) and Basic and Applied Basic Research Fund of Guangdong Province (2019A1515011331). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. Availability of data and materials All the data we used in this study were available as described in the “ Material and methods ” section. Declarations Ethics approval and consent to participate All procedures performed in this study were in accordance with the ethical standards of the Helsinki declaration. The approved number by the Institutional Review Board in the First Affiliated Hospital of Anhui Medical University is PJ 2023-11-58. Informed consent was obtained from all individuals. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:47
BMC Womens Health. 2024 Jan 13; 24:38
oa_package/77/03/PMC10787989.tar.gz
PMC10787990
38218790
Background Breast Cancer (BC) is the most common type of cancer among women worldwide. Even though BC incidence is higher in high-income countries (HICs) than in low- and middle-income countries (LMICs), the majority of deaths actually occur in LMIC settings [ 1 ]. The higher mortality rates observed in LMICs compared to HICs are thought to be a consequence of late detection and limited access to standard quality treatment [ 1 ]. In Mexico, BC has been the most frequent cancer and the main cause of cancer mortality among women since 2006 [ 2 ]. The majority of BC cases (65%) are diagnosed at advanced stages (IIB to IV) and the estimated overall survival is 72% [ 3 ]. Although the burden of BC disease is usually higher in urban populations, incidence has also been increasing in Mexico’s rural populations [ 4 ]. Evidence shows that women living in marginalized areas have a higher risk of preventable cancer death than other populations due to a combination of vulnerabilities that often result in late detection and delayed or incomplete treatment [ 5 – 7 ]. Indigenous minorities in Mexico face several compounding vulnerabilities: they tend to have lower socioeconomic status, less access to education, and more commonly live in small rural communities that lack access to many services, including healthcare [ 7 – 10 ]. Gender and ethnicity interact, and indigenous women in Mexico face a double social vulnerability: that of being women in a society where power structures favor men, and that of belonging to a minority ethnic group that has suffered systemic discrimination for more than 200 years [ 11 , 12 ]. Indigenous women in Mexico experience the greatest lags in health (e.g., the lowest life expectancy at birth and the highest maternal and infant mortality ratios), and face the greatest barriers to accessing health services, including discrimination at healthcare facilities [ 13 , 14 ]. The superimposition or intersectionality of these social factors of vulnerability configures the systematic inequalities that determine the subordinate position of indigenous women in the social structure [ 15 ]. Several barriers to early BC diagnosis have been described in the international literature for minority populations living in HICs, such as immigrants and Afro-descendants [ 16 – 20 ]. However, there is a dearth of studies related to barriers and facilitators of early BC diagnosis in indigenous populations worldwide. In Mexico, the scarce existing literature on barriers and facilitators for early detection of cancer among indigenous women has been limited to understanding their participation in cervical cancer screening [ 21 – 23 ]. Therefore, we undertook this qualitative study to explore barriers and facilitators for early BC diagnosis as perceived by Otomí women living in the suburbs of a city in central Mexico.
Methods Design An exploratory and descriptive qualitative study was conducted [ 24 , 25 ] with Otomí women living in Jiquipilco, State of Mexico. The study received approval from the National Cancer Institute of Mexico’s institutional review boards (021/041/IBI) (CEI/1592/21). Study setting The State of Mexico is a neighboring state of Mexico City. Jiquipilco is located approximately 45 km from Toluca, the capital of the state, and has a population of 69,031 inhabitants, of which 23.2% identify as indigenous [ 26 ]. It has an urban central area surrounded by rural areas, and its main economic activity is agriculture. The Otomí people are one of the original ethnic groups of Mexico and live across different regions of the country [ 27 ]. In the State of Mexico, the Otomí population is concentrated in 21 municipalities, and Jiquipilco is one of them. In Jiquipilco, the Otomí people tend to concentrate in the rural outskirts of the municipality, living in conditions of poverty, with limited access to services, education and employment opportunities [ 10 , 27 , 28 ]. The Otomí people of the State of Mexico tend to work in agricultural activities part of the year, mainly in the cultivation of corn, beans, wheat, oats and maguey [ 27 ]. In the months when there is no agricultural activity, they migrate from rural communities to the Metropolitan Areas of Toluca and Mexico City, where they are mostly employed as domestic workers, peddlers or construction workers [ 27 , 29 ]. These occupations are in the informal sector of the economy, and therefore most Otomí people are not covered by social security health insurance, which is provided through formal employment in Mexico. National Guidelines for BC Control in Mexico recommend: monthly breast self-examination starting at age 20, annual clinical breast examination (CBE) starting at age 25, and screening mammography every 2 years starting at age 40 and up to 69 years [ 30 ]. Access to CBE and screening mammography varies according to women’s health insurance coverage and their capacity to pay for private services. At the time of this study, approximately 40% of the national population was covered by a social security health insurance scheme and only 3% had private insurance. For the uninsured, the state offers health services through its own infrastructure. The mammography units closest to Jiquipilco are in the state capital (Toluca), which is approximately 45 km from Jiquipilco. Study participants We used intentional non-probability sampling to find adult Otomí women (in Mexico, legal adulthood starts at 18 years of age) native of and currently living in Jiquipilco who could speak Spanish and had no personal history of breast cancer [ 24 ]. The main objective of intentional sampling is to elicit different perspectives from people who represent the opinion of their group of reference [ 31 , 32 ]. The vast majority of indigenous people who live in Jiquipilco speak Spanish. According to data from the National Census, 8.0% of residents of Jiquipilco speak an indigenous language, and only 0.1% of the population speaks an indigenous language and no Spanish [ 33 ]. The State’s Council for the Integral Development of Indigenous Peoples (CEDIPIEM for its acronym in Spanish) helped us establish contact with the community to facilitate the invitation of potential participants. CEDIPIEM is a decentralized public entity whose purpose is to define, execute and evaluate policies directed at improving the lives of the State of Mexico’s indigenous population [ 27 ].
Even though, inviting participants through an official organization could have increased the risk of selection bias, this was the best alternative we found to identify and invite otomí women of the region who would trust our invitation. We explained the study to all of those invited, emphasizing that participation was voluntary and that there would be no repercussions on health care or social benefits if they refused to participate. Written informed consent was signed by all participants previous to their participation in the focus group interviews. We included all who were willing to participate. Conceptual framework Our study was guided by a conceptual framework that integrates the Social Ecological Model [ 34 ], the Health Belief Model [ 35 ] and the Institute of Medicine’s Healthcare Quality framework [ 36 ]. Figure 1 illustrates how we integrated these three theoretical perspectives to guide our interview analyses in the identification of the Otomí women’s perceived barriers and facilitators for early diagnosis of BC. The Social Ecological Model (SEM) is a useful framework to identify the full range of factors that can influence health and health behavior. These factors can be located at different levels: individual, interpersonal, institutional or organizational, community, and public policy levels. The SEM framework emphasizes the interaction and interdependence between factors within and across all these levels [ 34 , 37 , 38 ]. It has been used to study diverse social problems and health behaviors [ 39 – 45 ]. The SEM can be used to integrate components of other theories. We used the Health Belief Model (HBM) to strengthen the analysis of individual level factors that exert an influence on Otomí women’s help seeking behavior, and the Healthcare Quality (HCQ) framework to strengthen our analysis of the organizational level factors (our participants’ perception of the quality of services for BC early diagnosis: primary care clinics and breast imaging services). The HBM stipulates that the following groups of factors influence the likelihood of a person taking a recommended preventive health action: demographic variables -age, race, socioeconomic level-; psychological variables and knowledge of disease (in this case, BC); perceptions of the disease (perceived susceptibility and perceived seriousness of BC); perceptions of the health behavior of interest (perceived benefits and perceived barriers to act on the recommended health behavior) and cues to action [ 35 ]. Perceived susceptibility refers to a person’s subjective perception of their own risk of developing BC. Perceived severity includes assessments of severity and the medical and social consequences of getting BC. Perceived barriers refer to the possible negative effects of the preventive or health behavior such as its costs, secondary adverse effects, and time required. Perceived benefits refer to the individual’s perception of the effectiveness of the health behavior. The main health behavior we were interested in understanding was timely seeking of medical care for breast symptoms, but we also assessed the study participants’ perceived benefits of breast self-examination, screening clinical breast examination and screening mammography. Finally, cues to action are events or things that trigger people to act or perform a certain health behavior (e.g., medical recommendation, mass media messages, etc.) [ 46 ]. Over time this model evolved to include self-efficacy as an important determinant in health behavior [ 47 ]. 
Self-efficacy is understood as the conviction of people in their own capability to successfully perform a certain behavior [ 48 ]. Finally, we used the Health Care Quality (HCQ) framework to strengthen our analysis of the health system (organizational level) factors. According to the HCQ framework, quality healthcare should be: 1) safe, avoiding harm to patients from the care that is intended to help them, 2) effective, providing services based on scientific knowledge to all who could benefit and refraining from providing services to those not likely to benefit, 3) patient-centered: providing care that is respectful of and responsive to individual patient preferences, needs, and values and ensuring that patient values guide all clinical decisions, 4) timely: reducing waits and sometimes harmful delays for both those who receive and those who give care, 5) efficient: avoiding waste, including waste of equipment, supplies, ideas, and energy, and 6) equitable: providing care that does not vary in quality because of personal characteristics such as gender, ethnicity, geographic location, and socioeconomic status [ 36 ]. Even though we did not analyze patient nor services outcomes, using this framework we were able to identify our participants’ perceptions on HCQ dimensions based on their previous interactions with health services. In this study, we were particularly interested in our participants’ previous experiences with health services and their perceptions regarding patient-centeredness, as this can be especially challenging in the context of care for women that belong to a historically marginalized and discriminated social group. Data collection We conducted three focus group interviews with 19 women in November 2021 [ 49 ]. Focus group interviews are recognized as a useful tool to obtain information about collective points of view and their meanings, and to generate a rich understanding of the experiences and beliefs of the participants [ 50 ]. MST moderated the interviews. She is a woman, psychologist and qualitative researcher with no previous relationship with the Otomí community at Jiquipilco nor with the CEDIPIEM. The interviews were conducted using a semi-structured interview guide with open-ended questions to ask participants about their perceptions of barriers and facilitators, knowledge, attitudes and beliefs about cancer early detection in general and more specifically about early BC diagnosis. We developed our interview guides based on our conceptual framework and key findings from the existing literature on barriers and facilitators for early detection of BC among underserved populations. Each focus group interview lasted approximately 60 minutes and the number of participants in the groups ranged between 4 and 8. All interviews were audio-recorded. Data saturation was achieved with the last focus group and, therefore, no more participants were recruited. We decided saturation was reached when no new codes appeared and each of the codes had been applied to a sufficient amount of data [ 49 , 51 ]. We also collected descriptive demographic data from all the participants including age, marital status, occupation, years of school education and family income. Data analysis Participants’ responses were transcribed verbatim and all transcripts were de-identified prior to analysis. Transcripts and field notes were organized using Atlas.ti 8 software to aid the analysis. 
We used a pragmatic approach to data interpretation, employing both deductive and inductive analysis to explain the findings. Analytical processes that engage both deductive and inductive strategies have been shown to help researchers apply concepts from the literature and theory, which can in turn support the trustworthiness and applicability of the study [ 52 ]. We identified barriers and facilitators for early BC diagnosis guided by our conceptual framework (Fig. 1 ), which integrates theoretical perspectives of the Social Ecological Model, the Health Belief Model and the Institute of Medicine’s Healthcare Quality Framework [ 53 , 54 ]. Data were also coded using the constant comparison method, an iterative and inductive process of reducing the data through constant recoding to ensure that all data are systematically compared to all other data in the data set [ 55 , 56 ]. Using this strategy, we continually compared data to other data within a single interview, between interviews within the same group, and between interviews from different groups [ 57 ]. We read all interview transcripts carefully several times in order to identify the codes through the participants’ narratives. To enhance trustworthiness and rigor, we used triangulation in coding the data. Data were coded by two different researchers: MST, a psychologist with postgraduate studies in health psychology, and KUS, a medical doctor and health systems researcher. The coding results were then reviewed for cases with differing results, and consensus was reached between the two coders to establish the final codes.
Results Nineteen Otomí women participated in the study. To keep the confidentiality agreement we made with all of our participants, the names used in this paper are pseudonyms. Participant sociodemographic characteristics are shown in Table 1 . Figure 2 summarizes the perceived barriers and facilitators for BC early diagnosis that we identified in the interviews, and organizes them at the different levels of the Social Ecological Model (SEM). The Health Belief Model constructs were used to code the barriers and facilitators identified at the individual level of the SEM, and the Healthcare Quailty framework was used for the Health Services Organization level. The arrow crossing through all levels represents gender and ethnicity as the key social processes that act at every level of the SEM to influence individual women’s help-seeking behaviors for breast symptoms and timely access to quality medical care for BC early diagnosis. Perceived barriers to early BC diagnosis Health policy barriers Our study participants perceived the elimination of the social program “ Progresa-Oportunidades-Prospera ” (POP) as an access barrier to healthcare services. This was a federal program that gave conditional cash transfers to families living in poverty to improve their access to nutritional food, healthcare and education. The program operated for 20 years and was terminated in 2019 by the current government [ 53 ]. The POP program provided basic health services free of charge, in addition to health promotion actions under three modalities: self-care promotion; individualized guidance and counseling during medical consultations; and health promotion messages aimed at the families of beneficiaries [ 58 ]. Our interviewees reported that through POP they had access to special health programs, health information and better access to health care. They perceive that, as a result of its elimination, people seek less care at health centers, as they report experiences of not receiving medical attention at the health center when they need it, and they feel “lost” regarding where to seek medical attention. Social and cultural context barriers Cultural gender norms Gender issues constantly emerged in the participants’ narratives. Women spoke about cultural gender norms and men’s attitudes towards sexuality as a barrier to BC early diagnosis. They referred to men as being “ machista ”, trying to control their female partners’ behavior. They explained that in their community it was prohibited for women to talk about their breasts, to examine their own breasts, and to get general check-ups with male doctors. Our participants described that this “sexual taboo” limits them to talk about their bodies, their breasts and breast diseases because they feel embarrassed. They said that they do not know their own bodies and that they don’t explore their breasts because of shame and fear of being judged. Due to the assigned gender roles in the community, girls receive less school education than boys. Our participants reported that once girls finish the mandatory 6 years of elementary school education in Mexico, they are considered to be ready for marriage. These low levels of schooling not only have a negative impact in indigenous women’s health literacy and awareness of different health problems, like cancer, but also in their own empowerment to fight for their rights within their families, their communities and in their exchanges with healthcare services. 
Additionally, as part of their gender roles, women in the community are expected to take care of their children, spouse, and other family members, prioritize the care of others over their self-care, and are also responsible for all the housework (buying food, cooking, cleaning, washing clothes, etc.). They usually have several children as they are not empowered to negotiate birth control with their partners. More and more women are also working outside the house, in search for better economic conditions, but the gender roles of taking care of others and the household are still in place. Our participants referred that they hardly have any time to take care of themselves and this makes it very difficult to seek healthcare when they feel ill and even more so for preventive activities. Myths and beliefs about illness in general and about cancer Beliefs about illness in general were also perceived as barriers for early diagnosis of BC by our participants. For example, they believe that if they think about a certain disease, they can attract it and then fall ill. For this reason, people in the community tend not to talk or think about diseases, as they believe that this way they will avoid getting sick. This makes it very difficult for people in the community to be willing to get health information and to participate in preventive and early detection behaviors. Another common belief about illness in this community, as described by our participants, is that they only perceive themselves to be ill when they feel that their life is in danger. Additionally, they seek medical care only if they feel ill or interpret their symptoms as being life-threatening. Cancer stigma Cancer stigma was perceived as a barrier to seeking medical care. Some participants reported reluctance to talk about cancer in their community and commented that women with BC generally do not reveal their diagnosis even to their own families. They believe this is because of the common belief that cancer is a consequence of having misbehaved, “having been bad”. They see cancer as a divine punishment, so people avoid sharing their diagnosis because of fear of feeling judged by their family members and friends. Additionally, in regard to BC, they spoke about the stigma in relation to mastectomy and “being a woman without breasts”. Cancer in general is viewed as a fatal disease, which they associate with death, pain, suffering and aggressive treatments. Therefore, if they think their symptoms are related with cancer, they are likely to postpone seeking medical care in order to avoid what they see as aggressive unnecessary treatments. This belief is further confirmed once people seek care very late and so in fact receive aggressive treatments and nevertheless die soon. Traditional medicine use Also, our participants said that sometimes people in their community prefer using traditional medicine and postpone seeking medical care, or interrupt medical treatment in favor of traditional medicine treatment. “...Well, a neighbor of my community was going with a “healer” who is, according to her, very famous for healing people with cancer...She was being treated in a hospital in Mexico City, but she abandoned her treatment and instead went to see the healer. She died a year later...” (Cecilia, 28 years old). Fear of COVID Our participants reported that during the pandemic they avoided going to healthcare facilities because of fear of getting infected and dying of COVID. 
In addition to this postponement of health service utilization due to fear of COVID, they also reported difficulties to access health services due to the reconfiguration of healthcare services to prioritize attention for COVID. Those who tried seeking care faced even longer waiting times than usual to get consultations and tests. Health services organization The majority of the perceived barriers for BC early diagnosis described by our participants were at the level of the health system. According to the HCQ framework, quality healthcare should be safe, effective, patient-centered, timely, efficient and equitable. Our focus groups participants perceived quality problems in the public health services that they are entitled to use, and the problems they described were mainly related with disrespectful (instead of patient-centered), untimely, and inequitable care. Discrimination/Mistreatment by health care personnel Our study participants reported experiences of disrespectful and even discriminatory treatment in their interactions with healthcare personnel in public services. They shared several personal experiences of abuse by healthcare personnel in public services. They questioned the reasons for this, and explained that they think it is due to a combination of their low levels of education, being women and being indigenous. Lack of trust in health personnel Many of our participants expressed a lack of trust in doctors and healthcare personnel in general due to these past personal negative experiences as well as stories they have heard from other people in their community. For this reason, they try to seek care in private services which they perceive as better quality. The problem is that they often can’t afford it. In more extreme cases, our participants described being denied healthcare. The health workers would tell them to return to their homes without giving them care. They would be told that it was due to administrative issues, or lack of time, or insufficient doctors, or sometimes without any explanation. This was perceived by our participants as “unfair” treatment. Language barriers They also commented that language is a barrier for indigenous people who don’t speak Spanish. This mainly affects the elderly. Our participants expressed that healthcare personnel get angry when women don’t speak Spanish. Long waiting times/Difficulties in making an appointment Our participants described long times to get medical appointments at the local health center, long times to get referred to specialists, to receive test results and long hours waiting at the clinics to receive medical attention. They also described very complex administrative procedures to receive care, like having to arrive very early in the morning to the clinic and then stand in line for several hours in an attempt to get a medical consultation, without guarantee that they would succeed. Costs/Distance to health services Our participants described that financial barriers also limit their access to healthcare services, even if the consultations at public services are available without cost to the patient. Having to cover costs of medical care is not only a barrier for private service use. Our participants described that even if they manage to get a consultation in public services without having to pay, they often can’t cover the costs of the medicines that are prescribed. In addition to direct medical costs, there are costs related to transportation and time. 
Some participants explained that the people in the community have to travel long distances and take several means of transportation to get to medical services, especially if they need specialized care. Interpersonal barriers Influenced by peers At this level of the Social Ecological Model, the influence of peers and family came up as very relevant in the decision of whether or not to seek care, when to seek care, and what type of care to seek: whether traditional medicine, or the local public health center, or even private services. It was reported that when women are ill, instead of going to the doctor, family and friends recommend treating with natural remedies, even one participant reported that a woman in the community with BC abandoned cancer treatment for traditional medicine on the recommendation of her husband. Individual barriers Lack of cancer awareness There was in general low cancer awareness among our participants. Even though they had heard about BC, they recognized they did not have enough information about the disease, its risk factors and how to diagnose it early. Low risk perception of breast cancer Although participants know other people who have been affected with BC, or have heard about it, some participants perceived themselves as not being at risk of developing BC. The fact of thinking that BC is mainly transmitted through family inheritance, makes them feel at low risk of developing it. In words of a participant “I am certain that I will not develop that disease because no one in my family has had it”. (Patricia, 48 years old). Fear of cancer Among our participants, fear of having cancer was perceived as an important barrier to seek care. They described that the fear of having the diagnosis confirmed could cause women in their community to postpone health care-seeking for breast symptoms. This fear is related with their fatalistic attitudes towards cancer. Perceived facilitators to early BC diagnosis Social cultural level Information by media One of the elements that were found within the cues to action dimension were the messages and information received through the media (radio and television commercials), social networks, and screening mammography promotion activities done in their communities. They perceived all this information as facilitators for early breast cancer diagnosis. They find informative posters in the community and community health workshops very useful to keep themselves informed and to inform younger people on the importance of taking care of their health. Health services organization Respectful patient-centered medical care One of the main perceived facilitators that women emphasized would facilitate early cancer diagnosis and medical attention of any health problem was receiving respectful, empathetic care with good attitudes of healthcare personnel and effective communication between doctors and patients. This was more aspirational than actual experiences of the participants. Information by doctors and promoters Women reported receiving information about BC by health care personnel in public and private services. They had heard about breast self-examination mainly in public primary care clinics through nurses “We need a lot of talks, but I would like to include young people because they are beginning to take care of themselves so that they know the care they should have is very important” (Verónica, 46 years old). 
Interpersonal facilitators Social support Social support from other women and from their family members, especially their partners was reported as a potential key facilitator. Women shared that hearing experiences of women who had cancer could be a strong motivator for them to check themselves and go to the doctor. They also commented that the support of other women is key, especially in two ways: by accompanying them to the health center and by being able to share and discuss these issues with them. Individual facilitators Perceived severity of breast cancer The fact that women perceived BC as a serious disease that begins without symptoms, progresses over time if women do not receive medical attention, and that can spread to other parts of the body and cause death, can motivate them to look for medical care. Perceived benefits of early BC diagnosis Almost all participants were aware of the importance of cancer early detection. They mentioned that early detection increases the chances of cure, and that this motivates them to keep themselves informed and to talk about it with their peers.
Discussion This is the first study to explore perceived barriers and facilitators to timely healthcare seeking and access for early diagnosis of BC among Otomí indigenous women in Mexico. The results reveal barriers and facilitators at different levels of the Social Ecological Model that may inform interventions to improve early diagnosis of BC in this vulnerable population. Among the most salient barriers were: the elimination of well-established social programs that facilitated access to healthcare, fatalistic cultural beliefs about cancer, cultural gender roles related with prioritization of the care of other people, sexual taboos that can interfere with self-detection and healthcare seeking for breast symptoms, lack of trust in healthcare providers due to past experiences of mistreatment and discrimination, and access barriers for use of healthcare services. One of the most striking findings of this study are the participants’ descriptions of mistreatment by healthcare personnel that they have experienced when using medical services. These seem to be a consequence of healthcare ethnic and gender discrimination. Although until recently healthcare racism towards indigenous people was overlooked, both in academia and public policy [ 59 ], there is emerging scientific evidence that identifies various forms of discrimination as a structural determinant of the lack of access to healthcare for these populations [ 60 – 62 ]. Healthcare personnel may hold unconscious biases and heuristics based on gender and ethnic stereotypes [ 63 ], that can negatively impact patient care [ 64 ]. These biases have been found to be further compounded when healthcare providers are faced with patients who are not only women but are additionally poor, from a rural community, and belong to a marginalized ethnic group [ 65 ]. The lack of physician cultural competency and implicit bias by clinicians toward ethnic, racial and gender minorities have been shown to result in the provision of unequal healthcare and disparities in cancer outcomes [ 66 ]. In turn, these experiences of mistreatment and discrimination, damage the patients’ trust in healthcare providers, and thus, can act thereafter as barriers to participation in screening, timely healthcare seeking for cancer symptoms and adherence to treatment [ 67 – 71 ]. The preference of traditional medicine over formal medical care services that is described by some of the study participants could be related, in addition to cultural health beliefs, to the mistreatment that indigenous people often experience when seeking medical care. The use of traditional medicine and home remedies has been described in other studies as a barrier to healthcare seeking of formal medical services and cancer awareness in other indigenous populations in Mexico and Ghana [ 72 – 74 ]. It has been described that they usually consult a traditional healer as a first point of contact, they believe that traditional healers have supernatural powers they have inherited from their ancestors, which cements their authority in the community [ 73 ]. Indigenous people have more trust in traditional medicine and traditional healers than in modern western medicine and medical care providers [ 75 – 77 ], although anthropological evidence allows us to recognize that they also value and make use of allopathic medicine [ 78 ]. 
Indigenous populations throughout the world have used traditional medicine for many generations, and many communities perceive it as valuable, affordable, and more acceptable as it aligns with their sociocultural beliefs [ 79 – 81 ]. In contrast, indigenous population have described western medicine as very impersonal with very short consultations, little space and opportunity to express their concerns, and almost no explanations of their illness [ 77 ]. However, criticism is directed primarily at how they are treated by healthcare personnel, not at the effectiveness of western medicine therapeutic resources. Also, access to traditional healers is easier for indigenous people both in terms of geographic proximity and waiting times to get a consultation. In addition, within these relationships with traditional healers there are no forms of discrimination and racism based on ethnic differences [ 75 , 82 ]. Our participants described that they perceived easier access to formal healthcare services when the social program POP (Progresa-Oportundiades-Prospera) was in place. That program was coupled to health promotion and prevention activities that took place in health centers. After the elimination of this program in 2019, our participants describe that they lost the direct link they had to healthcare facilities where they could seek care. The elimination of successful social and health programs has also been described as an access barrier for use of reproductive health services by indigenous women in other studies [ 83 ]. The COVID-19 pandemic was a global health crisis that generated uncertainty and fear around the world. Learning and social interaction are factors that help us to understand how risk awareness and fear are generated in the presence of a pandemic [ 84 ]. Our results show that fear of becoming infected with COVID-19 acted as a barrier to approaching health centers. This is consistent with other studies, where COVID-19 fatality rates were higher in indigenous population in comparison to the rest of the Mexican population [ 85 ]. Additionally, due to the pandemic, many health centers were converted to only attend COVID-19 cases, while others were saturated, and this complicated access to the early diagnosis and treatment of cancer worldwide [ 86 – 88 ]. Another group of salient barriers identified in this study were cultural beliefs and roles: fatalistic cultural beliefs about illness, cancer stigma, gender roles related with prioritization of the care of other people, and sexual taboos that can interfere with the detection of breast symptoms and healthcare seeking for symptoms. The study participants described a widespread cultural belief among otomíes of cancer being seen as a divine punishment for “bad behavior”. Similar beliefs have been reported for African American and Hispanic women residing in the USA [ 89 ]. Seeking for healthcare is likely to be postponed if a person doesn’t believe there is much she can do to influence her health [ 89 , 90 ]. Our study participants also described that a commonly shared belief in their community is that if a person thinks about an illness, he/she may attract such illness. This can also act as a barrier to preventive and healthcare seeking behaviors, as people opt to avoid thinking about any diseases in order to avoid being affected by them. To our knowledge, this belief has not been described in previous studies, but given its relevance, should be intentionally explored in future studies. 
Another salient cultural belief that our participants described as having an impact on health behavior of women in particular is that of sexual taboos and embarrassment to touch their own bodies or have healthcare professionals examine their bodies. They see the touching of their own breasts as a sexual behavior that is disapproved in their community, especially by the men. In the same line, the male partners disapprove of their wives having their breasts or sexual organs examined by a doctor, especially if it is a man. These sexual taboos and male control over their female partners’ health and sexuality can act as barriers for early discovery of breast symptoms as well as for early seeking of medical care, as it has been reported for other populations [ 91 , 92 ]. Embarrassment to be seen or touched by healthcare personnel has also been reported in the literature as a barrier for not participating in BC screening programs in other countries [ 93 ]. Finally, the main barriers identified at the individual level were limited cancer awareness -with misinformation about the disease, its risk factors and how to detect it early- and fear of being diagnosed with BC. Limited cancer awareness has been documented as a major barrier to seeking care, using medical services, as well as late detection and poor outcomes [ 16 , 17 , 20 , 94 , 95 ]. To increase individuals’ knowledge, awareness, risk perception and motivation to seek healthcare, educational interventions can be effective [ 96 ]. But, if they are to be effective in specific indigenous populations, the design of these educational interventions need to be tailored according to the needs, beliefs and cosmovision of the indigenous population towards which they will be directed to [ 97 ]. In addition, interventions directed to increase the perception of severity of the disease should simultaneously increase the perception of benefits of early diagnosis so that fear does not stop women from seeking care. This study has some limitations. Due to our qualitative design and purposeful sampling strategy, our findings are not generalizable to the entire otomí population, not even that residing in Jiquipilco. Also, even though our study participants were instructed to speak on behalf of cultural views that would be representative of their communities, they may also have provided personal views. However, we believe this information is valuable as personal views are often a reflection of shared cultural values.
Conclusions This study identified barriers and facilitators for early diagnosis of BC as perceived by Otomí indigenous women. Healthcare providers and policy makers should take notice of indigenous women’s beliefs, access barriers, and experiences of healthcare discrimination when designing programs that aim to facilitate early BC diagnosis and treatment for these vulnerable populations. It is urgent to improve the quality of care and access to the public healthcare services available in Mexico for the poor, especially for health problems where access to early diagnosis and treatment is key to good outcomes, as is the case for cancer. Indigenous women, in addition to often being poor, too frequently face discrimination by healthcare providers because of their gender and ethnicity. Thus, beyond cultural differences, discriminatory treatment stands as a structural barrier to Otomí women’s access to BC screening services, a characteristic shared with other Amerindian indigenous groups. Measures to prevent and eradicate all forms of mistreatment and discrimination in healthcare services are imperative.
Background Literature on barriers and facilitators for early detection of breast cancer (BC) among indigenous women is very scarce. This study aimed to identify barriers and facilitators for early BC diagnosis as perceived by women of the Otomí ethnic group in Mexico. Methods We performed an exploratory qualitative study. Data were collected in 2021 through three focus group interviews with 19 Otomí women. The interview transcripts were analyzed using the constant comparison method and guided by a conceptual framework that integrates the Social Ecological Model (SEM), the Health Belief Model and the Institute of Medicine’s Healthcare Quality Framework. Results Barriers and facilitators were identified at several levels of the SEM. Among the main barriers reported by the study participants were beliefs about illness, cancer stigma, cultural gender norms, access barriers to medical care, and mistreatment and discrimination by healthcare personnel. Our participants perceived the following as facilitators: information provided by doctors, social support, perceived severity of the disease, and perceived benefits of seeking care for breast symptoms. Conclusions Healthcare policies need to be responsive to the particular barriers faced by indigenous women in order to improve their participation in early detection and early help-seeking for breast symptoms. Measures to prevent and eradicate all forms of discrimination in healthcare are required to improve the quality of healthcare provided and the trust of the indigenous population in healthcare practitioners. Keywords
Abbreviations BC: Breast cancer; LMICs: Low- and middle-income countries; HICs: High-income countries; CEDIPIEM: The State Council for the Integral Development of Indigenous People (for its acronym in Spanish); SEM: Social Ecological Model; HBM: Health Belief Model; HCQ: Healthcare Quality; POP: “ Progresa-Oportunidades-Prospera ” (a social program in Mexico); MST: Minerva Saldaña Téllez; KUS: Karla Unger Saldaña. Acknowledgements We want to thank the Center for the Development of the Indigenous People of the State of Mexico (CEDIPIEM) for their support in participant recruitment and provision of adequate spaces for the interviews. We are grateful to all the participants for allowing us to hear their voices and for sharing their experiences. Authors’ contributions Study conception and design: Saldaña-Téllez, M. and Unger-Saldaña K. Data collection: Saldaña-Téllez, M. and Cano-Garduño, L. Data analysis: Saldaña-Téllez, M. and Unger-Saldaña, K. Drafting of the manuscript: Saldaña-Téllez M., Meneses-Navarro, S. and Unger-Saldaña K. Writing review and editing: All authors. All authors read and approved the final manuscript. Funding MST was supported by the Council of Science and Technology of the State of Mexico (COMECYT) to carry out this project. Availability of data and materials The data (de-identified interview transcripts in Spanish) that support the findings of this study are available on request from the corresponding author [KUS]. Declarations Ethics approval and consent to participate The study received approval from the National Cancer Institute of Mexico’s institutional review boards (021/041/IBI) (CEI/1592/21). All methods have been performed in accordance with the Declaration of Helsinki. Informed consent was obtained from all individual participants included in the study. Consent for publication The possibility of publication of de-identified data was explained to the participants in the written informed consent forms. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:47
BMC Womens Health. 2024 Jan 13; 24:33
oa_package/2a/43/PMC10787990.tar.gz
PMC10787992
38218934
Background Thyroid eye disease (TED), also known as thyroid-associated ophthalmopathy (TAO) or Graves’ orbitopathy (GO), is the most common autoimmune orbital disease and affects 25–40% of patients with Graves’ disease and other thyroid disorders [ 1 , 2 ]. Based on the immune status and disease duration, the course of TED can be divided into two phases: an active phase and an inactive phase [ 1 , 2 ]. With TED progression, lesions develop in all orbital soft tissues [ 3 ]. Additionally, TED can be classified as mild, moderate-to-severe, or sight-threatening, based on the evaluation of its clinical manifestations, such as visual acuity, proptosis, and upper eyelid retraction [ 1 ]. Intravenous glucocorticoid (IVGC) therapy is the routinely recommended first-line treatment for active and moderate-to-severe TED, offering potent anti-inflammatory effects that can alleviate edema of the extraocular muscles (EOMs) and orbital lipid hyperplasia [ 1 , 4 , 5 ]. However, the therapy inevitably carries risks and can result in side effects such as hypertension, hyperglycemia, and osteoporosis [ 6 – 8 ]. Therefore, proper implementation of IVGC therapy is crucial to achieve maximum benefit and avoid ineffectiveness. The clinical activity score (CAS) has been used to classify the activity of TED and to prescribe IVGC therapy in patients with an active status (CAS ≥ 3) [ 1 , 2 , 4 ]. However, CAS does not provide precise prediction, since it only records ocular inflammatory manifestations and perceived pain, while pathologic lesions in the posterior orbit are overlooked. In a previous study that applied CAS as the criterion, 38·46% of active TED patients (CAS ≥ 3) were found to be unresponsive to IVGC, whereas 45·45% of the inactive patients (CAS < 3) turned out to be responsive [ 9 ]. Owing to its ability to reveal alterations throughout the orbital soft tissues, magnetic resonance imaging (MRI) has been increasingly utilized for TED examination and contributes markedly to disease activity assessment and therapy response prediction [ 10 , 11 ]. T2-weighted imaging (T2WI) is a commonly used MRI sequence in clinical applications that provides anatomical and metabolic information on soft tissues [ 12 , 13 ]. The pathologic changes of the orbital tissues in TED, characterized by inflammatory edema, chronic fibrosis, and fatty degeneration, can be clearly revealed on T2WI [ 12 , 13 ]. Although the signal intensity ratio (SIR) and other simple T2WI metrics have a certain predictive value for IVGC therapy response, their effectiveness is limited by insufficient exploitation of the images [ 14 ]. Therefore, conventional semiquantitative measurements may not ideally meet the requirements of therapy response prediction. In recent years, radiomics analysis has emerged as a promising solution to this issue by extracting high-throughput quantitative features for further analysis and model construction [ 15 ]. It is widely utilized in the field of oncology for the prediction of macrovascular invasion and recurrence [ 16 , 17 ]. It was first applied to orbital disease by Duron et al. [ 18 ] in 2021, who constructed an MRI-derived radiomics model for differentiating benign from malignant orbital lesions. Hu et al. [ 14 ] constructed a radiomics model for IVGC response prediction based on features extracted from EOM bellies on T2WI, which performed better than a conventional semiquantitative imaging model (AUC = 0·916 vs. 0·745).
However, the potential of radiomics for TED therapy response prediction could be further enhanced. Besides the EOMs, other vital structures in the orbit, such as the lacrimal gland (LG) [ 19 ], orbital fat (OF) [ 20 ], and optic nerve (ON) [ 21 ], also undergo distinct changes during the pathogenesis of TED, and the predictive value of these structures has been confirmed in several imaging studies [ 22 – 24 ]. Therefore, our investigation takes orbital radiomics analysis a step further by integrating all orbital soft tissues to construct a more accurate and robust radiomics prediction model. Interestingly, similar strategies, namely multi-regional radiomics, have been explored in other human structures and diseases and have performed better than single-regional radiomics [ 25 , 26 ]. To the best of our knowledge, no such techniques have been applied to investigations of the ocular orbit; indeed, fine segmentation of orbital structures on MRI is a challenging task owing to its complexity and considerable time cost. Hence, our study pioneered this attempt. To process the high-throughput data derived from this complex segmentation, various machine learning (ML) algorithms were adopted in our study. Ultimately, we established whole-orbit radiomics (WOR) models for the prediction of the IVGC response of patients with active, moderate-to-severe TED and attained satisfactory prediction results.
Methods Patients and clinical evaluations This manuscript adheres to STROBE guidelines. This retrospective study was approved by our Institutional Review Board (SH9H-2021-T246-2), and the requirement for informed consent was waived. Clinical and radiological data of 127 patients with clinically confirmed active and moderate-to-severe TED who had undergone MRI scans before IVGC treatment were collected from the hospital between June 2017 and June 2021. The inclusion criteria were as follows: (1) Patients aged 18–75 years, without complex systemic disease or other orbital disease; (2) High quality of MRI adequate for radiomics analysis; (3) Bilateral manifestation of TED; (4) Disease duration less than 18 months; (5) No previous orbital decompression surgery or radiotherapy, or administration of IVGC ≥ 1·0 g before MRI scans; (6) Patients received IVGC schedule according to standard EUGOGO guidelines (4·5 g, 12 weeks). The disease activity was evaluated by seven-point CAS, including: (1) Spontaneous retrobulbar pain; (2) pain on attempted up or down gaze; (3) redness of the eyelids; (4) redness of the conjunctiva; (5) swelling of the eyelids; (6) inflammation of the caruncle and/or plica; and (7) conjunctival edema. Patients with CAS < 3 and inactive orbital MRI were categorized as inactive TED, and those with CAS ≥ 3 and active orbital MRI were categorized as active TED. If the indicated activity of CAS and MRI contradicted, an orbital disease specialist with 20 years of experience made a final judgment. The disease severity was assessed according to EUGOGO guidelines. Moderate-to-severe refers to those who met two or more of the following criteria: (1) lid retraction ≥ 2 mm; (2) moderate or severe soft-tissue involvement; (3) exophthalmos ≥ 3 mm above normal for race and gender; (4) inconstant or constant diplopia; without signs of sight-threatening conditions. Ophthalmic assessments for each eye were performed prior to and after the IVGC treatment schedule, including: (1) evaluation of CAS; (2) lid aperture; (3) exophthalmos assessment with a Hertel exophthalmometer; (4) best corrected visual acuity (BCVA); (5) intraocular pressure (IOP); (6) diplopia score. Thyroid-stimulating hormone receptor antibodies (TRAb) was measured before IVGC treatment. Restoration of euthyroidism was recorded if the thyroid-stimulating hormone, free triiodothyronine, and free thyroxine were within the normal range. Therapy response of IVGC treatment was assessed within three months after the last administration of IVGC. The definition of “responsive” and “unresponsive” was based on the standard proposed by Bartalena et al. [ 1 ] The responsive group included those with an improvement of at least two of the following in one eye after treatment: (1) Reduction of lid aperture ≥ 2 mm; (2) Reduction of exophthalmos ≥ 3 mm; (3) Eye motility with an increase of ≥ 8°; (4) Reduction in five-item CAS (not including spontaneous or gaze-evoked pain) of ≥ 1 point; without concomitant deterioration in the other eye. Deterioration was defined by the occurrence of dysthyroid optic neuropathy (DON) or worsening of at least two of the four components mentioned above. The unresponsive group was composed of those who did not meet the aforementioned criteria. All patients included were allocated to a training cohort and a test cohort with a proportion of 8:2 using a stratified random splitting method. The flowchart of patient enrollment and the scheme for analysis is presented in Additional file 1 Fig. S1. 
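The paragraph above ends with the 8:2 stratified random split of the 127 patients into training and test cohorts. The following is a minimal, hypothetical sketch of such a split with scikit-learn; the synthetic feature matrix, labels, and random seed are placeholders rather than the authors' data or code.

```python
# A hypothetical sketch of a stratified 8:2 train/test split.
# Synthetic X (features) and y (1 = responsive, 0 = unresponsive) stand in
# for the real per-patient data; random_state is arbitrary.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(127, 8))        # 127 patients, 8 illustrative features
y = rng.integers(0, 2, size=127)     # assumed response labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
print(X_train.shape, X_test.shape)   # roughly 8:2, preserving the class ratio
```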
Orbital MRI acquisition Before the IVGC treatment schedule began, patients were examined using a 3·0 T MRI system (Ingenia CX, Philips Medical Systems) with a 32-channel head coil. During the scan, the patients were placed in the supine position with their eyes closed. Coronal T2-weighted Turbo Spin-Echo with 90° Flip-Back Pulse (T2-DRIVE) imaging was acquired with the following parameters: repetition time/echo time, 3000/90 ms; field of view, 133·3 × 133·3 mm²; slice thickness, 3·5 mm; slices, 20; gap, 3·85 mm; acquisition matrix, 320 × 224. Figure 1 depicts the workflow of the radiomics procedure. Radiomics analysis ROI segmentation Regions of interest (ROIs) were manually segmented on coronal T2WI using the ITK-SNAP software (v. 3.6.0; www.itksnap.org ). Two methods of ROI segmentation were employed (Fig. 2 ). The first approach, multi-organ segmentation (MOS), was applied to eight orbital structures, including the LG, OF, ON, and the separate EOMs: superior rectus (SR), inferior rectus (IR), medial rectus (MR), lateral rectus (LR), and superior oblique (SO). These ROIs were individually contoured using different labels. The contours of each ROI were drawn slice by slice, from the emergence of the OF in the anterior orbit to the disappearance of the EOMs in the posterior orbit. Subsequently, four single-regional radiomics (SRR) models were constructed based on the different structures (EOMs, LG, OF, and ON), and the dataset comprising all eight labels was later used to develop the multi-regional radiomics (MRR) model. The second approach, a fused-organ segmentation (FOS) strategy using a single label, was also implemented; it regarded all structures, including the EOMs, LG, OF, and ON, as a cohesive unit. A fused-regional radiomics (FRR) model was later built on this basis. For all manual segmentation work, an experienced orbital radiologist (reader 1) viewed each MRI and conducted the ROI segmentation without knowing the disease status of the participants. Each segmented contour was further reviewed by an orbital radiology expert for accuracy, and discussions were held for any disagreement until a consensus on the final decision was reached. Feature extraction Radiomics features were extracted from the ROIs using an in-house feature analysis program implemented in Pyradiomics ( http://pyradiomics.readthedocs.io ) for all radiomics models (SRR, MRR, and FRR models). The orbital structures from the bilateral orbits of the same patient were considered as a unit, and their features were extracted together. All features were categorized into three groups: (1) geometry features, which described the three-dimensional shape characteristics of the ROIs; (2) intensity features, which described the first-order statistical distribution of the voxel intensities within the ROIs; and (3) texture features, which described the patterns or the second- and higher-order spatial distributions of the intensities. Specifically, texture features were extracted using the gray-level co-occurrence matrix (GLCM), gray-level run length matrix (GLRLM), gray-level size zone matrix (GLSZM), and neighborhood gray-tone difference matrix (NGTDM) methods. Feature selection After feature extraction, reproducibility analysis, the Mann–Whitney U-test, Spearman’s rank correlation, max-relevance and min-redundancy (mRMR), and least absolute shrinkage and selection operator (LASSO) regression were consecutively performed to reduce the feature dimension for the different radiomics models.
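To make the extraction step above concrete, here is a minimal, hypothetical PyRadiomics sketch that pulls shape, first-order, and texture features from one T2WI volume and its multi-label MOS mask. The file names and label numbering are assumptions, and the actual in-house program may differ in its settings.

```python
# A hypothetical sketch of per-label radiomics feature extraction with PyRadiomics.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("shape")       # geometry features
extractor.enableFeatureClassByName("firstorder")  # intensity features
for cls in ("glcm", "glrlm", "glszm", "ngtdm"):   # texture features
    extractor.enableFeatureClassByName(cls)

features_per_label = {}
for label in range(1, 9):  # e.g., 1=SR, 2=IR, ..., 8=ON (assumed ordering)
    result = extractor.execute("patient001_T2WI.nii.gz",
                               "patient001_MOS_mask.nii.gz", label=label)
    # keep the numeric radiomics features, drop the diagnostic metadata entries
    features_per_label[label] = {k: v for k, v in result.items()
                                 if not k.startswith("diagnostics")}
```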
Initially, 40 cases were randomly chosen (20 responsive and 20 unresponsive), and their orbital MRI were segmented by reader 2 in the same manner as reader 1. Inter-reader variation of radiomics features was evaluated by calculating intraclass correlation coefficients (ICC) between the results from reader 1 and reader 2. Only features with ICC > 0·75 were subjected to further analysis. Afterwards, a Mann–Whitney U-test was employed to identify significant features between the responsive and unresponsive groups, and only those with a p-value < 0·05 were kept. Then, Spearman’s rank correlation coefficient was used to identify highly correlated features (Spearman’s correlation coefficient > 0·9), with one of each correlated pair randomly retained to avoid redundancy. To preserve the descriptive power of the feature set, greedy recursive deletion was applied for feature filtering, removing at each step the most redundant feature in the current set. Subsequently, to avoid over-fitting and maximize the correlation between features and target variables, the mRMR algorithm was implemented to select the top eight features for each label. Eventually, a LASSO regression model with tenfold cross-validation, supported by the Onekey AI platform, was used for signature construction (Fig. 3 a, b). The retained features with nonzero coefficients were used for regression model fitting and combined into a radiomics signature (Fig. 3 c). The detailed rad-score formulae of the models are provided in Additional file 1 : Table S1. Radiomics signature construction SRR, MRR, and FRR models were individually constructed based on the datasets derived from the corresponding ROIs as stated above. For all radiomics models, the final selected features were input into six robust classification algorithms supported by the Onekey AI platform, including logistic regression (LR), NaiveBayes, support vector machines (SVM), extremely randomized trees (ExtraTrees), extreme gradient boosting (XGBoost), and light gradient boosting machine (LightGBM). A five-fold cross-validation was implemented to obtain the final radiomics signatures. Semiquantitative measurements and model construction Semiquantitative measurements on T2WI involved all eight orbital structures, including EOMs, LG, OF, and ON. Two radiologists (reader 1 and reader 2) independently implemented the measurements without knowing the disease status of the study participants. The signal intensity (SI) of the EOMs, LG, and OF was measured by placing polygonal ROIs separately on the EOM bellies, LG, and OF at the maximum cross-section on the coronal T2WI. The corresponding SI on the layers immediately anterior and posterior to the selected layer was also measured for each of these seven structures. For measurement of the SI of ON, ROIs were manually segmented on three consecutive layers behind the eyeball, and the surrounding cerebrospinal fluid signal was carefully avoided. For each orbital structure, the maximum, mean, and minimum SI over the ROI were all extracted, and the final SI_max, SI_mean, and SI_min were recorded as the mean values of SI derived from the three consecutive layers. They were then normalized to SIR_max, SIR_mean, and SIR_min using the formula SIR = SI_structure/SI_brain white matter. Inter-observer variation of the measurements between the two observers was assessed by ICC. Then, univariate analysis was adopted to test the differences in SIR_max, SIR_mean, and SIR_min between the responsive and the unresponsive groups.
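The chain of radiomics feature filters described earlier in this section (Mann–Whitney screening, Spearman-based redundancy pruning, and LASSO with cross-validation) can be sketched as follows; the sketch assumes the ICC-stable features are already assembled in a pandas DataFrame and omits the mRMR step, so it is an outline of the workflow rather than the Onekey AI implementation.

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler


def select_features(X: pd.DataFrame, y: np.ndarray):
    """X: patients x features (restricted to ICC > 0.75); y: 1 = responsive, 0 = unresponsive."""
    # 1) Mann-Whitney U-test: keep features that differ between groups (p < 0.05).
    kept = [c for c in X.columns
            if mannwhitneyu(X.loc[y == 1, c], X.loc[y == 0, c]).pvalue < 0.05]
    X = X[kept]

    # 2) Spearman pruning: for each pair with |rho| > 0.9, drop one feature.
    corr = X.corr(method="spearman").abs()
    dropped, cols = set(), list(X.columns)
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if cols[i] not in dropped and cols[j] not in dropped and corr.iloc[i, j] > 0.9:
                dropped.add(cols[j])
    X = X.drop(columns=list(dropped))

    # 3) LASSO with 10-fold cross-validation; nonzero coefficients form the signature.
    X_scaled = StandardScaler().fit_transform(X)
    lasso = LassoCV(cv=10, random_state=0).fit(X_scaled, y)
    signature = list(X.columns[lasso.coef_ != 0])
    return signature, lasso
```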
After screening features with P < 0·05, the same six ML algorithms were employed to construct semiquantitative imaging models (SIR models) through five-fold cross-validation. Assessment and comparison of different prediction models The diagnostic performances of the radiomics and semiquantitative imaging models based on the different ML algorithms were assessed using their receiver operating characteristic (ROC) curves. For each model, metrics including the area under the curve (AUC), accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Internal validation of the prediction models was performed using an independent test set. To compare the maximum predictive capacity of the different models, the ML algorithm with the highest AUC was selected for each model subset for further assessment and comparison. DeLong’s test was applied to test the differences in diagnostic performance among the different models. Calibration curves were plotted to assess the calibration of the prediction models. Decision curve analysis (DCA) was performed to evaluate the clinical usefulness of the different models by calculating the net benefits at different threshold probabilities. Statistical analyses All statistical analyses were conducted using the Python programming language (version 3.7.6) with the SciPy library (v1.4.1) and the Statsmodels module (v0.11.1). Statistical significance was set at a two-tailed P-value < 0·05. For categorical data, the chi-squared test or Fisher’s exact test was applied to compare the differences between the two groups. For numeric data, the independent-sample t-test or the Mann–Whitney U-test was applied. Other statistical tools employed for the analysis are specified above.
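To make the reported metrics concrete, the snippet below shows one conventional way of computing AUC, accuracy, sensitivity, specificity, PPV, and NPV from predicted probabilities; the probability threshold of 0·5 and the toy data are assumptions for illustration only.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score


def diagnostic_metrics(y_true, y_prob, threshold=0.5):
    """AUC, accuracy, sensitivity, specificity, PPV, and NPV for one binary model."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "AUC": roc_auc_score(y_true, y_prob),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }


# Toy example with eight hypothetical test-cohort predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_prob = [0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.3, 0.5]
print(diagnostic_metrics(y_true, y_prob))
```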
Results Clinical characteristics Of the 127 enrolled patients, 56 were identified as responsive to IVGC treatment, whereas 71 patients were unresponsive. The clinical characteristics of both groups are presented in Table 1 , showing no significant differences in sex, age, or disease duration. Univariate analysis revealed significant differences in smoking (P-value = 0·016), diplopia score (P-value = 0·031), CAS (P-value = 0·002), and lid aperture (P-value = 0·031) between the two groups. Radiomics model construction Single-regional radiomics (SRR) models Through the MOS strategy, 1906 features were extracted from each of the ROI sets of EOMs, LG, OF, and ON. After feature selection, five, eight, five, and seven features were finally retained, respectively. For each structure, the corresponding SRR models based on the different ML algorithms performed diversely (Fig. 4 ). For each ML algorithm, the EOM and OF radiomics models had the best performance. The highest AUCs of the individual SRR models were achieved by XGBoost on the EOM radiomics model (AUC = 0·766), NaiveBayes on the LG radiomics model (AUC = 0·727), LR on the OF radiomics model (AUC = 0·766), and NaiveBayes on the ON radiomics model (AUC = 0·669), respectively. Details of the diagnostic performance of the SRR models can be found in Additional file 1 : Table S2. Multi-regional radiomics (MRR) models For the construction of the MRR models based on the MOS strategy, 15,248 features were extracted from the eight independent structures, and 35 were finally retained. Notably, the SVM model achieved remarkable performance, with the highest AUC value of 0·961 in the test cohort. The other models achieved good to excellent AUC values, with LR achieving 0·916, NaiveBayes achieving 0·893, and LightGBM achieving 0·883 (Fig. 5 a, b). Fused-regional radiomics (FRR) models Through the FOS strategy, 1906 features were extracted from the cohesive unit of orbital soft tissues, and eight were included in the FRR models. All models achieved moderate to good AUC values, with LR achieving 0·916, NaiveBayes achieving 0·896, and SVM achieving 0·903 (Fig. 5 c, d). Semiquantitative model construction The inter-reader variation of the semiquantitative SIRs was found to be good to excellent, with ICCs ranging from 0·766 to 0·893. Results of the semiquantitative measurements are shown in Table 2 . The models yielded moderate to good results, with AUC values mostly below 0·7 and a maximum of 0·760 achieved by the NaiveBayes algorithm (Additional file 1 : Fig. S2). Comparison of different prediction models As shown in Fig. 6 a, the radiomics models significantly outperformed the semiquantitative imaging model. The WOR models in this study, including the MRR model (highest AUC = 0·961, SVM) and the FRR model (highest AUC = 0·916, LR), outperformed all the SRR models, including the aforementioned EOM radiomics model (AUC = 0·766) (Fig. 6 a). The calibration curves and DCA provided additional evidence supporting this conclusion (Fig. 6 b, c). The MRR model based on SVM had the best performance in terms of AUC, calibration, and net benefit. However, further analysis using DeLong's test showed that the best-performing MRR model (based on SVM) and the best-performing FRR model (based on LR) did not differ significantly in diagnostic performance (Fig. 6 d). Considering the influence of the ML algorithms, a comparison of multiple parameters of the MRR and FRR models utilizing the same ML algorithm is shown in Fig. 7 . In most cases, the area of the radar chart of the MRR model is slightly larger than that of the FRR model.
However, when utilizing NaiveBayes or ExtraTrees, the AUC of FRR is larger than that of MRR.
Discussion The preliminary application of radiomics analysis in orbital MRI offers a promising solution to the prediction of IVGC therapy response in TED. Nevertheless, radiomics is still underdeveloped in orbital diseases such as TED, with deficiencies in both methodology and practice. In this work, we established the WOR models as a credible and efficient tool to predict IVGC therapy response. The MOS strategy was applied to orbital MRI processing, which included all structures potentially affected in TED. An MRR model (AUC = 0·961) was constructed based on this strategy, reaching a predictive value much superior to the SRR models (highest AUC = 0·766) and a conventional semiquantitative imaging model (AUC = 0·760). In addition, we proposed a FOS strategy and constructed an FRR model as a feasible alternative form of the WOR models, which also achieved a satisfactory result (highest AUC = 0·916). To process the high-throughput data, a series of ML algorithms were employed to construct the different prediction models, and the best one was finally chosen. It is highly probable that WOR models will substantially benefit clinical decision-making for TED patients, and that the MOS and FOS strategies might open new prospects for radiomics research on orbital and other diseases. The MOS strategy has emerged as a highly effective approach in radiomics analysis, as evidenced by a large number of previous studies. For instance, a recent investigation utilized a similar strategy to construct an MRR model that accurately assessed muscle invasion in bladder cancer, with an impressive AUC of 0·931 [ 25 ]. Similarly, in cervical cancer, Shi et al. [ 26 ] partitioned tumors into two intratumoral subregions to create an MRR model, which was confirmed to be superior to the model based on the whole tumor (AUC = 0·817 vs. 0·562). However, MOS is challenging, particularly in the orbital region, owing to its anatomical complexity. In our study, we incorporated all of the orbital soft tissues associated with the pathogenesis of TED on T2WI. By employing the MOS strategy, the MRR model outperformed the SRR models that relied solely on a single orbital structure. This serves as another promising application of the MOS strategy in radiomics analysis, and the first attempt in the orbital setting. Of the different SRR models, the EOM radiomics model and the OF radiomics model showed relatively good predictive performance, with the highest AUC value of 0·766 for both. As previous studies have suggested, the mechanism of TED pathogenesis is complicated, since it affects multiple orbital structures [ 2 ]. The pathogenesis of TED is primarily characterized by enlarged and edematous EOMs, making them the major affected structure. Multiple studies have revealed that patients who respond well to IVGC have more homogeneous edema within their EOMs, while unresponsive patients exhibit greater tissue complexity and more fibrotic compounds [ 14 , 24 , 27 , 28 ]. Similarly, a previous TED radiomics study also constructed a model based on EOMs to predict IVGC therapy response [ 14 ]. It is also worth noting that significant differences existed in the SIR values of MR and IR between the responsive and unresponsive groups in our study. These results confirm once more that MR and IR are the two primary rectus muscles altered during TED pathogenesis. Nevertheless, they do not alter the fact that the SIR models performed poorly in response prediction compared with the MRR and SRR models. Besides the EOMs, OF is also a vital morbid structure in the orbit in TED.
The majority of patients have enlargement of the EOMs or OF, with predominance of one or the other in some [ 2 ]. The expansion of OF volume is caused by the accumulation of glycosaminoglycans and adipocytes, which is also the main therapeutic target of IVGC [ 29 , 30 ]. Previous MRI studies of TED have paid relatively little attention to OF, whereas our earlier studies added evidence of its predictive value for IVGC therapy response [ 22 ]. Interestingly, the SIR of OF showed no significant difference between the responsive and unresponsive groups, but this is under the premise that the SIR concentrates on a value determined from a specific location on the structure. However, the radiomics models took into account a wider spectrum of features, encompassing geometry, intensity, and texture. With deeper investigation, detailed information on OF can be extracted and exploited for IVGC response prediction in TED, which proved to be powerful. Apart from the EOMs and OF, other structures, including LG and ON, were also of certain predictive value. The highest AUC value of the ON radiomics model was 0·727, while that of the LG radiomics model was 0·675, both inferior to those of the EOMs and OF. In TED, LG is also affected by the immunological disorders in the orbit, characterized by multifocal infiltration of lymphocytes and hyperplasia of adipose tissue [ 31 ]. These typical alterations of LG in TED are manifested on T2WI as increased volume and hyperintensity [ 32 ]. Herniation of the LG has been established to be associated with the therapy response to IVGC, demonstrating its contribution to the predictive models [ 24 ]. ON is mainly related to visual acuity and to the emergence of DON. Interestingly, a retrospective study detected an increased ON T2 value in TED compared with healthy controls [ 33 ]. Other studies also indicated a potential correlation between ON and the severity and prognosis of TED. In this investigation, ON was also shown to be of predictive value for the IVGC response. Currently, the majority of studies on activity assessment and response prediction have focused solely on EOMs, neglecting the other affected orbital soft tissues. This is probably attributable to knowledge gaps, measurability limitations, and time cost. Although the orbital pathologies of the different structures are not fully elucidated, and their correlations with radiomics features remain largely unexplored, we showed that involving multiple morbid structures in the orbit greatly enhanced the performance of our radiomics models. By including multiple structures, the MRR model achieved excellent predictive results, but the considerable segmentation effort it requires may limit its universal application owing to the significant time cost. Compared with the EOM radiomics model (total average time, 15·8 min), the MRR model took much longer to process (total average time, 25·4 min) for each MRI sample. With a relatively low time cost (total average time, 10·2 min), the semiquantitative imaging model had a moderate predictive value, which was better than those of the ON and LG radiomics models (AUC = 0·760 vs. 0·727 and 0·675, respectively). This outcome was presumably attributable to the incorporation of the whole orbital soft tissue offering more useful information than single structures. However, the AUC value of the semiquantitative imaging model was much inferior to that of the MRR model and cannot satisfy the requirement of accurate prediction.
Therefore, we put forward an alternative WOR model, namely FRR model, which was based on the FOS strategy. When utilizing the same ML algorithms, the performance of FRR model and MRR model was approximate and MRR seemed slightly superior, with the highest AUC value of 0·916 and 0·961 (P-value = 0·468 on DeLong’s test) (Figs. 4 a–f, 6 d). It is reasonable that MRR outperformed FRR, in that fine segmentation according to priori knowledge is beneficial to image analysis. A recent radiomics investigation revealed that without segmentation masks, feature descriptors encompassed the entire image, which limited their effectiveness in focusing on ROI and leveraging the available prognostic information [ 34 ]. This limitation, compounded by noise and the loss of local information regarding size, shape, and location, may have contributed to the slightly lower performance observed in the FRR models. Nevertheless, due to the limited sample size applied in this research, further validation with larger samples is necessary to determine whether the MRR model outperforms the FRR model. However, it is worth noting that the segmentation time cost of the FRR model (total average time of 9·6 min) was only 37·8% of that of the MRR model. This shows the potential of applying the FRR model for IVGC response prediction with higher efficiency. Future explorations of the automatic segmentation of different orbital structures might be of great value to resolving this issue. In the construction of the prediction models, the ML algorithms played a crucial role for achieving high accuracy and efficiency. However, it is important to consider the suitability of ML algorithms for the input dataset. Our research revealed a shift in the best performing algorithm types from SRR to MRR models. Simpler algorithms such as LR and NaiveBayes worked better in cases of straightforward mapping relationships in ON (Highest AUC = 0·669, NaiveBayes), OF (Highest AUC = 0·766, LR), and LG (Highest AUC = 0·675, NaiveBayes). On the other hand, the XGBoost algorithm showed the highest performance in the EOM dataset (AUC = 0·766) due to its ability to prevent overfitting through shrinkage and generalization features in datasets with multiple labels [ 35 ]. Notably, the SVM algorithm attained remarkable results with the highest AUC value of 0·961 in the MRR model. This was due to the fact that SVM was able to recognize and fit valuable underlying mapping effectively when more information was included in the feature datasets [ 36 ]. However, the high learning capacity of SVM also made it susceptible to overfitting, leading to poor performance in the semiquantitative imaging model and moderate performance in several SRR models. A deeper investigation of the application of ML algorithms in orbital MRI would provide more solid evidence by using larger datasets, which shall be explored in the future. Compared with other reported prediction models for IVGC response in TED, the accuracy of our models still needs to be improved. In addition to the potential drawbacks of radiomics analysis, this issue might be attributed to the disunity of the standards for patient enrollment and therapy response evaluations among different studies. The management of TED involves multidisciplinary effort, while many aspects of the diagnosis and treatment are unclear and controversial. For example, the patients in our cohorts met the comprehensive criteria of activity assessment considering CAS and orbital MRI. 
That is to say, patients with a CAS lower than 3 but with actively altered orbital MRI were advised to receive IVGC therapy in our center but would have been excluded in other centers. This significantly affected the treatment outcome. In addition, the determination of “responsive” or “unresponsive” to anti-inflammatory treatment in TED varied markedly from one study to another. In the present investigation, we adopted a well-recognized evaluation standard proposed by Bartalena et al. [ 1 ], integrating four important items of clinical presentation into a composite index. In former studies, a single eye was usually taken as the research unit, whereas in our study each patient, with both eyes, was taken as the research unit. This makes our results more applicable to clinical practice. TED clinical management and research work urgently need standardization of evaluation, diagnosis, and treatment. The present study is a novel attempt to implement the concepts of MOS/FOS and MRR/FRR in orbital MRI processing. However, it is only a preliminary exploration, and further improvements are needed. First, the sample size of this retrospective study was relatively small, despite being the largest among the TED radiomics studies published to date. Thus, a larger sample size is expected to augment the reliability. Second, our models lack external validation. As TED management is highly complicated, the judgement of patient activity varies widely among centers, with different parameters for clinical measurements and MRI data acquisition, which makes data integration extremely challenging. This could potentially be tackled in the future by conducting a multicenter prospective study with unified metrics. While our study provides a new strategy for future research in this area, it is important to consider these limitations when interpreting our results.
Conclusions The results of this study revealed that radiomics models based on the whole orbital structures can accurately predict the response to IVGC in TED patients with the highest AUC of 0·961. Therefore, the MRR model is a reliable and effective tool for outcome prediction. The FRR model performed very well in reducing the time consumption of segmentation while preserving a rather satisfactory prediction value; thus, it can be applied as an alternative. The findings of our study could considerably contribute to the accurate prediction of responsive or unresponsive TED patients and allow for individualized management and therapy decisions, leading to improved patient prognosis and quality of life. In the meantime, the WOR strategy can be generalized to the application of other orbital diseases.
Background Radiomics analysis of orbital magnetic resonance imaging (MRI) shows preliminary potential for intravenous glucocorticoid (IVGC) response prediction in thyroid eye disease (TED). Current region-of-interest segmentation includes only a single organ, the extraocular muscles (EOMs). It would be of great value to consider all orbital soft tissues and construct a better prediction model. Methods In this retrospective study, we enrolled 127 patients with TED who received 4·5 g IVGC therapy and had complete follow-up examinations. Pre-treatment orbital T2-weighted imaging (T2WI) was acquired for all subjects. Using the multi-organ segmentation (MOS) strategy, we contoured the EOMs, lacrimal gland (LG), orbital fat (OF), and optic nerve (ON), respectively. By fused-organ segmentation (FOS), we contoured the aforementioned structures as a cohesive unit. Whole-orbit radiomics (WOR) models consisting of a multi-regional radiomics (MRR) model and a fused-regional radiomics (FRR) model were further constructed using six machine learning (ML) algorithms. Results The support vector machine (SVM) classifier had the best performance on the MRR model (AUC = 0·961). The MRR model outperformed the single-regional radiomics (SRR) models (highest AUC = 0·766, XGBoost on EOMs, or LR on OF) and the conventional semiquantitative imaging model (highest AUC = 0·760, NaiveBayes). The application of different ML algorithms for the comparison between the MRR model and the FRR model (highest AUC = 0·916, LR) led to different conclusions. Conclusions The WOR models achieved a satisfactory result in IVGC response prediction for TED. It would be beneficial to include more orbital structures and implement ML algorithms while constructing radiomics models. The choice between separate and overall segmentation of the orbital soft tissues has not yet been settled. Supplementary Information The online version contains supplementary material available at 10.1186/s12967-023-04792-2.
Supplementary Information
Abbreviations Area under curve Best corrected visual acuity Clinical activity score Decision curve analysis Dysthyroid optic neuropathy Extraocular muscles Extremely randomized trees Fused-organ segmentation Fused-regional radiomics Gray-level co-occurrence matrix Gray-level run length matrix Gray-level size zone matrix Graves’ orbitopathy Intraclass correlation coefficients Intraocular pressure Inferior rectus Intravenous glucocorticoid Least absolute shrinkage and selection operator Lacrimal gland Light gradient boosting machine Lateral rectus Logistic regression Machine learning Multi-organ segmentation Medial rectus Magnetic resonance imaging Max-relevance and min-redundancy Multi-regional radiomics Neighborhood gray-tone difference matrix Negative predictive value Orbital fat Optic nerve Positive predictive value Receiver operating characteristic Regions of interest Signal intensity Signal intensity ratio Superior oblique Superior rectus Single-regional radiomics Support vector machines Coronal T2-weighted Turbo Spin-Echo with 90° Flip-Back Pulse T2-weighted imaging Thyroid-associated ophthalmopathy Thyroid eye disease Thyroid-stimulating hormone receptor antibodies Whole-orbit radiomics Extreme gradient boosting Acknowledgements We would like to express our gratitude to the technical professionals from Shanghai Medoo Tech Company. We also extend our gratitude to Ms. Qingwen Tang and Ms. Qi Zheng for their help in data collation. Author contributions HYZ, HFZ, XQF, and XFS contributed to the overall conception and design development. HYZ, MDJ, LZ, XFT, YWL, and JS were responsible for data collection and interpretation. HCC, HJZ, JSX, and YTL proofread the data. HYZ, MDJ, HCC, HJZ, DJX, and LZ performed data analysis. HYZ, HCC, HJZ, and JSX completed the manuscript drafting. MDJ, YTL, LZ, XFT, DJX, LZ, YWL, JS, XFS, XQF, and HFZ edited and reviewed the manuscript. All authors read, discussed, and approved the final version of the manuscript. All authors had full access to the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis, as well as the decision to submit this manuscript for publication. Funding This work was supported by the National Natural Science Foundation of China (81930024, and 82271122); the Science and Technology Commission of Shanghai (20DZ2270800); Shanghai Key Clinical Specialty, Shanghai Eye Disease Research Center (2022ZZ01003); Clinical Acceleration Program of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine (JYLJ202202); and Cross disciplinary Research Fund of Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine (JYJC202115). Availability of data and materials The datasets generated and analyzed during the current study are available by the corresponding author Huifang Zhou upon reasonable request. Declarations Ethics approval and consent to participate This retrospective study was approved by our Institutional Review Board (SH9H-2021-T246-2), and the requirement for informed consent was waived. Consent for publication All authors have approved the manuscript for submission. Competing interests The authors declare no potential competing interests related to this work.
CC BY
no
2024-01-15 23:43:47
J Transl Med. 2024 Jan 13; 22:56
oa_package/47/2a/PMC10787992.tar.gz
PMC10787993
0
Background Breast fibroadenomas are common, benign fibro-epithelial lesions [ 1 ], which are frequently encountered in adolescent girls and young women [ 2 – 4 ]. About 10% of women have such symptoms in their lifetime, accounting for 67–94% of all breast biopsies in women under the age of 20 years [ 5 , 6 ]. Although mammography is the gold standard in the detection and evaluation of masses in the breast, sonography has become an indispensable imaging modality because of its technical advantages of non-ionization, low cost, mobility, and real-time diagnosis [ 7 ]. However, the low spatial resolution and image quality of sonography make it hard to extract tissue morphological features accurately and reliably. Breast sonography is thus highly operator-dependent and has a high inter-observer variation rate [ 8 ]. To reduce the burden for the radiologist in reviewing hundreds of clinical images and improve the accuracy of diagnosis, computer-assisted image segmentation becomes valuable. Deep learning approaches are increasingly being used for medical image segmentation and quantitative information regarding the morphology and textural features of lesions [ 9 ]. Several neural network architectures developed from convolutional neural networks (CNNs) have shown satisfactory segmentation performance. However, breast ultrasound (BUS) image segmentation is still challenging due to high speckle noise, a low contrast, blurry boundaries, and intensity inhomogeneity in sonography [ 10 ]. Therefore, precisely segmenting breast fibroadenoma in sonography requires extensive investigation, and deep learning models with the capability of processing more complex textures, focusing on the most important features, increasing robustness, and having noise immunity are preferred. There have also been various attempts to incorporate domain knowledge into neural networks in medical image analysis, such as diagnosis, detection, and segmentation [ 11 ]. Some notable approaches include transfer learning, teacher–student course learning, and combined attention maps. Transfer learning involves leveraging knowledge from natural images to guide medical image analysis. By using a pre-trained network as a fixed feature extractor, knowledge can be transferred between image domains. Although these approaches are promising, simulating the natural learning process observed in humans may be another strategy in deep learning. Humans typically break down complex information into smaller, manageable chunks during learning and then integrate them according to their inherent relationships, which facilitates the learning of large-scale information more effectively. Thus, we propose a human instinct learning paradigm that involves feature fragmentation and information aggregation as a guide for neural network learning to enhance the segmentation performance of breast fibroadenoma in sonography. The workflow of our proposed learning paradigm is shown in Fig. 1 . In this study, we propose an efficient paradigm that emulates the intuitive human learning mechanisms within an artificial neural network to guide the segmentation of breast fibroadenomas in sonography. Feature fragmentation attention modules (Focus, BottleneckCSP, and C3ECA) and information aggregation modules (LogSparse Attention, C3CBAM, and ProbSparse Attention) were selected and specifically tailored to the characteristics of ultrasound images. 
A dataset of breast ultrasound images of Asian women at Suining Central Hospital, China was constructed, and then the validation and performance of our proposed lightweight model were tested and evaluated on both local and public datasets. Furthermore, our approach was compared with other state-of-the-art (SOTA) methods to confirm its superior segmentation performance. Related works Deep learning-based network U-Net is one of the most popular and outstanding networks [ 12 ]. However, it cannot learn global and long-range semantic information interaction well due to the locality of the convolution operation. The self-attention mechanism and sequence-to-sequence design of transformers work effectively in the global extraction of contextual information, being extensively successful in natural language processing (NLP) [ 13 ]. Cao et al. proposed a pure transformer-based U-shaped encoder–decoder model for medical image segmentation [ 14 ]. Furthermore, the CNN–transformer hybrid network demonstrates great segmentation performance. Schlemper et al. integrated the attention gate module into the encoder–decoder design of the U-shaped architecture [ 15 ]. In addition, TransUNet combines the best features of CNN in processing high-dimensional data with the transformer’s ability to capture location and contextual information [ 16 ]. This model can hold more than 100B trainable parameters, but at a significantly increased computational burden [ 17 , 18 ]. Breast ultrasound image segmentation Deep CNNs have been applied to lesion segmentation in BUS images [ 19 , 20 ]. A fuzzy CNN model incorporating data enhancement as well as fine-tuning post-processing is proposed by Huang et al. for BUS image segmentation [ 21 ]. Xue et al. designed a neural network with a global guidance module as well as a breast lesion boundary detection module, whose outcomes are further optimized by pre-defined regularization conditions to improve the segmentation accuracy [ 22 ]. Similarly, Lei et al. introduced boundary regularization into deep convolutional encoder–decoder networks to reduce the influences of noise and other factors on BUS images [ 23 ]. Abdelali et al. presented an automated CAD system for breast cancer detection and classification in mammography utilizing multiple instance learning (MIL) algorithms in decision-making [ 24 ]. Furthermore, suspicious regions were assessed in screening mammography at an impressive sensitivity of 98.60% using a modified K-means algorithm for region segmentation and bi-dimensional empirical mode decomposition (BEMD) for feature extraction [ 25 ]. Attention mechanisms Attention mechanisms that emulate human perception have recently been introduced to neural networks [ 26 ] to select the more critical features from an extremely large amount of information by classifying feature maps channel-by-channel [ 27 ]. There are two main classes in the practice. The first class chunks the feature information channel-by-channel and then reassembles it. The Focus module, derived from Yolo v5 [ 28 ], reconstructs low-resolution images by selecting pixels from the original one and stacking adjacent pixels using dilated convolution [ 29 ]. This approach is also adopted by the Concentration-Comprehensive Convolution (C3) block [ 30 ], which enhances the network depth while reducing the computational complexity. For further optimization, a Bottleneck module replaces a single large-sized convolution with multiple small-sized convolutions [ 31 ]. 
Combining it with the cross-stage partial network (CSPNet) gives rise to the BottleneckCSP module, which reduces memory consumption and improves learning efficiency [ 32 ]. These findings suggest that piecewise learning of feature information can enhance model efficiency. The second class of attention mechanisms aims to enhance the extraction of semantic information by encoding the feature map’s location information for contextual perception. The Efficient Channel Attention (ECA) module employs a local cross-channel interaction strategy to facilitate information interaction [ 33 ]. The LogSparse Transformer improves the prediction accuracy for time series with fine-grained and strong long-term dependencies under memory constraints [ 34 ]. The Convolutional Block Attention Module (CBAM) provides attention weights in both the channel and spatial dimensions, aiding in the extraction of effective target features [ 35 ]. Additionally, a new formulation of attention through the kernel lens provides a deeper understanding of attention components and enhances the dynamics and utilization of the transformer's multi-headed self-attention mechanism [ 31 ]. Despite the promising performance of these attention modules, their application in medical imaging is still limited. Knowledge-based methods Maicas et al. proposed a teacher–student curriculum learning strategy that mimics the human progression through increasingly challenging tasks, applied to breast image classification on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) [ 36 ]. Emulating this process in the neural network training improved the classification performance by 5.88% compared to the baseline, DenseNet. AG-CNN, an attention-based CNN for glaucoma detection, combines attention maps during the supervised training process and simulates the physicians’ focus on regions of interest [ 37 ]. This method allows the network to learn from the attention patterns of medical professionals and enhances its ability to identify the most relevant features. Hsu et al. incorporated the existing BI-RADS (Breast Imaging Reporting and Data System) score [ 38 ] as prior knowledge to guide the neural network in learning the texture and intensity features of BUS images [ 39 ]. A learning paradigm that goes beyond the confines of medical knowledge and instead draws on human learning patterns to guide neural networks in medical image processing may offer more versatility.
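As a concrete illustration of the ECA idea mentioned above (local cross-channel interaction through a 1-D convolution over channel descriptors), a minimal PyTorch sketch is given below; the fixed kernel size is an assumption, whereas the original ECA derives it adaptively from the channel count.

```python
import torch
import torch.nn as nn


class ECA(nn.Module):
    """Efficient Channel Attention: a 1-D convolution over pooled channel descriptors."""

    def __init__(self, k_size: int = 3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)  # squeeze H x W down to 1 x 1
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=(k_size - 1) // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                                    # x: (B, C, H, W)
        y = self.avg_pool(x)                                  # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))        # convolve across channels
        y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))   # (B, C, 1, 1) channel weights
        return x * y.expand_as(x)                             # rescale each channel


# Example: apply channel attention to a dummy feature map.
features = torch.randn(2, 64, 32, 32)
print(ECA()(features).shape)  # torch.Size([2, 64, 32, 32])
```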
Methods Image data set The dataset consists of clinical breast ultrasound images, including both our local dataset and a supplementary one from publicly available sources. The local dataset was collected retrospectively at Suining Central Hospital in Sichuan, China. The images were acquired from patients who underwent breast ultrasound examinations between January and July 2022. During image acquisition, a sonographer systematically scanned the outer lower, outer upper, inner upper, and inner lower quadrants of the breast in a clockwise manner. Suspected lesions were analyzed, and the periareolar area and armpit were examined to determine the location and size of the lesions at both sagittal and cross-sectional viewing angles. Multiple ultrasound images were acquired for each fibroadenoma case according to the standard protocols. Image acquisition was performed by professional sonographers using a DC-80S system (Mindray Medical, Shenzhen, China). Overall, our local dataset comprises 600 breast ultrasound images obtained from 30 patients. Table 5 summarizes the basic information of all patients included in our local dataset. To supplement our local dataset and further verify the validity and robustness of our model, a professional sonographer selected benign fibroadenoma images from the publicly available datasets Dataset_BUSI [ 37 ] and DatasetB [ 38 ], excluding those with ambiguous presentation. Finally, 207 images from Dataset_BUSI and 39 images from DatasetB, with their corresponding labels, were merged as a public dataset. To ensure proper segmentation evaluation, all images in both the local and public datasets were randomly divided into training and test sets at a 5:1 ratio in our experiments. Network architecture To implement the segmentation of breast fibroadenomas in sonography, we designed a model based on the framework of TransUNet, utilizing an encoder–decoder architecture (see Fig. 8 ). The process begins by reshaping the input image into a series of 2D patches using the patch partition step. These patches are then vectorized and mapped to an embedding space using a trainable linear projection while preserving the positional information. Patch merging and patch expanding are responsible for the downsampling and upsampling tasks, respectively. In Fig. 8 , the dimensional changes of the input feature map are annotated. During each downsampling step, the width and height are halved while the number of channels is doubled; in the upsampling process, the dimensional changes occur in the opposite direction. Typically, feature maps do not change in dimensions after being processed by the transformer layer. However, the incorporation of the human learning paradigm within the transformer layer introduces dimension transformations that depend on the operations of the different fragmentation and aggregation modules. Figure 9 explains these feature map transformations. The transformed patches are subsequently passed through the transformer layers, where the hidden layer features are extracted using the multi-head self-attention mechanism (MSA) and the multi-layer perceptron (MLP). At the decoder block, the image undergoes multiple layers of upsampling, and feature fusion is performed to generate the final prediction. This U-shaped network structure enables the model to capture and preserve, through skip connections, underlying characteristics of the image that are often overlooked.
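The patch partition and trainable linear projection described above can be sketched as follows, using a strided convolution as the projection; the patch size, channel count, and embedding dimension are illustrative assumptions rather than the exact configuration of our network.

```python
import torch
import torch.nn as nn


class PatchEmbedding(nn.Module):
    """Split an image into non-overlapping patches and project them to an embedding space."""

    def __init__(self, in_channels: int = 1, patch_size: int = 4, embed_dim: int = 96):
        super().__init__()
        # A strided convolution is equivalent to flattening each patch
        # and applying a shared linear projection.
        self.proj = nn.Conv2d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):                        # x: (B, C, H, W)
        x = self.proj(x)                         # (B, D, H/ps, W/ps)
        x = x.flatten(2).transpose(1, 2)         # (B, N_patches, D) token sequence
        return self.norm(x)


tokens = PatchEmbedding()(torch.randn(1, 1, 224, 224))
print(tokens.shape)  # torch.Size([1, 3136, 96])
```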
In our study, a fragmentation module and an aggregation module are integrated within the MLP of the transformer layer; this design mimics knowledge paradigms inspired by human brain learning patterns, thereby facilitating improved information acquisition and processing. A more comprehensive exposition of these modifications is given in the section on the transformer layer and the multi-layer perceptron for an easy understanding of our approach. Transformer layer and multi-layer perceptron The transformer layer, a crucial component of the network architecture, consists of the MSA module and the MLP module. The MSA module operates on the entire embedded sequence, extracting potential features, generating valuable features, and eliminating irrelevant noise. It focuses on the most important parts to enhance the quality of the extracted features. The resulting features are then passed to the MLP module for further processing. Within the MLP module, there are two linear projections to transform the features and one dropout layer to prevent overfitting. Here, the Focus and LogSparse modules are used as examples to describe the feature map dimension transformations within the transformer layer. The input vector first undergoes a series of data transformations, including layer normalization and MSA, which leave its dimensions unchanged. The Focus module then divides the feature map into four contiguous blocks, halving the length and width while quadrupling the number of channels. On the other hand, the LogSparse attention module calculates the attention using a mechanism similar to that of the transformer, preserving the tensor's dimensions without altering them. The BottleneckCSP module conducts two consecutive convolutions, keeping the height and width unchanged while quadrupling the number of channels. To maintain consistency, the C3ECA module also quadruples the number of channels. Both ProbSparse attention and LogSparse attention achieve information aggregation through attention computation, keeping the dimensions of the feature map unchanged. The C3CBAM module has the ability to simultaneously control channel and spatial attention. To bring the feature map back to its initial size, one more convolution is conducted at the end of the computation. Finally, the human brain-inspired learning paradigm implemented in the MLP module is illustrated in Fig. 1 . The MLP module incorporates the fragmentation and aggregation modules as shown in Fig. 9 . Although the original MLP module has a simple structure, it can be modified to adapt to specific requirements and improve its performance. Since the hidden layer features are extracted early, in the MSA module, the feature maps passed into the MLP module contain a large amount of information, which may introduce ambiguity. To address this, the high-dimensional information is first fragmented, which effectively enhances the information learning rate and reduces the computational complexity by replacing cumulative multiplication with cumulative addition. The fragmented information retains the essential correlations between features, enabling more efficient information processing. Subsequently, the information aggregation module operates on the fragmented information, leveraging the strong correlations among the fragments. It aggregates the information according to these correlations and completes the construction of the entire learning paradigm.
Such integration improves information utilization while minimizing the loss of valuable data, which is the key advantage of the human learning paradigm. By incorporating the MSA and MLP modules within the transformer layer, the network architecture gains the benefits of feature extraction, noise reduction, fragmentation, and information integration, which contribute to the overall performance and effectiveness of the proposed network. Further explanations of the core illustrations of the technical methods mentioned above are given to clarify their relationships. Figure 1 elucidates the introduced human learning paradigm and outlines its knowledge framework. Figure 8 depicts the framework of the baseline model with our learning paradigm embedded in the model’s transformer layer. The overall module integrating the paradigm within the MLP section of the transformer layer is shown in Fig. 9 . Altogether, these depictions aim to clarify how the human learning paradigm is implemented within the MLP in the transformer layer. Fragmentation module The Focus module periodically extracts pixels from a high-resolution image and then rebuilds them into a low-resolution image by stacking four neighbors to map the width- and height-dimensional information into the channel dimension, enhancing each pixel's perceptual field and minimizing the information loss. In short, the Focus module performs fragmentation operations by proportionally dividing the feature map along three dimensions: length, width, and height. The hardswish activation function in Eq. 1 was employed in the original Focus module [ 49 ]. In our model, it was replaced by the SiLU activation function in Eq. 2, because the pixel values of breast fibroadenomas in sonography do not require many boundary constraints [ 50 ]: Hardswish(x) = x·min(max(x + 3, 0), 6)/6, (1) SiLU(x) = x·σ(x) = x/(1 + e^(−x)). (2) The BottleneckCSP module consists of a bottleneck block and the main module from CSPNet. The feature map is divided into two parts in both the length and width dimensions before entering BottleneckCSP. One part is computed by a series of convolution blocks (1 × 1), and the other is directly fused with the original features by a shortcut. Finally, the fused feature map is resized to the initial channel dimension using a 1 × 1 convolution block. This module efficiently reduces memory usage and computational bottlenecks because of its lightweight design and strong feature extraction capabilities. The activation functions used in BottleneckCSP for object detection are Hardswish and LeakyReLU: LeakyReLU(x) = x for x ≥ 0 and αx otherwise, where α is a small positive slope. (3) Since the boundary delineation provided by the activation function is less critical in sonography segmentation, we replaced the activation function with SiLU. The C3ECA module combines the C3 and ECABottleneck modules with the concept of slicing, processing each block by slicing the pixels of each channel into a defined size. Upon entering the C3ECA module, the feature map undergoes a sequence of three consecutive convolution blocks (1 × 1, 3 × 3, and 1 × 1) for feature slicing. Subsequently, it proceeds through an average pooling layer to capture pertinent information. Finally, a 1 × 1 convolution and an activation function are applied to resize the channels to the initial dimension and ensure feature usability, respectively. To increase the performance without sacrificing information, a skip connection is added to the C3 module to fuse the feature map before slicing and after processing. This feature fusion ensures that the feature information after slicing remains trustworthy.
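A compact PyTorch sketch of the Focus-style fragmentation just described (pixel slicing into four interleaved sub-maps, channel stacking, and a fusing convolution with SiLU) is shown below; the channel sizes are illustrative, and the block is a simplified stand-in for, not a copy of, our implementation.

```python
import torch
import torch.nn as nn


class Focus(nn.Module):
    """Slice pixels into four interleaved sub-maps, stack them along the channel
    axis (H and W halved, channels x4), then fuse with a convolution and SiLU."""

    def __init__(self, in_channels: int, out_channels: int, k: int = 1):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels * 4, out_channels, k, padding=k // 2, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.SiLU(),  # SiLU in place of the original Hardswish, as described above
        )

    def forward(self, x):                        # x: (B, C, H, W)
        sliced = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                            x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.conv(sliced)                 # (B, out_channels, H/2, W/2)


print(Focus(3, 32)(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 32, 32, 32])
```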
Aggregation module We applied the convolutional self-attention mechanism, which transforms the input into queries/keys through convolution in the network [ 34 ]. In comparison to the original transformer design, its location-aware capability can accurately match the most relevant input elements, and LogSparse Attention can read more contextual information to enhance the internal location and sonography perception (see Fig. 10 ). Most significantly, the network is able to integrate and purify the information from each slice for lesion awareness. The spatial and channel attention mechanisms of CBAM have been shown to improve network performance in Yolo v5, but at the cost of more computational complexity [ 35 ]. Therefore, we combined the C3 module, the convolutional block attention module (CBAM), and the Bottleneck module into the C3CBAM Bottleneck module to keep the computation under control. With the integration of spatial and channel attention mechanisms, the meaningful part of the fragmented information is effectively selected, and each piece of information is successfully located to assure its validity. According to [ 51 ], we derived a probabilistic formal approach for convolutional kernel smoothing. Equations 4 and 5 describe the MSA mechanism [ 13 ] and the modified attention mechanism, respectively: Attention(Q, K, V) = softmax(QK^T/√d_k)V, (4) Attention(Q, K, V) = softmax(Q̄K^T/√d_k)V, (5) where Q̄ is a sparse matrix of the same size as Q, and it only contains the Top-u queries under the sparsity measurement M ( q , K ). It is expected that ProbSparse Attention using the adaptive convolutional kernel approach may perform better in terms of location retention and long-sequence prediction, and the addition of the sparse attention coefficients can improve the ability to capture scattered pixels. Information fragmentation may make the image features more confusing, and ProbSparse Attention is able to control the semantic information extraction through smoothly stretched convolutional kernels, which improves the utilization of semantic information and simultaneously reduces the interference caused by scattered information in the network. Training and data augmentation Python 3.7 and PyTorch 1.11.0 were used for the implementation. To decrease the potential for overfitting and better regularize the network, several data augmentation strategies were applied. Crops of 128 pixels in each dimension were randomly taken around the image center, with a random offset of up to 20% of the original image. Additionally, during data augmentation, more images were added to the training dataset by rotating the cropped images with a 20% probability and mirroring them with a 50% probability. Stochastic gradient descent (SGD) was utilized as the optimizer for training the model, with an initial learning rate of 0.01, a momentum of 0.9, a weight decay factor of 1e-4, and a default batch size of 24 for 150 epochs. Training was carried out using a single Nvidia Tesla V100 32 GB GPU. Evaluation metrics The similarity between the ground truth and the segmentation is assessed by employing several comparison metrics. The Dice similarity coefficient (DSC) was used to compare the areas based on their overlap, and the Hausdorff distance (HD) was defined as the distance between the boundaries of the ground truth and the segmentation result [ 52 ]. The DSC is a widely accepted measure for assessing the overlap between the predicted and ground truth segmentation masks, providing insight into the accuracy of the segmentation process.
Additionally, the HD metric quantifies the maximum distance between the contours of the predicted and ground truth regions, offering valuable information about boundary localization accuracy: DSC = 2|I_gt ∩ I_pt|/(|I_gt| + |I_pt|), HD = max{ max_(i ∈ I_gt) min_(j ∈ I_pt) d(i, j), max_(j ∈ I_pt) min_(i ∈ I_gt) d(i, j) }, where I_gt is the ground truth mask, I_pt is the predicted mask, i and j are points belonging to the different sets, and d represents the distance between i and j.
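A short Python sketch of the two metrics defined above is given below; note that, for simplicity, the Hausdorff distance here is computed in pixel units over all mask points rather than over extracted boundary contours, so contour extraction would be needed to match the definition above exactly.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff


def dice_coefficient(gt: np.ndarray, pred: np.ndarray) -> float:
    """DSC = 2 * |gt ∩ pred| / (|gt| + |pred|) for two binary masks."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    return 2.0 * np.logical_and(gt, pred).sum() / (gt.sum() + pred.sum())


def hausdorff_distance(gt: np.ndarray, pred: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the point sets of two binary masks."""
    gt_points = np.argwhere(gt.astype(bool))
    pred_points = np.argwhere(pred.astype(bool))
    return max(directed_hausdorff(gt_points, pred_points)[0],
               directed_hausdorff(pred_points, gt_points)[0])


# Toy example with two overlapping square masks.
gt = np.zeros((64, 64)); gt[10:40, 10:40] = 1
pred = np.zeros((64, 64)); pred[12:42, 12:42] = 1
print(dice_coefficient(gt, pred), hausdorff_distance(gt, pred))
```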
Results We utilized both local and public datasets to validate the effectiveness of our proposed learning paradigm and conducted ablation experiments to explore its interpretability. Performance assessment Table 1 lists the outcomes of the proposed CNN–transformer hybrid network using different fragmentation and aggregation modules on the local dataset. Appropriate network architectures show improvements in the evaluation metrics as well as a reduction in the training time. The most significant results came from combining the C3ECA and LogSparse Attention modules, 0.876 in DSC and 5.82 mm in HD, respectively. Compared with the baseline model (TransUNet), the corresponding improvements are 0.76% in DSC and 3.51% in HD, respectively. The computation time is reduced by 1.25 h. The impact of incorporating different fragmentation modules on segmentation performance and training time is also illustrated. Notably, the model utilizing C3ECA as the fragmentation module achieves the overall shortest training time within the range of 2.25–2.75 h. To assess whether the human learning paradigm introduces overfitting issues during training, we analyzed the training losses of the model incorporating the LogSparse Attention module (see Fig. 2 ). The results clearly indicate that the inclusion of C3ECA module achieves the fastest fitting and convergence and the lowest loss with gradually diminishing loss jitters even after only 250 iterations. In contrast, the model incorporating BottleneckCSP exhibits higher loss jitters around the 700th iteration (i.e., up to 0.128). Overall, it shows that the neural network guided by the human learning paradigm is capable of finding the local optimal solutions more efficiently at faster convergence in the training process. Visualizations A visual comparison of the segmentation results for three representative cases: Case I, a conventional fibroadenoma; Case II, a fibroadenoma with an overall longer cross-section; and Case III, a fibroadenoma with an overall larger area and the corresponding DSC metrics are shown in Figs. 3 and 4 , respectively. Our findings reveal that the LogSparse-related models demonstrate successful segmentation of all breast fibroadenomas with various pathological characteristics, resulting in segmentation contours that closely match the ground truth. However, the C3CBAM- and ProbSparse-related models exhibit some misclassifications, particularly in Case I with incorrect expansions in the segmented lesion boundaries. In Case II, the network model containing Focus and LogSparse modules performs the best. However, in Case III, the model combining C3ECA and LogSparse modules excels in predicting the conspicuous lower right part of the fibroadenoma. These results suggest that the model incorporating the human learning paradigm, especially the combined C3ECA and LogSparse Attention modules, demonstrates improvements over the baseline model. Comparison to state-of-the-art methods To further evaluate the performance of our work, the best-performing model (the combined C3ECA and LogSparse Attention modules) was compared with the SOTA models using a local dataset. Table 2 and Fig. 5 show that our proposed network enhances DSC and HD metrics by 6.1% and 4.3 mm and 3.82% and 3.46 mm, respectively, as compared to the U-net and DeepLab V3 + , which may be due to the emphasis on semantic information interaction. In comparison to U-net which has the least training time among all tested SOTA models, our approach reduces the training time by another 0.25 h. 
Robustness on the public dataset To further validate the robustness of our learning paradigm, we conducted tests on the publicly available dataset (Dataset_BUSI and DatasetB) to compare with the SOTA model. Table 3 and Fig. 6 show that our network is also applicable to the public dataset quite well, outperforming TransUNet by 0.42% in DSC and 5.13 mm in HD and DeepLab V3 + by 1.43 mm in HD, respectively. However, the training time optimization embodied in our lightweight model for the local dataset becomes less pronounced here, which may be due to the smaller sample size (500 vs. 207). Because of the more complicated structure of artificial neural networks than these traditional linear ones [ 41 ], the training time is not proportional to the sample size. Ablation study To investigate the effect of the fragmentation module (C3ECA) and aggregation module (LogSparse) on the neural network in improving breast fibroadenoma segmentation, these two modules were integrated into the baseline model individually. Each hyperparameter in the experiment and clinical dataset was maintained consistently to ensure the fairness. The network containing the fragmentation module exhibits a significant decrease in training time but a minor improvement in DSC, while the inclusion of the aggregation module improves DSC by 0.56% and HD by 0.17 mm (see Table 4 ). More importantly, the performance of the combined fragmentation and aggregation modules is much better than that of the individuals (i.e., by 3.38 mm and 2.97 mm in HD, respectively). Therefore, such a combination is synergistic in the learning paradigm.
Discussion In this study, we present a novel approach to enhance the segmentation of breast fibroadenomas in sonography through the utilization of an artificial neural network with a human learning paradigm. Our method is inspired by the learning mechanisms of the human brain, and the new paradigm combines feature fragmentation modules (Focus, BottleneckCSP, and C3ECA) and information aggregation modules (C3CBAM, LogSparse Attention, and ProbSparse Attention) to effectively guide the neural network's learning process. To validate its effectiveness, we conducted a comprehensive set of experiments using both local and public datasets. The quantitative evaluation demonstrates the superiority of our model over state-of-the-art models in terms of the Dice similarity coefficient (DSC) and Hausdorff distance (HD) metrics, while also significantly reducing the training time. Remarkably, the network employing the C3ECA and LogSparse Attention mechanisms showed the most exceptional performance on both datasets, improving DSC and HD by 0.76% and 3.51% on the local dataset and by 0.42% and 12.59% on the public dataset compared to the TransUNet model, respectively. Altogether, our study introduces an artificial neural network framework augmented by a human-inspired learning paradigm that effectively enhances the segmentation of breast fibroadenomas. Considering the variations in breast densities and anatomical features among populations, particularly between Asian and European-American women, our study holds clinical relevance for early breast fibroadenoma diagnosis. It is also important to note that the image quality of sonography varies greatly with the operating equipment. In our data collection, ultrasound images were acquired from local women in Suining, a representative small to medium-sized city in China, using a Mindray DC-80S ultrasound system. Their quality is poorer than that of the public datasets, which consist of BUS images from Europe and North America (see Fig. 7 ). However, the consistently outstanding performance (i.e., robustness and reliability) of our models across diverse datasets showcases their potential and value for clinical applications. The future of computer vision research is likely to focus on targeted and guided feature learning. Self-attention mechanisms, derived from transformers, are competitive, and many variants have been developed. While attention mechanisms improve contextual information extraction, their performance in sonography, which contains uniformly distributed complex patterns, is unsatisfactory. Filipczuk et al. used a k-means-based hybrid method for breast fibroadenoma segmentation, but at an average classification accuracy of only 77.20% [ 42 ]. In our work, inspiration from human learning patterns led us to devise a fragmentation–aggregation learning paradigm and incorporate it with a feature segmentation method. This method involves partitioning and allocating feature maps to distinct channels, culminating in the comprehensive acquisition of information and the emulation of the human learning trajectory, progressing from surface-level to in-depth understanding and from local to global comprehension. Rather than indiscriminately adding modules to the neural network, focusing on specific scenarios to optimize performance seems more effective.
Here, this learning paradigm was adapted into a mechanism encompassing both fragmentation focus and information aggregation for improved segmentation and streamlined architectures. Ablation experiments have validated its synergistic benefits. This study has some limitations. Firstly, the size of our dataset is modest compared to other publicly available medical image datasets (e.g., MRI and CT). However, we plan to continuously collect more breast ultrasound images (approximately 300 additional images over the next four months), which will significantly enhance the dataset's size and diversity. Although a promising DSC of 0.875815 was achieved here, there is still ample room for further investigation and improvement. Future studies should focus on exploring novel techniques and continuously refining the learning paradigm to enhance the accuracy and effectiveness of sonography segmentation. We will combine the online transfer learning strategy with the CNN–transformer hybrid network model [ 43 ], apply the feature-based transfer learning method [ 44 ], and adapt state-of-the-art methods of medical image segmentation, such as fuzzy c-means (FCM), the Gaussian mixture model (GMM) [ 45 ], and the topology-preserving approach [ 46 ]. Furthermore, specific segmentation requirements will be explored. Ding et al. suggested that segmentation of the brachial plexus could be transformed into segmentation of the nerve together with the surrounding tissues [ 47 ]. Therefore, blood flow and tissue elasticity signals may assist in segmenting breast fibroadenomas in sonography (see Fig. 7 ). Finally, image segmentation will be evaluated in line with clinical practice and the physician's intuitive judgment. DSC and HD describe only geometric differences without considering the clinical implications. The smoothness of lesion boundaries and their tendency toward concavity can influence the physician's assessment of the tumor's benignity or malignancy in clinical diagnosis. Under- or over-contouring tumors with similar DSC and HD measures may therefore lead to significantly different diagnostic results. A medical similarity index (MSI) that involves a user-defined medical consideration function (MCF) derived from an asymmetric Gaussian function will be used for evaluating segmentation accuracy, since the MCF shape reflects the anatomical position and characteristics of a particular tissue, organ, or tumor type [ 48 ]. A subjective evaluation will also be applied. Although human evaluation is cumbersome and time-consuming, it provides more clinical insight into the tumors, and acceptance of the segmentation results by experienced radiologists is critical for clinical application. Accurate medical image segmentation remains a complex and challenging task due to significant variations in image quality, artifacts, and anatomical structures among patients. Thus, further investigation is required for the technical development and its translation to the clinics.
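As a concrete illustration of the asymmetric weighting mentioned above, the sketch below shows a generic asymmetric Gaussian that penalizes under- and over-contouring of a boundary differently; the parameter names are hypothetical and the full MSI/MCF definition of [ 48 ] is not reproduced here.

```python
import numpy as np

def asymmetric_gaussian(deviation_mm, sigma_under=2.0, sigma_over=5.0):
    """Weight a signed boundary deviation (negative = under-contouring,
    positive = over-contouring) with direction-specific spreads, so that
    clinically riskier errors can be penalized more sharply."""
    deviation_mm = np.asarray(deviation_mm, dtype=float)
    sigma = np.where(deviation_mm < 0, sigma_under, sigma_over)
    return np.exp(-0.5 * (deviation_mm / sigma) ** 2)

# Example: an under-contour of 2 mm receives a lower weight (heavier penalty)
# than an over-contour of the same magnitude when sigma_under < sigma_over.
print(asymmetric_gaussian([-2.0, 2.0]))
```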
Conclusion Although sonography is preferred in the diagnosis of breast masses, segmentation of the tumors in sonography is unsatisfactory because of the inherent limitations of this imaging modality, low image quality and the presence of artifacts (i.e., speckles and scattering). Utilizing prior knowledge to guide neural network learning can lead to improved performance in specified medical image segmentation tasks. In this paper, we applied a paradigm inspired by human learning patterns to an artificial neural network for the segmentation of breast fibroadenomas in sonography. Our research findings indicate that aggregating high-dimensional information into cohesive modules can enhance the model's information perception ability while reducing training costs. By introducing three fragmentation attention modules and three information aggregation modules, we successfully implemented this learning paradigm and guided the neural network's learning process. Improvements in performance metrics across various network structures are found in comparison to the baseline network. Among all combinations, the C3ECA and LogSparse Attention modules showed the best overall segmentation performance in DSC, HD, and training time. Additionally, our approach demonstrated robust advantages over other state-of-the-art methods on both local and public breast ultrasound image datasets. This study underscores the immense potential of a modular learning paradigm inspired by the human brain within the realm of image processing. Although our findings are promising, there exists ample room for further exploration and refinement of this approach. Additional images will be incorporated into the dataset for a more comprehensive evaluation of our method's capabilities. Ultrasonic elastography may be utilized for capturing intricate mechanical features of breast masses. This strategic augmentation holds the promise of achieving more precise lesion segmentation.
Background Breast fibroadenoma poses a significant health concern, particularly for young women. Computer-aided diagnosis has emerged as an effective and efficient method for the early and accurate detection of various solid tumors. Automatic segmentation of breast fibroadenomas is important and could potentially reduce unnecessary biopsies, but it remains challenging due to the low image quality and presence of various artifacts in sonography. Methods Human learning involves modularizing complete information and then integrating it through dense contextual connections in an intuitive and efficient way. Here, a human learning paradigm was introduced to guide the neural network by using two consecutive phases: the feature fragmentation stage and the information aggregation stage. To optimize this paradigm, three fragmentation attention mechanisms and three information aggregation mechanisms were adapted according to the characteristics of sonography. The evaluation was conducted using a local dataset comprising 600 breast ultrasound images from 30 patients at Suining Central Hospital in China. Additionally, a public dataset consisting of 246 breast ultrasound images from Dataset_BUSI and DatasetB was used to further validate the robustness of the proposed network. Segmentation performance and inference speed were assessed by Dice similarity coefficient (DSC), Hausdorff distance (HD), and training time and then compared with those of the baseline model (TransUNet) and other state-of-the-art methods. Results Most models guided by the human learning paradigm demonstrated improved segmentation on the local dataset, with the best one (incorporating C3ECA and LogSparse Attention modules) outperforming the baseline model by 0.76% in DSC and 3.14 mm in HD and reducing the training time by 31.25%. Its robustness and efficiency on the public dataset were also confirmed, surpassing TransUNet by 0.42% in DSC and 5.13 mm in HD. Conclusions Our proposed human learning paradigm has demonstrated superiority and efficiency in ultrasound breast fibroadenoma segmentation across both public and local datasets. This intuitive and efficient learning paradigm as the core of neural networks holds immense potential in medical image processing. Keywords
Abbreviations Dice similarity coefficient Hausdorff distance Convolution neural networks Breast ultrasound Natural language processing Concentrated-comprehensive convolutions Cross-stage partial network Efficient channel attention Convolutional block attention module Dynamic contrast-enhanced magnetic resonance imaging Multi-head self-attention mechanism Multi-layer perceptron Acknowledgements The authors would like to express their thanks to Dr. Cai Zhang and Miss Hong Liu for the collection of sonography and valuable discussion. Author contributions YG and YZ were responsible for the conception of the work. MC, LY, HY, and HY acquired the image data for the research. The written paper was drafted by YG and substantively revised by YZ. All authors read and approved the final manuscript. Funding This work is financially supported by the Chongqing Medical University (Future Innovation Program, 2022-W0063). Availability of data and materials The datasets generated and/or analyzed during the current study are not publicly available due to security of research data concerns but are available from the corresponding author upon reasonable request. Declarations Ethics approval and consent to participate The dataset used in this work was recorded at Suining Central Hospital in Sichuan, China, and the requirement for informed consent was waived. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-15 23:43:47
Biomed Eng Online. 2024 Jan 14; 23:5
oa_package/e1/cc/PMC10787993.tar.gz
PMC10787994
38218786
Background One of the most prevalent vector-borne infections is dengue fever with an estimated annual global burden of 390 million infections, of which 96 million present clinically [ 1 ]. The disease is caused by dengue virus principally transmitted by Aedes mosquitoes which are commonly found in tropical and sub-tropical regions. In addition, dengue has surpassed other infectious diseases such as malaria to be the most prominent vector-borne disease globally in terms of morbidity and cost of treatment [ 2 ]. The impact of dengue is a great burden on public health costs in South-East Asia and the burden of this infection in Thailand is among the highest in the world [ 3 ]. Dengue infection is commonly asymptomatic but when clinical manifestations occur, they can vary from mild to severe and life-threatening. Severe dengue, in particular dengue hemorrhagic fever (DHF) and dengue shock syndrome (DSS), is an important cause of hospitalization and death in Thailand [ 4 ]. The mild form of infection may be infectious and spread the virus in the community. The only available vaccine for dengue has limited efficacy and can only be administered to people who have previously been infected with challenges of pre-vaccination screening and suboptimal test performance [ 5 ]. Due to these limitations and the absence of any specific treatment, vector control has remained a focus of public health interventions to interrupt the infection cycle. Estimating when and where an outbreak will occur is an important goal to effectively allocate prevention and control resources. Therefore, efficient and reliable notification systems are vital to monitor dengue incidence including spatial and temporal distributions to detect outbreaks in order to initiate timely and effective control measures. Effective communicable disease surveillance systems are a prerequisite to ensure early detection of health threats and their timely control. Delay in infectious disease reporting might hamper timely outbreak interventions. In general, public health surveillance of diseases relies on the notification system which is a result of a chain of events from infection through reporting to public health services, be they local, regional or national. The general flow of surveillance information in Thailand is depicted in Fig. 1 . Delays in the system arise at different stages: different health-seeking behaviors (community), laboratory and follow-up tests (health care facility), the reporting system, and communications between different health providers (surveillance response), including hospitals, the district officer and the insecticide sprayer operatives, as well as people in targeted areas. Dengue surveillance in many countries including Thailand relies on passive reporting which is susceptible to delays. The lag in the surveillance system is therefore a vital issue for disease control planning as incomplete and delayed information can undermine any efforts to deliver early warning and real-time outbreak detection required to trigger an effective response to public health threats. Influenced by healthcare provider adherence and patient access, lagged reports exhibit variations across locations. Recent methodologies (examples [ 6 – 10 ]) aim to estimate current disease incidence by addressing notification lags, primarily focusing on systematic delays. However, these approaches overlook cluster detection, a crucial aspect in the decision-making process for disease outbreak control. 
While a prior effort offered a valuable framework for reporting delay correction in dengue control in Thailand [ 11 ], the correction alone falls short of the ultimate surveillance goal: informing public health actions to reduce morbidity and mortality [ 12 ]. Consequently, in this study, we went beyond delay correction, also implementing and comparing the performance of cluster detection methods with case nowcasting. Reporting system time lags hinder timely cluster identification, impeding the initiation of effective disease control interventions. Therefore, we introduced an integrated two-step methodology for spatiotemporal real-time cluster detection, specifically tailored to correct reporting delays. The first step involved adopting space-time nowcasting modeling to account for reporting system lags. Subsequently, anomaly detection methods assessed adverse risks, demonstrated using weekly dengue surveillance data in Thailand. We further evaluated detection effectiveness with various metrics and compared different methods, revealing similarities and differences among the detection techniques and their optimal thresholds. This advancement offers valuable insights for informing additional public health actions to reduce dengue morbidity and mortality in Thailand.
Methods Dengue surveillance data In this study, we analyzed dengue case data obtained from the routine surveillance system of the Bureau of Epidemiology, Thai Ministry of Public Health. The dataset consisted of reported cases from various healthcare facilities, including governmental hospitals, clinics under the universal health coverage scheme, and private hospitals, all of which reported cases to district health surveillance data centers. To examine the influence of reporting delays and outbreaks, our study focused specifically on the data collected from the 50 districts of the Bangkok metropolitan area. The years 2010–2011 were chosen as they presented a significant and illustrative case study for our research objectives. During this period, widespread dengue outbreaks were observed across the country, with particular intensity in Bangkok. Notably, the response to these outbreaks was considerably delayed. Therefore, this timeframe serves as a relevant case study to investigate the impact of reporting delays and outbreak occurrences. The dengue case types considered in our analysis encompassed dengue fever, dengue hemorrhagic fever, and dengue shock syndrome. Our primary goal was to achieve real-time detection, enabling prompt identification of dengue infection clusters and facilitating timely intervention to prevent further disease transmission. Consequently, we combined the number of cases across all dengue types in our analysis. Figure 2 illustrates the temporal trend of dengue incidence in Bangkok during the years 2010–2011. Notably, reporting delays tended to increase during the high season, which corresponds to the rainy period, potentially leading to substantial delays in the availability of data. Such delays can hinder the early detection of possible outbreaks, underscoring the significance of improving the timeliness of surveillance systems to enhance outbreak response capabilities. Ethics declarations The Ethics Committee of the Faculty of Tropical Medicine, Mahidol University waived the requirement for informed consent of participants. This study was approved by the Ethics Committee of the Faculty of Tropical Medicine, Mahidol University. The submission number was TMEC 22–054 and the number of the ethical approval certificate was MUTM 2022-057-01. All methods were carried out in accordance with relevant guidelines and regulations. Nowcasting for lagged reporting A key challenge for infectious disease surveillance in countries with developing infrastructure, including Thailand, is the time lag before reports are delivered at different levels in the notification system. The structure of surveillance data with reporting lags can be seen as the lag triangle presented in Fig. 3 . As described in [ 11 ], let $y_{i,t,d}$ be the number of disease cases which occurred during calendar week $t$ in district $i$ ($i = 1, \ldots, I = 50$) but arrived in the surveillance database $d$ ($d = 1, \ldots, D$) weeks after the onset date. This reflects the problem that cases have been recorded but have not yet been entered into the database. Note that the event that the cases were in the surveillance system in the same week as the date of diagnosis was denoted as $d = 1$. The current time point of interest is indexed as $t = T$ and the maximum possible delay that can happen in the surveillance system is labelled as $D$, i.e., full data were delivered into the system from $T + D$ weeks onwards.
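To make the indexing concrete, the Python sketch below (array layout and variable names are illustrative, not the authors' code) assembles the counts that are actually visible at the current week T and adds back model-predicted, not-yet-reported counts to form a nowcast; the predictions themselves would come from the Bayesian space-time model described next.

```python
import numpy as np

def reported_so_far(y, T):
    """y: array of shape (I, T_max, D), where y[i, t, d-1] holds the cases of
    district i with onset in week t+1 that entered the database d weeks after
    onset (d = 1 means the same week).  Returns the counts visible at the
    current week T, i.e. the 'reporting triangle'."""
    I, T_max, D = y.shape
    visible = np.zeros((I, T))
    for t in range(T):                  # onset weeks 1..T (0-based index t)
        d_max = min(D, T - t)           # only delays that have elapsed by week T
        visible[:, t] = y[:, t, :d_max].sum(axis=-1)
    return visible

def nowcast(visible, predicted_unreported):
    """Delay-corrected estimate: observed counts plus the model-predicted
    fractions that have not yet arrived in the database."""
    return visible + predicted_unreported
```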
Then $\hat{y}_{i,t} = \sum_{d=1}^{D} \hat{y}_{i,t,d}$ can be defined as the estimated number of dengue cases that truly occurred, obtained by summing the predicted reporting-lag fractions happening at week $t$, $\hat{y}_{i,t,d}$, over the possible lag range. The goal here was to correct the reported cases by nowcasting the actual weekly fractions of dengue cases for each district, $\hat{y}_{i,t,d}$, in a real-time manner. To address spatiotemporal reporting lags, a frequently adopted approach in small area health studies is to model case counts as conditionally independent Poisson variates. The likelihood function for this is defined as $y_{i,t,d} \sim \mathrm{Poisson}(\mu_{i,t,d})$, where the mean and variance are both equal to $\mu_{i,t,d}$. That is, for our modeling, we assumed $\mu_{i,t,d} = e_{i,t}\,\theta_{i,t,d}$, i.e. $\log \mu_{i,t,d} = \log e_{i,t} + \log \theta_{i,t,d}$, where $\theta_{i,t,d}$ was the relative dengue case risk adjusted for the offset, $e_{i,t}$, as the baseline level at risk. There are a number of ways to adjust for the baseline (see examples [ 13 – 15 ]); however, a common practice for disease mapping [ 16 ] is to calculate the expected rate as $e_{i,t} = N_{i,t} \big( \sum_{i,t} y_{i,t} / \sum_{i,t} N_{i,t} \big)$, where $y_{i,t}$ and $N_{i,t}$ are the true number of disease cases and the population at risk for each location and time. Since we performed the analysis at a weekly scale, the population was assumed to be constant over the study period. Then the expected rate used in the analysis was computed as $e_{i,t} = N_{i} \big( \sum_{i,t} y_{i,t} / \sum_{i,t} N_{i} \big)$. Another main parameter of interest is the relative risk $\theta_{i,t,d}$, and the most common approach to model this is to assume a logarithmic link to a linear combination of space-time random effects. First, we structured the model-based lag reporting correction by using information across neighboring districts and time periods to incorporate spatiotemporal smoothing. The convolution model (see examples [ 15 – 18 ]) was employed to capture spatially correlated and unstructured extra variation in the model. Both structured and unstructured random effects were included to capture various forms of unobserved confounding. The uncorrelated random effect is described by a zero-mean Gaussian prior distribution. The spatially correlated effect is assumed to follow the intrinsic conditional autoregressive model [ 19 ]. To capture the time series trend, the first-order random walk model was applied. All random interaction terms among the space, time and delay dimensions were specified by a Gaussian distribution with zero mean. All precision (reciprocal of variance) parameters were assigned log-gamma prior distributions, with hyperparameters 1 and 0.0005 for the conditional autoregressive model, and 1 and 0.00005 for the uncorrelated and random walk random effects. To address the variability in dengue incidence, the Negative Binomial distribution, which incorporates an overdispersion parameter, can be considered as an alternative to the Poisson likelihood. Typically, issues of dispersion can be tackled through models like the Negative Binomial and Quasi-Poisson, both having an equal number of parameters and suitability for overdispersed count data [ 20 ]. In our exploration of modeling choices for reporting lags in this study, we also considered the Generalized Poisson model as an alternative base count distribution. This model not only accommodates dispersion but also possesses a heavier tail with the same first two moments, offering increased flexibility for a broader range of data compared to the Negative Binomial [ 21 ]. The Generalized Poisson model can be viewed as an alternative Poisson mixture model to the Negative Binomial distribution for overdispersed count data [ 21 ].
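Because the generalized Poisson is referred to repeatedly below, the following sketch implements the log-pmf and moments of one common (Consul-type) parameterization as a reference; the exact parameterization used in [ 23 , 24 ] and in the R-INLA implementation may differ, so treat this only as an illustration.

```python
import numpy as np
from scipy.special import gammaln

def gpoisson_logpmf(y, theta, lam):
    """Log-pmf of the Consul-type generalized Poisson,
    P(Y = y) = theta * (theta + lam*y)**(y-1) * exp(-(theta + lam*y)) / y!,
    with mean theta/(1-lam) and variance theta/(1-lam)**3; lam = 0 recovers
    the ordinary Poisson with mean and variance theta."""
    y = np.asarray(y, dtype=float)
    return (np.log(theta) + (y - 1.0) * np.log(theta + lam * y)
            - (theta + lam * y) - gammaln(y + 1.0))

# Sanity check: with lam = 0 the pmf matches Poisson(theta)
y = np.arange(0, 10)
print(np.exp(gpoisson_logpmf(y, theta=3.0, lam=0.0)).sum())  # close to 1
```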
Moreover, another study suggests that generalized Poisson regression models can serve as viable alternatives to negative binomial regression [ 22 ]. Despite the typical preference for the Negative Binomial distribution when evidence of dispersion is present relative to the Poisson, a Negative Binomial model had previously shown similar performance to the Poisson in a scenario involving delay correction with mild overdispersion [ 11 ]. Additionally, during our extended study period, we noted similarities in temporal patterns and magnitudes compared to the previous study period. Consequently, we chose to compare only the Poisson and Generalized Poisson models in this study. The generalized Poisson distribution used in this study follows the form introduced in previous works [ 23 , 24 ], with parameters $\theta$ and $\lambda$. Given $\lambda$, the mean and variance equal $\theta/(1-\lambda)$ and $\theta/(1-\lambda)^{3}$, respectively. When $\lambda = 0$, the generalized Poisson approaches the Poisson distribution with mean and variance equal to $\theta$. The mean is also linked to the linear predictor with the logarithm function as in the Poisson. Space-time cluster diagnostics Space-time cluster diagnostics in epidemiology often employ scan statistics, and various refinements of scan statistics have been proposed (for example [ 25 – 27 ]), including the version implemented in the SaTScan software [ 28 ]. However, a fundamental challenge lies in interpreting p-values and establishing a threshold for defining ‘significance’ [ 29 ]. Therefore, we instead based our approaches to cluster detection in this study on a model-based framework. In the context of this framework, it becomes crucial to define what constitutes a cluster. In infectious disease surveillance, it is important to effectively identify localized case anomalies that deviate from expected baseline patterns in both space and time, prompting further investigation. This concept is akin to anomaly detection, where we employ the goodness of fit of a model to quantify unusual events within a set of space-time observations. Measures of goodness of fit help summarize the differences between observed local case counts and the values expected under the model or baseline for each location and time. In our study, we thus explored and compared various model-based measures for anomaly detection, including exceedance probability, information criteria, and leave-one-out cross-validation. Exceedance probability A number of diagnostic tools are available to evaluate local anomalies. However, it is natural to consider a cluster as any isolated location or geographically bounded region that displays an excess of disease risk or incidence at a particular time. The excess of disease risk can be examined by comparison with the expected rate previously described. So, an approach for space-time anomaly detection is to calculate the exceedance probability (EXC), $\Pr(\theta_{i,t} > a)$, from the number of estimates in the posterior sample which exceed a threshold $a$ [ 30 , 31 ]. Usually the limit is assumed to be $a = 1$, which means we apply the level of the expected rate as the baseline. Information criteria An aim of diagnostic checking is to compare observed data with the fitted model in such a way that it is possible to detect any discrepancies. Forms of model assessment involve measuring the goodness-of-fit (GOF) to evaluate whether the particular data in space and time provide an adequate fit to the model. A set of common GOF measures is the information criteria.
The deviance information criterion (DIC) [ 32 ] has been widely used for overall model fit in the Bayesian setting, generalized from the Akaike information criterion (AIC) in the frequentist framework. Another is the widely applicable or Watanabe–Akaike information criterion (WAIC) [ 33 ], which can be viewed as an improvement on DIC. WAIC is fully Bayesian in that it makes use of the entire posterior distribution. Unlike DIC, WAIC is robust to different parametrizations and is also valid for singular models [ 34 ]. While the global information criteria have been primarily used as an overall measure of model fit, they can be partitioned into contributions from individual observations in space and time to provide finer details of model discrepancies [ 35 , 36 ]. The partitioning of the DIC for the observed data, the local DIC, can be written as $\mathrm{DIC}_{i,t} = \bar{D}_{i,t} + p_{D,i,t}$ [ 36 ], where $\bar{D}_{i,t}$ is the mean deviance for the nowcasted cases at district $i$ and week $t$ and $p_{D,i,t}$ is the effective number of parameters, the amount of information used for the particular observation at each location and time. Likewise, the local WAIC, which is a direct result of the pointwise predictive density, can be defined as $\mathrm{WAIC}_{i,t} = -2\,(\mathrm{lppd}_{i,t} - p_{W,i,t})$ [ 34 ], where the log pointwise predictive density is $\mathrm{lppd}_{i,t} = \log \big( \tfrac{1}{S} \sum_{s=1}^{S} p(y_{i,t} \mid \psi^{(s)}) \big)$ and both terms are calculated over the posterior sample. Since the range of the information criteria is on the positive real line, we adopted values transformed onto the unit interval for the local DIC and local WAIC. A similar transformation has also been utilized as a model probability in model selection and averaging [ 36 , 37 ]. Leave-one-out cross-validation Another set of metrics widely used to estimate the model fit error is cross-validation. In a general setting of cross-validation, the data are repeatedly divided into a training set and a test set. Then the model is fitted using the training set and the cross-validation error is calculated from the test set. However, we restricted our attention here to leave-one-out cross-validation (LOO-CV), the special case in which each test set consists of a single data point. Among LOO-CV methods, the conditional predictive ordinate (CPO) [ 38 ] and probability integral transform (PIT) [ 39 ] are commonly used to detect extreme observations in statistical modeling. The CPO in our case, for the delay-corrected dengue incidence at district $i$ during week $t$, can be computed as $\mathrm{CPO}_{i,t} = p(y_{i,t} \mid \mathbf{y}_{-(i,t)})$. For each observed case, its CPO is the posterior probability of observing that dengue case when the model is fit using all data except $y_{i,t}$. Large CPO values imply a good fit of the model to the observed data, while small values suggest a worse fit of the model to that observed data point and, perhaps, that it should be further explored. On the other hand, PIT measures the probability of a new value being no greater than the actual observed value: $\mathrm{PIT}_{i,t} = \Pr(y^{\mathrm{new}}_{i,t} \le y_{i,t} \mid \mathbf{y}_{-(i,t)})$, where $\mathbf{y}_{-(i,t)}$ is the observation vector with the $(i,t)$-th component omitted. This procedure is performed in cross-validation mode, meaning that in each step of the validation process the ensuing leave-one-out posterior predictive distribution is calculated. However, since our data are discrete (disease count) data, the estimate was adjusted for discreteness as $\mathrm{PIT}^{\mathrm{adj}}_{i,t} = \Pr(y^{\mathrm{new}}_{i,t} < y_{i,t} \mid \mathbf{y}_{-(i,t)}) + 0.5\,\Pr(y^{\mathrm{new}}_{i,t} = y_{i,t} \mid \mathbf{y}_{-(i,t)})$, and unusually large or small values of PIT indicate possible outliers or surprising observations not supported by the model under consideration [ 40 ].
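The sketch below shows how two of these anomaly scores could be computed from generic Monte Carlo output: the exceedance probability from posterior draws of the relative risk, and a per-observation WAIC contribution from pointwise log-likelihood draws using the standard decomposition; the exact bookkeeping performed inside R-INLA is not reproduced here.

```python
import numpy as np
from scipy.special import logsumexp

def exceedance_probability(risk_samples, threshold=1.0):
    """EXC = Pr(relative risk > threshold), estimated as the fraction of
    posterior draws exceeding the threshold.
    risk_samples: array of shape (S, I, T) with S posterior draws."""
    return (risk_samples > threshold).mean(axis=0)

def local_waic(loglik_samples):
    """Per-observation WAIC contribution from pointwise log-likelihoods.
    loglik_samples: array of shape (S, N); returns a length-N array,
    where lower values indicate a better fit."""
    S = loglik_samples.shape[0]
    lppd = logsumexp(loglik_samples, axis=0) - np.log(S)   # log pointwise predictive density
    p_waic = loglik_samples.var(axis=0, ddof=1)            # effective number of parameters
    return -2.0 * (lppd - p_waic)
```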
The concepts of optimal criteria, accuracy (Acc), sensitivity (Se), specificity (Sp), positive predictive value (PPV), and negative predictive value (NPV) serve as valuable metrics for comparing and assessing the validity of cluster detection methods. In this study, these five evaluation metrics were employed for method comparison and performance evaluation. An anomaly alarm was raised when the anomaly diagnostic value from the space-time cluster diagnostics, computed for each case count, exceeded a predefined cutoff. We then systematically evaluated the performance of the cluster diagnostics across different threshold values. The key evaluation components are defined as follows. The true positive (TP) count comprised instances where a method correctly indicates the presence of a disease anomaly. The true negative (TN) count comprised instances where a method correctly indicates the absence of a disease anomaly. The false positive (FP) count comprised cases where a method incorrectly suggests the presence of an anomaly. The false negative (FN) count comprised instances where a method incorrectly indicates the absence of an anomaly. Then sensitivity, specificity, and the predictive values are expressed as follows: sensitivity = TP / (TP + FN); specificity = TN / (FP + TN); positive predictive value = TP / (TP + FP); negative predictive value = TN / (TN + FN); accuracy is defined as the proportion of correct detections among the total number of detections, i.e., Acc = (TP + TN) / (TP + TN + FP + FN). In order to efficiently apply this methodology in real surveillance situations, one essential characteristic that should be considered in real-time surveillance systems is computational practicability. Using the entire data history is perhaps unnecessary, while the most recent information might be adequate to capture the disease pattern needed to detect an outbreak. To reduce computing resources, we partitioned the surveillance data into sliding windows to optimize the computational competence of the system. Rather than the full likelihood, the working likelihood was restricted to the most recent observations, $\prod_{t=T-w+1}^{T} \prod_{i,d} p(y_{i,t,d} \mid \cdot)$, where $w$ is the length of the sliding window. The sliding window technique then investigates only the most recent $w$ weeks, and hence the surveillance might be more efficient and practical for real-time applications. However, the partition can be a trade-off between computing efficiency and estimation precision. We therefore also examined the effect of different window sizes in the case study. Estimates derived from the models and diagnostic methods are typically computed from converged posterior samples using sampling-based algorithms such as Markov chain Monte Carlo (MCMC). However, real-time estimation in infectious disease surveillance requires timeliness. With the setup of a multidimensional model and accumulating surveillance data over time, the parameter space can rapidly expand, demanding exponentially growing computational resources. To address this, a more efficient approach for inferring parameters is the Integrated Nested Laplace Approximation (INLA) [ 41 ]. This method is particularly suitable for the rapid estimation of parameters in a real-time context. The proposed model was implemented using the numerical Laplace approximation within the R-INLA package, available at www.r-inla.org . All computations were conducted using RStudio version 2020.07.0. Computing details using INLA, with R code, are provided in supplementary document S1 .
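As a concrete illustration of this evaluation, the following sketch sweeps a grid of cut-off values over an anomaly score (for example, the exceedance probability), computes the confusion-matrix metrics defined above against reference anomaly labels, and returns the cut-off with the highest accuracy; the variable names and threshold grid are illustrative assumptions.

```python
import numpy as np

def detection_metrics(scores, labels, threshold):
    """Confusion-matrix metrics for a given alarm threshold.
    scores: anomaly scores per location-week; labels: 1 = true anomaly, 0 = none."""
    alarm = scores >= threshold
    tp = np.sum(alarm & (labels == 1))
    tn = np.sum(~alarm & (labels == 0))
    fp = np.sum(alarm & (labels == 0))
    fn = np.sum(~alarm & (labels == 1))
    eps = 1e-12  # guard against division by zero
    return {
        "Acc": (tp + tn) / (tp + tn + fp + fn + eps),
        "Se": tp / (tp + fn + eps),
        "Sp": tn / (fp + tn + eps),
        "PPV": tp / (tp + fp + eps),
        "NPV": tn / (tn + fn + eps),
    }

def best_threshold(scores, labels, grid=np.linspace(0.5, 0.99, 50)):
    """Pick the cut-off with the maximum accuracy (the definition of the
    optimal threshold used in this evaluation)."""
    results = [(th, detection_metrics(scores, labels, th)["Acc"]) for th in grid]
    return max(results, key=lambda x: x[1])
```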
Results The data employed to demonstrate anomaly detection consisted of weekly dengue incidence in Bangkok, the location with the highest annual incidence in the country. Results, averaged across study areas and detection thresholds, are presented in Table 1 , detailing estimates of sensitivity, specificity, accuracy, and their corresponding predictive values for anomaly detection. Without delay correction, the accuracy of the detection methods under both likelihood assumptions ranged from 0.4791 using PIT to 0.6092 using WAIC. DIC and EXC performed best under the Generalized Poisson model, while WAIC and EXC had the best outcome with the Poisson model. The highest accuracy under reporting delays was achieved by the Poisson model with WAIC. With nowcasting correcting for reporting lags, EXC performed best across the evaluation metrics, with accuracies of 0.7221 and 0.6916 under the Poisson and Generalized Poisson models, respectively. The accuracies with corrected delays using the proposed spatiotemporal nowcasting technique were improved by about 22.7% and 17.52% under the Poisson and Generalized Poisson assumptions, respectively. We further examined the optimal threshold and the effect of different window sizes in order to apply the cluster detection in real situations. The focus was limited to the test characteristics of EXC since this detection method had the best performance across the evaluation measures and likelihood assumptions. The best threshold was defined as the cut-off value with the maximum accuracy. Table 2 shows the cut-off points with the highest accuracy using different computing window lengths. These comparisons were computed on a Dell computer with a 64-bit Windows system, 8 GB RAM and an Intel i5-3570S CPU @ 3.10 GHz. The optimal threshold varied in a range of 0.95–0.99 for the Poisson and 0.93–0.99 for the Generalized Poisson models, with a maximum accuracy of approximately 72%. The computing times ranged from 0.5376 min per calculation with a 5-week window size to 48.6852 min per calculation with a 30-week window size under the Poisson model; however, the accuracy increased by less than 1%. On the other hand, the Generalized Poisson model required slightly more computing time, 0.5487 min for the 5-week and 53.2669 min for the 30-week window sizes. The improvement in accuracy was also similarly small, at less than 1%. The posterior summaries of the overdispersion parameters with their corresponding credible intervals (CrI) for both delay correction and anomaly detection indicated mild overdispersion in the observed data, with posterior means of 0.0861–0.0937 (95% CrI: 0.041–0.167) and 0.1466–0.1636 (95% CrI: 0.041–0.384). These implied that the Poisson likelihood assumption with space-time random effects might be adequate to capture the case variability in our data set. Figure 4 compares the dengue incidence, standardized incidence and exceedance probability at week 102, during the high season in the year 2011. Note that the results for other periods (weeks 96–104) are provided in supplementary document S2 . The complete (true) incidence depicted in the left column showed a possible disease cluster in the southwest of Bangkok and hot spots in the center. Exceedance probabilities also revealed the same pattern of high-risk areas using complete and nowcasted data. In contrast, those clusters and hot spots did not appear in the data with reporting delays.
Reporting lags matter greatly for infectious disease surveillance because the infection can continue to spread during the lag period, whereas anomaly detection combined with nowcasting accurately recovered and detected the potential outbreaks in the case study. The developed methodology hence demonstrated an advantage in properly revealing the true disease pattern for real-time public health intervention planning.
Discussion Efficient surveillance is paramount for early infectious disease outbreak detection, particularly for diseases like dengue with no effective vaccines or specific treatments. As vector control remains the primary intervention, timely outbreak detection is crucial. In this study, we devised an integrated approach to assess risks while addressing reporting lags, comparing anomaly detection measures in a dengue surveillance case study in Thailand. Unlike prior efforts that often focus solely on delay correction, we extended our investigation to include and compare cluster detection methods, augmenting the decision-making process for disease outbreak control. Spatiotemporal cluster detection typically necessitates complex models, especially when modeling specific localized space-time behaviors. Real-time infectious disease surveillance requires effective clustering methods capable of promptly detecting deviations from normal background variation. To accommodate space-time reporting variations, we modeled dengue case counts using a count likelihood with a spatiotemporal latent random-effect structure. While a Poisson distribution is a common choice, our investigation also included a Generalized Poisson assumption, offering flexibility for a wider range of data compared to the negative binomial [ 21 ]. The dispersion parameter, indicative of data variability, demonstrated mild dispersion across scenarios and window sizes. The use of a Generalized Poisson model, known for its flexibility in handling dispersion, proved effective in capturing complex multidimensional correlations, though at the expense of increased computing time. Considering the real-time surveillance context, the feasibility of model computation should be a key consideration. Experiments with different moving window lengths revealed marginal improvements in accuracy, suggesting that small sliding windows can yield reasonably good performance, capturing data variation adequately within the model specification. A number of measures of adverse risks were compared and investigated. The exceedance probability performed best, followed by the information criteria and leave-one-out cross-validation. PIT had the lowest overall performance but higher specificity than the information criteria. The information criteria and CPO appeared to have high sensitivity but low PPV. This may imply that PIT yielded conservative detection, while CPO and the information criteria may produce more false positives. EXC appeared to have the highest specificity and PPV without lag nowcasting and had the best values across the evaluation metrics with correction for delays. Although WAIC has lately been suggested as an alternative to DIC, which has a long historical development in Bayesian statistics, in our case study both WAIC and DIC had very similar results and performance in the various assessment measures. The choice of the most appropriate measure should consider the specific requirements and objectives of the surveillance system. Timeliness is a critical aspect of real-time surveillance. One of the key advantages of our proposed framework is its minimal data requirement, as it relies solely on past surveillance data on incidence reporting using a sliding window partition. This flexibility allows the system to be readily adaptable to various disease systems, particularly in cases where other variables such as climatic or clinical confounders are not available in real-time for inclusion in the model.
Nevertheless, our unified approach has been designed to accommodate the inclusion of such covariates through the link function, providing a comprehensive framework for capturing additional factors. Despite its advancements, it is important to acknowledge several limitations in this study. Firstly, the developed methodology does not explicitly include prediction, which is a significant aspect of disease surveillance and planning. However, to support real-time disease control activities, our development effectively complements existing disease prediction efforts. The incorporation of lag-corrected nowcasting into forecasting can enhance the effectiveness of surveillance in disease control activities. Another limitation is the exclusive testing of the developed platform using dengue data from Thailand. Generalizing its applicability to other diseases and settings may require further validation. Nevertheless, the developed platform demonstrates potential for a broad spectrum of applications, extending beyond dengue clustering scenarios to address challenges in infectious or emerging disease surveillance. The versatility and robustness of our approach render it applicable to various disease surveillance problems, providing public health practitioners with an effective tool for enhancing real-time monitoring, control, and prediction of infectious diseases.
Conclusions Effective disease surveillance systems are crucial for timely detection and control of health threats. However, reporting lags in infectious disease surveillance systems can hinder the prompt implementation of outbreak control measures. Existing methods for estimating disease incidence often overlook anomaly detection in the presence of reporting delays. In this study, we introduced an integrated approach that addresses this challenge by enabling accurate real-time cluster detection, even in the presence of reporting delays. While further research and collaboration are necessary to enhance the methodology and its development, our approach offers flexibility by relaxing disease-specific assumptions, making it adaptable to various disease settings. By incorporating anomaly detection, our method can effectively identify disease clusters in real-time, contributing to timely initiation of disease control activities. Furthermore, the efforts made in this study can complement existing surveillance systems and forecasting methods. By integrating our approach into the existing infrastructure, we can enhance the overall surveillance effectiveness and facilitate the timely implementation of disease control measures.
Background Dengue infection ranges from asymptomatic to severe and life-threatening, with no specific treatment available. Vector control is crucial for interrupting its transmission cycle. Accurate estimation of outbreak timing and location is essential for efficient resource allocation. Timely and reliable notification systems are necessary to monitor dengue incidence, including spatial and temporal distributions, to detect outbreaks promptly and implement effective control measures. Methods We proposed an integrated two-step methodology for real-time spatiotemporal cluster detection, accounting for reporting delays. In the first step, we employed space-time nowcasting modeling to compensate for lags in the reporting system. Subsequently, anomaly detection methods were applied to assess adverse risks. To illustrate the effectiveness of these detection methods, we conducted a case study using weekly dengue surveillance data from Thailand. Results The developed methodology demonstrated robust surveillance effectiveness. By combining space-time nowcasting modeling and anomaly detection, we achieved enhanced detection capabilities, accounting for reporting delays and identifying clusters of elevated risk in real-time. The case study in Thailand showcased the practical application of our methodology, enabling timely initiation of disease control activities. Conclusion Our integrated two-step methodology provides a valuable approach for real-time spatiotemporal cluster detection in dengue surveillance. By addressing reporting delays and incorporating anomaly detection, it complements existing surveillance systems and forecasting efforts. Implementing this methodology can facilitate the timely initiation of disease control activities, contributing to more effective prevention and control strategies for dengue in Thailand and potentially other regions facing similar challenges. Supplementary Information The online version contains supplementary material available at 10.1186/s12874-024-02141-5. Keywords Open access funding provided by Mahidol University
Electronic supplementary material Below is the link to the electronic supplementary material.
Acknowledgements We would like to thank Nattwut Ekapirat and Naraporn Khuanyoung for assistance with the epidemiological data. Author contributions All authors contributed to the conceptual design of the study. CR designed and developed the statistical methodology, completed analyses, and drafted the manuscript. DA assisted with the epidemiological interpretation and data. RJM and DA were responsible for clinical revision and improvements of the manuscript. All authors have read and approved the final manuscript. Funding This research was funded in part by the Faculty of Tropical Medicine, Mahidol University, and the Wellcome Trust [Grant number 220211]. For the purpose of open access, the authors have applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission. The funding body had no role in the design or analysis of the study, interpretation of results, or writing of the manuscript. Open access funding provided by Mahidol University Data availability The data that support the findings of this study were obtained from the Thai Bureau of Epidemiology, Ministry of Public Health, but restrictions apply to the availability of these data, which were used with permission for the current study, and are therefore not publicly available. For data requests related to this study, please contact the corresponding author, Dr. Chawarat Rotejanaprasert, at [email protected]. Data may be available from the authors upon a reasonable request and with permission of the Thai Bureau of Epidemiology. Declarations Ethics approval and consent to participate Ethics Committee of the Faculty of Tropical Medicine, Mahidol University waived for informed consent of participants. This study was approved by the Ethics Committee of the Faculty of Tropical Medicine, Mahidol University. The submission number was TMEC 22–054 and the number of ethical approval certificate was MUTM 2022-057-01. All methods were carried out in accordance with relevant guidelines and regulations. Consent to publish Not applicable. Competing interests The authors declare no competing interests. Abbreviations Deviance Information Criterion Exceedance probability Conditional predictive ordinate Probability integral transform Watanabe-Akaike information criterion Credible interval True positive True negative False positives False negatives Sensitivity Specificity Positive predictive value Negative predictive value Accuracy
CC BY
no
2024-01-15 23:43:47
BMC Med Res Methodol. 2024 Jan 13; 24:10
oa_package/73/06/PMC10787994.tar.gz
PMC10787995
38218801
Introduction Prominent facial deformity, a prevalent malocclusion in orthodontic clinical practice, significantly impacts facial aesthetics. To enhance the lateral appearance in cases of dental or mild bony protrusions, optimal results can be achieved by extracting the first premolar and utilizing a fixed appliance or clear aligner for maximizing internal retraction of the anterior teeth. Fixed orthodontic maxillary micro-implant anchorage structures provide effective and safe treatment for cases of protrusion [ 1 ]. In contrast, achieving precise control over the three-dimensional movement of teeth using clear aligners necessitates a combination of mini-screws, power ridges, overtreatment, or power arms to optimize anterior torque control and ensure posterior anchorage during anterior retraction [ 2 – 9 ]. However, both clear aligners and fixed orthotics currently possess several limitations including potential trauma associated with micro-implant, aesthetic concerns, possible increase in unnecessary reciprocating motion, and treatment uncertainty [ 2 , 7 , 10 – 13 ]. To enhance the aesthetic appeal, minimize invasiveness, and optimize efficiency in retracting anterior teeth during clear aligner therapy, we have developed two novel design models for clear aligner retraction. The first modification involves a palatal plate-shaped clear aligner, which can now be directly printed using 3D-printing technology. This advancement improves the design parameters of aligners, including configuration, strength, elasticity, and thickness [ 14 – 17 ], thereby enhancing their therapeutic efficacy. The second one is a Lingual Retractor that utilizes advanced 3D-printing technology to create a compound structure specifically designed for seamless integration with clear aligners. Recently, our research group has developed patient-specific attachments utilizing 3D printing technology that have been validated through finite element analysis to exhibit superior anterior tooth anchorage in comparison to alternative attachments during maxillary molar distalization [ 18 ]. Several studies have documented that successful treatment of patients requiring anterior retraction can be achieved by combining a Double J retractor with a fixed appliance [ 19 , 20 ]. Additionally, the bracket re-bonding procedure, which is a complex operation, may also be necessary. Moreover, the utilization of a palatal micro-implant remains indispensable. The incorporation of clear aligners in conjunction with tongue retractors is expected to enhance the convenience and efficacy of anterior tooth retraction. Orthodontic clear aligners can be fabricated from either traditional thermoplastic materials or light-cured shape memory resins. The development of innovative materials has played a pivotal role in enhancing the effectiveness of clear aligners. Currently, there is an abundance of research available on clear aligner materials, with more comprehensive investigations accessible in scholarly articles authored by Ning and Naohisa [ 21 – 24 ]. It is worth noting that the meticulous design of clear aligner morphology and its composite force system structure holds paramount importance. For instance, in the case of anterior internal retraction, a power ridge was incorporated into the clear aligner design to effectively control maxillary anterior teeth torque [ 25 ]. 
However, it has been observed that the utilization of a power ridge frequently results in dislocation of clear aligners, subsequently exerting an impact on orthodontic outcomes [ 26 ]. Additionally, micro-implant anchorage composite force systems have been explored for anterior teeth retraction; however, many patients are reluctant to undergo this invasive treatment modality [ 7 ]. Despite these challenges, there remains a lack of effective noninvasive and aesthetic anterior teeth retraction using clear aligners. Recently, we employed simulation methodology to investigate the biomechanical characteristics and retraction effects of our innovative designs for two non-invasive and aesthetically pleasing models using clear aligners. Nonetheless, comprehensive comparative and biomechanical analyses regarding the clinical efficacy of anterior teeth retractions versus fixed appliances are still insufficient. Therefore, the purpose of this study was to compare and evaluate the differences among various design of clear aligners, as well as to assess the disparities between the clear aligner model and the fixed appliance. The study encompasses five distinct clear aligner retraction models and one fixed appliance retraction model (Model C0 Control, Model C1 Posterior Micro-implant, Model C2 Anterior Micro-implant, Model C3 Palatal Plate and Model C4 Lingual Retractor, and Model F0 Fixed Appliance). In this study, employing numerical modeling, we conducted an analysis and comparison of the therapeutic efficacy of various orthodontic appliances as well as the biomechanical response of dental and periodontal ligament structures in orthodontics.
Materials and methods Acquisition of medical image data A patient with permanent dentition and maxillary bone protrusion requiring extraction of the first premolar was selected from the Department of Orthodontics at Affiliated Stomatological Hospital of Chongqing Medical University. The present study was granted ethical approval by the Stomatological Hospital of Chongqing Medical University (2023) 056. Cone-beam computed tomography (CBCT) with specific parameters (120 kVp; 5 mA; voxel size of 0.4 mm; Kava, Biberach, Germany) and 3D intraoral scanning were employed to acquire DICOM (Digital Imaging and Communications in Medicine) data. Inclusion criteria for the study were as follows: (a) Complete development of the jaw and presence of all teeth, excluding third molars; (b) Adult patients with maxillary protrusion, ANB>4°, U1-SN < 105°, and extraction of the maxillary first premolar for orthodontic treatment [ 27 ]; (c) Healthy dentition without extensive fillings, no history of root canal treatment, and absence of restoration crowns or dental implants; (d) Periodontal and temporomandibular joints exhibited normal conditions; (e) Complete cone-beam computed tomography (CBCT) and intraoral scan data were available. Exclusion criteria: (a) The clinical crown height on the palatal side of the maxillary posterior teeth is insufficient, measuring less than 4 mm; (b) The root length of the maxillary posterior teeth is inadequate, with a root to crown ratio (R/C) ≤ 1 [ 28 ]; (c) Patients with a history of maxillary surgery, trauma, or tumor are included; (d) Developmental deformities affecting the integrity and structure of the jaw, such as severe asymmetry and cleft palate in the maxilla. The construction of orthodontic model The DICOM data was imported into the Mimics system (Materialize, Belgium). The threshold range was adjusted based on grayscale differences to segment preliminary 3D models of the maxilla and dentition. Geomagic Studio software (Geomagic, USA) was used for surface fine-tuning and smoothing, followed by generating CAD models through autosurfacing. By utilizing the Boolean operation and offset functions in 3-matic software, we established a PDL with an average thickness of 0.2 mm and cortical bone of 2.0 mm, considering cancellous bone as residual material. The extraction dentition model was created by removing the first premolars and their PDL. We obtained a model of anterior tooth retraction of 0.2 mm using six retraction approaches (five clear aligner approaches and one fixed appliance approach), as shown in Fig. 1 [ 29 ]. The clear aligner was developed by applying an external offset on the post-retraction model with a thickness of 0.75 mm [ 30 ]. One of the clear aligner retraction models combined a clear aligner with a 3D printed lingual retraction hook and a 3D printed palatal plate. In this simulation, the anterior teeth were considered as a retraction unit. The lingual retractor and palatal plate were bonded to the tooth surface through the base plate [ 31 ]. The thickness of both the lingual retractor and the palatal plate was 0.5 mm (Supplementary Fig. 1 ). The center of resistance (CR) is considered the fundamental reference point for controlled tooth movement, and the height of the lingual retraction hook was determined based on the center of resistance (CR) of the retraction unit. The retraction unit models were assigned the property of rigidity. The mesial-distal truncated surfaces of the maxilla were firmly constrained (Fig. 2 , A). 
In order to ascertain the vertical position of the center of resistance (CR) for the retraction unit, a 100 g horizontal force was exerted in close proximity to the median sagittal plane and parallel to the occlusal plane, inducing lingual retraction (Fig. 2 , A). In addition, the point of force application (level 0) was precisely positioned on the alveolar ridge roof of the posterior teeth, at a distance of 7.69 mm from the incisal edge (Fig. 2 , B). Commencing from level 0, the point of force application was incrementally advanced towards the root, perpendicular to the occlusal plane, at intervals of 1 mm up to level 7, which corresponded closely to the apex of the anterior teeth during anterior retraction. All components were imported into finite element (FE) software for calculations. The difference between the displacement of the root tip and the displacement of the crown was defined as the crown-root differential displacement. The center of resistance (CR) level is defined as the point where the differential displacements of the anchorage units are close to 0. After step-by-step subdivision of the loading calculation, we determined that the vertical position of the center of resistance (CR) is at 4.85 mm. Our clear aligner force system consists of a lingual retraction hook and a clear aligner, which shifts the center of resistance (CR) position towards the root due to the force exerted on the crown section. We selected a position 6 mm above the CR as the height of the lingual retraction hook (i.e., 18.54 mm above the occlusal plane) (Supplementary file 1, Supplementary Fig. 2 ), which was close to the hard palate (Fig. 3 , B). A posterior traction site was designed using a 3D printed device, uniting the six posterior teeth for anchorage (Fig. 3 , A). Additionally, the traction points can be customized based on clinical needs. The construction of the five types of clear aligner retraction models (Model C0 Control, Model C1 Posterior Micro-implant, Model C2 Anterior Micro-implant, Model C3 Palatal Plate and Model C4 Lingual Retractor) and one fixed retraction model (Model F0 Fixed Appliance) is illustrated in Fig. 3 . Model C0 served as the control group for the clear aligner models, consisting solely of a clear aligner. In Model C1, a micro-implant was positioned between the second premolar and first molar, 5 mm above the alveolar ridge's highest point, at an angle of 60° to the maxillary occlusal plane, with an intraosseous length of 8 mm. A force of 150 g was applied [ 32 , 33 ]. In Model C2, a micro-implant was positioned between the central incisors to apply a force of 150 g directed towards the lingual side through a precision cut [ 7 ]. Model C3 incorporated a palatal lateral plate that seamlessly integrated with the clear aligner in terms of thickness and material and could be obtained through cutting. Additionally, Model C4 combined a clear aligner with a 3D printed lingual retraction device. The 3D printed device was generated using Mimics software. The lingual retraction hook was positioned 6 mm above the center of resistance (CR) (18.54 mm above the occlusal plane) and was designed and modeled using the computer-aided design software SolidWorks (Dassault, France) (Fig. 3 , A). The buccal surfaces of the canines featured vertical rectangular attachments measuring 3 × 2 × 1 mm, while horizontal rectangular attachments of the same dimensions were designed on the buccal surfaces of both the second premolar and the first molar in all clear aligner models.
Model F0, a commonly used clinical retraction system, comprised a relatively rigid rectangular archwire (0.018 × 0.025 inch), a posterior micro-implant, and an anterior retraction hook with a height of 7 mm [ 34 ]. A retraction force of 150 g was applied [ 35 , 36 ]. Material properties and meshing The models were assembled and imported into ABAQUS software (SIMULIA, France). Each component was assumed to be continuous, homogeneous, and isotropic, with a linear elastic constitutive model. The material properties of the components, obtained from previous studies, are summarized in Table 1 [ 29 , 37 – 45 ]. The meshing of the three-dimensional models was performed using the C3D10M element type, a modified quadratic tetrahedral element that is particularly suitable for contact calculations. The approximate numbers of nodes and elements are presented in Table 1. Boundary constraints and contact conditions The base of the maxilla was constrained to prevent any rotation or displacement. The contact relationships between the cortical and cancellous bone, alveolar bone and periodontal ligament (PDL), teeth and PDL, attachment and corresponding teeth, micro-implant and jaws, 3D printed lingual retractor and corresponding teeth, archwire and anterior teeth, as well as power arm and archwire, were defined as bonded connections. The outer surface of the crown and the inner surface of the clear aligner, as well as the attachment's outer surface and the clear aligner's inner surface, were considered non-linear face-to-face contacts. The tangential behavior between these contact surfaces was set to frictional with a coefficient of 0.2 [ 38 , 46 ]. The coefficient of friction between the bracket slots and the archwire was assumed to be 0.2 [ 47 – 49 ]. The y-axis of the global coordinate system represents the vertical direction, with positive values defined as perpendicular to the occlusal plane towards the root. A local coordinate system was established for each tooth due to variations in the mesiodistal and buccolingual directions. The x-axis represents the mesiodistal direction, with positive values defined for the distal direction. The z-axis represents the buccopalatal direction, with positive values defined for the palatal direction. Reference points were selected at the incisal midpoint and root apex of the incisors, the cusp tip and root apex of the canines, the buccal and lingual cusp tips of the second premolar, the mesial buccal, distal buccal, mesial lingual, and distal lingual cusp tips of the first molar, and the mesial buccal, distal buccal, and lingual cusp tips of the second molar. Calculation and analysis Due to the bilateral symmetry of the model employed in this study, the right maxillary teeth and their periodontal ligaments (PDL) were selected for analysis. Nonlinear iterative calculations were conducted using ABAQUS software (SIMULIA, France), yielding results encompassing the displacement of the teeth and aligners, as well as the von Mises equivalent stress experienced by both the PDL and the aligners.
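To make the displacement sign conventions above concrete, the short NumPy sketch below projects one global displacement vector onto a tooth's local axes (mesiodistal, vertical, buccopalatal); the axis directions and the displacement values are illustrative placeholders, not values taken from this study.

```python
import numpy as np

# Hypothetical unit axes of one tooth's local frame (placeholders, not study data):
# x: mesiodistal (distal positive), y: vertical (towards the root positive), z: buccopalatal (palatal positive)
x_axis = np.array([0.94, 0.0, -0.34])
x_axis /= np.linalg.norm(x_axis)
y_axis = np.array([0.0, 1.0, 0.0])
z_axis = np.cross(x_axis, y_axis)          # right-handed third axis
z_axis /= np.linalg.norm(z_axis)

# Global displacement of a reference node (e.g., a cusp tip), in mm (placeholder FE output).
u_global = np.array([0.012, -0.031, 0.045])

# Local components are simple projections onto the unit axes.
u_local = np.array([u_global @ x_axis, u_global @ y_axis, u_global @ z_axis])
print("mesiodistal, vertical, buccopalatal displacement (mm):", u_local)
```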
Results Determining the center of resistance The displacement distribution and crown-root displacement differences of the six anterior teeth are illustrated in Fig. 2. As the point of force application approached the vertical position of the center of resistance (CR), the sagittal crown-root displacement difference tended towards zero. Specifically, at level 4, the central incisor, lateral incisor, and canine exhibited positive crown-root displacement differences of 9.65E-05 mm, 4.96E-05 mm, and 1.20E-05 mm, respectively. However, at level 5, these values became negative, with crown-root displacement differences of -1.49E-05 mm for the central incisor, -1.24E-05 mm for the lateral incisor, and -1.66E-05 mm for the canine. The force level axis from level 4.0 to 5.0 was then subdivided at intervals of 0.2 mm for the various points of force application, as depicted in Fig. 2, D. At level 4.8, the crown-root displacement differences of the central incisor, lateral incisor, and canine were positive: 7.43E-06 mm, 7.79E-06 mm, and 1.07E-05 mm, respectively. At level 5.0, the crown-root displacement differences of these teeth were negative, with values consistent with those previously described. Subsequently, the force level axis from level 4.8 to 5 was subdivided at increments of 0.05 mm for the different points of force application shown in Fig. 2, E. At level 4.85 (Fig. 2, E), the crown-root displacement differences of the central incisor, lateral incisor, and canine approached zero, measuring approximately 1.86E-06 mm, 2.75E-06 mm, and 3.89E-06 mm, respectively. Therefore, we considered this position as representing the vertical height of the center of resistance (CR). Comparison of the maximum displacements of the central incisor, lateral incisor, and canine in sagittal dimension The sagittal movement patterns of the central incisor, lateral incisor, and canine were found to be similar under the loading conditions of all five clear aligner models, as depicted in Fig. 4. Notably, these movements exhibited a consistent inclination of the crown towards the lingual side and the root towards the labial side. However, in the fixed appliance model, both the crown and root of the central incisor exhibited buccal movement. The crown of the lateral incisor had buccal movement, while the root moved lingually. Additionally, the canines displayed an opposite trend to that of the lateral incisor. Furthermore, it is worth noting that tooth displacement was significantly lower in the fixed appliance model compared to the clear aligner models. Table 2 demonstrates that Model C3 had the smallest crown-root displacement difference for the central incisor, at 6.30E-02 mm. For Model C4, the smallest differences were observed for both the lateral incisors and canines, at 7.47E-02 mm and 6.31E-02 mm, respectively. In contrast, in the fixed appliance model, these differences were measured at 1.34E-04 mm for the central incisors, 1.43E-02 mm for the lateral incisors, and 5.55E-03 mm for the canines (Fig. 4). The sagittal retraction of the central incisors, lateral incisors, and canines under the different retraction models was visually demonstrated through a series of figures depicting their initial positions as well as their post-retraction positions using both clear aligners and fixed appliances (Fig. 5). For a better understanding of the displacement of the teeth, these movements were magnified 50 times.
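The coarse-to-fine subdivision used above to locate the CR can be summarized as a bracketing search on the sign of the crown-root displacement difference. The sketch below is only schematic: crown_root_diff(level) stands in for a full finite element run at one force level, the step sequence mirrors the 1 mm, 0.2 mm, and 0.05 mm refinements described above, and the tolerance is an arbitrary illustrative choice. With the values reported above, such a search stops at level 4.85 with a residual of roughly 2E-06 mm.

```python
def locate_cr_level(crown_root_diff, steps=(1.0, 0.2, 0.05), lo=0.0, hi=7.0, tol=5e-6):
    """Coarse-to-fine bracketing of the force level at which the crown-root
    displacement difference changes sign (i.e., is close to 0).
    crown_root_diff(level) is a placeholder for one finite element evaluation
    returning the sagittal crown-root displacement difference in mm."""
    for step in steps:                        # 1.0 mm, then 0.2 mm, then 0.05 mm
        level = lo
        while level + step <= hi + 1e-9:
            d_low = crown_root_diff(level)
            d_high = crown_root_diff(level + step)
            if d_low >= 0.0 > d_high:         # sign change: the CR lies in this interval
                lo, hi = level, level + step
                break
            level += step
        if abs(crown_root_diff(lo)) < tol:    # e.g., about 1.9E-06 mm at level 4.85
            break
    return lo
```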
Comparison of the maximum displacements of the central incisor, lateral incisor, and canine in vertical dimension As depicted in Fig. 6, displacement tendencies were compared for the central incisor, lateral incisor, and canine in terms of crown and root displacement along the vertical dimension. As shown in Table 3, among the clear aligner models, Model C3 exhibited the smallest crown displacement for the central incisor (-3.08E-02 mm). Similarly, Model C4 showed the smallest displacements for both the lateral incisor (-3.65E-02 mm) and canine (-2.27E-02 mm). In contrast, within the fixed appliance model, crown displacements were measured as 5.26E-04 mm for the central incisors, 9.77E-03 mm for the lateral incisors, and -4.69E-03 mm for the canines. Vertically, all five clear aligner models demonstrated a tendency towards extrusion of the anterior teeth, whereas in the fixed appliance model there was an inclination towards intrusion of the central and lateral incisors alongside extrusion of the canines. Comparison of the maximum displacements of the second premolar, first molar, and second molar in sagittal and vertical dimension As shown in Fig. 7, sagittally, the movement trend of the posterior teeth was similar in the five clear aligner models, all showing a tipping trend with the crown moving towards the mesial and the root towards the distal. In the fixed appliance model, the crowns of the posterior teeth showed a tendency to move distally in the sagittal direction (Fig. 7, A). As shown in Table 4, in the clear aligner models, the smallest sagittal crown displacements of the second premolar and second molar were observed in Model C3, at -2.72E-02 mm and -1.72E-02 mm, respectively. The smallest crown displacement of the first molar was observed in Model C4, at -2.21E-02 mm. The crown displacements of the second premolar, first molar, and second molar in the fixed appliance model were 7.24E-04 mm, 1.05E-03 mm, and 1.78E-03 mm, respectively. Vertically, the movement trend of the posterior teeth was similar in the clear aligner models. The second premolar showed a tendency to intrude, and the first molar also showed intrusion except in Model C4. The second molar had a tendency to extrude. In the fixed appliance model, the second premolar showed a tendency to intrude, while the first molar and second molar showed a tendency to extrude (Fig. 7, C). In the clear aligner models, the smallest vertical crown displacements of the second premolar and second molar were observed in Model C3, at 6.98E-03 mm and -1.50E-03 mm, respectively. The crown displacements of the second premolar, first molar, and second molar in the fixed appliance model were 1.23E-03 mm, -5.82E-04 mm, and -6.25E-05 mm, respectively (Table 5). Comparison of the maximum displacements and von Mises stress in the clear aligners and fixed appliance The maximum von Mises stress of the clear aligner in the five clear aligner models was 693.733 MPa, 772.713 MPa, 754.77 MPa, 717.365 MPa, and 784.445 MPa, respectively. The maximum von Mises stress of the fixed appliance was 68668.1 MPa (Fig. 8, B). The stress distribution in the clear aligner models was similar, with stresses concentrated at the aligner regions corresponding to the canine, first premolar, and second premolar, especially at the junctions between adjacent teeth. The stress of the fixed appliance was located primarily on the archwire and the brackets corresponding to the first molar (Fig. 8, A).
The maximum displacements of the clear aligners in the five clear aligner models were 0.304961 mm, 0.283423 mm, 0.295801 mm, 0.298634 mm, and 0.292909 mm, respectively. The maximum displacement of the fixed appliance was 0.022658 mm. The displacement trends of the clear aligners in the clear aligner models were similar, with a tendency to move buccally and towards the occlusal direction. In the fixed appliance model, the archwire segment corresponding to the lateral incisors tended to deform towards the root and the lingual side. Moreover, the archwire segment corresponding to the second premolar showed a trend of buccal displacement. Since the archwire at the position of the first molar was constrained by the buccal tube, its deformation and extrusion were obvious under the retraction force. Comparison of von Mises stress in the PDL of the central incisor, lateral incisor, canine, second premolar, first molar, and second molar As depicted in Fig. 9, the average von Mises stress and the stress distribution of the PDL in the six retraction models were compared. The stress magnitude and stress distribution on the PDL were similar in the five clear aligner models. Among the five clear aligner models, the lowest PDL stresses of the central incisor, lateral incisor, canine, second premolar, and second molar appeared in Model C3, at 0.025871 MPa, 0.030915 MPa, 0.041213 MPa, 0.021395 MPa, and 0.011692 MPa, respectively. Model C4 had the lowest PDL stress in the first molar, at 0.013860 MPa. The stresses on the PDL of the central incisor, lateral incisor, canine, second premolar, first molar, and second molar in the fixed appliance retraction model were 0.00256 MPa, 0.012276 MPa, 0.006295 MPa, 0.003738 MPa, 0.001902 MPa, and 0.001394 MPa, respectively. The PDL stress distribution was clearly different between the clear aligner models and the fixed appliance model. In the clear aligner models, the stress was mainly located in the anterior teeth and the second premolar, and the PDL stress of the first molar and the second molar decreased significantly. In the fixed appliance model, the stress was mainly concentrated on the lateral incisor, canine, and second premolar. In the clear aligner models, the stress on the PDL of the central incisors, lateral incisors, and canines was located on the buccal and lingual sides and concentrated mainly in the cervical region. The stress on the PDL of the second premolar was mainly distributed in the cervical region of the mesial and distal root surfaces. The PDL stress on the first and second molars was concentrated in the cervical region of the mesial and distal root surfaces. In the fixed appliance model, the PDL stress of the lateral incisor was mainly located on the buccal side, with concentrations at the apical region and the lingual cervical region. For the canine, the PDL stress was mainly located on the buccal side and was distributed more uniformly. The PDL stress of the second premolar was mainly concentrated on the buccal and lingual cervical regions.
Discussion In this study, we conducted numerical simulations to investigate the process of anterior retraction in different orthodontic designs and compared the biomechanical differences among various invisible orthodontic devices during anterior retraction. Additionally, we compared the clear aligner retraction models with the fixed appliance retraction model. The results showed minimal biomechanical disparities among the different clear aligner models. The additional force systems did not alter the trend of tooth movement in the clear aligner models but did modulate the displacement of both the anterior and posterior teeth during retraction. Model C3 demonstrated superior torque control and provided enhanced protection for the posterior anchorage teeth compared to the other four clear aligner models. The clear aligner and fixed appliance exhibited distinct biomechanical properties, with the latter showing superior anterior torque control and posterior anchorage tooth protection compared to the former. The clear aligner models consistently demonstrated lingual tipping and extrusion of the anterior teeth, as well as a similar movement pattern in the posterior teeth with their crowns tilting towards the mesial side, consistent with the findings reported by Wang et al. [ 50 , 51 ]. Retraction of the anterior teeth using clear aligners leads to a roller-coaster effect of tooth movement [ 5 , 50 , 52 , 53 ]. The additional force systems in the study did not change the observed trend of tooth movement in the model, but they did introduce some variation in the displacement magnitude of both the anterior and posterior teeth. In Liu et al.'s study, the utilization of anterior mini-screws and elastics demonstrated their efficacy in achieving incisor intrusion and palatal root torquing [ 7 ]. Consistent with their findings, our experimental group Model C2 also exhibited superior torque and vertical control of the anterior teeth when compared to Models C0 and C1. However, the observed trend was not as pronounced, potentially due to variations in force magnitude and application method. Liu's study revealed that longer anterior teeth experienced less tipping [ 53 ], which aligns with the results obtained from our control group Model C0. Furthermore, our experimental group Model C3 deviated from this trend by showing a smaller displacement tendency for the central incisors, which have shorter roots, compared to the canines. Additionally, all anterior teeth displayed a decreasing sagittal tipping displacement trend. The results indicate that Model C3 exhibited the most precise torque and vertical control for the central incisors, as evidenced by its minimal crown-root displacement difference and vertical displacement. This phenomenon can be attributed to the stabilizing and cushioning effect of the palatal plate structure during the retraction process. The displacement of the posterior teeth in the sagittal and vertical directions was effectively minimized, indicating good protection of the posterior anchorage. This was related to the role of the palatal plate in combining with the posterior teeth to form a stronger anchorage unit. Model C4 had the best torque and vertical control for the lateral incisor and canine, owing to the role of the lingual retractor. The initial displacement tendency of the teeth in the fixed appliance model was significantly different from that in the clear aligners.
The fixed appliance had the most pronounced effect on the lateral incisor, causing labial tipping with intrusion of the lateral incisor. The reason for this was the proximity of the traction point to the lateral incisors and the fact that the lateral incisors generally have a relatively smaller periodontium than the other anterior teeth [ 54 , 55 ]. Moreover, the posterior teeth showed a tendency to move distally, due to the backward frictional force exerted by the archwire on the posterior teeth when closing the gap. On the other hand, the displacement magnitude of the teeth in the fixed appliance model was significantly less than in the clear aligner models. This was consistent with previous studies showing that clear aligners are not as effective as fixed appliances in controlling tooth torque and protecting posterior anchorage [ 56 , 57 ]. We explored the reasons for this by comparing the stress and displacement of the clear aligners with those of the fixed appliance. From Fig. 7, A, it can be seen that the clear aligners had greater stress at the junctions of adjacent teeth and a tendency to come off in the occlusal direction, which was in agreement with the findings of Meng et al. [ 29 ]. However, the maximum von Mises stress of the clear aligner was still significantly less than that of the fixed appliance. When fixed appliances were subjected to forces, most of the forces were carried by the fixed appliances themselves, so the forces transmitted to the teeth were significantly reduced. However, when clear aligners were deformed, the force acted directly on the tooth surface and there was no force decay process. Moreover, Fig. 7, B showed that the deformation of the clear aligners was significantly greater than that of the fixed appliance, about fifteen times greater. The greater the deformation of the clear aligner, the greater the force applied to the tooth. In agreement with Danilee K. B et al., the clear aligner was not stiff enough, compared to the fixed appliance, to control the tipping tendency, which can lead to a significant roller-coaster effect [ 58 ]. The clear aligner approach and the fixed appliance approach still exhibit a disparity; nevertheless, this study offered a developmental direction and established a theoretical foundation for future non-invasive, aesthetically pleasing, comfortable, and efficient modalities of clear aligner treatment. Improvements in materials, design refinements, and 3D printing technology have made it possible to create clear aligners with better orthodontic capabilities by improving design parameters such as aligner configuration, strength, elasticity, or thickness [ 16 , 17 , 59 ]. Root resorption can result from excessive stress concentration, and it has been reported that 91% of teeth undergo some degree of root resorption after orthodontic treatment [ 60 ]. The stress distribution of the PDL was consistent with the trend of tooth movement [ 30 ]. Since the five clear aligner models had the same trend of movement, the stress distribution in the PDL was also roughly the same. For the clear aligner models, the stress of the central incisors, lateral incisors, and canines was mainly concentrated on the cervical regions of the buccal and lingual root surfaces and the apical regions, which was consistent with the findings of Liu [ 7 ]. In addition, the stress of the second premolar, first molar, and second molar was mainly concentrated on the cervical regions of the mesial and distal root surfaces.
The root surfaces of the central and lateral incisors are smaller than those of the premolars and molars, making them more susceptible to root resorption [ 45 ]. In Model C3, the PDL stress of the anterior teeth was smaller than that in the other clear aligner models, and the stress distribution area was also smaller. The results suggested that the modified palatal plate clear aligner helped reduce the risk of root resorption during anterior retraction. In the fixed appliance model, the lateral incisor was subjected to the greatest stress, and the stress was mainly concentrated on the buccal surface, the root tip, and the cervical region of the lingual surface. However, this stress was still smaller than that in the clear aligner models. Consistent with Tang et al., the stress of the PDL in the fixed appliance model was significantly less than that in the clear aligner models [ 61 ]. Accordingly, the higher PDL stress may be a notable risk factor for root resorption in clear aligner therapy. However, the potential limitations of this study must be acknowledged. As a simulation, it can only describe the initial stress distribution and displacement patterns of the teeth when analyzing orthodontic appliance force systems. Simplification and assumption pose evident limitations in the context of finite element analysis. Frequently, more intricate anatomical structures are disregarded during the modeling phase. Another concern arises when attempting to accurately represent not only the anatomy but also the morphology of the tested tissues, where simplifications are commonly employed [ 62 ]. As digital simulation technology advances, our next endeavor is to achieve a more precise and comprehensive simulation of the orthodontic process. Additionally, replicating exactly the same living substance in a mechanical model is virtually impossible; hence, further investigation of finite element analysis through extensive clinical studies is necessary to quantitatively validate our findings. Moreover, combining FE analysis with clinical studies for mutual validation will enhance the significance of this study, which is our next step. The modified palatal plate clear aligner we designed is still rather monolithic, but this study provides a direction for future research. Moreover, we will further improve the configuration, strength, elasticity, thickness, and other design parameters of the clear aligner to explore modified clear aligners with better efficacy.
Conclusions After conducting preliminary research, we have arrived at the following conclusions: The teeth movement pattern remained consistent across all five clear aligners, characterized by lingual tipping and extrusion of anterior teeth, as well as mesial tipping of posterior teeth during anterior retraction. Fixed appliances exhibit superior control over torque in anterior teeth and provide better protection against anchorage loss in posterior teeth compared to invisible appliances. The implementation of an additional force system in clear aligners did not alter the observed trend of tooth movement, but it did exert an influence on the magnitude of tooth displacement. Specifically, modified palatal plate structure clear aligner Model C3 demonstrated enhanced torsional control and improved preservation of posterior dental anchorage.
Background The aim of this study is to conduct a comparative evaluation of different designs of clear aligners and examine the disparities between clear aligners and fixed appliances. Methods 3D digital models were created, consisting of a maxillary dentition without first premolars, maxilla, periodontal ligaments, attachments, micro-implant, 3D printed lingual retractor, brackets, archwire and clear aligner. The study involved the creation of five design models for clear aligner maxillary anterior internal retraction and one design model for fixed appliance maxillary anterior internal retraction, which were subsequently subjected to finite element analysis. These design models included: (1) Model C0 Control, (2) Model C1 Posterior Micro-implant, (3) Model C2 Anterior Micro-implant, (4) Model C3 Palatal Plate, (5) Model C4 Lingual Retractor, and (6) Model F0 Fixed Appliance. Results In the clear aligner models, a consistent pattern of tooth movement was observed. Notably, among all tested models, the modified clear aligner Model C3 exhibited the smallest differences in sagittal displacement of the crown-root of the central incisor, vertical displacement of the central incisor, sagittal displacement of the second premolar and second molar, as well as vertical displacement of posterior teeth. However, distinct variations in tooth movement trends were observed between the clear aligner models and the fixed appliance model. Furthermore, compared to the fixed appliance model, significant increases in tooth displacement were achieved with the use of clear aligner models. Conclusions In the clear aligner models, the movement trend of the teeth remained consistent, but there were variations in the amount of tooth displacement. Overall, the Model C3 exhibited better torque control and provided greater protection for posterior anchorage teeth compared to the other four clear aligner models. On the other hand, the fixed appliance model provides superior anterior torque control and better protection of the posterior anchorage teeth compared to clear aligner models. The clear aligner approach and the fixed appliance approach still exhibit a disparity; nevertheless, this study offers a developmental direction and establishes a theoretical foundation for future non-invasive, aesthetically pleasing, comfortable, and efficient modalities of clear aligner treatment. Supplementary Information The online version contains supplementary material available at 10.1186/s12903-023-03704-6. Keywords
Electronic supplementary material Below is the link to the electronic supplementary material.
Acknowledgements We gratefully thank the National Natural Science Foundation of China (Grant No. 12072055, 11872135, U20A20390), Natural Science Foundation of Beijing (Grant No. L 212063) and the Fundamental Research Funds for the Central Universities, the 111 Project (No. B 13003), CAMS Innovation Fund for Medical Sciences (CIFMS) under Grant 2019-I2M-5-016. Chongqing Stomatological Association Zhengya Orthodontic Clinical Research Scientific Research Fund Project (CQSA-ZY2021-01). Project of Chongqing Graduate Tutor Team (dstd201903), Chongqing Young and Middle-Aged Medical Excellence Team. Author Contributions QX and WXW performed the experiments, analyzed the data, and wrote the manuscript. CJW involved in conceptualization and methodology. GF contributed to the interpretation of the results. CW involved in conceptualization, provided manuscript writing assistance, and critically revised the manuscript for important intellectual content. JLS contributed to supervision, project administration, and funding acquisition. YBF contributed to conceptualization and supervision. All authors read and approved the final manuscript. Funding This work was supported by the National Natural Science Foundation of China (Grant No. 12072055, 11872135, U20A20390), Natural Science Foundation of Beijing (Grant No. L 212063) and the Fundamental Research Funds for the Central Universities, the 111 Project (No. B 13003), CAMS Innovation Fund for Medical Sciences (CIFMS) under Grant 2019-I2M-5-016. Chongqing Stomatological Association Zhengya Orthodontic Clinical Research Scientific Research Fund Project (CQSA-ZY2021-01). Project of Chongqing Graduate Tutor Team (dstd201903), Chongqing Young and Middle-Aged Medical Excellence Team. Open Subjects of Shanxi Key Laboratory of Prevention and treatment of Oral Disease and New Materials (KF2020-01). Chongqing Education Commission “Chengdu-Chongqing area twin city economic Circle Construction” science and technology innovation project (KJCX2020017). Technology Innovation and Application Development Specialized for Population Health (CSTB2023TIAD-KPX0054). Data Availability The datasets used and/or analysed during the current study available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate Ethical approval was granted by the ethical committee of Stomatological Hospital of Chongqing Medical University and the ethics number was (2023) 056. The patients provided their written informed consent to participate in this study. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:47
BMC Oral Health. 2024 Jan 13; 24:80
oa_package/ae/88/PMC10787995.tar.gz
PMC10787996
38218890
Background Introduction Tremor is characterized as an involuntary, rhythmic, oscillatory movement of a body part [ 1 ], and it can manifest as a symptom of various neurological diseases, including essential tremor (ET), Parkinson’s disease (PD), and multiple sclerosis (MS). The categorization of tremors is based on clinical factors such as anatomical distribution, activation conditions, amplitude, frequency, and underlying etiology. Within the scope of this review, tremors will be classified according to their activation condition and corresponding neurological symptoms and diseases. Tremor can be classified into two main categories: rest tremor [ 2 ], characterized by nonvoluntary activation that occurs when the individual is attempting to rest and is commonly observed in people with PD. In contrast, action tremor [ 1 ] involves voluntary movement. Action tremor can be further classified into two subtypes: postural tremor, which occurs when the subject maintains a position against gravity, and kinetic tremor, which is associated with any voluntary movement that can be constant (simple kinetic), specific to a particular activity, such as writing (task-specific), or that increases as the individual approaches a goal or visual target (intention tremor). Intention tremor refers to a rise in the amplitude of tremors when visually guided movements are made toward a target, especially when nearing it. This type of tremor can also be coupled with task-specific tremor as the individual performs targeted movements, for example, during drawing (Archimedes Spiral tests). Intention tremor is believed to be correlated with cerebellar pathology, its connected pathways, or both, and it is a common symptom in people with, for example, MS [ 3 ]. It is estimated that 25–60% of people with MS experience postural and intention tremor [ 4 ], which typically occurs in the upper limbs at a frequency of 3–4 Hz [ 3 ]. However, other types of tremors, such as rest, simple kinetic, and task-specific tremors, are not frequently observed in MS [ 5 ]. Assessing tremors in patients with neurological diseases is crucial for determining disease progression and the effectiveness of medical treatments. Traditionally, clinicians use various clinical tests to identify tremor type and severity in patients. However, with the advancement of wearable technologies, such as smartphones, smartwatches, and sophisticated muscle sensors, there are now quantifiable ways to measure movement and tremor. Although wearable technology is a promising approach for quantifying tremors, identifying relevant features for each type of tremor is necessary for practical use. Recent research has shown that analyzing tremor amplitude and frequency makes it possible to differentiate between different movement disorders such as ET and PD versus healthy controls, classify tremor severity, and correlate it with traditional qualitative-scored neurological tests [ 6 ]. However, the changing nature of intention tremors, whose amplitude depends on the movement intention of the patient, makes it difficult to quantify this type of tremor and extract valuable features using the current approaches. Identifying and analyzing intention tremors can greatly aid disease progression monitoring and intervention efficacy assessment. This review examines the advancement of upper limb tremor assessment technology, methodology, and future directions for algorithm and sensor development to improve quantification of tremor in general and intention tremor specifically. 
Neurological tests for tremor assessment correlation and comparison Researchers evaluate tremor assessment technologies by performing specific tasks that amplify the targeted tremor type. These tasks are based on tests used in clinical practice to assess upper limb impairments. Table 1 displays the most common clinical tests used to correlate or as a reference for evaluating assessment technologies. The Fahn-Tolosa-Marin Tremor Scale (FTMRS) [ 7 ] and the Essential Tremor Rating Assessment Scale (TETRAS) [ 8 ] are frequently used to quantify rest, postural, and kinetic tremor, including tremor during activities of daily living (ADLs). When the technology is tailored for a single population, e.g., people with PD, a more disease-specific test such as the Movement Disorder Society Unified Parkinson’s Disease Rating Scale, Part III Motor Examination (UPDRS-III) [ 9 ] is used for correlation purposes. Another example of a disease-specific test is the Scale for the Assessment and Rating of Ataxia (SARA) test [ 10 ], which focuses on cerebellar ataxia. SARA includes the finger to nose test (FTN) and the finger chase test, which specifically evaluates intention tremor. In summary, clinical tests include different tasks assessing tremor severity depending on their type (see Fig. 1 ): Rest tremor : Sitting with fully supported arms against gravity. Postural tremor : Maintaining a specific posture against gravity, for example, stretching arms to the front so that the subject maintains their elbows stretched against gravity; or shoulder abduction with elbows flexed and hands held in a pronated position resembling a 'wing-beating' posture. Kinetic tremor : Simple kinetic and task-specific tremors are evaluated using tasks such as handwriting, Archimedes spirals drawings, and finger tapping (FT), as well as ADLs involving whole-body movement, such as pouring drinks, eating, and dressing. Intention tremor severity can be measured using the finger to nose test (FTN). In this test, the subject touches their nose and then the examiner’s finger, with the tremor amplitude expected to increase as the hand approaches the finger. Intention tremor can also be assessed using the finger chase test, where the examiner performs sudden fast pointing movements in a frontal plane. At the same time, the subject follows with their finger as quickly and accurately as possible. Literature search and data extraction This review was primarily conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) scoping review checklist (see Additional file 2 ). In this review, we were interested in finding studies examining quantifiable upper limb tremor assessment strategies accessible to clinicians and patients without highly specialized equipment. To determine the criteria for inclusion and exclusion, we conducted a comprehensive search on PubMed and Scopus with the following title/abstract terms ("tremor") AND ("assessment" OR "measurement" OR "evaluation" OR "detection" OR "quantification" OR "monitoring" OR "correlation" OR "estimation" or "discrimination" OR "analysis" OR "differentiation" OR "classification") AND ("technology" OR "sensor" OR "device" OR "quantification") (last search date: 10 July 2023) (see Additional file 5 for the detailed search strings). Further publications were identified from the list of references of relevant papers and relevant review papers found in our search [ 6 , 11 , 12 ]. 
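For readers who wish to rerun the PubMed arm of the search above programmatically, a minimal sketch using Biopython's Entrez interface is shown below. The library choice, the e-mail placeholder, and the exact field tags are our assumptions (the authors do not state which client they used), and the Scopus search is not covered.

```python
from Bio import Entrez  # Biopython; an assumed client, not named by the authors

Entrez.email = "your.name@example.org"  # NCBI asks for a contact address

query = (
    "(tremor[Title/Abstract]) AND "
    "(assessment[Title/Abstract] OR measurement[Title/Abstract] OR evaluation[Title/Abstract] "
    "OR detection[Title/Abstract] OR quantification[Title/Abstract] OR monitoring[Title/Abstract] "
    "OR correlation[Title/Abstract] OR estimation[Title/Abstract] OR discrimination[Title/Abstract] "
    "OR analysis[Title/Abstract] OR differentiation[Title/Abstract] OR classification[Title/Abstract]) "
    "AND (technology[Title/Abstract] OR sensor[Title/Abstract] OR device[Title/Abstract] "
    "OR quantification[Title/Abstract])"
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=5000)
record = Entrez.read(handle)
handle.close()
print(f"{record['Count']} records returned; first PMIDs: {record['IdList'][:5]}")
```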
After screening the articles for relevance and eligibility, we excluded studies that (1) did not focus on upper limb impairment, (2) focused on upper limb symptoms that explicitly excluded tremor, (3) only used clinical tests and clinician evaluation without any sensor or automated tool, (4) used technology that is not portable or not usable outside specialized rooms (e.g., functional magnetic resonance imaging (fMRI) or magnetoencephalography (MEG)) or that is invasive, (5) only evaluated healthy subjects, (6) were interventional studies using damping tools, such as exoskeletons or functional electrical stimulation (FES), (7) were preprints, prospective studies, or otherwise not peer-reviewed, and (8) were not written in English. The remaining studies, 243 publications (see the details on data extraction in Additional file 3 ), were analyzed to identify common themes and establish criteria based on the type of sensors, number of subjects, technology, methodology, purpose, and year of publication. According to our screened papers, tremor assessment technologies can be classified into three distinct types, as depicted in Fig. 2 and classified in the table of Additional file 1 and the database found in Additional file 4 : i. Activity level, based on tools and digitized tasks: assessment is made through manipulanda or touch-based games using smartphones or tablets. ii. Physiology-based, using physiological sensors: tremors are detected and differentiated using surface electromyography (EMG), which captures muscle activation following motor unit recruitment, and electroencephalography (EEG), which measures the brain's electrical activity from the scalp. iii. Body function level, based on motion capture systems: the tremor and the posture of the subject's upper limbs are captured using accelerometers, gyroscopes, inertial measurement units (IMUs), electromagnetic tracking, or camera systems, with or without markers.
Conclusions: future avenues to assess intention tremor Of all the collected studies, 52 (21% of the total) assessed intention tremor tasks. Furthermore, 37% of these studies [ 36 , 37 , 56 , 65 , 66 , 71 , 84 , 115 , 122 , 124 , 183 , 193 , 210 , 241 , 251 – 253 , 257 , 265 ] (less than 8% from all studies) focus on pwMS, ataxia, or cerebellar disease, who tend to exhibit intention tremor more clearly. The findings indicate that assessment technologies measuring intention tremor should design tasks that elicit intention tremor and involve individuals who exhibit relevant symptoms. Although digitized drawings have been examined in people with intention tremor [ 14 , 55 , 56 , 58 , 65 , 66 ], further comparison with other intention tremor tasks is needed, such as the SARA scale and the FTN or finger chase tasks. Moreover, the effectiveness of digitized drawings in eliciting intention tremor and their association with task-specific tremors require more investigation. Regarding physiological sensors, EMG has been used in pwMS [ 84 , 95 ]. Still, only one study has explored its application in intention tremor [ 84 ], yet their findings did not provide conclusive evidence concerning the relationship between accelerometry and EMG. The understanding of muscle activity in intention tremor remains incomplete, necessitating a more comprehensive analysis. For instance, conducting tasks specifically designed to elicit intention tremor in individuals with cerebellar pathology would facilitate an in-depth investigation of motor conduction times and activation patterns [ 62 ]. EEG could help to differentiate movement intention from tremor, as previously suggested by Gallego and Ibáñez et al. [ 98 , 238 ] in their analysis of tremor in ET. Examining patients' brain activity with intention tremors may shed light on how cortical or cerebellar activities change during motor control tasks. From computational neuroanatomy and neuroimaging studies, the premotor, primary motor, parietal regions of the cortex, and cerebellum are believed to be involved in motor control [ 271 ] and tremorous movements [ 101 , 272 ]. Assessing cerebellar activity during motor control and intention tremor tasks could be valuable, especially for patients with cerebellar pathology [ 107 , 273 , 274 ]. For example, recent studies observed heightened cerebellar activity through cerebellar EEG recordings of ET patients [ 105 ] with only one study, to the best of the authors’ knowledge, using an intention tremor task [ 106 ]. Additionally, the interaction between the motor, parietal, and cerebellar regions could be analyzed during motor execution and intention tremor tasks. A past study investigated the functional interaction (using EEG modular functional connectivity) of the somatomotor system and higher-order processing systems during a motor task [ 275 ]. Motion capture algorithms could be one of the best ways to assess intention tremors due to their easy integration with wearable technologies for intervention, such as tremor-damping exoskeletons. The valuable research conducted by Morgan et al. [ 115 ] and Deuschl et al. [ 257 ], investigating intention tremor during activities that induce this type of tremor, can now be easily replicated using markerless pose estimation software, as done by Pang et al. in PD [ 269 ]. On the other hand, IMU sensors have become practical and effective for tremor detection but require sensor fusion algorithms and signal processing techniques for reliable analysis [ 90 , 183 , 242 ]. 
Another study, by Carpinella et al. [ 183 ], effectively employed the combined capabilities of EMD and HHT to accurately detect minute variations in intention tremor tasks. They achieved automatic classification and distinction between HS and pwMS and separated subtle tremor from voluntary movement in MS. Furthermore, Tran et al. [ 251 , 252 ] used ballistic tracking (an intention tremor task analogous to the finger chase test) with an IMU and a Kinect camera to successfully distinguish between ataxia and HS. These outcomes present promising prospects for the automated detection and assessment of intention tremors. In addition to facilitating such analysis, this technique could also provide valuable insight into developing intention detection algorithms for individuals with neurological conditions such as MS, thereby enabling wearable technologies to function not only as assessment tools but also as sensors for interventions and assistive technologies in daily life. This review examined the utilization of sensor technology in evaluating tremors across various neurological conditions. Some limitations of our review include manuscripts with unclear terminology related to tremor, e.g., studies not differentiating between the different types of kinetic tremor, and studies with imprecise methodology, especially on sensor fusion with IMUs. Nevertheless, we tried to the best of our abilities to systematically infer those missing fields from other parts of the manuscripts, e.g., using the experimental protocol and patient population to infer the tremor type, and the results and conclusions to infer the sensor fusion modalities. While most research has focused on assessing tremor in PD and ET, the intention tremors observed in patients with lesions in the cerebellum could be better understood. This challenge can be approached by targeting intention tremors and leveraging existing technology (see Fig. 3 ). First and foremost, a technical contribution is needed to make better intention tremor assessments beyond the current tests. Furthermore, analyzing muscle activation and brain activity through EMG and EEG can provide insights into the underlying causes of intention tremors. Regarding motion capture, it is crucial to optimize IMUs through sensor fusion algorithms that utilize the strengths of each sensor (accelerometer, gyroscope, magnetometer) to obtain an accurate limb position from which tremorous movements can be extracted using time–frequency analysis. Additionally, using markerless pose estimation would offer a more straightforward and flexible means of capturing data without requiring specialized equipment, enabling assessments to be conducted on more subjects exhibiting intention tremors, for example, at home. Distinguishing between voluntary and involuntary movement remains a challenge for the technologies discussed. Therefore, it is essential to use and further develop signal processing techniques that focus on separating different movement components, such as EMD or DWT, to enhance the detection of the distinct aspects of tremorous movements, their onset, and their differentiation from voluntary movements.
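As a minimal illustration of the DWT-based separation advocated above, the sketch below decomposes a synthetic recording (a slow goal-directed movement plus a growing 4 Hz oscillation) with PyWavelets and reconstructs only the detail bands as a tremor estimate. The sampling rate, wavelet, and decomposition depth are arbitrary illustrative choices, not settings taken from any cited study.

```python
import numpy as np
import pywt  # PyWavelets

fs = 100.0                                               # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
voluntary = 0.5 * np.sin(2 * np.pi * 0.4 * t)            # slow goal-directed movement
tremor = 0.05 * np.sin(2 * np.pi * 4.0 * t) * (t / 10)   # 4 Hz tremor growing towards the target
signal = voluntary + tremor + 0.005 * np.random.randn(t.size)

# 5-level discrete wavelet decomposition: the approximation holds the slow
# (voluntary) component, while the detail levels hold the faster tremor bands.
coeffs = pywt.wavedec(signal, "db4", level=5)
tremor_coeffs = [np.zeros_like(coeffs[0])] + coeffs[1:]  # discard the approximation
tremor_estimate = pywt.waverec(tremor_coeffs, "db4")[: signal.size]

print("estimated tremor RMS:", np.sqrt(np.mean(tremor_estimate ** 2)))
```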
Background Tremors are involuntary rhythmic movements commonly present in neurological diseases such as Parkinson's disease, essential tremor, and multiple sclerosis. Intention tremor is a subtype associated with lesions in the cerebellum and its connected pathways, and it is a common symptom in diseases associated with cerebellar pathology. While clinicians traditionally use tests to identify tremor type and severity, recent advancements in wearable technology have provided quantifiable ways to measure movement and tremor using motion capture systems, app-based tasks and tools, and physiology-based measurements. However, quantifying intention tremor remains challenging due to its changing nature. Methodology & Results This review examines the current state of upper limb tremor assessment technology and discusses potential directions to further develop new and existing algorithms and sensors to better quantify tremor, specifically intention tremor. A comprehensive search using PubMed and Scopus was performed using keywords related to technologies for tremor assessment. Afterward, screened results were filtered for relevance and eligibility and further classified into technology type. A total of 243 publications were selected for this review and classified according to their type: body function level: movement-based, activity level: task and tool-based, and physiology-based. Furthermore, each publication's methods, purpose, and technology are summarized in the appendix table. Conclusions Our survey suggests a need for more targeted tasks to evaluate intention tremors, including digitized tasks related to intentional movements, neurological and physiological measurements targeting the cerebellum and its pathways, and signal processing techniques that differentiate voluntary from involuntary movement in motion capture systems. Supplementary Information The online version contains supplementary material available at 10.1186/s12984-023-01302-9. Keywords Open Access funding enabled and organized by Projekt DEAL.
Technologies for tremor assessment The following sections will discuss the different assessment technologies and algorithms to quantify tremors. The studies in this section have been classified in detail according to sensor type, patient population, and tremor type in Additional files 1 and 4. We encourage the readers to consider this chapter together with those additional files. Table 2 presents an overview of the tools discussed in this chapter and the main type of tremor assessed with them. Signal processing to quantify and analyze tremors Tremor assessment technologies measure physical parameters and transform them into electronic signals. For instance, accelerometers placed on the subject’s hand analyze the frequency components of arm acceleration to detect tremors. Signal processing techniques are necessary to remove noise and measure various movement features. The publications in our review employ different algorithms and feature extraction methods based on signal processing techniques for tremor detection. To detect tremors, measurements are typically transformed from the time domain to the frequency domain, focusing on tremor frequencies (2–10 Hz) compared to regular movement. Fast Fourier transform (FFT) and power spectral distribution (PSD) analysis are commonly used. The FFT provides information about the amplitude and phase of individual frequency components in a signal, while the PSD offers insights into the power distribution across different frequency bands. The PSD is especially suitable for comparing signals of varying lengths because it focuses on the frequency distribution regardless of the signal length. In contrast, the FFT is dependent on the signal length. In addition to the FFT and PSD, decomposing electronic signals in both time and frequency is advantageous, particularly for analyzing changes in frequency strength over time. The discrete wavelet transform (DWT) and Hilbert-Huang transform (HHT) [ 16 ] can be helpful for this. The DWT decomposes a signal into wavelets of different frequencies, scales, and orientations, making it more efficient to simultaneously analyze both frequency and time information, more robust to noise, and computationally efficient. On the other hand, the HHT decomposes a signal into its intrinsic mode functions (IMFs) using empirical mode decomposition (EMD) [ 17 ] and is better suited for analyzing nonstationary signals with precise time–frequency information. However, it may require more processing power. Thus, DWT and EMD are valuable tools to decompose voluntary and involuntary movement. Manipulanda and technical tools to quantify tremors One approach for assessing tremors involves using tools with embedded sensors that can measure the direction, speed, and force of movement [ 18 – 24 ]. Researchers have utilized tools such as pens [ 25 – 30 ] with embedded IMUs and load cells to quantify tremor amplitude while users hold it, attach it to their hands, or write with it. An advantage of embedded sensor tools is their ability to identify different features in virtual tasks [ 31 – 33 ]. For example, the Virtual Peg Insertion Test (VPIT), based on the 9HPT [ 34 ] test, employs a manipulandum with force sensors in a virtual game environment and serves as a digital health metric for predicting the response to neurorehabilitation interventions in neurological disorders. Kanzler et al. [ 13 , 35 ] identified several features and studied their correlation to clinical tests. 
They found a high correlation between the SARA test and velocity and path length features in relation to intention tremor. Manipulanda have also been used to elicit intention tremor during goal-directed movements; for example, Feys et al. [ 36 , 37 ] conducted studies involving people with MS (pwMS) and intention tremors, where they observed more significant target overshoot and unsteady eye fixation during goal-directed movement tasks. Overall, pens with embedded IMUs have shown promise in measuring different types of tremors, particularly during task-specific movements such as writing or drawing [ 28 ]. However, wearable sensors may be more suitable and sensitive for measuring steady tremors than tools. On the other hand, analyzing digital features in addition to traditional completion time in tests such as the 9HPT could provide further insight into the characteristics of intention tremor. However, focused symptom testing is necessary to determine the effectiveness of these digital features in measuring intention tremor. Therefore, studies that specifically focus on it, using manipulanda in tasks similar to the finger chase test [ 36 – 38 ], would be advantageous; however, a quantification of intensity and its test correlation would still be required for future studies. From measuring the duration of completion to quantifying the drawn lines Digitized drawing tests, such as writing or drawing shapes on tablets or smartphones, offer advantages over traditional methods of assessing tremors. These tests allow for the quantification of drawn lines in terms of time and extraction of different features. The assessment of digitized drawings often involves calculating the power spectral density (PSD) of the drawing position, velocity, or acceleration to determine the frequency ranges of the movement. This can help distinguish subjects with tremors, who are expected to have distinguishable spectra at higher frequencies (> 2 Hz), from those without tremors. Digitizing tablets have been used to assess tremor by analyzing writing and drawing shapes and AS [ 39 – 49 ], as well as combining it with FT [ 50 – 53 ]. Studies have shown that the frequency spectrum of velocity profiles in digitized Archimedes spirals drawings is a reliable measure of tremor intensity and more accurate than traditional visual rating methods [ 54 ]. Smartphone apps offer greater accessibility and flexibility for at-home testing compared to tablets since individuals are more likely to possess a smartphone than a tablet. Furthermore, the choice between smartphones and tablets can affect the reproducibility and intravariability of results, and more straightforward tests may be preferred for smartphone-based MS assessment [ 55 ]. This could be advantageous, especially in using small screens where drawings are limited due to space. These approaches include drawing simpler shapes than Archimedes spirals [ 14 , 56 – 58 ], tilting a smartphone to maintain an objective in position using the smartphone accelerometers [ 59 – 61 ], and finger tapping (FT) to assess upper limb impairment [ 62 , 63 ]. Regarding intention tremor, Erasmus et al. [ 64 ] pioneered this method for quantification of ataxic symptoms in MS. They tested it in a large cohort of 342 pwMS where they drew an’8’ shape in a tablet. Consequently, Feys et al. [ 65 ] investigated the validity and reliability of drawing regular and squared Archimedes spirals on a tablet as a test for tremor severity. 
They successfully differentiated pwMS with intention tremor from pwMS with no tremor and healthy subjects (HS) by comparing the radial and tangential velocity PSD in the 3–5 Hz frequencies with FTMRS scores. Archimedes spiral drawings have also proven to be a good measure for identifying the presence of intention tremor in pwMS when compared with the FTN, 9HPT, and BBT [ 66 ]. Measuring the segment rate, i.e., the number of times the pen changes from the upward to the downward direction, is the feature that correlates most strongly with visually inspected intention tremor. The advantage of this metric is probably related to the fact that the segment rate increases as the frequency of the movement increases, suggesting that intention tremor could also be detected by analyzing the PSD of the Archimedes spiral movement, as shown by Creagh et al. [ 56 ] during the DaS test. In summary, digitized drawings and app-based games are accessible tools to quantify tremors that could be used in clinics and at home. Tasks such as Archimedes spirals are very effective in eliciting tremors in various neurological diseases. However, it is still unclear how this task is related to intention tremor. Further analysis and correlation with intention tremor tasks, for example, using it in combination with the SARA test, would provide a deeper understanding of its relation to intentional movements. Physiological measurements: discriminating between different neurological diseases Surface electromyography (EMG), measuring muscle electrical activity, and mechanomyography (MMG), measuring surface oscillations produced by motor units, are used to analyze muscle activation patterns in upper limb tremors. In the 1980s and 1990s, EMG was used to detect tremors using FFT and PSD in subjects with neurological disorders [ 67 – 69 ]. EMG has been used to distinguish muscle activation patterns depending on the neurological disease [ 70 – 72 ]; for example, Nisticò and Vescio et al. [ 73 , 74 ] showed that during rest tremor, the activation of antagonist muscles is synchronous in subjects with ET and alternating in those with PD. EMG and accelerometer/IMU combinations [ 75 – 83 ] have been extensively used to discriminate PD, ET [ 84 – 89 ], physiological tremor (PH) [ 90 , 91 ], psychogenic tremor [ 92 , 93 ], advanced ET [ 94 ], and MS [ 95 ] from each other by applying ML techniques to DWT and HT signal decompositions, mostly during stretch and steady postures. MMG [ 96 ] was recently used with EMG, force sensors, and IMUs to detect tremor differences in PD after deep brain stimulation [ 97 ]. Electroencephalography (EEG) measures the brain's electrical activity from the scalp, providing excellent temporal resolution. However, its low spatial resolution poses a challenge in precisely identifying activity in different brain structures. Despite this drawback, EEG is a valuable tool for evaluating motor tasks [ 98 ], as long as the influence of movement artifacts is carefully considered. EEG has been used to explore the involvement of the cerebellum in conditions such as spinocerebellar and cerebellar AT [ 99 , 100 ], as well as in ET in comparison with PD [ 101 , 102 ], HS [ 103 ], and people with age-related tremors (ART) [ 104 ]. These studies consistently demonstrate a strong involvement and oscillations of cerebellar activity in ET and PD. Excessive oscillations in cerebellar EEG have been correlated with tremor intensity in ET [ 105 , 106 ], while increased oscillations in the theta band of cerebellar EEG have been observed in PD [ 107 ].
EEG has also been employed to assess the effects of transcranial magnetic stimulation (TMS) therapy in individuals with the multiple system atrophy cerebellar subtype (MSA-C) [ 108 ], showing higher cerebello-frontal connectivity and a negative correlation with SARA scores. EMG and MMG measurements have been used effectively to differentiate tremor-related muscle activation patterns in different neurological conditions, even when the subjects perform the same type of activity. These results suggest that muscular activity could be a powerful tool for understanding how tremor is propagated and where it is localized. On the other hand, the mentioned studies have emphasized the importance of EEG in studying the involvement of the cerebellum in movement disorders, which could provide valuable insights into the underlying pathophysiology of intention tremor and potential treatment strategies. Inertial-based recordings using acceleration, orientation, and sensor fusion algorithms Inertial measurement units (IMUs), consisting of accelerometers, gyroscopes, and magnetometers, measure linear acceleration, angular velocity, and magnetic field strength, respectively. As these signals vary depending on the orientation of the sensor, IMUs have become increasingly prevalent in modern technology applications. These sensors can be positioned on different parts of the limbs, such as the wrist, hand, or fingers, to analyze movement by measuring the acceleration, velocity, and orientation of the limbs. Furthermore, if multiple IMUs are placed on each limb segment (hand, forearm, upper arm, and trunk), it is possible to extract the limb's position relative to the trunk and measure additional features such as range of motion and movement synergy. In the past, accelerometers, gyroscopes, and magnetometers were available as separate components, and smartphones typically included only accelerometers due to cost considerations. At the end of the last century, accelerometers were used to detect tremors [ 109 – 113 ], quantify medication efficacy [ 114 ] in PD, and analyze intention tremors in patients with cerebellar pathology [ 115 ]. Accelerometers attached to the hands or wrist, either as single sensors [ 116 – 144 ] or embedded in a smartwatch [ 145 – 156 ] or smartphone [ 157 – 165 ], have been extensively used to quantify tremors in different neurological diseases [ 166 , 167 ], either by analyzing acceleration frequency [ 88 , 168 ] and amplitude [ 169 ] or by using machine learning methods to classify measurements according to tremor type [ 170 – 173 ]. Gyroscopes can detect changes in angular velocity and measure the angular movement of a body part. Analogous to accelerometers, gyroscopes have also been used individually [ 174 – 178 ], in smartphones [ 179 ] and in smartwatches [ 180 – 182 ] to decompose tremorous and voluntary movement using different signal processing techniques such as EMD, HHT [ 183 , 184 ], WFLC, and EKF [ 185 ]. Other types of motion detection sensors, such as force transducers [ 186 – 188 ] or electromagnetic sensors [ 189 – 194 ], have been proposed to track tremors in ET, PD, and MS. The miniaturization of IMUs has enabled the direct measurement of tremors on distal limbs using a single chip. Although some studies have utilized both accelerometers and gyroscopes [ 96 , 97 , 195 – 230 ] to gain insight into tremorous movements, only a portion of them have employed sensor fusion algorithms to integrate these data and improve measurement reliability [ 131 , 231 – 255 ]. 
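As a simplified illustration of the decomposition of tremorous and voluntary movement mentioned above, the sketch below splits an angular-velocity trace into a low-frequency (voluntary) and a band-limited (tremor) component with zero-phase Butterworth filters. It is only a frequency-domain stand-in for methods such as EMD, the WFLC, or an EKF; the 2 Hz and 3–12 Hz bands, the sampling rate, and the synthetic data are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def split_voluntary_tremor(gyro, fs=200.0, voluntary_hz=2.0, tremor_band=(3.0, 12.0)):
    """Split a wrist angular-velocity trace into voluntary and tremor parts (sketch).

    gyro : 1-D angular velocity in deg/s (hypothetical input).
    """
    nyq = fs / 2.0
    b_lo, a_lo = butter(4, voluntary_hz / nyq, btype="low")
    b_bp, a_bp = butter(4, [tremor_band[0] / nyq, tremor_band[1] / nyq], btype="band")
    voluntary = filtfilt(b_lo, a_lo, gyro)   # slow, intended movement
    tremor = filtfilt(b_bp, a_bp, gyro)      # band-limited oscillatory component
    return voluntary, tremor

# Synthetic example: a 0.5 Hz reaching movement overlaid with a 5 Hz tremor
t = np.arange(0, 10, 1 / 200.0)
gyro = 30 * np.sin(2 * np.pi * 0.5 * t) + 5 * np.sin(2 * np.pi * 5 * t)
voluntary, tremor = split_voluntary_tremor(gyro)
print(f"tremor RMS amplitude: {np.sqrt(np.mean(tremor ** 2)):.1f} deg/s")
```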
Sensor fusion filters are used with IMUs to combine data from multiple sensors and improve the accuracy and reliability of the measurements. Their output is no longer angular velocity or acceleration but the IMU orientation relative to a predefined reference. Popular filters include the Madgwick filter and the extended Kalman filter (EKF). The Madgwick filter is computationally efficient, using quaternions to combine accelerometer, gyroscope, and magnetometer data for orientation estimation. In contrast, the EKF employs a mathematical model and Bayesian inference to estimate the system state by fusing data from multiple sensors. Overall, measuring acceleration and angular velocity, using electromagnetic tracking to follow upper limb movement, or using a combination of sensors embedded in IMUs has proven to be a popular and straightforward method for measuring tremors. To achieve a more accurate and comprehensive understanding of tremorous movement, future research should use sensor fusion algorithms, which are currently underutilized (less than 39% of the studies using IMUs). This approach would enable researchers to calculate limb position, velocity, and acceleration without the noise drawbacks of accelerometers and gyroscopes and to characterize tremor movements. Additionally, this approach would benefit the understanding of movement synergies and tremor propagation. Movement prediction with video recordings Marker-based motion capture uses optical 3D motion analysis systems to track reflective markers placed strategically on the body during movement analysis. It uses infrared cameras to capture marker movement, which is then used to calculate various spatiotemporal, kinematic, and kinetic gait parameters through software calculations [ 256 ]. In particular, Deutschl et al. [ 257 ] used marker pose estimation to observe whether people with ET showed intention tremor by instructing the participants to grasp a target. The researchers identified the presence of intention tremor similar to that seen in MS and ataxia. Leap motion systems use multiple cameras and infrared sensors to analyze hand motions within their field of view. While highly accurate, their range of motion is limited [ 258 , 259 ]. Chen et al. [ 260 ] and Khwaounjoo et al. [ 261 ] used a leap motion sensor to quantify ET and PD postural tremor by measuring finger tremor amplitude and frequency. Although their results were less accurate than those obtained with IMUs, they showed a strong correlation with the IMU measurements; they localized the best positions for tremor identification and achieved high accuracy at lower frequencies. Markerless pose estimation is a new technique used to estimate the position and movement of human body joints without using physical markers. Using standard video, it utilizes computer vision and machine learning algorithms to analyze movement in real time. The technique involves detecting and recognizing key body landmarks, constructing a skeletal model, and estimating joint position and movement over time. Markerless pose estimation software is user-friendly and flexible. Still, it has limitations, including lower accuracy than marker-based systems, difficulty tracking occluded or partially visible body parts, and sensitivity to environmental factors. Nonetheless, ongoing advances in computer vision and machine learning are enhancing the accuracy and robustness of these techniques [ 262 – 267 ], making them potentially valuable for tremor characterization. 
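Returning briefly to the sensor-fusion filters described at the start of this section, the sketch below fuses accelerometer and gyroscope data with a simple complementary filter to estimate a single tilt angle. It is a deliberately reduced stand-in for quaternion-based filters such as Madgwick's or an EKF, not an implementation of either; the sampling rate, axis convention, blending weight, and synthetic data are illustrative assumptions.

```python
import numpy as np

def complementary_tilt(acc, gyro, fs=100.0, alpha=0.98):
    """Estimate a tilt angle (deg) about one axis by fusing accelerometer and gyro data.

    acc  : (N, 3) accelerometer samples in g (hypothetical input).
    gyro : (N,) angular velocity about the same axis in deg/s.
    alpha: weight of the integrated gyroscope estimate (low-noise but drift-prone);
           the accelerometer tilt (noisy but drift-free) fills in the remainder.
    """
    dt = 1.0 / fs
    angles = np.zeros(len(gyro))
    angle = np.degrees(np.arctan2(acc[0, 1], acc[0, 2]))          # initial tilt from gravity
    for i in range(len(gyro)):
        acc_angle = np.degrees(np.arctan2(acc[i, 1], acc[i, 2]))  # gravity-based tilt
        gyro_angle = angle + gyro[i] * dt                         # integrated gyroscope
        angle = alpha * gyro_angle + (1.0 - alpha) * acc_angle    # blend the two estimates
        angles[i] = angle
    return angles

# Usage sketch: a tremor frequency could then be read from the PSD of the angle trace.
acc = np.tile([0.0, 0.0, 1.0], (500, 1))                          # static gravity vector
gyro = 2.0 * np.sin(2 * np.pi * 4 * np.arange(500) / 100.0)       # 4 Hz oscillation, deg/s
print(np.round(complementary_tilt(acc, gyro)[:3], 3))
```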
For example, Park et al. [ 15 ] utilized Mediapipe [ 268 ] to analyze its feasibility in telemedicine for PD. Although the study involved healthy subjects, the findings suggested that movement tracking accuracy was hindered by poor video quality. Nevertheless, the researchers proposed that the software could be used effectively with a better video setup and equipment. Furthermore, Pang et al. [ 269 ] used OpenPose [ 270 ], a real-time body pose estimation library based on deep learning, to successfully track tremors and bradykinesia in PD, using the DWT to detect finger motion changes in the frequency domain. In summary, marker-based estimation technologies capture tremors, but their setup and costs limit their evaluation in large patient cohorts and in clinical practice. However, with advances in computer vision based on deep learning algorithms, markerless pose estimators have the potential to become widely adopted for easy tremor analysis using simple setups such as phone cameras. Supplementary Information
Abbreviations Essential tremor Parkinson’s disease Multiple sclerosis Healthy subjects Activities of daily living Fahn-Tolosa-Marin Tremor Scale Essential Tremor Rating Assessment Scale Scale for the Assessment and Rating of Ataxia Finger to nose test Action Research Arm Test 9 Hole Peg Test Box and Blocks Test Finger tapping Preferred Reporting Items for Systematic Reviews and Meta-Analyses Functional magnetic resonance Magnetoencephalography Functional electrical stimulation Electromyography Electroencephalogram Fast Fourier transform Power spectral density Discrete wavelet transform Hilbert-Huang transform Intrinsic model functions Empirical mode decomposition Virtual Peg Insertion Test People with MS Mechanomyography Physiological tremor Age-related tremors Transcranial magnetic stimulation Multiple system atrophy cerebellar subtype Inertial measurement units Extended Kalman filter Institute for Advanced Study Weighted frequency Fourier linear combiner Support vector machine Convolutional neural network Acknowledgements We gratefully acknowledge the funding and support from the Institute for Advanced Study (IAS)—Technical University of Munich. Author contributions NP was involved in the conception, organization, and execution of the research project and the design, execution, review, and critique of the statistical analysis. NP also played a role in writing the first draft of the manuscript and provided input during its review and critique. DU participated in the statistical analysis, provided feedback during the manuscript preparation, and contributed to its review and critique. KD contributed to the manuscript's preparation and writing during the review and critique process. NT was involved in organizing the research project and the review and critique of the manuscript. GC was involved in the research project's conception and organization, took part in the design, execution, and review of the statistical analysis, and contributed to the review and critique of the manuscript. All authors read and approved the final manuscript. Funding Open Access funding enabled and organized by Projekt DEAL. This work was supported by the Hans Fischer Senior Fellowship from the Institute for Advanced Study (TUM-IAS). Availability of data and materials The datasets supporting the conclusions of this article are included within the article and its additional files. Declarations Ethics approval and consent to participate The authors confirm that the approval of an institutional review board or ethics committee was not required for this work. Informed patient consent was not necessary for this work. We confirm that this manuscript aligns with the guidelines of the Journal's stance on ethical publication matters. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests. Author’s financial disclosures for the previous 12 months: NP is supported by the Institute for Cognitive Systems (TUM-ICS) and the Institute for Advance Studies from the Technical University of Munich (TUM-IAS). DU is supported by the Department of Neurology, Klinikum rechts der Isar of the Technical University of Munich. KD is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research under award number DE-SC0022150. NT is a co-founder of Infinite Biomedical Technologies and Vigilant Medical Technologies and serves on their board as a scientific advisor. 
His intellectual property has been licensed to Vasopatic Medical and Phantom Robotics, although he has not received any royalties. GC is a shareholder of intouch-robotics GmbH. This study is not related to the company.
CC BY
no
2024-01-15 23:43:47
J Neuroeng Rehabil. 2024 Jan 13; 21:8
oa_package/15/be/PMC10787996.tar.gz
PMC10787997
38218880
Background Tooth development is a time- and space-specific process including the initiation, bud, cap, and bell stages. In the past few decades, the molecular pathways and regulatory mechanisms underlying tooth morphogenesis have been widely explored [ 1 ]. In the tooth germ, the intimate interactions between the dental epithelium and mesenchyme are sequentially controlled by multiple cytokines/signaling molecules, including bone morphogenetic proteins (BMPs), Wnt, and Shh [ 2 – 4 ]. During the bell stage, the inner enamel epithelial cells differentiate into enamel-secreting ameloblasts, while the adjacent dental papilla mesenchymal cells polarize and differentiate into odontoblasts that secrete dentin matrix [ 5 ]. Subsequently, the dental papilla mesenchyme encompassed by the accumulating dentin matrix forms the dental pulp. The outer enamel and dentin are the hard components of the tooth and protect the dental pulp tissues. The dentin and dental pulp are together called the dentin-pulp complex because of their close relationship in biological development and physiological structure. The dentin-pulp complex is crucial for the vitality of the tooth, not only because of the physiological functions of the pulp but also because of its regulatory effects on pulp homeostasis. After severe pulp injury, odontoblasts differentiated from dental pulp stem cells (DPSCs) drive reparative dentinogenesis. Human dental pulp stem cells (hDPSCs) are isolated from adult dental pulp tissues and are positive for mesenchymal stem cell markers [ 6 ]. As multipotent progenitors, hDPSCs are able to self-renew and differentiate into dentin-forming odontoblasts [ 7 ]. Multiple growth factors and complex molecular signaling pathways are related to the odontogenic differentiation of hDPSCs and dentinogenesis, including BMPs, insulin-like growth factor, vascular endothelial growth factor and platelet-derived growth factor [ 8 ]. Accordingly, DPSCs are considered a promising and suitable source for in vivo and in vitro studies of tertiary dentin formation and dental pulp regeneration [ 9 – 12 ]. Numerous studies have explored biomolecular capping materials to promote the repair of injured pulp tissue [ 13 , 14 ]. Among these, microRNAs (miRNAs) are promising molecules due to their epigenetic regulatory role in multiple biological processes such as osteo/odontogenic differentiation [ 15 – 20 ]. By binding to the 3′UTR of messenger RNAs, some miRNAs have been shown to influence the odontogenic differentiation of DPSCs at the post-transcriptional level by negatively regulating target genes such as Krüppel-like factor 4, bone morphogenetic protein receptor type II, osterix and glycoprotein non-metastatic melanoma protein B [ 16 , 21 – 23 ]. Furthermore, studies have revealed that some miRNAs can epigenetically regulate other epigenetic factors such as DNA methyltransferases and histone modification enzymes, functioning as epigenetic-microRNAs (epi-miRNAs) [ 24 ]. The miRNAs that are differentially expressed after odontogenic induction of hDPSCs have been analyzed [ 20 ]. However, whether these miRNAs can work as epi-miRNAs, and through which regulatory patterns, still needs to be explored. In addition to epigenetic regulation by miRNAs, posttranslational modifications of histone proteins are also closely associated with odontoblast differentiation and tooth development [ 25 – 28 ]. In the bell stage of the mouse tooth germ, the lysine 27 trimethylation marks on histone 3 (H3K27me3) in the dental papilla showed a spatiotemporal pattern and decreased from the early to the late bell stage. 
During the odontogenic differentiation of human dental papilla cells, the dynamic levels of H3K27me3 marks were accompanied by an upward trend of the specific histone demethylase KDM6B [ 29 ]. Moreover, KDM6B was found to remove the H3K27me3 marks from the promoter region of BMP2 to promote odontogenic differentiation [ 30 , 31 ]. These modifications at the histone level represent a complicated and dynamic process. Studies have analyzed the miRNAs that are differentially expressed during tooth development and the odontogenic differentiation of DPSCs; however, whether these miRNAs interplay with such epigenetic modifiers and further influence tooth development at the level of histone modification still requires further study. Our previous study analyzed the miRNAs in human tooth germs during the bell stage and found significantly varied expression of miR-93-5p [ 32 ]. As a member of the miR-106b-25 cluster, miR-93-5p has been shown to play a functional role in osteoarthritis through its anti-inflammatory effects and to be associated with recovery from sepsis-related acute kidney injury by targeting KDM6B [ 19 , 33 – 36 ]. In the present study, the potential function of miR-93-5p as an epi-miRNA in dentin formation during tooth development and in odontogenic differentiation was investigated.
Methods Oligonucleotide transfection hDPSCs were cultured as previously reported [ 37 ], with approval from the Medical Ethics Committee of West China Hospital of Stomatology, Sichuan University (WCHSIRB-CT-2021-243). Hsa-miR-93-5p mimic (miR-10000093-1-5, Ribobio), mimic NC (miR-1N0000001-1-5, Ribobio), hsa-miR-93-5p inhibitor (miR-20000093-1-5, Ribobio) and inhibitor NC (miR-2N0000001-1-5, Ribobio) were synthesized. Lipofectamine 3000 reagent (Invitrogen) was used for oligonucleotide transfection. The final concentration in the transfection system was 50 nM. The mimic/inhibitor NC served as controls. Dual-luciferase reporter assay Synthetic KDM6B-WT-3′UTR (wild type) and KDM6B-MUT-3′UTR (mutant) gene fragments were cloned into pGLO vectors separately and then co-transfected into 293T cells together with miR-93-5p or NC mimic using Lipofectamine 3000 reagent. Cell suspensions were collected and the luciferase activities of the samples were measured. qRT-PCR and western blotting Total RNA, including miRNAs, was prepared from hDPSCs with the RNeasy Plus Mini Kit (74134, Qiagen) and from mouse tooth tissues with the RNeasy Plus Micro Kit (74034, Qiagen). After reverse transcription, samples were processed for quantitative polymerase chain reaction (qPCR). The primers are listed in Tables 1 and 2. Total proteins were extracted according to the manufacturer's protocol (KeyGEN). The primary antibodies were against β-actin (1:1000, GB11001, Servicebio), Histone 3 (1:1000, GB11102, Servicebio), H3K27me3 (1:700, 6002, Abcam) and KDM6B (1:700, ab169197, Abcam). Secondary goat anti-rabbit/mouse IgG-HRP antibodies (1:5000, L3012-2/L3032-2, Signalway Antibody) were used. Alizarin Red S and ALP staining Base medium (NC) and odontogenic induction medium (OM) for cell culture were prepared as previously described [ 38 ]. After odontogenic induction for 3, 7 and 14 days, hDPSCs were fixed and stained with an Alkaline Phosphatase (ALP) Assay Kit (Beyotime). In addition, mineralized nodules were stained with Alizarin Red S solution (Solarbio) and observed. Images were acquired with an inverted light microscope (Olympus, Japan). Chromatin immunoprecipitation (ChIP) assays Cells were harvested after transfection with miR-93-5p mimic and odontogenic induction for 7 days. Enzymatically processed chromatin was obtained with the EZ-Zyme Chromatin Prep Kit (17-375, Millipore). The EZ-Magna ChIP HiSens kit (17-10461, Millipore) and antibodies of rabbit anti-H3K27me3 (9733, Cell Signaling Technology), rabbit anti-KDM6B (ab16917, Abcam) and normal rabbit IgG (CS200581, Millipore) were used for the ChIP assay. DNA samples were acquired and then quantified by real-time PCR. The primers are listed in Table 3. Animals The animal studies were approved by the Medical Ethics Committee of West China Hospital of Stomatology, Sichuan University (WCHSIRB-D-2021-321). Embryos and newborn mice at embryonic day 17.5 and postnatal days 0 and 3 (E17.5, P0 and P3) were obtained from time-mated pregnant C57BL/6 mice (Chengdu Dossy Experimental Animals Co., Ltd.). After the mice were euthanized, the dental papilla and enamel organ tissues of the mandibular first molars were separated under a transmitted light microscope. Five-week-old male Sprague Dawley rats (Chengdu Dossy Experimental Animals Co., Ltd.) were used for pulpotomy. 
Rats were randomly separated into six groups (5 rats per group), including a group without pulpotomy (Control) and the capping groups: Vitapex (Morita, Japan), lentivirus-scramble (GeneCopoeia), KDM6B-overexpression (pEZ-Lv105 lentivirus vector, GeneCopoeia), AAV-scramble (5′-CGCTGAGTACTTCGAAATGTC-3′, Genechem) and AAV-miR-93-5p inhibitor (5′-ACCGCTACCTGCACGAACAGCACTTTGTTTTT-3′, GV479 vector, Genechem). Rats were anesthetized and cavities on the occlusal surfaces of the maxillary first molars were prepared with 1/4-inch burs under water cooling. The dentin debris on the pulp wound was flushed away with sterile saline. After the pulp surface was cleaned and covered by fresh blood, an aseptic cotton pellet soaked in sterile saline was pressed onto the pulp surface to stop the bleeding. After the hemorrhage was under control, gelatin sponges were used to deliver the capping agents. The cavities were protected with a thin layer of glass-ionomer cement and finally sealed with composite resin. After 2 and 4 weeks, all rats were euthanized and the maxillae were fixed in 4% paraformaldehyde. For observation of enhanced green fluorescent protein, the samples were embedded in Tissue-Tek O.C.T. Compound (Sakura). Tissue sections (6 μm) were obtained and photographed. MicroCT The rats' maxillae were collected for microCT analysis before decalcification. Tertiary dentin was analyzed with a micro-CT scanner (μCT50, SCANCO MEDICAL AG) at a scanning resolution of 8 μm pixel size under the following settings: 70 kVp, 200 μA, Al 0.5 mm, 1 × 300 ms. Histologic and immunologic staining The decalcified samples were embedded in paraffin and cut into slices. Hematoxylin and eosin (Beyotime) staining was performed according to the instructions. For immunohistochemistry staining, the antibody was against BMP2 (AF5163, 1:200, Affinity Biosciences). For immunofluorescence staining, the antibodies were against H3K27me3 (9733, 1:200, Cell Signaling Technology) and KDM6B (ab169197, 1:250, Abcam), with FITC-conjugated secondary antibodies (1:400, Santa Cruz Biotechnology). Images were acquired by microscopy (Olympus, Japan). Statistical analysis Relative mRNA levels were normalized to GAPDH. Relative microRNA levels were normalized to U6. Numerical data are presented as mean ± SD. GraphPad Prism 7 was used for data analysis. Student's t test or ANOVA followed by Tukey's test was used to evaluate statistical significance. P values < 0.05 were considered statistically significant.
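A point of clarification on the qPCR quantification above: the methods state that mRNA levels were normalized to GAPDH and miRNA levels to U6 but do not give the formula. The sketch below therefore assumes the standard 2^-ΔΔCt approach, with purely hypothetical Ct values, simply to illustrate how such relative expression values are computed.

```python
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Relative expression by the 2^-ddCt method (assumed here, not stated in the paper).

    ct_* are mean quantification-cycle (Ct) values; *_ctrl are from the control group.
    """
    d_ct_sample = ct_target - ct_reference            # normalize to GAPDH or U6
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical example: a target gene vs GAPDH in treated vs control hDPSCs
fold_change = relative_expression(ct_target=24.8, ct_reference=18.0,
                                  ct_target_ctrl=23.5, ct_reference_ctrl=18.1)
print(f"fold change: {fold_change:.2f}")   # values < 1 indicate downregulation
```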
Results MicroRNA-93-5p downregulation is paralleled by KDM6B upregulation in tooth development and odontogenic differentiation of DPSCs In our previous study [ 32 ], the expression of miRNAs from human tooth germs in the bell stage was analyzed by miRNA microarray. The differentially expressed miRNAs are listed in the heat map (Additional file 1 : Fig. S1A), which suggested that miR-93-5p in the human tooth germ was significantly decreased during the bell stage. A previous study confirmed that KDM6B, the specific demethylase of H3K27me3 marks, is a key epigenetic regulator during dental papilla development and is dynamically expressed in a spatiotemporal pattern [ 29 ]. To further investigate epi-miRNAs that interact with histone modification in dentin formation, we then searched the miRNA databases TargetScanHuman7.2, miRbase Target and miRDB for miRNAs that are not only differentially expressed during the bell stage but also predicted to target KDM6B. As a result, miR-93-5p was predicted to target KDM6B (Additional file 1 : Fig. S1B). Moreover, the dynamic expression trend of miR-93-5p in the mouse tooth germ from the early to the late bell stage was investigated. Dental papillae from mouse first molar germs at embryonic day 17.5 (E17.5) and postnatal days 0 and 3 (P0, P3) were separated under a light microscope (Fig. 1 A). The dental epithelial organ tissues expressed the specific epithelial marker cytokeratin 14 (Ck14), while the dental papilla tissues significantly expressed the specific mesenchymal marker Vimentin (Fig. 1 B, C). In the developing mouse dental papilla tissues, the expression of the odontogenic genes collagen type-1α (Col-1α) and osterix (Osx) was upregulated from E17.5 to P3 (Fig. 1 D). Along with the odontogenic differentiation of mouse dental papilla mesenchymal cells, miR-93-5p was downregulated (Fig. 1 E) while Kdm6b showed an upward trend (Fig. 1 F). To further delineate the expression pattern of KDM6B and miR-93-5p during the odontogenic differentiation of adult dental mesenchymal cells, hDPSCs were cultured under odontogenic conditions. MiR-93-5p was downregulated when hDPSCs differentiated into odontoblasts (Fig. 1 G). Furthermore, the expression of KDM6B was upregulated at both the mRNA and protein levels (Fig. 1 H, I). MicroRNA-93-5p regulates odontogenic differentiation of hDPSCs MicroRNA-93-5p mimic or inhibitor was transfected into hDPSCs effectively (Figs. 2 A, 3 A). After miR-93-5p mimic transfection and odontogenic induction, ALP activity was significantly suppressed and mineralized nodule formation was reduced in hDPSCs (Fig. 2 B, C). The mRNA expression of the odontogenic genes OSX, ALP, osteocalcin (OCN) and COL-1α was significantly suppressed (Fig. 2 D–G). In contrast, miR-93-5p inhibitor treatment promoted ALP activity and mineralized nodule formation in hDPSCs, and accordingly the mineralization indicators above were upregulated (Fig. 3 B–G). Together, these data indicate that miR-93-5p functionally regulates the odontogenic differentiation of hDPSCs. MicroRNA-93-5p targets KDM6B and influences H3K27me3 marks of BMP2 During the odontogenic differentiation of hDPSCs, the H3K27me3 marks were downregulated (Fig. 4 A). After miR-93-5p mimic treatment, H3K27me3 marks in hDPSCs were significantly enriched (Fig. 4 B, C). KDM6B was targeted by miR-93-5p and downregulated, and the binding site on KDM6B was validated (Fig. 4 D–G). The H3K27me3 methylases EZH2, SUZ12, and EED showed no difference in expression after miR-93-5p mimic treatment (Fig. 4 F). 
BMP2 was further found to be downregulated in hDPSCs after miR-93-5p mimic transfection (Fig. 4 H). To examine how miR-93-5p affected BMP2 transcription, ChIP-qPCR assays were conducted. After miR-93-5p mimic transfection and odontogenic induction for 7 days, the occupancy of KDM6B on the BMP2 promoter was decreased (Fig. 5 A). Accordingly, increased H3K27me3 marks on the BMP2 promoter mirrored the loss of KDM6B occupancy (Fig. 5 D). The different levels of KDM6B affinity on the promoter regions of OSX and OCN had no significant effects on their H3K27me3 marks (Fig. 3 B, C, E, F). These results suggested that miR-93-5p can influence H3K27me3 marks in the BMP2 promoter region by targeting KDM6B, thereby epigenetically regulating the odontogenic differentiation of hDPSCs. MicroRNA-93-5p inhibitor induces dentin formation in the rat pulpotomy model Pulpotomy was performed on the rats' maxillary first molars and the pulp cutting surfaces were capped with gelatin sponges carrying the respective agents (Additional file 1 : Fig. S2A–H). Fluorescence observation revealed that the capping agents with lentivirus and AAV vectors were successfully transfected into the residual pulp of the rats' molars (Additional file 1 : Fig. S2I). MicroCT analysis showed that KDM6B-overexpression and miR-93-5p inhibitor treatment effectively promoted the formation of dentin bridges over the openings of the tooth root canals after 4 weeks (Fig. 6 A). In the rat dental pulp, KDM6B-overexpression and miR-93-5p inhibitor treatment upregulated KDM6B, accompanied by the downregulation of H3K27me3 marks, in accordance with the results in cultured hDPSCs (Fig. 6 B, C). H&E staining showed necrotic pulp without tertiary dentin formation in the lentivirus-scramble and AAV-scramble groups, whereas KDM6B-overexpression and miR-93-5p inhibitor treatment induced tertiary dentin formation above the pulp surfaces and protected the residual pulp tissues from inflammation (Fig. 6 D). Accordingly, KDM6B-overexpression and miR-93-5p inhibitor treatment upregulated the expression of BMP2 in the residual pulp tissues (Fig. 7 ).
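A brief note on how ChIP-qPCR occupancy values such as those reported above are commonly quantified: the paper states only that the ChIP DNA was quantified by real-time PCR, so the sketch below assumes the widely used percent-of-input calculation with an input-dilution correction and hypothetical Ct values; it illustrates the calculation and is not the authors' actual analysis.

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """Percent-input quantification of ChIP-qPCR (assumed method, illustrative only).

    ct_ip          : Ct of the immunoprecipitated (e.g., anti-KDM6B) sample
    ct_input       : Ct measured on the diluted input chromatin
    input_fraction : fraction of chromatin saved as input (e.g., 1%)
    """
    # Adjust the input Ct to represent 100% of the starting chromatin
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Hypothetical promoter-region Cts for an immunoprecipitated sample and its 1% input
print(f"{percent_input(ct_ip=28.5, ct_input=26.0):.2f}% of input")
```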
Discussion MicroRNAs play an important role in organ development and pathological changes, not only by directly targeting mRNAs but also through their complex interactions with other epigenetic factors. Some miRNAs work as epi-miRNAs and create controlled feedback loops by interacting with DNA methylation or histone modification marks. The epi-miRNAs related to the differentiation and proliferation of embryonic pluripotent stem cells are highly valued as potential molecular drugs for disease management and tissue regeneration [ 39 ]. As small molecular epigenetic factors, miRNAs have been confirmed to regulate multiple signaling molecules throughout the process of odontogenesis by targeting various genes, especially those associated with cell differentiation [ 20 , 40 ]. During odontogenesis, miR-34a can indirectly regulate the expression of ALP and promote the odontogenic differentiation of dental apical papilla cells by inhibiting the Notch pathway [ 32 , 41 ]. MiR-27 and miR-338-3p can promote odontoblast differentiation by activating Wnt/β-catenin signaling and by directly suppressing RUNX2 [ 42 , 43 ]. As an epi-miRNA, miR-720 can suppress NANOG via DNMT3A and DNMT3B and accordingly regulate the proliferation and odontogenic differentiation of DPSCs [ 44 ]. Trimethylation of H3K27 is a repressive epigenetic mark and is crucial for the expression of relevant genes during tooth development. Studies have demonstrated that the specific demethylase KDM6B is able to activate the expression of the odontogenesis-associated genes OSX, OCN and BMP2 in dental mesenchymal stem cells by regulating H3K27me3 marks [ 28 , 45 ]. Since the bell stage of tooth development is a critical period for dentin formation, the key factors and epigenetic machinery involved in this process should be well studied to explore innovative therapies for dentin regeneration. In our previous study, the H3K27me3 marks changed in a spatiotemporal manner during the bell stage of tooth development. Considering the complex and multi-level relationships between epigenetic factors, we further analyzed the miRNAs expressed in human tooth germs between the early and late bell stages by microarray analysis. After querying the miRNA databases (TargetScanHuman7.2, miRbase Target and miRDB), miR-93-5p was identified as the only candidate miRNA that was differentially expressed during the process of dentin formation and predicted to target KDM6B. In addition, the H3K27me3 methylases EZH2, SUZ12, and EED were not target genes of miR-93-5p. The expression of EZH2, SUZ12, and EED showed no significant difference after treatment with miR-93-5p mimics or inhibitors. These results suggested that miR-93-5p influences H3K27me3 by targeting KDM6B rather than the H3K27me3 methylases. In a study of acute kidney injury, the regulatory axis of KDM6B/H3K27me3/TNF-α was confirmed and the targeting site of miR-93-5p on KDM6B was identified by dual-luciferase reporter assay [ 36 ]. However, the expression pattern of, and the underlying interaction between, miR-93-5p and the demethylase KDM6B in odontogenesis, especially in dentinogenesis, had not been reported. In the present study, upregulation of miR-93-5p suppressed odontogenic differentiation, while inhibition of miR-93-5p promoted the differentiation of hDPSCs into odontoblasts. These results suggested that miR-93-5p can work as an epi-miRNA and effectively regulate the odontogenic differentiation of hDPSCs through a multi-level epigenetic mechanism. 
Dentinogenesis and osteogenesis are analogous processes of synthesizing extracellular matrix for hard tissue formation and share similar mineralization genes, including OSX, OCN and BMP2. Previous studies have confirmed that KDM6B depletion can suppress the expression of OSX, OCN and BMP2, as well as the secretion of mineral matrix [ 28 , 45 , 46 ]. These findings are consistent with our present results. The ChIP-qPCR data showed that miR-93-5p suppressed the specific recruitment of KDM6B to the promoter region of BMP2 and consequently inhibited BMP2 expression by influencing the H3K27me3 marks on the promoter region. The H3K27me3 marks and KDM6B affinities on the promoter regions of OSX and OCN showed no significant alteration after miR-93-5p mimic treatment, suggesting the existence of more complex and finer mechanisms underlying the regulation of OSX and OCN that maximize the benefit in varied tissue microenvironments. As reported, RUNX2 and OSX are early-stage markers of osteo/odontoblastic differentiation, whereas OCN is mainly expressed at late stages [ 47 ]. Studies have reported that in dental mesenchymal stem cells, KDM6B knockdown significantly altered the expression of the downstream target gene DLX2, which is important for biomineralization through the regulation of extracellular matrix proteins including OCN [ 28 , 48 ]. After odontoblastic induction, the overexpression of the lysine acetyltransferase p300 enriches H3K9ac marks on promoter regions and increases the expression of OCN [ 49 ]. During odontogenic differentiation, OSX lies downstream of the IGF-I and MAPK signaling pathways in addition to the BMP-2/Smad/Runx2 axis [ 50 , 51 ]. Besides, the suppressive epigenetic marks H3K9me3 and H3K27me3 show a bivalent modification mode and are located predominantly on OSX during the odontogenic differentiation of dental mesenchymal progenitors [ 52 ]. Additionally, under mineralization induction, the deposition of active H3K4me3 marks on the matrix-related genes OCN, OSX, DMP1 and DSPP effectively promotes the odontogenic differentiation of hDPSCs [ 27 ]. All these studies provide further interpretations of the multiple regulatory mechanisms underlying the expression of OSX and OCN, explaining our relevant results to some extent. Although microRNAs have been reported to function in the odontogenesis of hDPSCs through the BMP2 pathway and the subsequent regulation of the odontoblast markers DSP and DMP-1 [ 16 ], our study is the first to show that miR-93-5p can work as an epi-miRNA by orchestrating an epigenetic network of BMP2 signaling. As the BMP2 pathway strongly influences the odontogenic differentiation of hDPSCs, miR-93-5p showed an effective impact on tertiary dentin formation by regulating the KDM6B/H3K27me3/BMP2 axis. In the current study, we observed that pulp capping agents that either elevated KDM6B expression or inhibited miR-93-5p significantly induced the formation of a dentin bridge in the rat pulpotomy model. Our results enrich the known interactions between epigenetic factors; additionally, the underlying epigenetic regulatory mechanism of miR-93-5p may be a prospective target for dentin regeneration and vital pulp therapy. As promising small biomolecular drugs for pulp regeneration, miRNAs exert therapeutic effects that depend on the mechanisms underlying hDPSC proliferation, odontogenic differentiation, and the inflammatory response [ 53 , 54 ]. MiR-143-5p was reported to regulate odontogenic differentiation by targeting MAPK14 and thus participates in the p38 MAPK signaling pathway [ 55 ]. 
Wnt1 was found to be a target of miR-140-5p, and the downregulation of miR-140-5p promoted the odontogenic differentiation of DPSCs by activating the Wnt1/β-catenin signaling pathway [ 56 ]. In inflamed human dental pulp cells stimulated by lipopolysaccharide, miR-146a and basic fibroblast growth factor worked cooperatively to promote cell proliferation and odontogenic differentiation [ 57 ]. Besides, miRNAs also play a role in tissue defense and repair by regulating inflammation-related genes. MiR-125a-3p has shown odonto-immunomodulatory properties by inhibiting NF-κΒ and TLR signaling [ 16 ]. Mesenchymal stem cell-derived exosomal miR-27b can inhibit sepsis by suppressing KDM6B and the NF-κB signaling pathway [ 58 ]. Interestingly, miR-93-5p has also been shown to attenuate lipopolysaccharide-induced chondrocyte inflammation by targeting TLR4 and further inhibiting NF-κB signaling [ 34 ]. The function of miR-93-5p in regulating inflammation also suggests that miR-93-5p may have a potential advantage for vital pulp therapy.
Conclusions MiR-93-5p can target KDM6B and regulate H3K27me3 marks in the promoter region of BMP2, thus modulating the odontoblastic differentiation of hDPSCs and the formation of tertiary dentin. Our findings may not only advance our knowledge of the epigenetic regulation of pulp injury repair but also provide a potential therapeutic measure to promote the success of vital pulp therapy and regenerative endodontics.
Background Epigenetic factors influence the odontogenic differentiation of dental pulp stem cells and play indispensable roles during tooth development. Some microRNAs can epigenetically regulate other epigenetic factors, such as DNA methyltransferases and histone modification enzymes, functioning as epigenetic-microRNAs. In our previous study, microarray analysis suggested that microRNA-93-5p (miR-93-5p) was differentially expressed during the bell stage in human tooth germs. Prediction tools indicated that miR-93-5p may target lysine-specific demethylase 6B (KDM6B). Therefore, we explored the role of miR-93-5p as an epi-miRNA in tooth development and further investigated the underlying mechanisms of miR-93-5p in regulating odontogenic differentiation and dentin formation. Methods The expression pattern of miR-93-5p and KDM6B in dental pulp stem cells (DPSCs) was examined during tooth development and odontogenic differentiation. Dual-luciferase reporter and ChIP-qPCR assays were used to validate the target and downstream regulatory genes of miR-93-5p in human DPSCs (hDPSCs). Histological analyses and qPCR assays were conducted to investigate the effects of miR-93-5p mimic and inhibitor on the odontogenic differentiation of hDPSCs. A pulpotomy rat model was further established, and microCT and histological analyses were performed to explore the effects of KDM6B overexpression and miR-93-5p inhibition on the formation of tertiary dentin. Results The expression level of miR-93-5p decreased as odontoblasts differentiated, in parallel with elevated expression of the histone demethylase KDM6B. In hDPSCs, miR-93-5p overexpression inhibited odontogenic differentiation and vice versa. MiR-93-5p targeted the 3′ untranslated region (UTR) of KDM6B, thereby inhibiting its protein translation. Furthermore, KDM6B bound the promoter region of BMP2 to demethylate H3K27me3 marks and thus upregulated BMP2 transcription. In the rat pulpotomy model, KDM6B overexpression or miR-93-5p inhibition suppressed the H3K27me3 level in DPSCs and consequently promoted the formation of tertiary dentin. Conclusions MiR-93-5p targets the epigenetic regulator KDM6B and regulates H3K27me3 marks on the BMP2 promoter, thus modulating the odontogenic differentiation of DPSCs and dentin formation. Supplementary Information The online version contains supplementary material available at 10.1186/s12967-024-04862-z. Keywords
Supplementary Information
Abbreviations MicroRNAs Lysine-specific demethylase 6B Lysine 27 trimethylation on histone 3 Dental pulp stem cells Human dental pulp stem cells 3′ Untranslated region Bone morphogenetic protein Alkaline phosphatase Alizarin red S Osterix Collagen type I alpha Osteocalcin Quantitative reverse transcription polymerase chain reaction Chromatin immunoprecipitation Adeno-associated virus Fluorescein isothiocyanate Acknowledgements Not applicable. Author contributions YZ and SW contributed to conception and design, acquisition, analysis and interpretation of data, drafted and critically revised the manuscript. SG and SH contributed to data acquisition and analysis. MW, XZ and XZ contributed to conception and design. LZ and XX contributed to conception and design, drafted and critically revised the manuscript. All authors approved the author list and agreed to be accountable for all aspects of the work. Funding The design of the study, the collection, analysis, and interpretation of data, and the writing of the manuscript were supported by the Sichuan Science and Technology Program (2022NSFSC1358), National Natural Science Foundation of China (81800927, 82170921, 82370947, 81870754), Health Commission of Sichuan Province (21PJ058), and West China Hospital of Stomatology (LCYJ2019-4). Availability of data and materials The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate The present study was approved by the Ethical Committee of the West China School of Stomatology, Sichuan University and State Key Laboratory of Oral Diseases (WCHSIRB-D-2021-243, WCHSIRB-D-2021-321). Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-15 23:43:47
J Transl Med. 2024 Jan 13; 22:54
oa_package/26/21/PMC10787997.tar.gz
PMC10787998
38218764
Background Uterine leiomyosarcoma (ULMS) is a rare and very aggressive mesenchymal tumour with poor 5-year survival rates, accounting for 1.3% of all uterine malignancies [ 1 ]. ULMS metastasizes most frequently to the lung, peritoneum, bone, and liver [ 2 ], whereas metastasis to the heart is very uncommon. The diagnosis of ULMS can be definitively established only after histopathological analysis because the symptoms and signs of ULMS resemble those of benign uterine myomas. However, in patients with early surgical intervention, including hysterectomy and adnexectomy, the prognosis is favourable. In cases of ULMS metastasis to the heart, there are certain treatment challenges that should be considered by a multidisciplinary team. Here, we report our experience with the successful treatment of a solitary ULMS metastasis to the visceral layer of the pericardium. Case presentation Our patient was a 49-year-old female referred to the Department of Cardiac Surgery for scheduled surgery for pericardial neoplasia. Three years prior, the patient had undergone a hysterectomy and adnexectomy owing to ULMS. Because the tumour was limited to the uterus, further chemotherapy or radiotherapy was not needed. Three months before admission to our institution, regular follow-up included magnetic resonance imaging (MRI) of the abdomen and pelvis and a computed tomography (CT) scan of the chest. The MRI of the abdomen and pelvis discovered a neoplasia in the diaphragmatic portion of the pericardium, whereas the chest CT scan showed the tumour mass adjacent to the apex of the heart. No other signs of primary disease relapse or metastases were found. The patient was asymptomatic and in good general condition, and the physical exam was unremarkable. After admission to our institution, further workup included a cardiac MRI that confirmed the finding of a tumour between the two layers of the pericardium (Fig. 1 a) and adjacent to the apex of the heart, whereas the PET/CT scan excluded the presence of metastases other than the intrapericardial tumour. Coronary angiography revealed that the tumour's vascular supply came from the left anterior descending coronary artery (LAD) (Fig. 1 b). The multidisciplinary team concluded that the patient was a candidate for surgery. Surgery included the achievement of diastolic cardiac arrest and resection of the tumour with macroscopically healthy margins 5–8 mm wide. Macroscopically, the parietal layer of the pericardium was completely free of the tumour, which invaded only the apical myocardium of the left ventricle (Fig. 2 a and b). Intraoperative histopathology showed resection edges free of tumour cells. The defect of the left ventricle was reconstructed with a polyester patch and polypropylene sutures placed around the defect (Fig. 2 c and d). The intra- and postoperative course was uneventful. Completed histopathology confirmed the diagnosis of leiomyosarcoma (Fig. 2 e and f), with positive immunohistochemical stains for oestrogen and progesterone receptors confirming the uterine origin of the tumour. The patient was discharged from the hospital after 13 days. Control CT scans of the chest, abdomen and pelvis performed two, six and twelve months later did not show any relapse of the primary disease. Three months after the cardiac surgery, the patient received adjuvant chemotherapy with doxorubicin and dacarbazine. Consecutive control echocardiograms showed a left ventricular ejection fraction of 55% and no pericardial effusion. 
One year after surgery, there are no signs of new metastases, the Eastern Cooperative Oncology Group (ECOG) performance status is grade 0, and the New York Heart Association (NYHA) functional class is I.
Discussion ULMS metastases to the heart are very rare and require thorough deliberation regarding treatment. Even though ULMS metastases are usually related to the advanced stages of the disease [ 2 ], our case showed that after adequate treatment of the early-stage disease, late ULMS metastases are possible, even to very uncommon sites such as the pericardium. Malignant tumours may reach the heart via hematogenous or lymphatic spread, transvenous extension or direct invasion. The malignancies that spread through the lymphatics often seed the pericardium or epicardium, whereas myocardial and endocardial metastases generally arise from hematogenous spread [ 3 ]. The majority of reported ULMS metastases to the heart were intracavitary [ 4 – 9 ]. In our case, the major portion of the metastasis was epicardial, whereas only a smaller part invaded the myocardium, without relation to the cardiac chambers. Although ULMS metastases to the heart are rare, we should always pay attention to this possibility in patients with ULMS. In the context of a past medical history positive for ULMS, and the fact that primary leiomyosarcomas of the heart are extremely rare and constitute less than 0.25% of all cardiac tumours [ 10 ], it was more likely that our patient had metastatic disease rather than a primary leiomyosarcoma of the heart. However, considering the unpredictable nature of malignant diseases, these two entities should be distinguished. We performed immunohistochemical staining for oestrogen and progesterone receptors, which confirmed the uterine origin of the tumour. In our case, regular follow-up ensured early detection of the ULMS cardiac metastasis and early treatment before any symptoms developed. Our multidisciplinary team opted for surgery and adjuvant chemotherapy because the patient had a solitary metastasis, was in good general condition without any symptoms, and was very motivated for the treatment. Surgery is generally recommended only in selected conditions: in patients with intracavitary metastases resulting in significant hemodynamic complications, and in patients with solitary cardiac disease when the primary tumour is controlled and a favourable prognosis is expected [ 3 ]. From a surgical point of view, it is very challenging to judge whether the width of the resection edges is radical enough, because leiomyosarcoma is a very aggressive tumour and remaining tumour cells along the resection edges could result in tumour regrowth and the need for repeat surgery. Moreover, the surgery should at the same time be sufficiently conservative to ensure satisfactory postoperative function of the remaining ventricle. Therefore, intraoperative histopathology might be very helpful and should always be performed to avoid excessive resection as well as recurrent tumour growth after surgery. Although our patient had a late and solitary metastasis of ULMS that was completely resected, according to the current recommendations [ 11 ] the stage of the disease required adjuvant chemotherapy. This approach resulted in the absence of any new metastases at the one-year follow-up. However, the current evidence for adjuvant chemotherapy in oligometastatic ULMS is weak [ 12 , 13 ]. Therefore, we do not know whether chemotherapy contributed to the 1-year relapse-free survival after surgery in our patient, and further studies are certainly needed to clarify this issue.
Conclusion Although ULMS itself and ULMS metastases to the heart, especially to the visceral layer of the pericardium, are very rare, such a presentation remains possible. Our case demonstrates that strict surveillance of patients with ULMS, even after successful treatment of the early-stage disease, is of utmost importance to reveal metastatic disease to the heart in a timely manner and to treat it with favourable outcomes. Such cases should always be carefully discussed by a multidisciplinary team in a tertiary centre, and surgery with adjuvant chemotherapy might be a good approach in patients with a favourable prognosis. Considering the surgical challenges of epicardial metastasis of ULMS, the main aim should be to ensure both radical resection, to avoid repeated tumour growth, and satisfactory function of the remaining myocardium. Therefore, intraoperative histopathology might contribute to surgical decision-making and is strongly recommended.
Background Uterine leiomyosarcoma is a rare and aggressive tumour with a poor prognosis. Its metastases to the heart are even rarer, especially to the epicardium. The majority of reported cardiac metastases of uterine leiomyosarcoma were in the cardiac chambers or intramyocardial. Surgical resection of the uterine leiomyosarcoma in the early stages is the only definitive treatment for this disease. However, in cases of cardiac metastasis, surgery is recommended only in emergencies and in patients with expected favourable outcomes. Case presentation Our patient was a 49-year-old female referred to the Department of Cardiac Surgery for scheduled surgery for pericardial neoplasia. The patient had undergone a hysterectomy and adnexectomy three years prior owing to the uterine leiomyosarcoma. A regular follow-up magnetic resonance imaging of the abdomen and pelvis discovered a neoplasia in the diaphragmatic portion of the pericardium. No other signs of primary disease relapse or metastases were found. The patient was asymptomatic. The multidisciplinary team concluded that the patient was a candidate for surgery. Surgery included the achievement of diastolic cardiac arrest and resection of the tumour. Macroscopically, the parietal layer of the pericardium was completely free of the tumour, which invaded only the apical myocardium of the left ventricle. Completed histopathology confirmed the diagnosis of leiomyosarcoma of uterine origin. Three months after surgery, the patient received adjuvant chemotherapy with doxorubicin and dacarbazine. One year after surgery, there are no signs of new metastases. Conclusions Strict surveillance of patients with uterine leiomyosarcoma after successful treatment of the early stage of the disease is of utmost importance to reveal metastatic disease to the heart in a timely manner and to treat it with favourable outcomes. Surgery with adjuvant chemotherapy might be a good approach in patients with a favourable prognosis. From a surgical point of view, it is challenging to choose a resection-edge width that is radical enough and, at the same time, sufficiently conservative to ensure satisfactory postoperative function of the remaining myocardium and avoid repeated tumour growth. Therefore, intraoperative histopathology should always be performed. Keywords
Author contributions The authors who contributed significantly to the conception and design of the manuscript are K.K., A.L., and I.S. The authors who contributed significantly to the analysis and interpretation of data are K.K., A.L., V.R.L., D.M., I.I., L.S., Z.S.D., H.G., B.B. and I.S. The authors who contributed significantly to drafting the manuscript are K.K., A.L., I.I., and I.S. The authors who contributed significantly to the critical revision of the manuscript for important intellectual content are K.K., A.L., V.R.L., D.M., I.I., L.S., Z.S.D., H.G., B.B. and I.S. All authors read and approved the final manuscript. Funding None. Data availability Not applicable. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Written informed consent was obtained from the patient for the publication of this case report and the accompanying images. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:47
BMC Cardiovasc Disord. 2024 Jan 13; 24:49
oa_package/51/92/PMC10787998.tar.gz
PMC10787999
38218782
Introduction Although cervical cancer is largely preventable through effective screening and vaccination for human papillomavirus (HPV), it still remains a major public health burden in much of the world [ 1 ]. Approximately 90% of cervical cancer incidence and mortality occur in resource-limited settings where the lack of prevention is compounded by limited treatment options [ 2 ]. In much of East Africa, including Kenya, cervical cancer is the most frequent cause of cancer-related death among women [ 3 – 5 ]. While Kenya has adopted the World Health Organization (WHO) recommendations for simplified HPV-based screening strategies, there are major gaps in implementation and substantial loss to follow-up after screening [ 6 , 7 ]. Reasons for loss to follow-up include transportation costs and distance to treatment facilities, stigma, lack of social support, and low levels of personal risk perception or knowledge about HPV and cervical cancer [ 8 , 9 ]. Offering self-collected HPV testing in the community in Western Kenya has been shown to be an effective screening strategy. However, while community-based screening can substantially improve screening rates over baseline and is more cost-effective than facility-based testing [ 10 , 11 ], a crucial limitation is the loss to follow-up of women who test positive for HPV. Novel strategies to increase attendance at both screening and follow-up include visit navigators, transportation vouchers, treatment incentives, and health messaging via mobile phones (mHealth). Several programs have evaluated the combination of reminder telephone calls and travel incentives, which were shown to improve follow-up [ 12 , 13 ]. However, this combination of interventions is labor intensive and costly, and it places additional burdens on health facility staff. mHealth strategies utilizing text messages have the potential to reach large numbers of people through automated messaging about health conditions and services, while requiring relatively low costs and administrative burdens [ 14 ]. One possible solution would be to use text messages as a way of delivering cervical cancer screening results, health messaging and logistical information about follow-up [ 15 ]. mHealth solutions may be particularly suited to Kenya, where 78% of households own or have access to mobile phones [ 16 ] (more than those with access to public water and sanitation services), and many use their mobile phones frequently, as evidenced by over $108 million in cash transfers carried out through mobile phones daily. Mobile phones have been shown to be effective in educating patients about sensitive health-related issues that require confidentiality in various health domains in Kenya, such as HIV prevention [ 17 , 18 ], family planning [ 19 ], and sexually transmitted infections [ 20 ]. Text messages, in particular, have been found useful for reminding patients about medication adherence [ 20 , 21 ] and increasing preventive health visits and outpatient clinic attendance in many low- and middle-income countries (LMICs) [ 22 , 23 ]. As part of a two-phase trial of implementation strategies for cervical cancer screening in western Kenya, our team introduced text messaging to deliver HPV test results and follow-up plans to women [ 24 , 25 ]. In the first phase, while text messaging was a popular and efficient method of results delivery, it did not result in higher rates of treatment uptake when compared with notification through phone calls or home visits. 
From individual interviews at the time of treatment, we found that women wanted clearer and more personalized information when receiving their results. Therefore, in the second phase, we sought to develop and evaluate an intensified mHealth strategy with enhanced text messaging to improve rates of follow-up for treatment after a positive HPV test, through improved understanding of HPV, treatment logistics, and information to share with partners. This paper describes the modifications made to the content of the text messages, informed by feedback from focus group discussions. It further examines the acceptability of enhanced text messages and their impact on treatment uptake by comparing two different community arms in the second phase: one employing standard text messaging and the other utilizing enhanced text messaging.
Methods This study was part of a two-phase cluster-randomized trial evaluating community-based cervical cancer prevention strategies using HPV self-sampling in Migori County, Kenya ( ClinicalTrials.gov identifier: NCT02124252–28/04/2014). The two-phase design allowed the study team, including community partners, to collect feedback and evaluate uptake data between phases to iteratively improve the implementation strategy, with a focus on improving follow-up. Results from the first phase showed that the community-based HPV testing model had higher uptake and lower program costs compared to screening in health facilities [ 10 , 11 , 26 ]. This paper presents a mixed-methods sub-study within the single-arm second phase of the randomized trial, exploring the development, acceptability, and impact of an enhanced mHealth strategy on cryotherapy uptake among women who test positive for HPV. In the second phase, we offered the more effective community-based screening coupled with optimized linkage-to-treatment strategies in six communities. The enhanced linkage strategy included decentralization of treatment sites (increasing from one to four), increased provider training and supervision, and texts tailored to provide further education on cervical cancer and reminders for treatment. The study activities described below were nested within the second phase. Participants and setting Participants included women between the ages of 25 and 65 years, with an intact uterus and cervix, who resided within the six study communities in Migori County, Kenya. Study communities were defined by the sub-locations assigned to one government health facility, with an overall population of approximately 5000. To avoid spillover, we identified communities with non-adjacent borders that had not participated in the first phase of the study. Prior to carrying out the community health campaigns (CHCs), the study team conducted door-to-door enumeration to characterize the study communities more accurately, which is presented in detail elsewhere [ 26 ]. Structure of community health campaign with standard text messaging The process of education and self-collection of specimens for HPV testing at the CHCs is described in detail elsewhere [ 26 ]. After collection, women provided their preference for receiving HPV test results (text message, phone call, or home visit). We used the careHPV test (QIAGEN, Germantown, MD) to collect and process samples, with a goal of providing results to participants within 2 weeks. Text notification was provided through the FrontlineSMS™ program ( https://www.frontlinesms.com/ ). Receipt of HPV test results via text was considered successful if the program confirmed the transmission of the text message, meaning the participant's phone was on and the SIM card was valid or the phone line was active [ 24 ]. The text-based HPV test results notification took into consideration participants' HIV status and HPV test results: 1) HPV negative and HIV positive; 2) HPV negative and HIV negative; 3) HPV positive; and 4) inconclusive HPV test result (Table 1 ). Based on their test result, participants also received guidance regarding their next cervical cancer screening and any necessary treatment following a positive HPV test. Women who opted to receive their HPV test result notification via text in the standard group received one text message. For those who did not follow up for treatment, second and third notification attempts were completed by phone call or home visit. 
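To make the four-category notification logic above concrete, the sketch below maps a participant's HPV result and HIV status to the corresponding message category and flags whether treatment-related follow-up applies. The function name and flag are illustrative assumptions; the actual message wording is defined in Table 1 of the study, and success of delivery and escalation to calls or home visits (described next) are tracked separately.

```python
def notification_category(hpv_result: str, hiv_status: str) -> dict:
    """Pick one of the four message categories described in the text (labels only;
    the actual wording lives in Table 1 of the study). Illustrative sketch."""
    if hpv_result == "inconclusive":
        category = "4: inconclusive HPV test result"
        needs_treatment_followup = False
    elif hpv_result == "positive":
        category = "3: HPV positive"
        needs_treatment_followup = True       # the enhanced arm adds reminder texts
    elif hiv_status == "positive":
        category = "1: HPV negative and HIV positive"
        needs_treatment_followup = False
    else:
        category = "2: HPV negative and HIV negative"
        needs_treatment_followup = False
    return {"category": category, "needs_treatment_followup": needs_treatment_followup}

print(notification_category("positive", "negative"))
```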
Phone call and home visit strategies were deemed successful if the participant was reached and was given their results directly by study staff. Our study staff attempted up to four phone calls or three home visits before determining that a participant was unable to be reached. Development of the enhanced text messaging strategy After the fourth CHC, in July 2018, we conducted two semi-structured focus group discussions (FGDs) with female participants who had been previously screened during the first phase of the trial and had opted for a cell phone-based strategy (text or phone call) for their results notification. These participants were identified and recruited by community health volunteers for their ability to actively engage and provide valuable feedback. Each FGD consisted of 10 participants and lasted for 2.5 hours. We explored myths and misperceptions related to HPV found in qualitative data from the first phase, such as misunderstanding of how HPV is treated, what causes HPV, and the meaning of a positive result. We also examined women’s preferences for sharing information with their partners, barriers to accessing treatment, and the rationale behind their choice of text or phone call for receiving HPV test results. FGD participants were asked to help identify appropriate content and wording to develop messages that would most resonate with women. Given the straightforward nature of participants’ responses to our research topics, we employed structural coding to synthesize our findings. In response to the FGD feedback, the enhanced text messaging strategy included changes in the timing, number, and content of messages (Fig. 1 ). Messages were developed to be clearer, more concise, and more specific to the patient (Table 1 ). To ensure understanding, women who opted to receive their results via text were shown examples of texts at the time of HPV screening. Messages were sent out more frequently; in addition to a text with their results, women received a brief message thanking them for screening and additional treatment reminders if they tested positive. Treatment reminder text messages were tailored to address common barriers to accessing treatment and could include the location of clinics, the time of appointments, and a description of transport options. Evaluation of the enhanced text messaging strategy The enhanced text messaging strategy was deployed in the last two study communities. We collected information about participants through a structured questionnaire administered at three time points: at the CHCs prior to HPV testing (pre-test), immediately after HPV testing (post-test), and after treatment for those who screened positive (follow-up). Our primary outcomes for this study were receipt of HPV test results and treatment uptake. Prior to screening, we collected information on sociodemographic characteristics, clinical history, and phone use behavior, including frequency of use and any barriers to phone ownership, access, or use. At follow-up, we asked participants about the acceptability of text messaging and the role of receiving text messages in their decision to access treatment. Treatment Women who tested HPV positive were referred for evaluation for cryotherapy at local health facilities. Treatment was provided by trained clinic providers, and the study staff kept track of participants who successfully received treatment and their date of treatment.
Treatment was available for up to 3 months at each health facility after participants who tested HPV positive were notified of their result in their respective community. To calculate time to treatment, we only included women who accessed treatment through April 2019. Statistical analysis Descriptive statistics were used to compare the baseline characteristics of the women in the first four communities and the last two communities, as well as between the standard and enhanced text groups. To test bivariate relationships between treatment uptake and categorical demographic variables, we performed chi-squared tests. We used the Kruskal-Wallis test to evaluate continuous variables, and we report the median time and interquartile range in days between screening and notification, notification and treatment access, and screening and treatment access. P values of < 0.05 were considered statistically significant. All analyses were performed using STATA version 16 (College Station, TX: StataCorp LP).
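As a rough illustration of these bivariate tests, the sketch below runs a chi-squared test on a small contingency table, a Kruskal-Wallis test on two groups of notification times, and reports a median with interquartile range. It uses Python's scipy in place of STATA, and all numbers are made-up illustration values, not study data.

```python
# Minimal sketch of the bivariate tests described in the statistical analysis.
import numpy as np
from scipy import stats

# Hypothetical treatment uptake (treated / not treated) by text group
table = np.array([[48, 42],
                  [22, 19]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p_chi:.3f}")

# Hypothetical days from screening to notification for two notification methods
days_text = [14, 16, 18, 15, 17]
days_call = [28, 30, 33, 29, 35]
h, p_kw = stats.kruskal(days_text, days_call)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p_kw:.3f}")

# Median and interquartile range in days for one group
q1, med, q3 = np.percentile(days_text, [25, 50, 75])
print(f"median = {med}, IQR = {q1}-{q3}")
```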
Results Focus groups What FGD participants liked and disliked about results notification via text messaging Participants highlighted several key benefits of receiving their results via text, including convenience, privacy, and control over information sharing. Some women felt there was a lower chance of missing their results when delivered via text. They liked the flexibility of receiving information at any time, even if their phone was off, as it could be accessed later at their convenience, unlike phone calls with a specific place and time. One 26-year-old participant noted, “Even if my phone was off by the time they are sending the message, I will still get the message.” Another 27-year-old participant expressed, “I am not always with my phone all the time. They [study team] can call and at that time I don’t have it, and they may not call again. That means I will miss it, that’s why I prefer texting to a phone call.” Furthermore, some participants appreciated that they could refer back to the text at a later time. “With text, you’ll have that information as long as you want it, that’s why I preferred text to phone call.” (A 27-year-old participant). Participants also expressed a sense of comfort when receiving sensitive information via text due to the privacy it offered. One 26-year-old stated, “I’ll use my own password to read the text...no other person will have access to it.” Several participants specifically noted the increased privacy with text messages compared to other result notification strategies. A 42-year-old participant mentioned, “I chose texting because it has privacy. You have to shout when making or receiving a phone call that even those whom you don’t want to have your information will have it. With text, you read it on your own.” Another participant shared, “I had the opportunity for them [study team] to do a home visit, but I chose not to because people in our village will talk a lot and make statements.” (A 52-year-old participant). An important aspect of privacy was control over how they share their information. They valued having information readily accessible, giving them the autonomy to decide when and with whom to share it. A 42-year-old participant explained, “I’ll receive a text on my phone and I’m the one who will read it. If I want to share it, that will be up to me.” Another woman noted, “I will not be able to explain everything well to him (my husband), but with the text, he can read and get full information.” (A 26-year-old participant). For one participant, the extent of control over sharing information was tied to the specific HPV test result they received. This 50-year-old participant explained, “I preferred text because it allowed me to review the information first before sharing the good news of my negative test result with my husband.” However, with a positive HPV result, a few participants held a contrasting view and preferred not to receive their HPV results via text due to the emotional distress it may cause. One 31-year-old participant mentioned being “stressed to death [if they tested positive for HPV]”, while a 25-year-old participant expressed a desire for privacy as they “didn’t want anybody to know and wanted to avoid stigma [related to HPV].” As a result, they preferred to receive their results through a phone call. 
In addition to the fear around receiving a positive HPV diagnosis via text, there were other disadvantages to receiving results and other health information in this manner, including inability to ask follow-up questions, communication barriers, and possible unfamiliarity with technology. Although participants were provided with contact information for the clinical team, a primary concern was the lack of immediate access to a knowledgeable provider to answer questions or provide more counseling and information about treatment. A 35-year-old woman who opted for phone calls said, “If I had any questions, I could not get my answers right there,” while a 25-year-old participant shared that “you can be counseled through a phone call which is not possible with text. It [phone call] gives you an opportunity to ask questions.” Some participants felt that there would be communication barriers over text. One 40-year-old participant stated, “I preferred phone call because I will have to talk to the person in a language that I understand well.” Although women were asked about their language preference, they remained uncertain about whether the message would be easily comprehensible, suggesting underlying concerns about the complexity of the information provided. A 25-year-old participant reported, “I didn’t know the type of language they were going to use in sending that text.” For one participant, communication barriers would potentially be compounded by lack of familiarity with text messaging. This 32-year-old participant explained, “In case I would test positive, I will know when and where to go for treatment. Through a text, I cannot even ask that. I don’t know how to text.” Ideas to improve text messaging reported by FGD participants Women suggested that the messages be simple, short, and personalized, and that the information conveyed in the messages be educational for the recipient as well as the recipient’s family. Some women commented that texts should be concise, because long messages would make readers lose interest, and that texts should address participants by their names. They also recommended that the messages be sent frequently. One woman reported that notification should be sent 3–4 days prior to actual treatment and should include the specific date and time of when each woman should visit the clinic for treatment. Based on these results, we developed the strategy described above with the content shown in Table 1 . Pilot of the enhanced text messaging strategy Between February and November 2018, 3303 women participated in cervical cancer screening with self-collected HPV tests offered through CHCs in six communities (Table 2 ). Of the 2368 women who underwent cervical cancer screening in the first four communities, almost half (49.4%) chose to receive HPV test results via phone call and less than one-quarter (23.9%) opted for text, making text the least frequently chosen notification method. In the last two communities, where enhanced text messaging notification was offered, over half (51.2%) of the 935 screened women opted for phone call, followed by more than one-quarter (28.2%) opting for text. Among all participants, 555 (16.8%) tested HPV positive, and 257 (46.3%) of the HPV-positive women accessed treatment. HPV rates (15.9% vs. 15.5%; p = 0.943) and treatment uptake (53.3% vs. 53.7%; p = 0.928) did not vary between standard and enhanced text groups. Compared to women in the first four communities, women in the last two communities were younger (37.1 years vs. 38.6 years; p = 0.004), had fewer children (4.5 vs. 
5; p = 0.005), had higher rates of cervical cancer screening prior to the CHCs (20.7% vs. 12.9%; p < 0.001), with a higher proportion of women having completed HPV testing in the past (26.9% vs. 12.5%; p < 0.001), and were more likely to report a positive HIV status (34.9% vs. 20.3%; p < 0.001) and engage in family planning (43.7% vs. 38.9%; p = 0.008). Similar differences were also observed between the standard and enhanced text groups. More women had undergone cervical cancer screening prior to the CHCs (25.8% vs. 18.4%; p < 0.05) and reported living with HIV (32.6% vs. 20.2%; p < 0.001) in the enhanced text group than those in the standard text group. Among all women who attended CHCs, 2749 (83.2%) reported using cell phones daily (Table 3 ). More women who opted for texts reported owning their own phone (92.5%) and being comfortable with reading and writing texts and receiving sensitive information via text than those who opted for phone calls or home visits ( p < 0.001). However, women were less likely to share their positive HPV test result with their partners if they opted for texts compared to those who opted for phone calls and home visits. Among those who opted for texts, 12.6% requested their results via phone call and 2.1% via home visit in the event of a positive test result. There was a significant difference in notification of results at first attempt across the text, phone call, and home visit categories ( p < 0.001) (Table 3 ). All women who opted for text received their test result at the first attempt, followed by those who opted for home visit (86.8%) and phone calls (54.5%). For those who opted for text and accessed treatment, most (82.5%; p < 0.001) did so after receiving the first text notification, while significantly fewer women sought treatment after the second (10.9%) and third (6.6%) text notifications. The median time it took from screening to notification of test results varied by notification method, with the text messaging strategy delivering the results most efficiently (16 days; p < 0.001), followed by home visit (20 days) and phone calls (31 days) (Table 4 ). HPV-positive women who opted for text messaging took the longest time to access treatment after receiving their test results (25 days), while those who opted for phone calls had the shortest (7 days).
Discussion We sought to develop an enhanced text messaging strategy to increase completion of the cervical cancer screening cascade in a community-based HPV screening program in partnership with women in western Kenya. We found that, although providing women various options for notification was valued, the chosen notification modality had no effect on treatment uptake, which remained around 50%. Treatment uptake did not improve after incorporating an in-person review of text content, increased frequency, and enhanced text messaging, which was clearer, more concise, and more personalized. Besides the enhanced text messaging strategy, our team also aimed to make cervical cancer prevention services more accessible and reduce structural barriers by increasing treatment sites and providing additional training and supervision for medical providers in cervical cancer treatment. Overall, treatment uptake did not differ across notification methods. However, women in the communities where the enhancement measures were implemented accessed cervical cancer treatment sooner than women in other communities after receiving their positive HPV result by phone call or home visit. This decrease in time from HPV result notification to treatment may be explained by the enhanced linkage to care strategies and in-person contact with study staff, rather than via text, to counsel women or address their apprehension toward treatment. Similar to our study, one study in Tanzania found that one-way text messages had no effect on the follow-up screening rate among HPV-positive women and instead suggested that provider-initiated phone calls to educate women on the importance of rescreening may be more effective [ 27 ]. While we did not observe a greater treatment uptake with the text messaging strategy compared to phone calls or home visits—in fact, time before accessing treatment was longest in the text messaging group—it is important to highlight that the text messaging strategy was also not associated with a lower treatment uptake. The delay in accessing treatment among women who received enhanced text messages and tested positive for HPV in our study differed from a study in Argentina, where text messages served as a cue to action for women to visit the health center to obtain their HPV test results [ 28 ]. In their study, 69% visited a health center within 7 days of receiving the text, including 7.5% on the same day. Notably, their study area primarily consisted of urban populations, whereas our study focused on rural communities in Kenya. The limited impact of the text enhancements in our study suggests that there are higher structural barriers to treatment acquisition in this setting that are difficult to offset by enhanced text messaging strategy alone. Such barriers include a long travel distance to the clinic, transportation costs, and a misalignment between work and clinic hours. Similar to our study, a pilot study of community-based HPV self-sampling in rural Uganda, which used SMS for result notifications, found that only 22% of women with positive HPV results attended the clinic for follow-up, identifying transportation challenges as a significant barrier [ 29 ]. However, one study based in Tanzania showed that women who received a transportation voucher via text to return to the clinic for cervical cancer screening, as well as 15 texts promoting behavioral change, were 1.53 times more likely to attend screening than those who only received the texts [ 30 ]. 
Although the overall screening uptake was relatively low in the study, their findings highlight the potential impact of mHealth in reducing socioeconomic and systemic barriers for women to access cervical cancer services, especially in rural areas. Most women felt comfortable receiving either test result via text. However, it is notable that some women chose to receive results via text in the case of negative HPV test results, but via phone calls or home visits if the results were positive. These findings are critical for understanding the gaps in the cervical cancer care continuum. One study in South Africa suggested that in the case of abnormal Pap smear results, a text should instruct the women to come to the clinic where the results are then shared during face-to-face discussions with a medical provider [ 12 ], given the concerns around privacy of texts and fear of stigma, an important consideration when women may not have their own phone and may share it with their family. Another study based in Argentina used a text messaging strategy to connect women with triage Pap post-HPV testing and to inform women about their HPV test result availability while replacing the term “HPV-testing” with the term “self-collection.” The authors hypothesized that this was one of the approaches that helped women reduce concerns related to privacy and increased clinic attendance rates in their study [ 28 , 31 ]. Although our study team informed women of their positive HPV test result via text and used the term “HPV,” we attempted to reduce stigma toward HPV and ensure confidentiality by asking women to choose their preferred results notification method (phone call, text, or home visit) depending on their HPV test result (positive or negative), making this process as individualized as possible. Nonetheless, more research should be conducted to develop a culturally tailored text intervention for improving treatment uptake. The challenges inherent in text messaging highlight the advantages of and potential need for greater individual interaction via phone calls or home visits to provide education and link women to treatment. In fact, in our FGDs, women reported that the inability to ask follow-up questions was a negative aspect of receiving test results via text. Two-way messaging, which has been shown to be more effective in various behavior change interventions compared to one-way interventions [ 30 , 32 ], could mitigate these challenges and allow women to actively engage in cervical cancer education and services, especially those in resource-limited settings. One study based in Portugal found that adding more than one communication method was more effective than sending only written invitation letters in increasing cervical cancer screening uptake [ 33 , 34 ]. Their study included a 3-step invitation to screening, in which an automated reminder via text or phone call (step 1), manual phone call (step 2), and face-to-face interview (step 3) were applied sequentially and demonstrated that screening uptake was increased by 17% among women who received the invitation through step 3 compared to those receiving the standard invitation letter. A similar multistep, multimodal system that integrates HPV test result notification via text, phone call, and home visit could be applied in western Kenya to optimize linkage to care. Our study had several limitations. First, we asked whether enhanced text messaging helped women understand why they needed treatment or how they could access treatment.
We relied on self-reporting and did not require participants to share what their understanding was (they simply indicated “yes” or “no”). Second, we only included survey items about the effect of text notifications on the decision-making process with the use of enhanced text notification, and not the use of standard text notification. Therefore, we were not able to accurately compare the varying effects of standard text and enhanced text messaging on treatment acquisition. Third, the measurement of results notification timing for text messages may not be completely accurate, as receipt of the test results was recorded after a message was sent and registered in an active phone; the actual reading of the message was not confirmed by the women. This part of the data collection relied on the transmission of text messages through the Frontline SMS program, in which we did not require a confirmation text to avoid data costs for the women. Fourth, our study did not explore the acceptability of the content in enhanced text messages for participants who sought treatment and those who did not. In contrast, a similar study also developed text messages for women during focus groups but validated the content through interviews with health providers and women, considering both perspectives [ 35 ]. Their findings emphasized personalized and persuasive language with a professional tone to encourage women to visit the clinic for their HPV test results. In our study, although we assessed the impact of enhanced text messaging on treatment uptake, we did not directly investigate the effect of the message content or wording on the participants who received these messages. Understanding the impact of specific language and content on women’s decisions to access treatment, their privacy experiences with text-based test results, whether positive or negative, and their perception of sender legitimacy, a crucial element in the context of mobile-based interventions, would have been of immense value. Last, we encountered delays in HPV test kit availability in the middle of the study due to slow customs clearance of the test kits, leading to delays in planned CHCs. This may have contributed to the low uptake of screening and treatment among women. It is also an example of one of the external logistical barriers faced by women in this rural area for which the study could not control.
Conclusion In this cohort of women undergoing community-based HPV testing, over three-quarters of the participants preferred a cell phone-based strategy (phone call or text messaging) for results delivery. There was no difference in treatment uptake rates between standard and enhanced text groups, even after the text messaging strategy was enhanced with increased messages and adapted content. This enhanced text strategy is one attempt to address low linkage to care in cervical cancer amidst the overall poor transportation, education, and supply resources in Kenya. While enhanced text messaging did not garner higher treatment uptake, reflecting the multiple factors impacting women’s ability to complete the care cascade in Kenya, it did not result in lower treatment rates or a negative experience for women. As cell phone ownership increases, these results may help programs to provide different options for results notification, though there remains a need to address the structural and logistical barriers that may inhibit women’s decision or ability to follow up with treatment. Future programs could therefore offer multiple results notification methods, including a combination of cell phone-based strategy and home visit, to ensure that they meet the needs of their populations.
Background Mobile health (mHealth) has become an increasingly popular strategy to improve healthcare delivery and health outcomes. Communicating results and health education via text may facilitate program planning and promote better engagement in care for women undergoing human papillomavirus (HPV) screening. We sought to develop and evaluate an mHealth strategy with enhanced text messaging to improve follow-up throughout the cervical cancer screening cascade. Methods Women aged 25–65 participated in HPV testing in six community health campaigns (CHCs) in western Kenya as part of a single arm of a cluster-randomized trial. Women received their HPV results via text message, phone call, or home visit. Those who opted for text in the first four communities received “standard” texts. After completing the fourth CHC, we conducted two semi-structured focus group discussions with women to develop an “enhanced” text strategy, including modifying the content, number, and timing of texts, for the subsequent two communities. We compared the overall receipt of results and follow-up for treatment evaluation among women in standard and enhanced text groups. Results Among 2368 women who were screened in the first four communities, 566 (23.9%) received results via text, 1170 (49.4%) via phone call, and 632 (26.7%) via home visit. In the communities where enhanced text notification was offered, 264 of the 935 screened women (28.2%) opted for text, 474 (51.2%) opted for phone call, and 192 (20.5%) for home visit. Among 555 women (16.8%) who tested HPV-positive, 257 (46.3%) accessed treatment, with no difference in treatment uptake between the standard text group (48/90, 53.3%) and the enhanced text group (22/41, 53.7%). More women in the enhanced text group had prior cervical cancer screening (25.8% vs. 18.4%; p < 0.05) and reported living with HIV (32.6% vs. 20.2%; p < 0.001) than those in the standard text group. Conclusions Modifying the content and number of texts as an enhanced text messaging strategy was not sufficient to increase follow-up in an HPV-based cervical cancer screening program in western Kenya. A one-size approach to mHealth delivery does not meet the needs of all women in this region. More comprehensive programs are needed to improve linkage to care to further reduce structural and logistical barriers to cervical cancer treatment. Keywords
We would like to thank the participants as well as the research assistants and health care providers for their support of and contributions to this study. Authors’ contributions All authors (YC, SI, LP, EAB, and MJH) were involved in the preparation, review, and editing of the final manuscript. MJH and YC were responsible for the conception and design of the manuscript. YC, SI, LP, and MJH carried out data analysis and interpretation. YC and MJH drafted the manuscript with contributions from the other authors. Funding This research was funded by the National Institutes of Health (R01 CA188248). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Availability of data and materials Per the Kenya Medical Research Institute (KEMRI) guidelines, the data will be made available upon reasonable requests to the corresponding author. Declarations Ethics approval and consent to participate This study received ethical approval from Duke University School of Medicine (IRB No. Pro00077442) and the Kenya Medical Research Institute Scientific and Ethical Review Unit (SERU No. 2918). All methods were carried out in accordance with relevant institutional and national guidelines and regulations. All participants provided written informed consent. To ensure confidentiality and anonymity, data were deidentified. The study team’s contact information was provided to participants during and after the consent process to address any questions related to the study. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:47
BMC Womens Health. 2024 Jan 13; 24:32
oa_package/e4/fc/PMC10787999.tar.gz
PMC10788000
38218833
Introduction Endometriosis is a chronic gynecological disorder characterized by the presence of endometrial-like tissue outside the uterus, most commonly in the pelvic cavity [ 1 ]. It affects approximately 10% of women of reproductive age and is associated with debilitating symptoms such as pelvic pain, dysmenorrhea, dyspareunia, and infertility [ 1 ]. The pathogenesis of endometriosis remains poorly understood, and there is a need for reliable biomarkers that can aid in its diagnosis and management [ 2 ]. Brain-derived neurotrophic factor (BDNF) is a neurotrophin that plays a crucial role in the development, survival, and plasticity of neurons in the central nervous system [ 3 ]. It has been implicated in various physiological processes, including neuronal growth, synaptic plasticity, and pain modulation [ 3 , 4 ]. BDNF is primarily synthesized in the brain, but emerging evidence suggests that it is also expressed in peripheral tissues, including the reproductive system [ 5 ]. Recent studies have proposed a potential association between BDNF and endometriosis, highlighting BDNF as a promising candidate biomarker for this condition [ 6 , 7 ]. Elevated levels of BDNF have been reported in the peritoneal fluid, serum, and endometrial tissue of women with endometriosis compared to healthy controls [ 8 – 10 ]. These findings suggest that BDNF may be involved in the pathogenesis of endometriosis and could potentially serve as a diagnostic or prognostic marker [ 7 , 11 ]. However, the existing literature on the association between BDNF and endometriosis is still limited and characterized by inconsistencies in findings. Therefore, a comprehensive evaluation of the available evidence is warranted to clarify the role of BDNF in endometriosis. The aim of this systematic review and meta-analysis is to evaluate the existing evidence on the association between BDNF levels and endometriosis.
Methods The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed for conducting the present study. More details about PRISMA can be found in Supplementary File Table 1 . The protocol of this study is registered in PROSPERO with the code CRD42023439147. Search strategy A systematic search was performed in four international bibliometric databases, including Scopus, Embase, PubMed, and Web of Science, from inception up to 12 June 2023, with the goal of identifying any published article which evaluated the altered levels of BDNF in endometriosis. Regarding our systematic search strategy, we categorized the keywords into two different groups, the endometriosis group and the BDNF group. In the endometriosis group, we used any possible keyword related to endometriosis, including endometriosis, adenomyosis, or abnormal uterine tissue. In the BDNF group, we used all possible keywords related to BDNF, such as BDNF or brain-derived neurotrophic factor. We used “OR” between the keywords in each group and utilized “AND” between the groups. Supplementary Table 2 represents the search string for each database in detail. Eligibility criteria We included studies that evaluated the levels of BDNF in endometriosis using enzyme-linked immunosorbent assays (ELISA) or any other methods. The exclusion criteria included animal studies, in-vitro studies, meta-analyses, review articles, letters to editors, case reports, and congress abstracts. We did not impose any language restriction regarding the original language of the identified articles. Data extraction and quality assessment The initial screening of the identified studies, based on their titles and abstracts, was performed by two independent reviewers in order to exclude irrelevant studies. Then, the full texts of the remaining articles were evaluated for data extraction. Two independent reviewers performed the data extraction, based on an Excel sheet, containing the first author’s names, country of origin, year of publication, type of endometriosis, the stage of the endometriosis, source of the BDNF, age of the patients, and sample sizes of the studies. Moreover, two independent reviewers assessed the quality of the included studies, using the Newcastle-Ottawa Scale (NOS) tool. Data synthesis and meta-analysis The meta-analysis utilized a random-effects model to determine the combined effect size and evaluate its statistical significance. The standardized mean difference (SMD) and its corresponding 95% confidence intervals (95% CIs) were employed to present the pooled effect sizes. Sensitivity analysis was performed by including only the studies that assessed blood levels of BDNF. Assessment of publication bias was conducted through the implementation of funnel plots and Egger’s regression test.
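As an illustration of the random-effects pooling described above, the sketch below applies the DerSimonian-Laird estimator to a set of made-up standardized mean differences and variances and reports the pooled SMD, its 95% CI, and I². It is a minimal sketch only, not the extracted study data, and dedicated meta-analysis software would normally be used in practice.

```python
# Minimal DerSimonian-Laird random-effects pooling of SMDs (illustration only).
import numpy as np

smd = np.array([0.5, 1.2, 0.3, 1.5, 0.9])        # per-study effect sizes (hypothetical)
var = np.array([0.04, 0.06, 0.05, 0.08, 0.07])   # per-study variances (hypothetical)

w_fixed = 1.0 / var
mean_fixed = np.sum(w_fixed * smd) / np.sum(w_fixed)
q = np.sum(w_fixed * (smd - mean_fixed) ** 2)    # Cochran's Q
df = len(smd) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)                    # between-study variance
i2 = max(0.0, (q - df) / q) * 100                # heterogeneity, I^2 in percent

w_random = 1.0 / (var + tau2)
pooled = np.sum(w_random * smd) / np.sum(w_random)
se = np.sqrt(1.0 / np.sum(w_random))
print(f"pooled SMD = {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f}), I2 = {i2:.0f}%")
```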
Results Study selection A systematic search of electronic databases yielded a total of 192 articles. After removing duplicates and applying the inclusion and exclusion criteria, which was done by two reviewers (A.S & S.R), a final set of 12 articles was included in this systematic review and meta-analysis [ 6 , 8 – 18 ]. The characteristics of the included studies are presented in Table 1 . The inclusion criteria were as follows: (1) patient population: women of reproductive age diagnosed with endometriosis; (2) intervention: evaluating the level of BDNF in serum or plasma; (3) comparison: healthy women; (4) outcome: impact on the BDNF level; (5) setting/time: all; and (6) study design: randomized controlled trials, retrospective studies, and prospective studies. Studies that were conducted on animals, did not meet our inclusion criteria, or were designed as case reports or case series, as well as non-English articles, were excluded. The selection process is illustrated in Fig. 1 . Characteristics of included studies Quality assessment The quality assessment of the included studies was performed using the Newcastle-Ottawa Scale (NOS) for observational studies (Table 2 ). The overall quality of the studies ranged from moderate to high, with most studies scoring 6 or higher on the NOS. Only two studies had poor quality [ 8 , 14 ]. Meta-analysis results The meta-analysis of the included studies revealed a significant association between BDNF levels and endometriosis. The pooled standardized mean difference (SMD) of BDNF levels between women with endometriosis and controls was 0.87 (95% confidence interval [CI] 0.34 to 1.39, p = 0.001; I2 = 93%), indicating higher BDNF levels in women with endometriosis compared to controls. The forest plot depicting the individual study results and the overall pooled effect is presented in Fig. 2 . Publication bias Publication bias was assessed using funnel plots and Egger’s test. The funnel plot appeared symmetrical, indicating no significant publication bias. Egger’s test also confirmed the absence of publication bias ( p = 0.15) (Fig. 3 ). Sensitivity analysis A sensitivity analysis was conducted by including only the studies that assessed blood levels of BDNF. The results showed that blood levels of BDNF are significantly higher in endometriosis patients (SMD: 1.13, 95% CI 0.54 to 1.73, p = 0.0002; I2 = 93%) (Fig. 4 ).
Discussion The results of the present systematic review and meta-analysis indicate that BDNF levels significantly increase in patients diagnosed with endometriosis compared to healthy controls. The results of the sensitivity analysis showed a significant increase in blood (serum and plasma) BDNF levels in endometriosis. Evidence showed that BDNF level varies during a healthy menstrual cycle, and it is reported that BDNF significantly increases during the luteal phase in comparison with the follicular phase [ 19 ]. It is also mentioned that BDNF is significantly lower in amenorrhoeic subjects, as well as in postmenopausal women [ 19 ]. Taken together, this evidence shows that estradiol and progesterone might have an impact on circulating BDNF, and the literature has also shown a positive correlation between BDNF and estradiol (E2) and progesterone in fertile women [ 19 ]. The results of a study done by Bucci et al. revealed significantly higher levels of estradiol and progesterone among patients with stage 1 and 2 endometriosis compared to healthy controls [ 12 ]. It can therefore be assumed that BDNF can increase in patients diagnosed with endometriosis. This study produced results that corroborate the findings of a great deal of the previous work in this field. Giannini et al. found that the level of BDNF in plasma was significantly higher in comparison with healthy controls in the follicular phase; the results of a study done by Browne et al. are consistent with the Giannini et al. study and showed a higher level of BDNF in patients diagnosed with endometriosis [ 9 , 14 ]. However, the findings of the Ding et al. and De Arellano et al. studies do not support the results of the studies mentioned earlier, as they revealed no significant difference in the level of BDNF between healthy controls and women with endometriosis [ 10 , 13 ]. A systematic review done by Chow et al. indicates that Pro-BDNF is expressed in the endometrium, and BDNF expression in the endometrium is significantly higher in patients with endometriosis [ 20 ]. These findings may be a possible explanation for the results of the Browne et al. study, which showed that although BDNF concentration was higher in women with endometriosis, no difference in the level of BDNF was found between healthy controls and women with endometriosis three months after surgical removal of endometriotic lesions [ 9 ]. Wessels et al. compared BDNF levels in patients who received treatment for endometriosis with patients who did not; the results showed a significantly decreased BDNF level in the treated group [ 6 ]. Although BDNF was significantly higher in endometriosis compared with healthy controls, no significant changes were reported between different stages of endometriosis [ 6 , 11 ]. However, BDNF expression in eutopic endometrium is positively correlated with stages of endometriosis [ 7 ]. A study done by Rocha et al. showed that although BDNF is higher in plasma among patients with ovarian endometrioma and can be used as a diagnostic marker, it is not helpful for the diagnosis of other forms of endometriosis, including peritoneal or deep infiltrating endometriosis [ 21 ]. BDNF expression plays an essential role in female reproduction by affecting placental function, oocyte maturation, embryo development, follicle development, and oogenesis; therefore, dysregulation of BDNF can lead to several serious complications in women, such as endometriosis, intra-uterine growth restriction (IUGR), preeclampsia, and cancers [ 20 ]. 
A positive correlation has been reported between estrogen and BDNF: the interaction of inflammatory factors [interleukin-1β (IL-1β)] and estradiol (E2) with their receptors leads to increased extracellular signal-regulated kinase 1/2 (ERK1/2) expression, which, through phosphorylation of the transcription factor cAMP response element binding protein (CREB), drives the synthesis of BDNF in the endometrium [ 10 ]. Capillary blood vessels formed around endometriotic tissue would help this increased amount of BDNF reach the peripheral circulation. To the best of our knowledge, the present systematic review and meta-analysis is the first study to investigate the level of BDNF in patients with endometriosis and evaluate the diagnostic value of BDNF in endometriosis. Also, our study has extended the results of previous studies on this topic by including 12 studies. Additionally, in our sensitivity analysis, we compared BDNF levels in serum and plasma separately, which provides better insight into utilizing BDNF as a novel biomarker for endometriosis. However, with a small sample size, caution must be applied, as findings might not be transferable to all patients who are diagnosed with endometriosis. Only 50% of the included studies evaluated the level of BDNF in either serum or plasma; since it is easier for both health workers and patients to evaluate BDNF in blood samples, more studies are required to investigate BDNF levels in blood. A number of limitations should be considered for the current study. Several confounding factors can alter BDNF levels in individuals, such as socioeconomic status, which can lead to escalating rates of depression and other mental disorders, and the administration of a number of medicines, including analgesics [ 22 ]. The studies included in our meta-analysis did not consider these factors in their participants; therefore, the BDNF levels evaluated in these studies may have been affected by confounding factors. Another limitation of our study is the number of included articles and participants; to establish BDNF as a diagnostic marker for endometriosis, more studies should be included and evaluated. Considerably more work will need to be done to determine the correlation between BDNF level and endometriosis and to evaluate the diagnostic value of BDNF. This would help health workers with earlier diagnosis, more efficient treatment, and controlling the adverse effects of endometriosis, such as pain and infertility. As mentioned earlier, since BDNF increases in both serum and plasma, it can be utilized as an accessible, fast, non-invasive, and inexpensive method for not only diagnosis but also evaluating the severity and treatment response in women with endometriosis. In conclusion, our study revealed that the BDNF level is significantly higher in patients with endometriosis compared to healthy controls. Further investigation and experimentation into the correlation between BDNF and endometriosis are strongly recommended.
Background The existing literature on the association between BDNF protein levels and endometriosis presents inconsistent findings. This systematic review and meta-analysis aim to synthesize the available evidence and evaluate the possible relationship between BDNF protein levels and endometriosis. Methods Electronic databases (PubMed, Embase, Scopus, PsycINFO, and Web of Science) were used to conduct a comprehensive literature search from inception to June 2023. The search strategy included relevant keywords and medical subject headings (MeSH) terms related to BDNF, endometriosis, and protein levels. A random-effects model was used for the meta-analysis, and subgroup analyses were performed to explore heterogeneity. Funnel plots and statistical tests were used to assess publication bias. Results A total of 12 studies were included. The pooled standardized mean difference (SMD) of BDNF levels between women with endometriosis and controls was 0.87 (95% confidence interval [CI] 0.34 to 1.39, p = 0.001; I2 = 93%). The results showed that blood levels of BDNF are significantly higher in endometriosis patients (SMD: 1.13, 95% CI 0.54 to 1.73, p = 0.0002; I2 = 93%). No significant publication bias was observed based on the results of Egger’s regression test ( p = 0.15). Conclusion This study revealed a significant difference between patients diagnosed with endometriosis and healthy controls in the level of BDNF. The results indicate that women with endometriosis have higher levels of BDNF. Further studies need to be undertaken to investigate the role of BDNF in endometriosis pathophysiology and the diagnostic value of BDNF in endometriosis. Supplementary Information The online version contains supplementary material available at 10.1186/s12905-023-02877-0. Keywords
Electronic supplementary material Below is the link to the electronic supplementary material.
Acknowledgements The authors would like to acknowledge the Clinical Research Development Unit of Imam Ali Hospital, Karaj, Iran. Author contributions A.S., S.P, R.B: Conceptualization, Project Administration, Data curation, Writing – Original Draft, Writing – Review & Editing, Visualization K.J, A.S; M.B, F.S, M.A: Validation, Resources, Methodology, Software, Formal analysis, Writing – Original Draft I.M, E.M.: Writing – Original Draft S.R.: Data curation Funding This study did not receive funding, grant, or sponsorship from any individuals or organizations. Data availability All data generated or analyzed during this study are included in this published article [and its supplementary information files]. Code availability Not applicable. Declarations Ethics approval Not applicable. Consent to participate Not applicable. Ethical statement Not applicable. Consent for publication Not applicable. Conflict of interest The authors have no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. IRB Not required for meta-analysis or a systematic review and commentaries.
CC BY
no
2024-01-15 23:43:47
BMC Womens Health. 2024 Jan 13; 24:39
oa_package/87/4c/PMC10788000.tar.gz
PMC10788001
38218894
Introduction As the world’s second-largest economy, China is also grappling with the intricate challenge of rapid ageing [ 1 ]. According to a recent national survey (2020) based on the scale assessment for activities of daily living (ADLs) and instrumental activities of daily living (IADLs) [ 2 ], three levels were categorised based on the severity of dependency and older adults’ requirements for care. The study estimated that more than 20 million Chinese older adults were in need of minimal assistance with daily living activities, such as meal preparation and basic hygiene (level 1 dependency), 36 million needed moderate assistance with daily tasks, including cooking, shopping, and medication management (level 2 dependency), and 45 million were largely dependent on others for their daily living activities, requiring continuous supervision and assistance, such as those with severe cognitive or physical impairments (level 3 dependency), respectively. The one-child policy has directly impacted the availability of family caregivers, compounding the issue of inadequate care for Chinese older adults in their later years [ 3 ]. For the majority of older adults, dependency on assistance for daily living activities and cognitive impairments has become a significant life event, and these aspects lead to an increasing demand for nursing homes [ 4 ]. However, the quality of care provided in Chinese nursing homes is primarily influenced by policies, often falling short of meeting the demands of older adults in terms of having skilled caregivers, real-time monitoring, and continuous health assessment [ 5 ]. As sustainable strategies for promoting care for the ageing population, the use of smart technologies can address the escalating unmet healthcare needs of older adults and offset the inadequacy of medical resources to effectively improve the current healthcare system [ 6 ]. In hospital settings, smart technologies are used to enhance clinical decision-making [ 7 ], while in home-based care, they help with self-management and the remote monitoring of chronic diseases [ 8 , 9 ]. In nursing home settings, technologies are predominantly implemented to provide person-centred care services and integrate medical services from remote hospitals [ 10 ]. The use of smart technologies holds the potential to support a substantial number of older adults in both home-based and nursing home-based care [ 11 ]. In 2014, the Ministry of Civil Affairs of the People’s Republic of China, the supervisory department for geriatric care, initiated the ‘Smart Elderly Internet of Things (IoT) Pilot Project’ to enhance the operation of SNHs [ 12 ]. In 2015, the Chinese government introduced the ‘Internet Plus’ plan to encourage technological innovation [ 13 ], encompassing projects related to IoT or Artificial Intelligence (AI) in safety monitoring, fall prevention, and disease detection for older adults. However, the concept of a SNH and the availability of smart technologies in nursing home settings remain ambiguous. Moreover, many older adults have a negative attitude towards smart technologies, perceiving them as challenging to use and being expensive [ 14 ]. Exploring the expectations and acceptability of SNHs within a defined service scope and associated technologies [ 10 ] among stakeholders, particularly older adults, will provide a better understanding of the future development and implementation of SNH models. 
Expectations, in this context, generally encompass the desires of consumers regarding what they expect a SNH to provide [ 15 ], while acceptability refers to the intention to use services when they are available and meet the criteria of target users willing to adopt SNHs [ 16 ]. Previous studies have often defined a SNH as either a smart building equipped with IoT networks [ 17 ], or the isolated application of smart technology within the nursing home environment [ 18 – 21 ]. Specifically, a precise definition of SNHs and the comprehensive implementation of functional technologies are needed. A comprehensive scoping review has defined a SNH as characterised by the incorporation of functional information technologies, encompassing the IoT, digital health, big data, AI, cloud computing technologies, and information management systems (IMS) that enable the monitoring of abnormal events, provision of remote clinical services, establishment of health information databases, enhancement of decision making processes, analysis of clinical data, and facilitation of activities of daily living for older residents [ 10 ]. It may integrate medical services from remote hospitals or healthcare experts, using telemedicine, mHealth, and other electronic clinical information, to manage complex health conditions among its residents and ensure their overall well-being within a safe and cost-effective environment [ 10 ]. Previous studies have investigated Chinese older adults’ willingness to move to conventional nursing homes and its associated factors [ 22 , 23 ]. However, there is a lack of studies that have examined the expectations and acceptability of SNHs. It is crucial to thoroughly investigate the perspective of Chinese older adults regarding SNHs. This is necessary to ensure the successful development of innovative geriatric care models that meet the healthcare demands of China’s ageing population and are widely embraced. Research questions Drawing upon the defined SNH model [ 10 ], the following research inquiries were devised: 1) What factors are important for assessing the expectations and acceptability of SNHs, and what are the psychometric properties of a tool measuring them? 2) To what extent are Chinese older adults inclined to embrace the evidence-based SNH model? 3) What are the levels of expectation and acceptability exhibited by Chinese older adults towards the SNH model? 4) Is there an association between the sociodemographic characteristics of Chinese older adults and their levels of expectations and acceptability concerning SNHs?
Methods In this study, an exploratory sequential mixed method (Fig. 1 ) was used to answer the research questions. There were no similar instruments or pre-existing questionnaires available to measure the expectations and acceptability towards SNHs. Hence, a newly developed instrument was designed based on the results of a qualitative study to assess the levels of expectation and acceptability of SNHs among Chinese older adults. Subsequently, a survey was conducted in four Chinese cities. The sociodemographic factors associated with expectations and acceptability of SNHs were also explored and examined. Results reporting was guided by the guidelines for conducting and reporting mixed research in the field of counseling and beyond [ 24 ] (Additional file 1 ). In the mixed method approach, qualitative insights informed the development of a questionnaire assessing expectations and acceptability. Both quantitative and qualitative data were combined in the final analysis to enhance the depth of findings. The study protocol, a scoping review and the preceding qualitative study have been previously published [ 10 , 25 , 26 ]. Questionnaire development and validation The questionnaire was developed as a measurement tool building on the conceptual framework (Fig. 2 ) derived from the ‘smart technology adoption behaviors of older consumers theory’ proposed by Golant [ 27 ], a scoping review [ 10 ], as well as the results of a qualitative study which has been published elsewhere [ 26 ]. According to the conceptual framework, the adoption of SNHs emerges in response to unmet healthcare needs, resulting in unfulfilled expectations among older adults. The decision to embrace SNHs is underpinned by appraisals of information and technology. Older adults’ choices are influenced by their prior experiences with smart technologies and external sources of persuasiveness, including public media, friends, family members, and healthcare professionals (HCPs). The determinants shaping their technology appraisal encompass perceived efficaciousness, positive or negative usability, and the potential collateral damage associated with adopting smart technologies. Simultaneously, attributes specific to older adults, such as their resilience towards smart technologies, are linked to their acceptability of SNHs. A qualitative case study was conducted using the snowball sampling method to collect data from a total of 34 participants until data saturation was achieved. Of these participants, 28 were older adults aged 60–75, residing in Hainan and Dalian, China, during the winter season. They were selected from six provinces to ensure a diverse representation of older adults. Additionally, six adult children were included in the study to explore their expectations and acceptability of SNHs. Semi-structured in-depth interviews and focus group discussions were conducted for data collection. Data were imported and managed using ATLAS.ti8 software. A framework method [ 28 ] was employed using inductive and deductive approaches to analyse the textual data. Furthermore, data were coded and categorised into themes. All items in the new questionnaire were derived from the interviews and previous scoping review through the mentioned analytical strategy. The questionnaire item design incorporated direct quotes from the qualitative data to ensure that the subsequent survey aligned authentically with the perspectives of the Chinese ageing population. 
Meanwhile, the concept of SNHs, captured from the scoping review, was stated before the questionnaire to assist the respondents in sharing their perspectives on the expectation and acceptability of smart nursing homes. It included an explanation of the SNH as a care model that provides continuous monitoring of its residents through information technologies, connects them with their remote HCPs, and integrates medical resources to satisfy the care needs of older residents. Additionally, information on sociodemographic characteristics, including age, place of residence, gender, health condition, income, type of insurance, educational attainment, number of children, and living partners, was collected from the respondents. Three items were included to measure respondents’ resilience to smart technologies, comprising familiarity with technologies, openness to new technology, and self-efficacy in applying smart technologies [ 27 ]. An expert panel, which included two statisticians, two family physicians, one public health physician, one nursing home operator, one business stakeholder, and three older adults, was invited to assess the content validity using the content validity index (CVI) for the 49 items of the questionnaire [ 29 ]. This was done in line with the Consensus-based Standards for the selection of health status Measurement Instruments (COSMIN checklist) guideline [ 30 ], which evaluates the relevance, comprehensibility, and comprehensiveness of a newly developed questionnaire. Subsequently, cognitive debriefing was conducted among ten older adults [ 31 ]. Of those, eight were selected from Dalian and two from Hainan community groups. Considering the diverse characteristics of the intended respondents, three participants with primary school education were recruited, three with junior or high school education, and the remaining four had university education. The research team organised an online group discussion where they introduced the purpose of the study and explained the concept of SNHs, along with the content of each item in the questionnaire. Participants were instructed to provide insights into their understanding of the questions, any ambiguous terms, and potential areas of confusion. The investigator (ZYY) recorded and clarified the responses for each question. For example, the investigator used a fixed probe to ask the participants, ‘Is this a correct choice that can reflect your response? Can you paraphrase this item in your own words based on your understanding? Can you elaborate on why you chose this answer?’. The frequency of problems encountered for each question, such as difficulties in understanding and ambiguity of wording, was gathered, and adjustments were made accordingly. One session was carried out with a duration of approximately 2–3 h. Structural validity was established through exploratory factor analysis (EFA), based on data collected from the survey respondents. The eigenvalue was set above 1, and items with a loading value below 0.40, as well as items with cross-loadings greater than 0.40, were dropped [ 32 ]. Subsequently, structural equation modelling (SEM) was utilised to evaluate model fit with the SPSS AMOS software. Internal consistency was assessed using Cronbach’s alpha. A Cronbach’s alpha exceeding 0.70 is considered indicative of good internal consistency for the questionnaire [ 33 ]. 
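To make the internal-consistency step concrete, the sketch below computes Cronbach's alpha from a simulated Likert response matrix. It is a minimal illustration of the formula only; the data are randomly generated, not the survey responses, and the actual analysis in the study was run in SPSS.

```python
# Minimal sketch of Cronbach's alpha on simulated Likert data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array of responses, shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(200, 1))                       # latent "true" response
responses = np.clip(base + rng.integers(-1, 2, size=(200, 10)), 1, 5)  # 10 correlated items
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```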
Construct validity (hypothesis-testing) was assessed by comparing responses towards the expectations and acceptability of SNHs with a single item regarding willingness to move to a nursing home (Yes or No) [ 22 ]. The expectations and acceptability scores were categorised into tertiles. The hypothesis posited that the highest tertile of expectations would exhibit an association with the willingness to move to a nursing home, as evidenced by an a priori odds ratio of at least 2.0, while the highest tertile of acceptability would be linked to the willingness to move to a nursing home, reflecting an a priori odds ratio of at least 3.0 [ 34 ]. It was also hypothesised that expectations and acceptability would be positively correlated, with a correlation coefficient r value of > 0.4. A one-month intra-rater test–retest was performed among participants who completed and returned the questionnaire a second time. These participants were recruited from those who were willing to take part in the test–retest and provided their telephone numbers when they answered the questionnaire for the first time. Quantitative study (survey) Study setting Quantitative data were collected through surveys in four major cities, namely Xi'an, Nanjing, Shenyang, and Xiamen, representing the west, east, north, and south of China. In Xi'an, Nanjing, and Shenyang, the estimated older population comprises 18%, 22%, and 26%, respectively [ 35 – 37 ]. Meanwhile, the government of Xiamen has actively promoted smart healthcare initiatives to assist older adults in their activities of daily living [ 38 ]. Participants and sample size estimation The selected older adults were within the age range of 60–75 years. Individuals residing in nursing homes, receiving palliative care, or experiencing cognitive impairment were excluded. Sample size calculation was conducted using PASS software. Based on an expected 10% acceptance of nursing homes among Chinese older adults [ 22 ], a two-sided 95% confidence level, and a 5% margin of error, the minimum required sample size was 139. However, for this study, a target sample size of 300 was set, inflated to account for non-response and incompletion rates. Data were collected from older adults who usually gather in public parks for group activities, such as morning or post-dinner exercise. Data collection A stratified random sampling method was used to identify participants. Eight enumerators (two in each city) recruited participants and asked them to suggest the ten most popular parks or communities where local older adults participate in physical activities. Subsequently, they recruited older adults from randomly selected public parks or community centres. In China, older adults typically visit public parks for collective activities, such as physical exercise and morning routines, or post-dinner dancing. Different age groups can be easily identified by the types of activities they engage in. For example, older adults aged 60–70 years usually join dancing groups, while those who are older prefer playing chess or engaging in conversations with others. Additionally, respondents were encouraged to provide their telephone numbers to enhance research credibility and facilitate participant recruitment for the intra-rater test–retest. During data collection, enumerators explained the concept of SNHs, which was stated on the questionnaire, and checked the completeness of the questionnaires when respondents returned them.
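The minimum sample size reported in the participants subsection can be checked against the standard single-proportion formula, n = z²p(1 − p)/e². The sketch below is only a worked confirmation of that arithmetic (the study itself used PASS software), using the 10% expected acceptance, 5% margin of error, and two-sided 95% confidence level stated above.

```python
import math

def single_proportion_sample_size(p: float, margin: float, z: float = 1.96) -> int:
    """Minimum n for estimating a proportion p with a two-sided 95% CI and a given margin of error."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# 10% expected acceptance, 5% margin of error, 95% confidence -> 139
n_min = single_proportion_sample_size(p=0.10, margin=0.05)
print(n_min)  # 139
# The study inflated the target to 300 to allow for non-response and incomplete questionnaires.
```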
Data analysis The IBM Statistical Package for Social Sciences (SPSS 26) software was used for data management and analysis. Qualitative variables were presented as frequencies and percentages. The expectations and acceptability of SNHs were categorised into tertiles. Chi-square tests were used to examine the associations among the sociodemographic factors, the expectations and acceptability of SNHs, and the willingness to move to a nursing home. Multiple logistic regression models were used to analyse the associations of the independent variables, including sociodemographic characteristics and older adults' resilience to smart technologies, with the expectations and acceptability of SNHs. Variables with a p -value < 0.20 in the univariable regression analysis for the expectation and acceptability domains were included in the multinomial logistic regression analysis. In all analyses, the significance level was set at 0.05. Multicollinearity, data normality, and the assumptions of the final model were checked.
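For readers who want to see the analysis pipeline above in concrete form, the following sketch mirrors its steps (tertile categorisation, univariable screening at p < 0.20, and a multinomial model) in Python rather than SPSS; the data frame and column names are hypothetical and this is not the study's analysis code.

```python
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

def analyse(df: pd.DataFrame, score_col: str, candidate_vars: list[str]):
    # 1. Categorise the domain score into tertiles (low/middle/high).
    df = df.copy()
    df["tertile"] = pd.qcut(df[score_col], q=3, labels=["low", "middle", "high"])

    # 2. Univariable screening: keep categorical variables with chi-square p < 0.20.
    kept = []
    for var in candidate_vars:
        _, p, _, _ = chi2_contingency(pd.crosstab(df[var], df["tertile"]))
        if p < 0.20:
            kept.append(var)

    # 3. Multinomial logistic regression of tertile on the retained variables.
    X = sm.add_constant(pd.get_dummies(df[kept], drop_first=True).astype(float))
    y = df["tertile"].cat.codes  # 0 = low (reference), 1 = middle, 2 = high
    model = sm.MNLogit(y, X).fit(disp=False)
    return kept, model

# Usage (hypothetical data frame `survey` with an 'expectation_score' column):
# kept, model = analyse(survey, "expectation_score", ["age_group", "education", "insurance"])
# print(model.summary())
```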
Results Questionnaire development and validation The initial version of the questionnaire was crafted by synthesising qualitative data obtained from a scoping review and a qualitative case study using both deductive and inductive analysis approaches, incorporating themes, codes, and subcodes [ 10 , 26 ] (Additional file 2 , A2-1). It comprised 24 items for the expectation domain and 25 items pertaining to the acceptability domain. Among the 24 items in the expectation domain, five codes (subdomains) were identified from the qualitative phase. The subdomains are 'quality of care supported by governments and societies' with five items; 'smart technology applications' with seven items; 'presence of a skilled HCP team' with three items; 'access and scope of basic medical services' with six items; and 'integration of medical services' with two items. In the 25-item acceptability domain, six codes (subdomains) were identified, which encompass 'perceived efficaciousness' of SNHs with four items; 'perceived positive usability' with nine items; 'perceived negative usability' with two items; 'perceived collateral damages' with four items; 'persuasiveness of external information' with four items; and 'persuasiveness of internal information' with two items. Each item was measured on a 5-point Likert scale, where a response of 1 indicated the lowest levels of expectations or acceptability of SNHs, while a response of 5 indicated the highest levels of expectations or acceptability. The CVI scores for relevance, comprehensibility, and comprehensiveness were 0.97, 0.96, and 0.95, respectively (Additional file 2 , A2-2). These values were considered excellent [ 29 ]. The second version of the questionnaire was reduced from the initial 49 items to 40 items (Additional file 2 , A2-3) and named the Expectation and Acceptability of Smart Nursing Homes questionnaire (EASNH-Q). The item on willingness to move to a nursing home was moved to the sociodemographic characteristics section and all items were renumbered. All participants in the cognitive debriefing agreed with the item descriptions and scale design for these 40 items without any problems. After the face and content validity process, structural validity, internal consistency tests, the one-month intra-rater test–retest, and construct validity assessments were conducted using the data obtained from the subsequent survey of 264 respondents. EFA identified three subdomains (three factors) for the underlying structure of expectations, and these three factors were renamed as nursing care, medical services, and government and social support in relation to the service categories. EFA also identified three subdomains (three factors) for the acceptability structure, and the three factors were categorised as perceived usability, perceived efficaciousness, and perceived collateral damages and negative usability (Additional file 2 , A2-4). In the confirmatory factor analysis (CFA), the single-factor models retained 24 items, of which 10 items in the expectation domain and 14 items in the acceptability domain were considered adequate (Table 1 ; Fig. 3 ) (Additional file 2 , A2-5). Cronbach's alpha was 0.87 in the expectation domain and 0.92 in the acceptability domain. Construct validity was indicated by the strong correlation between the expectations and acceptability of SNHs (Pearson's coefficient of 0.85, p < 0.01).
Among the 264 respondents, 84 (31.8%) were unwilling to move to nursing homes, while 180 (68.2%) expressed a willingness to move (Table 2 ). Type of insurance, education, the degree of familiarity with technology, openness to technology, and self-efficacy in applying smart technologies were significantly associated with the willingness to move to nursing homes. The binary logistic regression analysis for expectations and acceptability in relation to the willingness to move to nursing homes showed that the odds of being willing to move to nursing homes were higher among older adults in the higher tertiles of expectations for SNHs compared to those with the lowest tertile scores (OR of 1.99, 95% CI 1.01–3.93 for the middle tertile and OR of 3.02, 95% CI: 1.18–7.73 for the highest tertile) (Table 3 ). Similarly, the odds of being willing to move to nursing homes were higher among older adults in the higher tertiles of acceptability for SNHs compared to those with the lowest tertile scores (OR of 2.36, 95% CI 1.13–4.91 for the middle tertile and OR of 2.43, 95% CI: 1.11–5.39 for the highest tertile). In the test-retest reliability analysis, 52 participants (13 in each city) answered and returned the second completed EASNH-Q. More than half of them were women, and the majority were aged 60–70 years. Five did not have a pension, two had no insurance, four had a primary school education, six had three or more children, and four lived alone without partners (Additional file 2 , A2-6). The intraclass correlation coefficient (ICC) values for the expectation and acceptability factors were 0.90 and 0.81, respectively (Additional file 2 , A2-7). Quantitative study (survey) In total, 264 respondents completed the questionnaires, resulting in a response rate of 70%. The demographic characteristics of the respondents are presented in Table 4 . The number of respondents in each age group (60–64 years old, 65–70 years old, and 71–75 years old) was similar. Among these respondents, over 60% reported having one or more chronic diseases. More than 90% had insurance coverage and 68.1% had a high school or university education. In addition, 56.8% had one child and only 9% lived alone. Approximately one-quarter (24.2%) of the respondents were familiar with technology, 71.2% had openness to technologies, and 63.6% had self-efficacy in applying smart technologies. The overall means (SD) for expectations and acceptability were 4.0 (0.60) (Min-Max: 2.0–5.0) and 4.0 (0.60) (Min-Max: 1.6–4.9), respectively. The analysis of associations between sociodemographic characteristics and the expectations and acceptability of SNHs showed that younger age, having insurance, a university level of education, openness to technology, and self-efficacy in applying smart technologies were significantly associated with expectations (Table 5 ). Older age, living with partners and children, openness to technology, and self-efficacy in applying smart technologies were significantly associated with acceptability (Table 5 ). Table 6 displays the comparisons between the highest and lowest tertiles of the expectation group. Older adults with self-efficacy in applying smart technologies were 28 times more likely to be in the highest tertile of expectations (OR: 28.02, 95% CI: 5.92-132.66), and those with a willingness to move to a nursing home were 3 times more likely to be in the highest tertile of expectations (OR: 2.98, 95% CI: 1.06–8.37).
Meanwhile, in the comparison between the highest and lowest tertiles of the acceptability group, older adults with self-efficacy in applying smart technologies were 14 times more likely to be in the highest tertile of acceptability (OR: 13.80, 95% CI: 4.33–43.95). The multinomial logistic regression models explained 41.7% (Nagelkerke R 2 = 0.417) and 32.2% (Nagelkerke R 2 = 0.322) of the variance in the expectation and acceptability domains, respectively.
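Nagelkerke's R², reported here, rescales the Cox–Snell pseudo-R² so that its maximum attainable value is 1. The sketch below shows the calculation from null-model and fitted-model log-likelihoods; the log-likelihood values used are placeholders, not figures from this study.

```python
import math

def nagelkerke_r2(ll_null: float, ll_full: float, n: int) -> float:
    """Nagelkerke R^2 = Cox-Snell R^2 divided by its maximum attainable value."""
    cox_snell = 1.0 - math.exp(2.0 * (ll_null - ll_full) / n)
    max_cox_snell = 1.0 - math.exp(2.0 * ll_null / n)
    return cox_snell / max_cox_snell

# Placeholder log-likelihoods for a model fitted to 264 respondents.
print(round(nagelkerke_r2(ll_null=-280.0, ll_full=-215.0, n=264), 3))
```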
Discussion This is the first study in which an instrument was developed to assess the expectations and acceptability of SNHs among mainland Chinese older adults. It aimed to examine their levels of expectations and acceptability towards SNHs, as well as to determine the sociodemographic factors associated with different categories of expectations and acceptability. The exploratory sequential mixed methods study design integrates various data sources, offering strength to confirmatory results [ 39 ]. The study began with a qualitative phase, which explored the expectations and acceptability of a SNH model in general, and specifically among Chinese older adults and their family members. The qualitative phase mapped the knowledge base for the development and validation of the 24-item EASNH-Q [ 40 ], and the study continued with a cross-sectional survey in four major cities in China involving 264 respondents. Data integration was achieved through a data-building approach, in which the results from the qualitative phase and the survey were analysed and compared to understand complex phenomena, measure changes, and examine the hypotheses [ 24 , 40 ]. The results from the qualitative and quantitative phases were aligned in terms of study design principles, variable exploration and analysis, and data interpretation. Many concordant findings, rather than discordant ones, were noted between the two phases. The former (qualitative) phase indicated a high acceptance of moving to nursing homes as an alternative and a high level of agreement with the persuasiveness of external information, such as the media, for receiving healthcare benefits. A few discordant results in the latter phase were related to a lower acceptance of moving to a nursing home and to the family-oriented culture in healthcare decision-making being regarded as the trustworthy source of persuasion. Additionally, three items generated from the codes emerging during the scoping review and content validity assessment, namely that SNHs can provide better services to improve healthcare accessibility and availability, a preference for 'human-centric' designs of smart devices, and hospice care, were highly expected by the participants (Additional file 3 ). In China, many similar questionnaires commonly focus on older adults' willingness to move to conventional nursing homes. Two of these studies had larger samples, with 670 and 1003 Chinese older adults [ 22 , 23 ], and more than half of their respondents were aged 60–70, very similar to the main sample of this study. Additionally, more than half of the other studies' respondents had a primary school education or lower, in contrast to less than 10% in this study. In one study [ 22 ], data from an urban community showed that half of the respondents had a higher economic status, which is similar to the respondents in this study (monthly pension: 1000–4000 CNY, $138–555). Regarding the proportion of willingness to move to a nursing home among Chinese older adults, this study had a higher acceptance rate (68.2%) compared to the two previous studies (11.9–45.4%) [ 22 , 23 ]. The higher acceptance rate reflects the increased demand for moving to a nursing home, particularly when older adults consider their disabilities [ 41 ].
It has been reported that older adults may choose to transition from home-based care to nursing homes with intensive supervision and more professional services due to the decline in bodily functions and the obstacles faced by family members who are unable to devote themselves to necessary or additional care [ 42 ]. As an alternative, nursing homes can provide 24-hour formal care and some medical services for older adults who require daily assistance and have complex health demands [ 43 ]. Moreover, the purpose of developing the EASNH-Q was to explore the expectations and acceptability of SNHs, making it a novel contribution. The item design of the EASNH-Q demonstrated good levels of relevance, comprehensibility, and comprehensiveness in assessing the expectations and acceptability of SNHs [ 44 – 46 ]. The expectations and acceptability of SNHs were explored among the Chinese older adults who were interviewed in the qualitative phase. These expectations and acceptability were then examined through a survey in the subsequent quantitative phase, providing empirical evidence of high levels of both. The survey sites selected from four different regions of mainland China represent the major groups of the Chinese ageing population according to their family structures, health status, long-term care needs, and insurance schemes [ 47 ]. There was little variance across cities in how respondents answered the EASNH-Q (effect sizes: 0.32–0.34) (Additional file 4 ). The results showed that expectations were highly correlated with the acceptability of SNHs. Older adults from Nanjing, in the east of China, had the highest expectations of SNHs, and they also had the highest acceptability of SNHs. In contrast, older adults from Xiamen, in the south of China, had the lowest expectations and the lowest acceptability. These geographic differences among older adults may be attributed to their sociodemographic characteristics. For example, urban older adults living in environments more sustainable for an ageing population, with fewer children, higher income, and higher education, have a better acceptability of nursing homes than those in rural areas who have more children, limited income, and lower education [ 48 , 49 ]. In addition, the in-depth analysis of the response distribution for each item revealed that most of the questions had a ceiling effect (> 15%), except for item Q11, 'persuasiveness of public media increases the acceptability of SNHs' (3.8%). This reflects reports on the social network types through which Chinese older adults receive healthcare benefits, indicating that the media has less impact on how they appraise their health [ 50 ]. Meanwhile, the floor effect of each item was small (< 15%). The assessments of ceiling and floor effects indicate the ability of a questionnaire to distinguish among respondents at the extreme ends of the scale [ 51 ]. High ceiling effects, as observed in many of the items, may suggest a limited instrument range, measurement inaccuracy, or response bias [ 52 ]. However, no previous research has reported on ceiling and floor effects in measures of the expectations and acceptability of nursing homes in China. Nevertheless, the ceiling and floor effect results reflected the findings from the qualitative phase, in which all participants had a positive attitude towards SNHs [ 26 ]. It is believed that IoT, big data, and internet networks can provide quality services [ 53 ].
This belief was reflected in the responses to items Q1-5, particularly in real-time monitoring, disease prediction, electronic health records, and customised services. It is important to note that technology is not the primary reason for people deciding to move to nursing homes. Instead, technology acts as an assistant to the functions and care practices provided in nursing homes [ 54 ]. In China, more than half of older adults wish for nursing homes to provide medical services at a hospital level [ 22 ]. This study observed that many respondents had high expectations for collaboration between hospitals and SNHs to integrate medical services with remote hospitals. Moreover, Chinese older adults expected medical staff to be available at conventional nursing homes, as many nursing home residents are moderately dependent and at risk of fatal diseases [ 22 , 55 ]. There were also high expectations of having trained caregivers, such as nurses and doctors, in SNHs. Additionally, more than half of the respondents had high expectations of hospice care in SNHs because it is an essential part of all healthcare systems. This might be due to the general perception of the limited services and lack of accessibility of hospice care in current nursing homes. For example, only 30.8% of nursing homes in Hebei province provided hospice care services [ 56 ]. Chinese older adults are influenced by the family-oriented culture when it comes to receiving and appraising information about their health [ 50 ]. The results followed the same pattern: trustworthy health-related information was typically sought from family members, doctors, friends, and public media, and was also influenced by personal demands. Respondents showed a high acceptability of SNHs when they perceived the benefits and efficaciousness of using smart technologies. This perceived efficaciousness of technology generally involves a comparison between two options and the benefits received, such as comparing the quality of care and cost-effectiveness in SNHs versus conventional ones [ 27 , 34 ]. Moreover, it has been commonly reported in previous studies that many older adults had negative attitudes towards adopting smart technologies due to the additional cost or the need to purchase expensive devices [ 14 , 57 , 58 ]. However, the high scores of items Q19-22 in the EASNH-Q confirmed that certain features of SNHs could increase older adults' positive attitudes and their consideration of adopting smart technologies. These features include the perceived necessity for health, ease of use, user-friendliness, convenience, and the 'human-centric' design of smart solutions. The final adjusted multivariable analysis showed that, of the three items assessing older adults' resilience to smart technologies (familiarity with technology, openness to technology, and self-efficacy) [ 27 ], only self-efficacy was likely to influence the information and technology appraisals among Chinese older adults. The direct users of smart technologies designed and applied in nursing home settings have been revealed through the previous scoping review [ 10 ]. These users are nursing home residents (81%) and their HCPs (19%), such as nursing home staff and doctors in remote hospitals. Self-efficacy refers to an individual's belief in their ability to successfully use smart technologies, and older adults with self-efficacy in applying smart technologies may be more willing to adopt new solutions [ 59 ].
Other sociodemographic factors, such as age, income, and educational attainment, were not found to be significantly associated with the different categories of expectations and acceptability towards SNHs among Chinese older adults. These factors were previously reported in other studies to be directly associated with Chinese older adults' willingness to move to a nursing home [ 22 , 23 ], and in this study the willingness to move to a nursing home was found to be significantly associated with the highest tertile of expectations. This study employed several strategies to ensure research accuracy and credibility. Firstly, semi-structured in-depth interviews, focus group discussions, and member checking were used for data collection in the qualitative study phase to ensure study credibility. A team of five investigators participated in data auditing, analysis, and coding discussions to authenticate the findings, ensuring the reliability of the study. In the quantitative phase, the survey sites chosen for data collection were selected to represent the west, east, north, and south of China. Eight onsite enumerators underwent training and were provided with a detailed study procedure to standardise the recruitment of participants and improve data quality. Data accuracy was cross-checked by the research team. However, this study has some potential limitations. Firstly, the concept of SNHs stated on the EASNH-Q was developed from the available literature, in which most study populations were from middle-income and high-income countries; it may therefore not be applicable to resource-challenged or low-income countries, or to countries with limited internet access. Secondly, selection biases might have occurred, with the qualitative study participants being Chinese older adults who had relocated to Hainan and Dalian for the winter season, and the quantitative study respondents coming from the four major cities [ 26 ]. This approach might not have captured all the essential factors necessary to measure the expectations and acceptability of SNHs among the entire Chinese ageing population, including other regions and rural areas in China, taking into consideration their multimorbidity and cultural differences. The findings should be generalised with caution to older adults residing in rural areas, as they may have a lower acceptance of moving to a nursing home [ 22 ]. Moreover, the survey respondents in this study were selected from older adults who were outdoors and physically able, potentially missing specific groups of older people with limited mobility, economic disadvantages, or those who fall ill at home but still intend to move to nursing homes. In addition, the participants may have found it difficult to answer the questions related to the acceptability of SNHs as a whole due to the non-existence of an SNH to refer to or a lack of experience using smart technologies for healthcare.
Conclusion The significance of this study lies in the exploration of the expectations and acceptability of SNHs among Chinese older adults through both qualitative and quantitative evidence, leading to the 24-item EASNH-Q, which demonstrated commendable validity, reliability, and stability. The rigorous development process establishes it as a reliable tool for measuring the levels of expectations and acceptability of SNHs. Self-efficacy in applying smart technologies was linked to high expectations and acceptability of SNHs, and the willingness to relocate to a nursing home was associated with high expectations of SNHs. A feasible SNH model presents a promising solution for addressing the challenges posed by the rapidly ageing society in China. The study results hold relevance for a wide range of stakeholders and audiences with an interest in SNHs, including older adults, their family members, healthcare providers, nursing home personnel, policy-makers, and entrepreneurs in the smart device industry. Furthermore, the potential applicability of these findings extends beyond China, encompassing both developed and developing nations. Subsequent research efforts should aim to quantify the expectations and acceptability of SNHs within a larger and more diverse Chinese population, considering various societal strata and potentially different countries. Gaining insights from a more extensive population base will enable a more comprehensive assessment of the determinants influencing expectations and acceptability of SNHs. This, in turn, will contribute to the development of a more effective SNH model that aligns with local settings and stakeholders' requirements.
Background Smart nursing homes (SNHs) integrate advanced technologies, including IoT, digital health, big data, AI, and cloud computing to optimise remote clinical services, monitor abnormal events, enhance decision-making, and support daily activities for older residents, ensuring overall well-being in a safe and cost-effective environment. This study developed and validated a 24-item Expectation and Acceptability of Smart Nursing Homes Questionnaire (EASNH-Q), and examined the levels of expectations and acceptability of SNHs and associated factors among older adults in China. Methods This was an exploratory sequential mixed methods study, where the qualitative case study was conducted in Hainan and Dalian, while the survey was conducted in Xi'an, Nanjing, Shenyang, and Xiamen. The validation of EASNH-Q also included exploratory and confirmatory factor analyses. Multinomial logistic regression analysis was used to estimate the determinants of expectations and acceptability of SNHs. Results The newly developed EASNH-Q uses a Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree), and underwent validation and refinement from 49 items to the final 24 items. The content validity indices for relevance, comprehensibility, and comprehensiveness were all above 0.95. The expectations and acceptability of SNHs exhibited a strong correlation (r = 0.85, p < 0.01), and good test-retest reliability for expectation (0.90) and acceptability (0.81). The highest tertiles of expectations (χ2 = 28.89, p < 0.001) and acceptability (χ2 = 25.64, p < 0.001) towards SNHs were significantly associated with the willingness to relocate to such facilities. Older adults with self-efficacy in applying smart technologies (OR: 28.0) and those expressing a willingness to move to a nursing home (OR: 3.0) were more likely to be in the highest tertile of expectations compared to those in the lowest tertile. Similarly, older adults with self-efficacy in applying smart technologies were more likely to be in the highest tertile of acceptability of SNHs (OR: 13.8). Conclusions EASNH-Q demonstrated commendable validity, reliability, and stability. The majority of Chinese older adults have high expectations for and accept SNHs. Self-efficacy in applying smart technologies and willingness to relocate to a nursing home were associated with high expectations and acceptability of SNHs. Supplementary Information The online version contains supplementary material available at 10.1186/s12912-023-01676-0. Keywords
Supplementary Information
Acknowledgements Not applicable. Authors' contributions ZYY formulated the study and assumed overall responsibility for its conduct. FKR, SGS and BHC participated in the research's design phase. SJ engaged in both qualitative data collection and statistical analysis. FZR, an expert in gerontechnology, served as one of the investigators contributing to the evaluation and appraisal of the technical aspects. KC oversaw the statistical analysis, while SGS and BHC validated the study's qualitative and quantitative data and methodological design, and provided supervision throughout the research process. All authors have made significant intellectual contributions to the study's development and have granted their approval for the final manuscript's submission to the journal. Funding The author(s) received no financial support for the research, authorship, and publication of this article. Availability of data and materials The dataset supporting the results and conclusions of this article is included within the article and its additional files. Declarations Ethics approval and consent to participate Ethical approvals for this study were obtained from the Ethics Committee for Research Involving Human Subjects, Universiti Putra Malaysia, Malaysia (UPM/TNCPI/RMC/JKEUPM/1.4.18.2, 28/11/2020) and Hainan Medical University, China (IYLIJ-2020-021, 03/09/2020). A respondent information sheet was provided, and an informed consent form was completed before participation in this study. All methods were performed in accordance with the Declaration of Helsinki and other relevant guidelines and regulations. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:47
BMC Nurs. 2024 Jan 13; 23:40
oa_package/09/5d/PMC10788001.tar.gz
PMC10788002
38218980
Background Drug overdose has been the leading cause of injury death in the USA over the past decade [ 1 ], inflicting a devastating toll on families and communities across the country. The overdose epidemic, a public health crisis, has claimed the lives of roughly one million Americans since 1999 [ 2 ], with sharp, unprecedented increases since 2019 due to the emergence—and proliferation—of synthetic opioids, namely fentanyl and its analogues [ 1 , 3 , 4 ]. The potency and ubiquity of synthetic opioids in the drug supply have shifted the risk environment for people who use drugs (PWUD) [ 3 ], as stimulants and opioids adulterated with fentanyl have become increasingly pervasive, heightening concerns and anxieties related to overdose risk among PWUD [ 5 , 6 ]. Recently, the increasing presence of xylazine, a veterinary anesthetic, in drug overdose deaths presents an emergent threat, leading to severe soft tissue damage and potentially heightened overdose risk [ 7 , 8 ]. Additionally, novel benzodiazepines have emerged in the unregulated drug supply in North America [ 9 , 10 ], raising concerns about heightened overdose risk. Amidst notable supply shifts observed over the course of the COVID-19 pandemic [ 11 ], drug checking services have been proposed as a crucial public health response to the overdose epidemic in the USA [ 12 – 16 ]. Drug checking services offer a promising strategy to improve knowledge and agency among PWUD navigating the opaque drug market [ 17 , 18 ]. Fentanyl test strips (FTS), which are used to detect the presence of fentanyl, are widely used among PWUD to make informed decisions about use and mitigate risks [ 19 ]. Existing evidence demonstrates that FTS may modify how individuals intend to use, prompting individuals to discard their sample or practice harm reduction techniques [ 12 ], such as using a tester shot, using less, using in the presence of others, using more slowly, or ensuring naloxone is accessible [ 14 , 16 , 20 – 22 ]. FTS and other rapid immunoassay test strips (e.g., benzodiazepine test strips) are commercially available and distributed by many syringe service programs (SSPs) and other harm reduction organizations across the USA [ 23 ]. Xylazine test strips are currently sold by BTNX, motivated by the needs expressed by PWUD and clinicians alike [ 8 , 24 ]. While critically important tools, rapid immunoassay test strips have noteworthy limitations, suffering from low limits of detection and interferences from adulterants [ 23 ]. In providing a binary result (positive or negative), rapid immunoassay test strips provide no information on concentration, which is important for dosing, especially in a market saturated with fentanyl [ 18 ]. PWUD have shared that fentanyl is ubiquitous and difficult to avoid [ 18 ], thereby limiting the utility of tests that screen for the presence of fentanyl without indicating the concentration of fentanyl in the sample. Additionally, test strips are specific to one substance, or to several compounds of the same class [ 19 ]. In other words, an individual wanting to test their sample for fentanyl and benzodiazepines would have to use two strips: one for fentanyl and one for benzodiazepines. To address these limitations and to offer more detailed analytical data, various harm reduction organizations have piloted the use of Raman spectroscopy and Fourier-transformed infrared (FTIR) spectrometers for drug checking [ 17 , 25 , 26 ].
These devices can be optimized to provide information on the presence and approximate amounts of multiple compounds simultaneously but typically require users to employ spectral libraries for accurate, routine analysis and are less sensitive than rapid immunoassay test strips [ 17 , 25 , 26 ]. To offset the limitations of each analytical method [ 27 ], some harm reduction programs use integrated approaches (e.g., using rapid immunoassay test strips in combination with FTIR) [ 26 , 28 ]. Advances in drug checking are underway, providing potentially life-saving services for PWUD [ 25 ] by enhancing market monitoring capacity. Results from drug checking services are often shared within social networks to inform peers about drug quality, but they can also feed into public health data systems [ 29 ], aiding in the detection of novel adulterants in the supply [ 21 , 30 , 31 ]. In this manuscript, we cast attention to the requirements and considerations of drug checking services for supply-level monitoring. This work was informed by the ongoing collaborations between academic institutions, SSPs, and community partners, and we begin with an overview of the various methodologies proposed, followed by a set of guiding principles that emerged from our discussions of implementation. While drug checking services are implemented across Europe, Australia, and Canada [ 21 , 32 , 33 ], the considerations presented herein were focused on implementation in the US context, particularly within SSPs. The overarching aim is to describe how drug checking services at harm reduction organizations can be used for supply-level monitoring amidst rapid shifts in the drug landscape without compromising individual-level information for PWUD, and in this way, inform public health interventions for the worsening overdose crisis in the USA.
Methods As a group of public health researchers, analytical chemists, evaluators, and harm reductionists, we used a semi-structured guide to facilitate discussion on key priorities for drug checking services, considering implementation, data, and public health significance. Four possible methodologies were discussed, each of which would be integrated into a SSP. Following the discussion, we conducted a thematic analysis to identify salient themes. These findings were contextualized with extant literature and were further validated by all members of this collaborative and other harm reductionists and public health professionals in Ohio.
Conclusions Drug checking services are potentially life-saving interventions, promoting agency among PWUD to mitigate risks in an unpredictable environment. Augmenting existing drug checking programs to facilitate supply-level monitoring has the potential to detect emerging threats in the drug supply, and in this way, public health agencies can proactively respond to supply shifts and tailor interventions to curb the toll of the overdose epidemic.
Background Shifts in the US drug supply, including the proliferation of synthetic opioids and emergence of xylazine, have contributed to the worsening toll of the overdose epidemic. Drug checking services offer a critical intervention to promote agency among people who use drugs (PWUD) to reduce overdose risk. Current drug checking methods can be enhanced to contribute to supply-level monitoring in the USA, overcoming the selection bias associated with existing supply monitoring efforts and informing public health interventions. Methods As a group of analytical chemists, public health researchers, evaluators, and harm reductionists, we used a semi-structured guide to facilitate discussion of four different approaches for syringe service programs (SSPs) to offer drug checking services for supply-level monitoring. Using thematic analysis, we identified four key principles that SSPs should consider when implementing drug checking programs. Results A number of analytical methods exist for drug checking to contribute to supply-level monitoring. While there is likely not a one-size-fits-all approach, SSPs should prioritize methods that can (1) provide immediate utility to PWUD, (2) integrate seamlessly into existing workflows, (3) balance individual- and population-level data needs, and (4) attend to legal concerns for implementation and dissemination. Conclusions Enhancing drug checking methods for supply-level monitoring has the potential to detect emerging threats in the drug supply and reduce the toll of the worsening overdose epidemic. Keywords
Overview of low-barrier methodologies Drug checking devices, such as the TruNarc Raman spectrometer and Bruker Alpha FTIR [ 26 ], provide detailed information for PWUD, but widespread implementation is constrained by legal complexities as well as additional cost and labor requirements for already-stretched harm reduction organizations [ 19 , 25 ]. All methodologies discussed (Fig. 1 ) were low-barrier methods, in the sense that minimal materials, costs, and labor would be required for implementation. In this community-academic collaborative, drug checking services would be implemented at the SSP, and with prepaid shipping materials, SSP staff would send completed test materials to the research partner, who would perform all analyses using liquid chromatography with tandem mass spectrometry (LC–MS/MS), a highly selective and sensitive analytical tool for pharmaceutical and illicit drug analysis [ 23 ]. Evaluation partners in this collaborative would be responsible for dissemination, feeding results into data streams used by PWUD and public health agencies alike; this is discussed in greater detail in the subsequent section. The first test makes use of the illicit drug paper analytical device (idPAD) [ 34 ], a paper test card developed for the analysis of solid illicit drug samples. To use the cards, solid sample is applied to the card, and the card placed in water to run twelve colorimetric tests, each designed for detecting different functional groups of compounds present in illicit drugs [ 34 ]. At present, the idPAD is a useful tool for the analysis of bulk (percent-level) composition of illicit drugs, though it is unable to offer immediate information on drug content to non-trained users. Refinements of the idPAD are ongoing, and a mobile app is now available. The ultimate goal of this app is to capture idPAD images and use a trained neural network to detect the presence of various compounds, adulterants, and cutting agents to provide immediate information on drug content without the need for a trained user [ 35 ]. In addition to these developments in progress, the idPAD has been shown to be a useful tool for the collection and analysis of small quantities of illicit drugs for downstream (LC–MS/MS) analyses [ 23 ]. The second test takes the same approach as the idPAD but requires minimal time and sample. Individuals press a small mass of sample (10 mg) on an absorbent paper dot with a wax-printed boundary that helps localize and keep the sample in place during transit. Upon receipt of the paper dot, the testing laboratory can extract the solid drug from the paper dot for downstream analysis methods. In the third approach, the same sample mass (10 mg) is placed into a liquid-filled tube containing an aqueous solution of Bitrex, a non-toxic, bittering agent commonly used to prevent ingestion of cleaning products by children. The sample can be directly analyzed with LC–MS/MS. Each of these approaches yields quantitative information (i.e., concentration) after analysis but provides no information for PWUD at the point-of-use. The final proposed testing method allows for both the generation of rapid data at the point-of-use and for downstream analysis by making use of the commonly employed rapid immunoassay strips (e.g., FTS). With this approach, individuals use fentanyl or benzodiazepine test strips as normal, receiving a rapid dichotomous result (positive or negative). 
Rather than discarding the used strip, however, it would be sent for downstream analysis, by extraction of illicit drugs from the paper test card [ 36 ]. Key principles In weighing the strengths and limitations of each testing method, our interdisciplinary team reached a consensus on four guiding principles, or considerations, for selecting a method and implementing drug checking services for supply-level monitoring: (1) immediate utility to PWUD, (2) integration into SSP workflow, (3) balancing individual- and population-level data needs, and (4) attention to the legal context, each of which is described in further detail. Overall, the selected approach should align with the needs and concerns expressed by PWUD. Immediate utility to PWUD Of the four tests discussed, only one method, the rapid immunoassay test strips, provides immediate results to the participant. This was deemed to be of utmost importance because supply-level data cannot come at the expense of individual-level information, especially when such information can be used to inform decision-making related to use and, ultimately, reduce overdose risk [ 12 , 14 , 16 , 20 , 21 ]. In the final three tests, small amounts (10 mg) of sample are required. The idPAD, in contrast, requires much larger amounts (20 mg), presenting a significant barrier to implementation. Demonstration of immediate benefit to PWUD will be key in building trust among prospective participants. Integration into SSP workflow Considerations of the operational context were critical in thinking about the feasibility of implementation at the SSP. The time required for the idPAD would interfere with the existing SSP workflow, as there are often space constraints and lines of people waiting to enter during operating hours, although resources and structures vary widely between SSPs [ 37 , 38 ]. The processes for the second test using paper dots were cumbersome, often requiring assistance and a flat surface. The ease of the third test, in which individuals simply placed a scoop of sample into a liquid vial or tube, made it a feasible option. Similarly, FTS are portable and already distributed by most SSPs for use off-site, so this approach would cause no changes or disruptions to existing SSP operations. Additionally, advancements in harm reduction are underway in Ohio with the installation of public health vending machines (PHVMs) [ 39 ]. PHVMs are stocked with a range of essential supplies for PWUD to mitigate drug-related harms, including but not limited to sterile injection equipment, HIV test kits, condoms, sharps containers, naloxone, and FTS [ 39 ]. FTS dispensed through PHVMs could be packaged with prepaid mailing materials and information about the testing service, where rather than discarding the used strip, individuals submit the strip for analysis to contribute to supply-level monitoring [ 23 ]. As an example of a potential downstream analytical method, the Lieberman group has developed sensitive tandem LC–MS/MS analysis for 22 common drugs and drug metabolites [ 23 ]. The limit of detection for all analytes is below 0.07 ng/mL, and preliminary results show that a wide range of illicit compounds can be recovered from used FTS using this method (Fig. 2 ). All 21 drugs were recovered above the limit of detection, demonstrating the potential to obtain much more detailed information about the community drug supply than the result that FTS provide at the point-of-use.
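As a small illustration of how such downstream results could feed supply-level monitoring, the sketch below screens hypothetical per-sample LC–MS/MS concentrations against the reported 0.07 ng/mL limit of detection and tallies detections per analyte. The column names and values are illustrative only and do not represent the Lieberman group's data.

```python
import pandas as pd

LOD_NG_PER_ML = 0.07  # reported limit of detection for the LC-MS/MS panel

# Hypothetical extracted-strip results: one row per (sample, analyte) measurement.
results = pd.DataFrame({
    "sample_id": ["A1", "A1", "A2", "A2", "A3"],
    "analyte": ["fentanyl", "xylazine", "fentanyl", "cocaine", "fentanyl"],
    "conc_ng_per_ml": [12.4, 0.03, 3.1, 45.0, 0.05],
})

# Flag detections above the limit of detection.
results["detected"] = results["conc_ng_per_ml"] > LOD_NG_PER_ML

# Per-analyte detection counts, e.g. to feed a supply-monitoring dashboard.
summary = (results.groupby("analyte")["detected"]
           .agg(samples_tested="size", detections="sum")
           .reset_index())
print(summary)
```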
The current drug market has been characterized by fentanyl ubiquity [ 18 ], and thus, there will likely be shifts in demand for alternative test strips (e.g., xylazine test strips), as opposed to FTS. The method described herein is not limited to FTS, meaning used xylazine test strips could also be used for downstream analysis, but further work is needed to assess how drug-specific antibodies (e.g., fentanyl-specific antibody on FTS) affect the recovery of different drugs. Additionally, future studies should assess how long different drugs can be stored on used immunoassay test strips, how effectively and consistently they can be removed for analysis, and whether other drugs or cutting agents interfere with recovery or downstream analysis. Besides used test strips, other drug paraphernalia (e.g., cookers, cottons, bags) could be analyzed by extracting residue, but PWUD would receive no information at the point-of-use. This may be a beneficial approach for SSPs and harm reduction organizations that have working relationships with local law enforcement and prosecutors for safe disposal of syringes. For example, when law enforcement officials in St. Joseph County, Indiana, find used drug paraphernalia (e.g., syringes, cookers) in the community, they contact employees from the local harm reduction organization to safely collect and dispose of such materials. Paraphernalia collected for disposal, with the exception of syringes, could be submitted for analysis to contribute to supply-level monitoring. While there are previous studies where syringes were used for analysis [ 40 ], this approach requires safeguards for safe transport and handling of biohazardous materials. Additionally, submitting used syringes would limit analyses to substances consumed by injection, whereas collecting test strips or paraphernalia other than syringes accommodates testing of substances that were consumed through various routes of administration. This is an important consideration, as snorting has become increasingly common in the synthetic opioid era [ 41 – 43 ]. Balancing individual- and population-level data needs Members of this collaborative discussed the importance of utilizing existing infrastructure for dissemination of results to ensure that, even if there is a data lag, the results are useful and relevant to PWUD in the community. For example, results can feed into “bad batch alerts” systems. The SOAR (Safety, Outreach, Autonomy, Respect) Initiative in Ohio has developed a mobile application, modeled after a text messaging service in Baltimore [ 44 , 45 ], that alerts PWUD when overdoses have surged and when fentanyl has been detected and reported in multiple batches in a particular geographic area. Feeding results into a data stream that is trusted and used by PWUD maximizes the utility of data. Beyond bad batch alerts, this information can be used by SSP staff to share information with participants, effectively tailoring information to current supply trends. Similarly, public health departments often manage dashboards to monitor and evaluate overdose data; such dashboards can be complemented by overlaying overdose trends with supply-level trends (Fig. 3 ), facilitating the detection of emergent shifts and threats. While aggregate data can provide important information for supply-level monitoring, providing anonymous individual-level data can maximize benefits to individuals participating in drug checking programs. 
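As a toy illustration of this balance, the sketch below shows one hypothetical way a program could issue random sample IDs so that an individual can look up their own result while only de-identified aggregate counts are published. It is not based on any existing program's implementation; a real-world exemplar follows below.

```python
import secrets
from collections import Counter

class ResultStore:
    """Toy store: random sample IDs for individual lookup, aggregates for public reporting."""

    def __init__(self):
        self._results = {}  # sample_id -> list of detected analytes

    def new_sample_id(self) -> str:
        # Short random token handed to the participant at submission; no personal data stored.
        return secrets.token_hex(4)

    def record(self, sample_id: str, detected_analytes: list[str]) -> None:
        self._results[sample_id] = detected_analytes

    def lookup(self, sample_id: str):
        """Individual-level view: only the holder of the ID can retrieve this result."""
        return self._results.get(sample_id)

    def aggregate(self) -> Counter:
        """Population-level view: detection counts with no sample identifiers."""
        return Counter(a for analytes in self._results.values() for a in analytes)

store = ResultStore()
sid = store.new_sample_id()
store.record(sid, ["fentanyl", "xylazine"])
print(store.lookup(sid), store.aggregate())
```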
The dashboard (streetsafe.supply) developed and maintained by the Injury Prevention Research Center at the University of North Carolina–Chapel Hill, which offers mail-based drug checking services, is one exemplar [ 48 ]. Each sample is assigned an anonymous ID, which individuals make note of prior to submission. Individual results are posted to the dashboard with the associated sample ID, allowing individuals to access the results from their sample. Publishing individual-level results on a dashboard underscores the need to protect participants' anonymity to avoid both (a) criminalization [ 49 ] and (b) retaliation from those who sell drugs for perceived "snitching", [ 50 , 51 ] potentially disrupting supply chains or social networks [ 52 ]. Attention to the legal context for implementation and dissemination Recognition of the legal complexities associated with each approach was also central to the discussion. Asking individuals to provide a sample on-site requires significant trust [ 25 ], and in most states, drug possession on SSP premises is prohibited [ 19 ], meaning individuals would have to complete the test off-site and bring completed materials to their next visit. Alternatively, the SSP could provide individuals with prepaid mailing supplies, allowing individuals to both complete and submit the test off-site. Whether SSP participants or staff are responsible for mailing completed testing materials is of consequence to the research partner because staff can ship materials according to a planned schedule, whereas samples ready for analysis will be received sporadically when submitted by individual participants. The level of data collected and reported should be scrutinized, carefully considering the utility of such information to PWUD as well as how such information could be used by police. At minimum, prospective participants should be fully informed on how data will be used for supply-level monitoring. Scholars have raised concerns about police using supply-level monitoring—and geospatial data, in particular—to target enforcement resources [ 49 ]. Protecting participants' anonymity is paramount to ensure public health monitoring does not facilitate increased—and counterproductive—criminalization among individuals participating in harm reduction programming [ 49 ]. Drug paraphernalia laws can prevent PWUD from participating in harm reduction programming [ 53 ], and thus may present a barrier to participation in drug checking services. Paraphernalia laws broadly prohibit the possession of equipment that is associated with illicit drugs, even equipment used for testing, although considerable heterogeneity exists across states [ 19 , 53 ], and the legal status of FTS has often been ambiguous [ 53 ]. In 2021, the Centers for Disease Control and Prevention (CDC) and the Substance Abuse and Mental Health Services Administration (SAMHSA) announced new regulations that now allow federal funding to be used to purchase FTS. Historically, in as many as 30 states, it was illegal to possess drug checking equipment, which included FTS, and 33 states prohibited the distribution of drug checking equipment [ 19 ]. Penalties for violation of drug paraphernalia laws varied widely, ranging from civil fines to multi-year sentences [ 19 ].
Even though regulations have changed, and loopholes exist [ 54 ], limited awareness may discourage participation in and implementation of drug checking programming due to concerns about potential criminalization [ 53 ], underscoring the need to promote awareness among PWUD. Furthermore, there are complexities associated with new regulations that still limit participation in the full range of harm reduction services. For example, in Ohio, the recent passage of SB 288 excludes only FTS from drug paraphernalia laws [ 55 ]; rapid immunoassay test strips for other scheduled substances would still be subject to drug paraphernalia laws. Drug paraphernalia laws are particularly relevant for partners collecting and submitting used paraphernalia for analysis. This approach requires strong working relationships between harm reduction organizations and local law enforcement, which can be facilitated by providing officers with training and resources that detail the well-established benefits of harm reduction services to PWUD—and the community at-large [ 56 ]. These relationships, or even partnerships, between harm reduction organizations and law enforcement are critical because officers have discretion in how they respond to, and enforce, substance use-related incidents [ 57 – 60 ]. Processes to accelerate implementation In addition to considerations for implementation at SSPs and with PWUD, special considerations exist for the implementation of these protocols at academic research institutions conducting downstream analyses of illicit compounds. While analytical reference solutions of controlled substances can be purchased and handled by academic researchers without additional approvals, the purchasing, handling, and disposal of solid illicit drug standards and samples are regulated by government entities at the federal (Drug Enforcement Administration [DEA]), state (State Pharmacy Boards), and local levels. Specifically, academic laboratories wishing to work with solid illicit drugs are required to acquire the license(s) for the schedules of drugs of interest. It is unclear, however, whether these regulations apply to used FTS, as they are waste and do not require special protocols for disposal. In any case, approvals and documents of support or acknowledgment from government organizations, especially the DEA, may facilitate increased stakeholder support, alleviating concerns about legality and enforcement. Additionally, forming working relationships between harm reduction organizations and local law enforcement can help safeguard PWUD, mitigating concerns about policing and criminalization of those participating in drug checking and other harm reduction services [ 56 , 57 , 59 , 60 ]. If applications or standard operating procedures are required, these should be initiated as early as possible to enable timely incorporation of samples collected through SSP collaborations. Collaborations between academic laboratories and SSPs provide an opportunity to develop and validate methods for targeted and non-targeted analysis, which depend on real-world samples because adulterants in the supply can cause chemical interference that would not be observed when testing with pure, analytical-grade compounds.
SSPs can provide academic institutions with diverse, real-world samples that enhance the utility of novel tests and technologies, while academic institutions provide access to analytical instrumentation (e.g., LC–MS/MS) that facilitate robust, detailed analyses for drug checking, overcoming the limitations of existing rapid tests and advancing supply-level monitoring efforts [ 25 ]. Implications for public health policy and practice The USA faces a worsening overdose crisis, exacerbated by supply shifts and the emergence of xylazine, altering the risk environment for PWUD [ 3 , 7 , 8 ]. In the absence of safe supply, drug checking services are an urgent need [ 12 , 13 ], as these services provide PWUD with agency to navigate an unpredictable drug market [ 18 ]. Many SSPs and harm reduction programs distribute rapid immunoassay test strips, and community-academic partnerships provide a promising avenue to enhance existing drug checking services for supply-level monitoring, by developing and validating methods for analysis (e.g., xylazine test strips). A wide variety of technologies exist that can be applied for drug checking services [ 17 , 26 , 27 ], each of which has its own strengths and limitations. Faced with budgetary constraints, harm reduction organizations will have to balance tradeoffs, and although there is likely not a one-size-fits-all approach, the implementation of drug checking services should be guided and informed by key principles. For one, tests should prioritize immediate utility to participants. Additionally, the dissemination of results should carefully balance individual- and supply-level information needs, while ensuring anonymity to mitigate the potential for targeted policing and criminalization among participating individuals and communities [ 49 ]. The processes for dissemination should also be considered, looking to existing, trusted data infrastructure used by PWUD (e.g., bad batch alert systems) to maximize the utility of data. Existing supply monitoring efforts are limited and typically stem from law enforcement seizures and postmortem toxicology results, both of which are subject to selection bias [ 13 ]. In the collaborative described herein, SSPs will continue to distribute FTS as normal, but participants can submit the used test strip for analysis rather than discarding it. This approach ensures participants receive immediate results that can inform how they use, while also contributing to supply-level data. The costs associated with testing present a barrier to the scale and sustainability of community–academic partnerships—and to drug checking services more broadly. Opioid settlement funds may provide one mechanism to fund drug checking and other essential harm reduction services that have long been the financial responsibility of community-based organizations [ 61 ].
Abbreviations FTS: Fentanyl test strips; FTIR: Fourier-transform infrared spectroscopy; idPAD: Illicit drug paper analytical device; LC–MS/MS: Liquid chromatography with tandem mass spectrometry; PHVM: Public health vending machines; PWUD: People who use drugs; SSP: Syringe service program. Acknowledgements The authors extend their thanks and appreciation to James Decker, Gary Bright, Sharona Bishop, and Brittney Nye from Hancock Public Health (Findlay, OH) for their thoughtful review and comments on this project. Author contributions KJM contributed to conceptualization; HDW, KLH, and ML contributed to methodology; KJM performed writing—original draft; HDW, KAH, KLH, DS, BC, RB, and AT performed writing—review and editing; AT, ML, and SN performed supervision. All authors reviewed and approved the manuscript in its final form. Funding Funding was received from the following sources to support the development of analytical methods: Berthiaume Institute for Precision Health at the University of Notre Dame (Substance Abuse Fund); Indiana Clinical and Translational Sciences Institute, funded in part by Grant No. UL1TR002529 from the National Institutes of Health National Center for Advancing Translational Sciences; and the National Science Foundation Partnership for Innovation (Grant No. IIP-2016516). The content of this manuscript is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or any other funding agency. Availability of data and materials Not applicable. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests No competing interests to disclose.
CC BY
no
2024-01-15 23:43:47
Harm Reduct J. 2024 Jan 13; 21:11
oa_package/3f/d6/PMC10788002.tar.gz
PMC10788003
38218768
Introduction Acute coronary syndrome (ACS) encompasses various types of myocardial ischaemia, including ST-segment elevation myocardial infarction (STEMI), non-STEMI and unstable angina pectoris (UAP). The condition is primarily caused by the destabilisation of coronary atherosclerotic plaques [ 1 , 2 ]. Emerging evidence suggests that an upregulated inflammatory response and abnormal metabolism of specific lipid molecules play significant roles in the formation, rupture and subsequent development of ACS. In addition to established traditional lipid biomarkers, such as low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C) and triglycerides (TGs), several metabolomic and lipidomic indicators, including ceramides and apolipoproteins, have been implicated in the occurrence and progression of ACS [ 3 – 5 ]. Ceramide, a sphingolipid that can be generated by sphingomyelinase-mediated hydrolysis of sphingomyelin, assumes a critical function in preserving the structural integrity of cells and acts as a bioactive lipid participating in diverse cellular signalling pathways related to cell proliferation, differentiation and apoptosis. Substantial research has established a link between ceramides and multiple atherosclerotic processes, encompassing lipoprotein aggregation, cholesterol accumulation in macrophages, the modulation of nitric oxide synthesis, the generation of superoxide anions and the regulation of cytokine expression [ 6 – 9 ]. Notably, ceramides have been implicated in acute coronary events by fostering the infiltration of oxidised LDL-C into vascular walls, monocyte adhesion, atherosclerotic plaque formation and the expansion of the lipid-rich core, rendering it more susceptible to rupture [ 10 ]. Intracoronary imaging studies have predominantly identified ceramides within the thin fibrous cap of atheromatous plaques with necrotic cores [ 11 , 12 ]. Plasma ceramides have exhibited a positive and independent correlation with plaque rupture and erosion in patients suffering from acute myocardial infarction (AMI), as evaluated by optical coherence tomography [ 13 ]. Moreover, specific molecular lipid species, particularly ceramide (d18:1/16:0), have been linked to the fraction of necrotic core tissue and lipid core burden in coronary atherosclerosis, serving as predictive markers for the 1-year clinical outcome following coronary angiography (CAG) [ 5 ]. Bolstered by both theoretical evidence and empirical findings, ceramide (d18:1/16:0), ceramide (d18:1/18:0) and ceramide (d18:1/24:1) and their ratios to ceramide (d18:1/24:0) have emerged as novel risk indicators in patients diagnosed with confirmed coronary heart disease [ 14 ]. In addition, the progression of atherosclerotic cardiovascular disease is influenced by systemic vascular inflammation, which contributes to multiple maladaptive processes [ 15 ]. Interleukin-6 (IL-6), a pro-inflammatory cytokine, has been proposed as a potential predictor of coronary artery disease (CAD) severity and has been associated with plaque burden, as assessed by intracoronary imaging [ 16 , 17 ]. Furthermore, evidence suggests that the activation of the inflammatory pathway is crucial for ceramide biosynthesis [ 18 ]. In-vitro experiments have also demonstrated the interaction between ceramides and cytokines in various cellular pathways, highlighting their involvement in inflammation [ 19 – 21 ]. 
Given the interplay between pro-inflammatory cytokines and ceramides, along with their significant roles in the occurrence and progression of ACS, it is conceivable that there is a substantial overlap in the biological processes triggered by these factors in patients diagnosed with ACS. In summary, we speculated that the concurrent measurement of traditional risk factors, pro-inflammatory cytokines such as tumour necrosis factor-alpha (TNF-α) and IL-6, along with ceramides, could improve the diagnostic accuracy of ACS. However, limited research has explored the relationship between inflammatory factors and ceramides in patients with ACS, as well as the potential diagnostic benefits of concurrently measuring pro-inflammatory cytokines, such as TNF-α and IL-6, alongside ceramides. Therefore, the objective of our study was to investigate the correlation between ceramides and pro-inflammatory cytokines in patients with ACS and to evaluate the potential added value of combining ceramide and pro-inflammatory cytokine testing for early ACS diagnosis.
Method and materials Study design and participants This observational study involved the enrolment of 216 patients who were admitted and underwent CAG for suspected CAD between July 2021 and May 2022 at the Second Hospital of Hebei Medical University, China. The exclusion criteria encompassed moderate to severe chronic kidney disease (estimated glomerular filtration rate ≤ 60 mL/min/1.73 m2), chronic heart failure, acute cerebrovascular disease, a history of malignant tumours and active infectious diseases. Patients who had been prescribed anti-hyperlipidaemic medication or met the diagnostic criteria for AMI without evidence of vascular stenosis were also excluded from the study. For a comprehensive overview of the detailed inclusion and exclusion processes, please refer to Fig. 1 . The inclusion criterion was patients suspected of having CAD by clinical doctors based on their clinical symptoms (e.g. chest pain, dyspnoea). Two highly skilled cardiologists independently reviewed all coronary angiograms, and a diagnosis of ACS was determined based on a combination of clinical symptoms, electrocardiographic changes, cardiac biomarkers and findings from CAG that indicated the presence of significant stenosis (≥ 50%) in one or more coronary arteries, following the recommended criteria outlined in the European Society of Cardiology guidelines [ 22 , 23 ]. Prior to participation in the study, written informed consent was obtained from all of the enrolled participants. The study protocol strictly adhered to the principles outlined in the Declaration of Helsinki and received approval from the Ethics Committee of the Second Hospital of Hebei Medical University (ethics approval number: W2021041). Preparation of blood samples After a minimum fasting period of 12 h following admission, venous blood samples were collected from the patients. The blood was drawn from peripheral veins using ethylenediaminetetraacetic acid-containing tubes to preserve the integrity of the samples for the measurement of ceramides and cytokines. To ensure optimal preservation, aliquots of plasma were promptly prepared and stored at a temperature of − 80°C until the time of analysis. Demographic, clinical and laboratory assessments The study assessed various demographic and traditional risk factors. Demographic variables included age and gender. The traditional risk factors that were evaluated were body mass index (kg/m2), smoking history, medical history of alcohol consumption, diabetes mellitus (DM) and hypertension. In addition, laboratory variables were measured, including baseline serum lipid markers, such as total cholesterol, LDL-C, HDL-C and TGs. Other variables measured were lipoprotein(a) (LPa), hypersensitive C-reactive protein (hs-CRP), fasting blood glucose and B-natriuretic peptide. Ceramide measurements The quantification of ceramides was conducted by a senior laboratory examiner who was blinded to the clinical details of the participants. To extract plasma ceramides, a liquid–liquid extraction method using methanol was employed. The levels of plasma ceramide (d18:1/14:0), ceramide (d18:1/16:0), ceramide (d18:1/18:0), ceramide (d18:1/20:0), ceramide (d18:1/22:0), ceramide (d18:1/24:0) and ceramide (d18:1/24:1) were measured using the Shimadzu LC-20 A system. The Phenomenex Kinetex C18 analytical column (2.6 μm, 3.0 × 50 mm id.) was used coupled with an AB SCIEX triple Quad 4500 tandem mass spectrometer (Applied Biosystems Inc., USA) equipped with an electrospray ion source. 
The gradient reversed-phase chromatography used two mobile phases: (A) liquid chromatography–mass spectrometry (LC–MS) grade water containing 0.1% formic acid and ammonium acetate, and (B) acetonitrile–isopropyl alcohol (7:3, v/v) containing 0.1% formic acid and 2 mM ammonium acetate. Ceramide (d18:1/17:0) was used as the internal standard. The calibration linearity of ceramides was established by plotting the ratio of the peak response of ceramides to the peak response of their respective stable isotope internal standard in working standard solutions against the quantity of ceramides. To ensure the stability of analysis and calibration verification, three levels of standard solutions (QCL, QCM and QCH) were employed as quality controls and were checked every 10 injections. All ceramides exhibited excellent linearity within the calibration range, with correlation coefficients (R²) of > 0.99. No matrix interference or carryover was observed during the analysis. Cytokine measurements The plasma levels of TNF-α, IL-6 and IL-8 were quantified using enzyme-linked immunosorbent assay (ELISA) kits obtained from Biotech Pack Analytical in Beijing, China. The ELISA kits had a minimum detectable concentration of < 1.0 pg/mL. After the samples were thawed, the ELISA measurements were conducted by a skilled laboratory examiner. The method employed yielded both intra-assay and inter-assay coefficients of variation of < 15% each, ensuring reliable and consistent results. Statistical analyses Statistical analyses were conducted using the IBM SPSS Statistics version 25.0 software (IBM Corp., Armonk, NY, USA). Continuous variables were presented as mean ± standard deviation, while categorical variables were expressed as percentages. The normality of data was assessed using the Kolmogorov–Smirnov test. For normally distributed continuous variables, Student's t-test was used for inter-group comparisons, whereas the Mann–Whitney U test was employed for non-normally distributed data. Categorical variables were compared using either the chi-square test or Fisher's exact test, depending on the sample size of patients in the analysis group. The association between ceramides and pro-inflammatory cytokines was evaluated using Pearson's correlation coefficient. Receiver operating characteristic (ROC) curves and multivariate logistic regression were used to analyse the clinical accuracy of ceramides combined with cytokines in predicting ACS. The performance and discrimination ability of the four diagnostic models were assessed using the R statistical software version 3.4.3. Statistical significance was considered when the two-sided P -value was < 0.05.
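To make the statistical workflow above concrete, the following minimal Python sketch (an illustration, not the authors' SPSS/R code) runs the analogous steps for a single ceramide species: a Pearson correlation with a cytokine, a univariable logistic regression with its odds ratio and 95% CI, and the AUC of the fitted model. The DataFrame and its column names (cer_16_0, il6, acs) are hypothetical.

import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def univariable_summary(df: pd.DataFrame, predictor: str, cytokine: str, outcome: str = "acs"):
    """Pearson r with a cytokine, plus univariable odds ratio (95% CI) and AUC for one predictor."""
    r, p = pearsonr(df[predictor], df[cytokine])        # correlation between ceramide and cytokine
    X = sm.add_constant(df[[predictor]])                # intercept + single predictor
    fit = sm.Logit(df[outcome], X).fit(disp=0)          # univariable logistic regression
    ci = np.exp(fit.conf_int().loc[predictor])          # exponentiate the coefficient's 95% CI
    auc = roc_auc_score(df[outcome], fit.predict(X))    # discrimination of the fitted model
    return {"pearson_r": r, "pearson_p": p,
            "odds_ratio": float(np.exp(fit.params[predictor])),
            "or_95ci": (float(ci.iloc[0]), float(ci.iloc[1])),
            "auc": float(auc)}

# Hypothetical usage: univariable_summary(df, predictor="cer_16_0", cytokine="il6", outcome="acs")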
Results Demographic, clinical and laboratory findings Table 1 presents the demographic, clinical and laboratory characteristics of all participants included in the study. Out of the total of 216 participants, 138 were diagnosed with ACS, while the remaining participants were classified as non-ACS. Within the ACS group, 11 individuals (7.97%) had STEMI, 25 (18.12%) had NSTEMI and 102 (73.91%) were diagnosed with UAP. All patients with UAP exhibited major vessel stenosis of > 50% as observed via CAG. In comparison with the non-ACS group, a higher percentage of patients with ACS were men (69.6%), current smokers (44.2%) and had a history of DM (37.0%). Additionally, the ACS group demonstrated significantly higher TG and LPa levels, along with lower levels of HDL-C, in comparison with those without ACS. Furthermore, BNP levels were found to be significantly elevated in patients with ACS ( P < 0.05). Pro-inflammatory cytokine plasma levels and ceramides of participants Table 2 presents the plasma levels of 3 pro-inflammatory cytokines and 7 ceramide species in all participants included in the study. Patients with ACS exhibited significantly higher levels of TNF-α and IL-6 compared with those without ACS ( P < 0.01). Among the analysed ceramides, ceramide (d18:1/16:0) displayed the most substantial elevation in plasma levels in the patients with ACS, with a P -value close to 0. The P -values for ceramide (d18:1/24:0) and ceramide (d18:1/22:0) were also close to 0, indicating significant elevation. For ceramide (d18:1/18:0) and ceramide (d18:1/20:0), the P -values were < 0.05, indicating a significant increase. In contrast, the P -values for ceramide (d18:1/14:0) and ceramide (d18:1/24:1) were 0.097 and 0.361, respectively, suggesting no significant elevation. These findings suggest that the plasma levels of ceramide (d18:1/24:0), ceramide (d18:1/22:0), ceramide (d18:1/18:0) and ceramide (d18:1/20:0) were significantly elevated in the patients with ACS, while no significant elevation was observed for ceramide (d18:1/14:0) or ceramide (d18:1/24:1). The association between ceramide and inflammatory factors Table 3 presents the associations between plasma ceramides and inflammatory factors. The results indicate no significant associations between ceramides and pro-inflammatory cytokines TNF-α, IL-6 or IL-8. However, a mild association was observed between hs-CRP and ceramides d18:1/18:0 and d18:1/20:0, with P -values of < 0.5. Univariable and multivariable logistic regression results: the clinical acute coronary syndrome predictors Table 4 presents the results of the univariate logistic and multivariate regression analyses for clinical predictors of ACS. Among the traditional risk factors, age (OR = 0.981, 95% CI: 1.580–4.990, P < 0.001), male (OR = 2.808, 95% CI: 1.272–4.767, P < 0.001), DM history (OR = 2.462, 95% CI: 1.272–4.767, P < 0.01) and being a current smoker (OR = 3.961, 95% CI: 1.999–7.848, P < 0.001) were significant predictors of ACS ( P < 0.05). Additionally, most of the lipid profiles, including TGs (OR = 1.405, 95% CI: 1.008–1.960, P < 0.05) and HDL-C (OR = 0.227, 95% CI: 0.081–0.636, P < 0.01), were significantly associated with ACS ( P < 0.05). The pro-inflammatory cytokines, TNF-α (OR = 1.063, 95% CI: 1.018–1.109, P < 0.01) and IL-6 (OR = 1.157, 95% CI: 1.077–1.243, P < 0.001), were also significant predictors of ACS ( P < 0.01). 
Moreover, several ceramides, including ceramide (d18:1/16:0) (OR = 1.016, 95% CI: 1.008–1.024, P < 0.001), ceramide (d18:1/18:0) (OR = 1.017, 95% CI: 1.000–1.034, P < 0.05), ceramide (d18:1/24:0) (OR = 1.001, 95% CI: 1.000–1.001, P < 0.01), ceramide (d18:1/20:0) (OR = 1.016, 95% CI: 1.001–1.032, P < 0.05) and ceramide (d18:1/22:0) (OR = 1.003, 95% CI: 1.001–1.005, P < 0.05), were significantly associated with ACS. The results of the multivariate regression analysis demonstrated that being male (OR = 2.702, 95% CI: 1.290–5.658, P < 0.01), having a DM history (OR = 2.329, 95% CI: 1.077–5.035, P < 0.05), being a current smoker (OR = 2.702, 95% CI: 1.204–6.066, P < 0.05), LPa (OR = 1.006, 95% CI: 1.000–1.012, P < 0.05), IL-6 (OR = 1.173, 95% CI: 1.082–1.271, P < 0.001) and ceramide (d18:1/16:0) (OR = 1.018, 95% CI: 1.008–1.028, P < 0.001) were all significant predictors of ACS. Predictive value of the models for acute coronary syndrome Table 5 presents the predictive values of four diagnostic models, which included variables with P -values of < 0.05 in the multivariate logistic regression. Model 1 (traditional risk factors alone) achieved an area under the curve (AUC) of 0.722 for diagnosing ACS. When IL-6 was added to the model, the AUC increased to 0.785. Incorporating traditional risk factors along with ceramide (d18:1/16:0) resulted in an AUC of 0.782. Finally, model 4, which combined the traditional risk factors (male gender, history of DM, current smoking status and elevated LPa) with IL-6 and ceramide (d18:1/16:0), demonstrated an AUC of 0.827. The results indicated that the combination of traditional risk factors, IL-6 and ceramide (d18:1/16:0) significantly improved the AUC of model 4 compared with model 1 (0.827 [0.770–0.884] vs. 0.722 [0.653–0.791], P < 0.001), model 2 (0.827 [0.770–0.884] vs. 0.785 [0.723–0.846], P < 0.05) and model 3 (0.827 [0.770–0.884] vs. 0.782 [0.720–0.845], P < 0.05). The ROC curves for the prediction of ACS for the four models are depicted in Fig. 2 .
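The AUC comparisons above were produced in R; as one illustrative (not the authors') way to quantify whether adding IL-6 and ceramide (d18:1/16:0) improves discrimination, the sketch below bootstraps the difference in AUC between a baseline model and an extended model. The inputs y, p_base and p_full are assumed arrays of outcomes and predicted probabilities, not the study's data.

import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_gain(y, p_base, p_full, n_boot=2000, seed=0):
    """Observed AUC gain of the extended model and a 95% bootstrap confidence interval for it."""
    rng = np.random.default_rng(seed)
    y, p_base, p_full = map(np.asarray, (y, p_base, p_full))
    observed = roc_auc_score(y, p_full) - roc_auc_score(y, p_base)
    gains = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))   # resample patients with replacement
        if len(np.unique(y[idx])) < 2:          # an AUC needs both classes present
            continue
        gains.append(roc_auc_score(y[idx], p_full[idx]) - roc_auc_score(y[idx], p_base[idx]))
    lower, upper = np.percentile(gains, [2.5, 97.5])
    return observed, (lower, upper)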
Discussion Due to the multifaceted nature of the underlying aetiology and mechanisms of ACS, relying on a single biomarker for accurate prediction is unlikely. Therefore, the development of a multi-marker model is crucial to enhance the prediction of ACS occurrence. In this study, a proportional increase in the plasma levels of ceramide (d18:1/16:0), ceramide (d18:1/18:0), ceramide (d18:1/20:0), ceramide (d18:1/22:0), ceramide (d18:1/24:0), TNF-α and IL-6 in patients with ACS was observed. Certain ceramide species, specifically, ceramide (d18:1/16:0), ceramide (d18:1/18:0), ceramide (d18:1/20:0), ceramide (d18:1/22:0) and ceramide (d18:1/24:0), along with TNF-α and IL-6, were identified as independent predictors of ACS, even after adjusting for traditional risk factors. The proposed model, incorporating the traditional risk factors, ceramide (d18:1/16:0) and IL-6, demonstrated an AUC of 0.827. In comparison, the AUCs of the models combining the traditional risk factors with ceramide (d18:1/16:0) alone or with IL-6 alone, and of the model considering the traditional risk factors only, were 0.782, 0.785 and 0.722, respectively. These findings suggest that combining the assessment of traditional risk factors, including male gender, history of DM, current smoking status and elevated LPa, with ceramide (d18:1/16:0) and IL-6 can enhance the predictive accuracy for ACS. Although our study did not find any significant associations between the ceramide subspecies and proinflammatory cytokines, in-vitro research suggests that ceramide, as a second messenger of sphingolipids, is linked to various cytokines, including TNF-α and IL-6. Ceramides have been shown to stimulate the secretion of IL-6 and CRP, exerting a direct pro-inflammatory effect [ 24 – 28 ]. Some studies have indicated that cytokines such as TNF-α and IL-6 can impact phospholipid metabolism and subsequently influence ceramide production [ 18 ]. However, our study did not find a significant association between any of the ceramide molecules and TNF-α or IL-6, while a mild relationship was observed between hs-CRP and ceramide (d18:1/18:0) and ceramide (d18:1/20:0), which is inconsistent with these prior studies. This discrepancy may be attributed to our relatively small sample size and the lack of serial biomarker measurements. Further research is warranted to explore the relationship between proinflammatory cytokines and ceramide molecules in more detail. Numerous studies have provided compelling evidence of the strong association between inflammation and ACS [ 29 ]. Interleukin-6, IL-18 and TNF-α are found in human plaques and may play roles in plaque progression and rupture since they have been associated with ACS. Additionally, they are associated with a higher incidence of acute cardiovascular events in patients with extreme cardiovascular risk [ 30 ]. Among the proinflammatory cytokines, IL-6 (primarily produced by T-cells and macrophages) plays a crucial role in destabilising plaques, promoting atheroprogression and stimulating the production of hs-CRP [ 31 , 32 ]. Moreover, a comprehensive analysis of multiple studies consistently demonstrated that elevated blood levels of IL-6 independently increase the risk of major adverse cardiovascular events, as well as cardiovascular and all-cause mortality in patients with ACS [ 33 ]. 
Consequently, measuring IL-6 levels in the blood holds promise for improving the risk stratification of patients with ACS [ 34 ]. In our study, we observed significantly higher plasma levels of IL-6 in patients with ACS; furthermore, we identified an independent association between IL-6 levels and the occurrence of ACS. Ceramides, which are bioactive lipids with crucial regulatory functions in pro-inflammatory cytokines, have been implicated in various cardiovascular conditions. Elevated serum ceramide concentrations serve as predictors for cardiovascular atherosclerotic disease, stroke, heart failure and atrial fibrillation [ 35 , 36 ]. Moreover, specific plasma ceramide levels are correlated with heightened cardiovascular mortality in ambulatory patients with chronic heart failure [ 37 ]. A previous study developed a model that combined the measurement of ceramides with high-sensitive troponin T for the detection of ACS in patients presenting with chest pain, achieving an impressive AUC of 0.865 [ 38 ]. Another study demonstrated that elevated plasma levels of ceramide (d18:1/16:0), ceramide (d18:1/18:0) and ceramide (d18:1/24:1) were independent predictors of a high atherosclerotic burden in patients with STEMI [ 39 ]. Furthermore, ceramide molecules, including ceramide (d18:1/16:0), ceramide (d18:1/18:0) and ceramide (d18:1/24:1), as well as their ratios to ceramide (d18:1/24:0), have emerged as promising risk stratifiers in patients with established CAD [ 14 ]. Additionally, ceramides have shown potential as plasma biomarkers for the early prediction of restenosis after percutaneous coronary intervention [ 40 ]. However, this study indicated that only ceramide (d18:1/16:0) independently predicted the occurrence of ACS. Limitations Several limitations of the current study should be acknowledged. First, the study was conducted at a single centre, limiting the generalisability of its findings to other populations. Second, the sample size was relatively small, warranting the need for larger prospective studies to validate the conclusions. Third, the absence of serial measurements hindered the ability to establish a temporal relationship between the biomarkers and the onset of ACS. Fourth, uric acid has been identified as a significant determinant of many different outcomes, such as all-cause and cardiovascular mortality, as well as cardiovascular events in patients with chronic coronary syndromes and ACS [ 41 ]. However, the biomarker of serum uric acid was not included in this study. Additionally, internal and external validation of the diagnostic models was not performed, which is crucial for assessing their robustness. Therefore, further research is warranted to confirm the diagnostic value of these biomarkers and to establish a validated diagnostic model for ACS.
Conclusion In conclusion, ceramide (d18:1/16:0) plays an essential role in predicting ACS. In addition, this study’s results support the idea that the simultaneous measurement of traditional risk factors, IL-6 and ceramide (d18:1/16:0) can improve the diagnostic accuracy of ACS. While these findings may not offer novel perspectives for developing new therapeutic approaches, an ACS risk assessment combining IL-6 and ceramide (d18:1/16:0) presents a unique tool for aiding clinical implementation and decision-making in patients suspected of having atherosclerosis. Additionally, ACS risk assessment has the potential to enhance patients’ adherence to medical therapy and lifestyle changes.
Background There is a growing body of evidence supporting the significant involvement of both ceramides and pro-inflammatory cytokines in the occurrence and progression of acute coronary syndrome (ACS). Methods This study encompassed 216 participants whose laboratory variables were analysed using standardised procedures. Parameters included baseline serum lipid markers, comprising total cholesterol, low-density lipoprotein-cholesterol, high-density lipoprotein-cholesterol, triglycerides (TGs), lipoprotein(a) (LPa), fasting blood glucose, B-natriuretic peptide and hypersensitive C-reactive protein. Liquid chromatography-tandem mass spectrometry measured the concentrations of plasma ceramides. Enzyme-linked immunosorbent assay quantified tumour necrosis factor-α (TNF-α), interleukin 6 (IL6) and IL8. The correlation between ceramides and inflammatory factors was determined through Pearson’s correlation coefficient. Receiver operating characteristic (ROC) curve analysis and multivariate logistic regression evaluated the diagnostic potential of models incorporating traditional risk factors, ceramides and pro-inflammatory cytokines in ACS detection. Results Among the 216 participants, 138 (63.89%) were diagnosed with ACS. Univariate logistic regression analysis identified significant independent predictors of ACS, including age, gender, history of diabetes, smoking history, TGs, TNF-α, IL-6, ceramide (d18:1/16:0), ceramide (d18:1/18:0), ceramide (d18:1/24:0), ceramide (d18:1/20:0) and ceramide (d18:1/22:0). Multivariate logistic regression analysis revealed significant associations between gender, diabetes mellitus history, smoking history, LPa, IL-6, ceramide (d18:1/16:0) and ACS. Receiver operating characteristic analysis indicated that model 4, which integrated traditional risk factors, IL-6 and ceramide (d18:1/16:0), achieved the highest area under the curve (AUC) of 0.827 (95% CI 0.770–0.884), compared with model 3 (traditional risk factors and ceramide [d18:1/16:0]) with an AUC of 0.782 (95% CI 0.720–0.845) and model 2 (traditional risk factors and IL-6), with an AUC of 0.785 (95% CI 0.723–0.846) in ACS detection. Conclusions In summary, incorporating the simultaneous measurement of traditional risk factors, pro-inflammatory cytokine IL-6 and ceramide (d18:1/16:0) can improve the diagnostic accuracy of ACS. Keywords
Acknowledgements Not applicable. Author contributions LHQ and GBY conceived of the study, and LFJ, ZL and LL participated in its design and data analysis and statistics. All authors helped to draft the manuscript, read and approved the final manuscript. Funding Not applicable. Data availability All data generated or analyzed during this study are included in this published article. Declarations Ethics approval and consent to participate The study protocol strictly adhered to the principles set forth in the Declaration of Helsinki and received approval from the Ethics Committee of the Second Hospital of Hebei Medical University (Ethics approval number: W2021041). We obtained signed informed consent from the participants. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:47
BMC Cardiovasc Disord. 2024 Jan 13; 24:47
oa_package/ad/d7/PMC10788003.tar.gz
PMC10788004
38218832
Introduction Thyroid cancer accounts for about 1% of all malignant tumors in humans and around 33% of head and neck malignant tumors [ 1 ]. Furthermore, Papillary Thyroid Carcinoma (PTC) accounts for approximately 80–90% of thyroid malignant tumors. The global incidence of thyroid cancer has increased rapidly in recent years, at a rate of 4.5–6.6% annually [ 2 ]. Although most PTC patients have a good prognosis, it is noteworthy that 20–90% of PTC patients can develop Lymph Node Metastasis (LNM) [ 3 ]. In addition to increasing the risk of local recurrence, LNM can lower the Disease-Free Survival (DFS) rate of PTC patients. Furthermore, LNM may lead to secondary surgery or radiation iodine therapy, affecting patients’ Quality of Life (QoL) [ 4 ]. The PTC LNM often occurs in the central region at first, followed by the lateral cervical region, and finally the mediastinal lymph nodes [ 5 ]. However, it is noteworthy that metastatic progression does not strictly adhere to these steps, as some stages could be skipped. Therefore, a comprehensive, rational, and appropriate initial surgery can reduce the risk of postoperative recurrence and the possibility of secondary surgery. Herein, we retrospectively analyzed the clinicopathological data of 2384 PTC patients, focusing on the risk factors of Central Lymph Node Metastasis (CLNM) and Lateral Lymph Node Metastasis (LLNM) in PTC patients. We also explored the surgical methods for treating PTC.
Materials and methods General information The Medical Ethics Committee of Inner Mongolia Medical University Affiliated Hospital approved our research plan. Our hospital admitted 2709 thyroid malignant tumor patients from January 2016 to December 2020. Among them, 2384 [88.0%, 353 males (14.8%) and 2031 females (85.2%), age range = 15–83 years, average age = 46.41 ± 10.23 years] were PTC patients. Preoperative ultrasound examination of the thyroid and neck lymph nodes, chest X-ray, and chest CT were performed on all patients, with some undergoing neck CT examination. Preoperative fine-needle biopsy was performed on 210 patients (10%), of whom 179 were diagnosed with PTC and LNM. The diagnosis was confirmed through intraoperative freezing and postoperative pathological examination, which yielded a positive rate of 85%. All patients underwent an intraoperative frozen section pathological examination during surgery and a routine postoperative pathological examination. The exclusion criteria were as follows: (1) patients with a previous history of malignant tumors; (2) patients undergoing the present operation as a secondary surgery; (3) patients who were pathologically diagnosed with non-papillary carcinoma; (4) patients with upper mediastinal LNM or distant metastasis; (5) patients who underwent non-radical surgery; and (6) patients who did not undergo follow-up examination six months post-surgery. Surgical methods All PTC patients included herein underwent radical thyroidectomy. Among them, 1829 (76.7%) and 555 (23.3%) patients underwent unilateral Central Lymph Node Dissection (CLND) and bilateral CLND, respectively. Preoperative ultrasound examination revealed enlarged lymph nodes in the lateral neck, while Fine-Needle Aspiration (FNA) biopsy showed LNM in 38 cases (1.6%), prompting the need for Lateral Cervical Lymph Node Dissection (LLND). Related data analysis Among the 1829 patients who underwent unilateral CLND, 887 had Central Lymph Node Metastasis (CLNM), with a metastasis rate of 48.5%. On the other hand, among the 555 patients who underwent bilateral CLND, 287 had CLNM, with a metastasis rate of 51.5%. Furthermore, 38 patients underwent lateral neck lymph node dissection, of which 32 cases were confirmed through postoperative pathological examination to have LLNM, with a metastasis rate of 1.3%. Gender, age, tumor size on histology, and multifocal tumor nature (single or multiple lesions) were subjected to univariate analysis to determine whether they were linked to a higher risk of central and lateral neck lymph node metastases in 2384 PTC patients (Tables 1 and 2 ), and BRAF gene testing was performed on 85 patients (Table 3 ). Follow up A follow-up rate of 91.0% was achieved, with 2169 of the 2384 PTC patients completing the full follow-up regimen, while 25 patients were lost. The follow-up period was set for 12–72 months up until December 31, 2021. Among the patients who did not undergo LLND, 47 (2.0%) experienced lateral neck metastasis post-surgery. There were 4 (0.2%) and 2 (0.1%) cases of lung metastasis and bone metastasis, respectively, and no deaths were reported. Statistical methods All statistical analyses were performed using SPSS 26.0 software. Count data were expressed as percentages. The χ² test or Fisher's exact probability method was used for component comparisons. Binary logistic regression analysis was performed to analyze the relevant risk factors using PTC neck lymph node metastasis as a variable factor (0-no, 1-yes). 
We used ROC curves to determine the critical value of tumor lesion size for predicting CLNM. The significance level was set at α = 0.05.
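As an illustration of how such a cutoff can be derived, the short Python sketch below (synthetic data, not the study's SPSS output) builds the ROC curve for tumor size and selects the threshold that maximizes Youden's index (sensitivity + specificity - 1).

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
tumor_size_cm = rng.gamma(2.0, 0.5, 500)                      # synthetic tumor sizes (cm)
risk = 1.0 / (1.0 + np.exp(-2.0 * (tumor_size_cm - 0.9)))     # synthetic metastasis probability
clnm = (rng.random(500) < risk).astype(int)                   # 1 = CLNM, 0 = no CLNM

fpr, tpr, thresholds = roc_curve(clnm, tumor_size_cm)
youden = tpr - fpr                                            # Youden's J for every candidate cutoff
best = int(np.argmax(youden))
print(f"AUC = {roc_auc_score(clnm, tumor_size_cm):.3f}")
print(f"Cutoff = {thresholds[best]:.3f} cm, sensitivity = {tpr[best]:.3f}, specificity = {1 - fpr[best]:.3f}")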
Results Univariate analysis of factors related to CLNM Univariate analysis revealed a significant correlation between CLNM and gender, age, lesion size, and multifocal characteristics in PTC patients ( P < 0.05). Multivariate analysis of factors related to CLNM Herein, we constructed a multivariate logistic regression equation by incorporating gender, age, lesion size, and the multifocal tumor nature. According to the results, the risk of CLNM was significantly higher in males than females (OR: 5.294, 95% CI: 3.768–7.438, P < 0.05). On the other hand, the risk of CLNM increased with decreasing age and increased with increasing lesion size and number of multifocal lesions (OR: 3.188, 95% CI: 1.963–5.176, P < 0.05) (Table 4 ). We created ROC curves for 2384 patients undergoing CLND to further investigate the relationship between CLNM and tumor lesion size. We determined that the critical value of tumor lesion size for predicting CLNM was 0.855 cm. The AUC was 0.269, with sensitivity and specificity values of 57.9% and 69%, respectively ( P < 0.05) (Fig. 1 ). The CLNM rates of patients with BRAF gene mutations and those without BRAF gene mutations were 54.4% and 45.6%, respectively. No statistically significant difference was found in the metastasis rate between the two groups ( P = 0.741) (Table 3 ). Analysis of factors related to LLNM Analysis of 38 PTC patients who underwent LLND revealed that the size and number of lesions, as well as the number of CLNMs, were correlated with LLNM (Table 2 ). We also compared LNM in different lateral neck regions (Table 5 ). The LNM rate was higher in zones II, III, and IV than in zones I and V. However, the groups studied had relatively few cases, necessitating additional in-depth analysis and research with more cases for each category.
Discussion According to research, PTC, the most common type of thyroid cancer, has an excellent 10-year Survival Rate (SR) of over 90% [ 6 ]. Nonetheless, LNM occurs in 20–90% of PTC cases and is generally considered the primary cause of PTC local recurrence. It has been reported that secondary surgery post-recurrence increases the difficulty of postoperative care, reduces patients’ QoL, and affects patients’ SR [ 7 ]. Lymph node dissection during PTC surgery increases the risk of iatrogenic complications. Currently, a great controversy remains over the extent of preventive CLND and therapeutic LLND. Balancing treatment methods, avoiding overtreatment of low-risk patients, and identifying patients with more severe conditions or at higher risk of injury (for whom more active treatment methods are needed) are some of the challenges currently faced by Doctors taking care of these patients. Therefore, understanding the nature and risk factors of Cervical Lymph Node Metastases (CNMs) in PTC patients is critical in guiding CLND. Consistent with other research findings [ 8 , 9 ], we found an increased risk of CLNM in male patients in this study. This outcome indicates that special emphasis should be placed on evaluating LNM in male PTC patients during preoperative clinical examinations. Herein, PTC patients aged ≤ 30 years were more likely to develop CLNM. Although this finding aligns with some previous research [ 10 , 11 ], it is noteworthy that some scholars [ 12 ] found that CLNM is not related to age. Furthermore, Yang [ 10 ] deduced that > 44.5 years old is the threshold for cervical lymph node skip metastasis, with patients in this age group being more prone to cervical lymph node skip metastasis. This study revealed that tumor size (> 0.5 cm) is a risk factor for CLNM in PTC patients, with a tumor size of 0.855 cm as the critical value for predicting CLNM according to the ROC curve. In PTC research, scholars have consistently reported that tumor size is an essential factor in predicting CNM, but with different thresholds. However, it is generally believed that the larger the tumor lesion, the higher the CNM risk [ 13 , 14 ]. Consistent with other research results [ 15 ], multifocal tumor foci are also a risk factor for CNM in PTC patients. Furthermore, Liu [ 11 ] found that the extension and growth of tumors outside the thyroid gland is a risk factor for CLNM, potentially because the tumor cells invading perithyroidal soft tissue are more likely to metastasize along the rich lymphatic tissue to the surrounding lymph nodes, resulting in LNM. No statistically significant relationship was found between BRAF gene mutations and CLNM. In this regard, BRAF gene mutations are not found to be a risk factor for CLNM. Although many studies have been conducted on the characteristics and risk factors of LLNM, the findings are highly controversial. Zhang et al. [ 16 ] reported that tumors extending outward from the thyroid gland, bilateral lobe tumors, and CLNM are risk factors for LLNM. On the other hand, Niel et al. [ 17 ] reported that tumors located at the upper pole, CLNM, and tumors > 1.5 cm in size are risk factors for LLNM. Furthermore, Liu agrees that CLNM is a risk factor for LLNM and that LLND should be conducted more actively when the number of CLNMs is more than three. Contrastingly, a previous study [ 14 ] reported that CLNM is not a risk factor for LLNM. 
This study found that the rate of lateral CNM increased with the increase in tumor size, but the difference was not statistically significant. There is currently no consensus on the scope of lateral neck lymph node dissection, and a controversy remains over whether to routinely clean lymph nodes in Zone V [ 18 , 19 ]. Here, we found that LLNM mainly occurs in zones II, III, and IV, with less occurrence in regions I and V. It may be appropriate not to clean the lymph nodes in Zones I and V for PTC patients with low-risk factors. Dr. Ozgur's team's review of the thyroid gland confirms that the thyroid gland has no defined anatomical fibrous capsule, but rather perithyroidal soft tissue [ 20 ]. Furthermore, some scholars highlighted the importance of tumor location in the perithyroidal soft tissue, discovering that tumor invasion of the soft tissue could increase the risk of tumor recurrence and death [ 21 ]. Wang et al. [ 22 ] discovered that PTC invasion and breakthrough of the perithyroidal soft tissue or posterior dorsal soft tissue increases the likelihood of tumor invasion into lymphatic vessels and the risk of CNM. According to recent research, the V600E mutation of BRAF (v-raf murine sarcoma viral oncogene homolog B1) is the most common and critical genetic event in PTC occurrence. The BRAF V600E mutation is solely found in PTC and PTC-derived undifferentiated cancers, and is absent in normal thyroid tissue, thyroid follicles, and other types of thyroid tumors [ 23 ]. Numerous studies reported that this mutation is associated with commonly known clinicopathological features of PTC that predict tumor progression and recurrence, such as advanced age, extrathyroidal invasion, LNM, and advanced tumor stages. Additionally, the direct association between the BRAF V600E mutation and clinical PTC progression, recurrence, and treatment failure has been confirmed. Herein, 79 of the 85 PTC patients who underwent BRAF gene testing were found to have BRAF gene mutations, with a mutation rate of 93%. One study identified several molecular and histopathologic features that correlate with more aggressive behavior of thyroid papillary microcarcinoma (TPMC), such as BRAF mutation status, subcapsular location, peri- and intratumoral fibrosis, and multifocality, and provided a practical and simple scoring system to evaluate the clinical behavior of this common type of thyroid cancer. The scoring system relies on BRAF mutation status and three histopathological features to assign tumors to three risk categories; tumors lacking any of these factors cannot be accurately classified [ 24 ]. However, our BRAF gene testing sample size was relatively small and the histopathological information was incomplete, necessitating additional research with larger samples to obtain more accurate conclusions. Furthermore, in a previous study, 13 of the PTC patients who underwent BRAF testing and showed no mutations were complicated with Hashimoto's Thyroiditis (HT). Some studies discovered that the inflammatory process of HT exerts a protective effect on PTC [ 10 ]. Many patients with PTC + HT were clinically diagnosed with enlarged cervical lymph nodes [ 25 ], which posed more difficulties for preoperative color Doppler ultrasound in determining LNM, leading to more lymph node clearance and complications. As a result, accurately identifying the risk factors for CLNM and determining the need for neck lymph node dissection is even more critical for PTC patients with HT.
Objective This study aims to identify and analyze the risk factors associated with Cervical Lymph Node Metastasis (CNM) in Papillary Thyroid Carcinoma (PTC) patients. Methods We conducted a retrospective study involving the clinicopathological data of 2384 PTC patients admitted to our hospital between January 2016 and December 2020. All relevant data were statistically processed and analyzed. Results The related risk factors for Central Lymph Node Metastasis (CLNM) were gender (male), age (≤ 30 years old), tumor lesion size (> 0.855 cm), and multifocal tumor foci. The ROC curve revealed that the critical value for predicting CLNM based on tumor lesion size was 0.855 (sensitivity = 57.9%, specificity = 69%, AUC = 0.269, and P < 0.05). Lateral Lymph Node Metastasis (LLNM) was positively correlated with tumor diameter. Specifically, the LLNM rate increased with the tumor diameter. LLNM occurrence was significantly higher in zones II, III, and IV than in zones I and V. Although the BRAF gene mutation detection assay has certain clinical benefits in diagnosing PTC and LLNM, no statistically significant difference was found in its relationship with central and lateral neck lymph node metastases ( P = 0.741). Conclusion Our findings revealed that CLNM is associated with gender (male), age (≤ 30 years old), tumor lesion size (> 0.855 cm), and multiple tumor lesions in PTC patients. Central Lymph Node Dissection (CLND) is recommended for patients with these risk factors. On the other hand, preoperative ultrasound examination, fine-needle pathological examination, and genetic testing should be used to determine whether Lateral Cervical Lymph Node Dissection (LLND) is needed. Keywords
Summary Our study findings can be summarized in five key points. First, male patients, patients aged ≤ 30 years, and those with a tumor lesion size > 0.855 cm should undergo preventive CLND. Second, LLNM presence should be confirmed through color ultrasound examination and fine-needle biopsy before LLND. Third, the lymph node cleaning range should include Zones II, III, and IV, whereas lymph nodes in Zones I and V can be cleaned as appropriate. Fourth, the appropriate surgical method and whether lateral neck lymph node dissection is necessary could be determined through preoperative puncturing of tumor lesions and assessment of enlarged lateral neck lymph nodes. Despite the above-mentioned insightful findings, this study has some shortcomings. Particularly, clinical examination results, imaging features, and tumor location were not examined. Consequently, additional research is required to further explore the relevant risk factors for CNM in PTC patients.
Acknowledgements Not applicable. Author contributions ML designed and directed the study. XW and XZ collected all the clinicopathological data. HS was responsible for the statistical analysis and wrote the manuscript. ML and JM confirmed the authenticity of all the raw data. All authors have read and approved the final version of the manuscript. Funding Not applicable. Data availability The datasets generated and analysed during the current study are not publicly available because they are internal statistical data of the research unit and have not been uploaded to a public database, but they are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate This study was approved by the Ethics Committee of the Affiliated Hospital of Inner Mongolia Medical University. Considering the retrospective nature of the data, the Ethics Committee of the Affiliated Hospital of Inner Mongolia Medical University approved the requirement for informed consent. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:47
Diagn Pathol. 2024 Jan 13; 19:13
oa_package/a5/cf/PMC10788004.tar.gz
PMC10788005
38218879
Background Monitoring athletes has become an important and routine part of sport preparation. The scientific study of quantifying athletes' training began in the early 1990s with the four methods that were most used at the time: retrospective questionnaires, diaries, physiological monitoring and direct observation [ 1 ]. Nowadays, there is a plethora of athletic monitoring methods and technologies, varying from the simplest and cheapest, such as diaries [ 1 ], to the most complicated and expensive ones, such as the global positioning system (GPS) [ 2 ]. Frequently monitoring the variables related to performance can help coaches to assess the effectiveness of their training programs and update them to better meet the athletes' needs. In addition, another reason to frequently monitor athletes is to reduce the time lost to illness [ 3 ] and injury [ 4 , 5 ]. By monitoring the weekly training loads, coaches can make better decisions about the changes in the program to ensure that athletes are not exceeding thresholds that put them at higher risk of injury [ 6 ] and illness [ 7 ]. Furthermore, monitoring the recovery response after a training session or a competitive match can aid practitioners in balancing the adaptation process and recovery. This is particularly important for identifying the beginning of the period characterized by a decrease in performance in reaction to high loads (i.e., functional overreaching) [ 8 ]. Failing to monitor this response can lead to unplanned fatigue followed by a period of inadequate recovery, a phenomenon designated as nonfunctional overreaching [ 9 ]. This continuum of unplanned fatigue can result in a syndrome termed overtraining, in which large decrements in performance occur that are associated with psychological disturbances that can last for months [ 10 ]. The particularities of the variables mentioned above, alongside the complexity of most team-sport calendars (e.g., short preparation periods and weeks with high volumes of matches and training sessions), can make the training process hard to monitor and prescribe [ 11 ]. The management of the balance between training loads and recovery significantly influences a team's overall fitness, which, in turn, plays a crucial role in their competitive success [ 4 ]. One of the team sports with a particularly demanding competitive calendar is professional volleyball. Volleyball is a sport characterized by a diverse range of physical demands, necessitating well-developed energy systems [ 12 , 13 ]. These include the phosphagen system, which provides immediate energy for high-intensity, short-duration activities like quick sprints or jumps; glycolysis, which predominates in moderate to high-intensity activities lasting from a few seconds up to a minute, contributing to sustained efforts during longer rallies; and the oxidative system, which supports prolonged, lower-intensity activities, crucial for endurance over the course of a match. The effective interplay of these energy systems is essential for optimal performance in volleyball, as players frequently transition between activities of varying intensity and duration [ 14 , 15 ]. Prior research in the field of volleyball has explored various aspects of athletic performance [ 12 ] and recovery [ 16 , 17 ]. Studies have examined internal and external training loads, investigating how these variables influence players' physiological responses and performance outcomes [ 18 , 19 ]. 
Key findings have indicated the importance of monitoring training intensity and volume to optimize player readiness and prevent overtraining [ 18 ]. Additionally, research has highlighted the role of neuromuscular fatigue assessments and well-being measures in understanding athletes' responses to training and competition demands [ 18 , 20 ]. In the realm of these neuromuscular assessments, the vertical jump emerges as a particularly crucial measure in volleyball. This is because the act of jumping is central to key actions such as serving, blocking, and attacking [ 12 ]. The vertical jump, therefore, is not just a frequent movement in volleyball but also a critical skill that significantly influences a team's performance and success. It underscores the importance of precisely monitoring and optimizing training loads, as these directly impact an athlete's ability to perform these jumps effectively and consistently. Despite these advancements, there remains a gap in the systematic synthesis of this literature, particularly in integrating these diverse findings to inform monitoring strategies in volleyball. This gap underscores the need for the current systematic review, aiming to consolidate existing knowledge and identify directions for future research. Moreover, previous research has shown the importance of conducting systematic reviews about training/match monitoring with increasing attention given to the consensus as to which variables related to training load, fatigue, and well-being are most useful [ 21 ]. Therefore, the aim of this systematic review was to examine the extent, range, and nature of the evidence on the associations between training load measures, fatigue and well-being assessments used in volleyball training/match monitoring literature to aid the planning of future research.
Methods Registration and protocol This systematic review was conducted in accordance with the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 [ 22 ]. The study protocol was registered with INPLASY (INPLASY202270059). A PRISMA checklist is provided as a supplementary file (Table S 1 ). Eligibility criteria Inclusion criteria for this systematic review were as follows: (1) original research papers published in peer-reviewed journals in English, French, Spanish, or Portuguese; (2) subjects were volleyball athletes, with no restrictions on age, thereby including youth, collegiate, and adult players; (3) the study involved at least two evaluation points, encompassing a baseline and a post-intervention measurement. The exclusion criteria were: (a) studies not involving human subjects; (b) research not specifically focused on volleyball training or competition; (c) studies lacking empirical data or not presenting clear methodological descriptions. These criteria were designed to ensure an analysis across various age groups and both male and female athletes, providing a holistic understanding of volleyball training and performance. Information sources The literature search was performed from database inception to March 2023 (date when the search was last conducted) in five electronic databases: PsycINFO, MEDLINE/PubMed, SPORTDiscus, Web of Science, and Scopus. The search was developed to consider research articles published online. Search strategy Scientific peer-reviewed published papers written in English, Portuguese, French, and Spanish were eligible for the present systematic review. The search strategy was developed around keywords for Population (volleyball athletes), Exposure (volleyball training or matches), Country (all), and study type (longitudinal). Included terms for the searches were: ‘training load volleyball’, ‘workload volleyball’, ‘rating of perceived exertion volleyball’, ‘RPE volleyball’, ‘well-being volleyball’, ‘wellness volleyball’, ‘fatigue volleyball’, ‘sleep volleyball’, ‘training response volleyball’, ‘neuromuscular fatigue volleyball’, and ‘neuromuscular status volleyball’. The complete search strategy is available in the supplementary file (Table 1 ). Selection and data collection process All retrieved papers were exported to CADIMA software, a tool designed to increase the efficiency of the evidence synthesis process and facilitate reporting of all activities to maximize methodological rigor [ 23 ]. Duplicates were automatically removed. Titles and abstracts of potentially relevant papers were screened by two reviewers (A.R. and J.R.P.). Disagreements between authors were solved through discussion and, when necessary, the remaining authors (P.C., M.J.C-S. and J.V-S.) were involved. Full‐text copies were acquired for all papers that met title and abstract screening criteria. Full‐text screening was performed by two reviewers (A.R. and J.R.P.). Again, any discrepancies were discussed until the authors reached an agreement and consulted the four other authors when required. In the process of article selection, inter-rater reliability was quantitatively assessed using the Cohen kappa coefficient. For the initial title and abstract screening, the kappa coefficient was 0.810. Similarly, for the full-text review phase, the kappa coefficient was 0.979. Data extraction Data were extracted from each article by the lead author (A.R.). Data not provided or presented non-numerically were identified as “not reported”. 
The following data, when possible, were extracted from each article: (1) participants’ characteristics (sample size, sex and age); (2) participants’ level (young, collegiate or professional); (3) monitoring period (i.e., seasonal phase(s) and duration); (4) training load measures (e.g., RPE, heart rate, time motion analysis); (5) neuromuscular fatigue tests (e.g., heart rate, biochemical markers); (6) well-being assessment methods (e.g., scale, questionnaire). Risk of bias assessment Methodological quality was assessed using a modified version of the Downs and Black [ 24 ] checklist for assessing the methodological quality of randomized and nonrandomized healthcare interventions. This checklist has been validated for use with observational study designs [ 24 ] and has been previously used to assess methodological quality in systematic reviews assessing cross-sectional and longitudinal studies [ 25 , 26 ]. The number of items from the original checklist can be tailored to the scope and needs of the systematic review, with 10–15 items used in previous systematic reviews [ 25 , 26 ]. For this review, 11 items in the checklist were deemed relevant (Table S 3 ). Each item is scored as “1” (yes) or “0” (no/unable to determine), and the scores for each of the 11 items are summed to provide the total quality score. The quality of each included article was rated against the checklist independently by two authors (A.R. and J.R.P.). Any disparity in the outcome of the quality appraisal was discussed, and a third author (J.V-S.) was consulted if a decision could not be reached. In the assessment of methodological quality and risk of bias, inter-rater reliability was quantitatively evaluated using the Cohen kappa coefficient. The kappa value obtained was 0.903. Data synthesis Results were not pooled as the studies were heterogeneous in their methods, data, and context. Instead, we presented a narrative synthesis of the findings from included studies. We identified three categories of monitoring interventions through the process of reviewing the included studies. The definitions of these interventions are provided in the supplementary file (Table S 2 ). Summary tables were provided as means and standard deviations were reported for age of participants, body mass, and body height. The period of each study (i.e., pre-season, competitive period, or both) and the duration of the study, in weeks, were also reported.
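Both the screening agreement and the quality-appraisal agreement reported in this section were quantified with the Cohen kappa coefficient; the short sketch below is a minimal illustration of that computation, with invented include/exclude decision vectors rather than the review's screening data.

from sklearn.metrics import cohen_kappa_score

reviewer_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # 1 = include, 0 = exclude (hypothetical decisions)
reviewer_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.3f}")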
Results Study selection The electronic search yielded 2535 articles (PsycINFO = 121, PubMed = 411, SPORTDiscus = 661, Scopus = 731, Web of Science = 611). A total of 868 duplicate records were removed, and a further 1570 irrelevant articles were excluded based on title and abstract; 97 full-text articles were screened and 66 were removed, leaving 31 articles for inclusion in the review. Reasons for exclusion were: study design did not meet the inclusion criteria ( n = 33), no volleyball players in the sample ( n = 20), failure to apply any monitoring strategy ( n = 7), and duplicate dataset ( n = 6). The full results of the search are presented in Fig. 1 . Risk of bias in studies The ratings from the quality appraisal for each article are presented in the supplementary file (Table S 4 ). Methodological quality scores ranged from 7 to 9 out of 11. The predominant concerns identified in the evaluation of these studies centre around issues of external validity, particularly the representativeness of the study participants. This limitation significantly hampers the generalizability of the findings. The studies fall short in ensuring that the subjects included are reflective of the broader population from which they are drawn, raising questions about the applicability of their conclusions beyond the specific sample studied. In line with previous literature using the Downs and Black checklist [ 25 , 26 ], no articles were excluded based on methodological quality. Study characteristics Study characteristics for all 31 included studies are presented [ 16 – 18 , 27 – 54 ] (Table 2 ). Of these 31 articles, 22 included professional athletes [ 16 – 18 , 28 , 30 – 32 , 34 – 36 , 38 – 43 , 46 , 48 , 50 – 53 ], seven included collegiate-level volleyball athletes [ 27 , 29 , 33 , 37 , 44 , 45 , 47 ], and two included young athletes [ 49 , 54 ]. Nine articles involved female volleyball players [ 27 – 29 , 32 , 33 , 37 , 44 , 45 , 47 ], while the remaining 22 involved male volleyball athletes [ 16 – 18 , 30 , 31 , 34 – 36 , 38 – 43 , 46 , 48 – 54 ]. Quantifying training stress in volleyball athletes Training stress can be quantified in different ways; the most common approach is to multiply the training session intensity by the training session duration. Training load can be either internal or external [ 55 ]. Internal training load refers to the physiological stress that a training session induces in the athlete [ 55 ]. Measures such as heart rate (HR) and rating of perceived exertion (RPE) are the most common methods to monitor internal load [ 2 ]. On the other hand, external training load is defined as the physical work prescribed in the training plan [ 55 ]. The most common method of monitoring external load is with time-motion analysis devices, such as GPS, accelerometers, or inertial motion units (IMUs) [ 2 ]. The effects of different training load measurements have been investigated in volleyball over durations ranging from one week [ 16 , 49 ] to two seasons [ 27 ] (Table 3 ). Moreover, the effects of a single training load measure (i.e., internal or external) [ 16 , 17 , 27 , 28 , 30 – 32 , 35 – 38 , 40 – 42 , 44 , 46 – 51 , 53 , 54 ] or a combination of both training load measurements [ 18 , 33 , 39 , 43 ] have been investigated. The session rating of perceived exertion (sRPE) (77%) [ 16 – 18 , 28 , 30 – 33 , 35 , 37 – 44 , 47 – 51 , 53 , 54 ] and the IMUs (16%) [ 18 , 27 , 39 , 43 , 46 ] are the most commonly used training load measurement strategies in volleyball. 
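To make the sRPE arithmetic described above concrete, the following minimal sketch computes a session internal load and a weekly total; it is illustrative only, and the CR-10 scale choice and the session values are hypothetical rather than taken from any of the reviewed studies.

```python
# Minimal illustration of the sRPE method: internal load = session intensity
# (perceived exertion) x session duration. Values are hypothetical.
from typing import List, Tuple

def srpe_load(rpe: float, duration_min: float) -> float:
    """Session internal load in arbitrary units (AU): RPE (e.g., CR-10) x minutes."""
    return rpe * duration_min

def weekly_load(sessions: List[Tuple[float, float]]) -> float:
    """Sum of session sRPE loads over a training week (AU)."""
    return sum(srpe_load(rpe, minutes) for rpe, minutes in sessions)

# Hypothetical volleyball training week: (RPE, duration in minutes) per session
week = [(6, 90), (7, 100), (5, 75), (8, 110), (4, 60)]
print(weekly_load(week))  # 540 + 700 + 375 + 880 + 240 = 2735 AU
```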
Other training load measures investigated in the volleyball literature include HR [ 32 ], accelerometers [ 33 ], and video-cameras [ 36 ]. Quantifying fitness and fatigue in volleyball athletes Neuromuscular fatigue refers to a reduction in maximal voluntary contractile force, and tests to detect this type of fatigue are broadly used in sport [ 2 ]. Low-frequency fatigue (i.e., resulting from high-force, high-intensity, or repeated stretch–shortening cycle muscle actions) is frequently a topic of interest when monitoring athletes [ 56 ]. Consequently, many research studies have established the reliability and validity of vertical jumps as an indicator of neuromuscular fatigue in athletes [ 57 ]. One of the most valid measures of fatigue is the ratio of flight time to contraction time (FT:CT), which can be explained by the fact that time-related variables are more sensitive to fatigue [ 58 ]. Nevertheless, other measures such as jump height, peak and mean power, and peak force are also popular among coaches [ 59 ]. In addition to being used to monitor training stress, submaximal exercise protocols and physiological markers such as HR can be used as objective markers of fatigue. Heart rate variability (HRV) is widely used, in particular the natural logarithm of the square root of the mean sum of squared differences between adjacent normal RR intervals (Ln rMSSD) [ 60 ]. Another monitoring tool that can be used is the recovery period after a training session, indicated by the heart rate recovery (HRR) [ 61 ]. Finally, examining hormonal and biochemical markers can provide a good indicator of athletes’ adaptation process [ 62 ]. Only five studies included fitness and fatigue measurements as tools to monitor volleyball athletes [ 18 , 28 , 34 , 42 , 49 ] (Table 4 ). The countermovement jump (CMJ) is the most commonly used fatigue measurement strategy in volleyball [ 18 , 28 , 42 , 49 ]. Other fitness and fatigue monitoring tools are hormonal and biochemical markers [ 34 , 42 ] and HR variables [ 28 ]. Quantifying well-being in volleyball athletes Questionnaires can be useful to monitor athletes’ levels of stress [ 1 ] and identify those at greater risk of becoming injured [ 63 ]. Research has shown that athletes often present mood disturbances while developing symptoms of overreaching and overtraining [ 2 ]. Therefore, assessing athletes’ mood state and level of tension through tools such as the Profile of Mood States (POMS) and the Brunel Mood Scale (BRUMS) can be useful [ 64 ]. Wellness inventories, like the Hooper index [ 65 ], are also common when the goal is to gather as much information as possible about different metrics, such as fatigue, stress, sleep, or recovery. The current literature search returned 22 studies that applied some form of well-being questionnaire [ 16 – 18 , 28 , 29 , 31 – 35 , 38 , 40 – 42 , 44 , 45 , 48 – 52 , 54 ] (Table 5 ). The Hooper index [ 16 , 28 , 32 , 38 , 41 , 44 , 48 ], the Total Quality Recovery (TQR) scale [ 16 , 17 , 31 , 35 , 40 , 50 , 51 ], and general wellness questionnaires [ 18 , 29 , 33 , 40 , 49 , 51 , 52 ] are the most commonly used well-being measurement strategies in volleyball. Other well-being measuring tools investigated in the volleyball literature include the Recovery Stress Questionnaire for Athletes (RESTQ-Sport) [ 34 , 42 , 54 ] and the POMS [ 35 ].
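As a concrete illustration of the Ln rMSSD index defined above, the minimal sketch below computes the natural logarithm of the root mean square of successive differences between adjacent RR intervals; the RR series is hypothetical, and the artefact-correction steps used in practice are omitted.

```python
# Minimal Ln rMSSD calculation from a short series of RR intervals (ms).
# The RR values are hypothetical; real recordings require artefact correction.
import math
from typing import Sequence

def ln_rmssd(rr_intervals_ms: Sequence[float]) -> float:
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return math.log(rmssd)

rr = [820, 850, 790, 880, 860, 845, 830]  # seated short-term recording, ms
print(round(ln_rmssd(rr), 2))  # ~3.86
```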
Discussion Literature that has evaluated the effect of all monitoring strategies (i.e., training stress, fitness and fatigue, and well-being) during volleyball training and/or competition is limited. In addition, fewer studies describe external training load than internal training load. Furthermore, fitness and fatigue monitoring studies in volleyball athletes are not only scarce but also present questionable methodologies. A sample monitoring system for volleyball is suggested in Fig. 2 . Training stress in volleyball Seven studies analysed the internal load of volleyball players during the pre-season with the sRPE [ 17 , 30 , 31 , 41 , 42 , 50 , 54 ]. During the first weeks of the pre-season, players’ internal load increases progressively and is accompanied by a decrease in performance [ 30 , 31 , 42 ]. This can also be seen with external training load measures, as jump load is higher during the first phase of the pre-season [ 36 ]. To better prepare athletes for the start of the competition phase, this periodization approach is common in team sports during the pre-season [ 54 , 66 ]. Coaches are advised to introduce the load progressively and, in the middle of the pre-season period, decrease the training loads to allow recovery and better balance the fitness-fatigue relationship [ 67 ]. In fact, elevated injury rates have been observed during this period in other sports [ 68 ]. This is in line with what is reported in the volleyball literature, as weekly workloads, acute-chronic workload ratio (ACWR), and incidence of injury values are higher during the pre-season period [ 17 , 50 ]. Coaches and practitioners should evaluate athletes’ fitness at the beginning of the pre-season period and assess the workloads players were accustomed to during the off-season, so that weekly internal training load spikes do not occur. Sixteen studies analysed the internal training load of volleyball athletes during the competitive period with the sRPE method [ 17 , 18 , 31 – 33 , 35 , 37 – 41 , 43 , 44 , 48 , 50 , 51 ]. It can be observed that volleyball periodization is characterized by a wave-like distribution of the training load during this period [ 31 , 35 , 37 , 40 , 48 , 50 ]. This is typical of sports in which the pre-season period is short compared to the competitive period, with the objective of adapting the stress applied during training sessions [ 11 ]. Owing to travel and games played against teams of different levels, the number of training sessions decreases during the competitive period [ 66 ]. Therefore, this wave-like distribution of the training load can avoid a possible decrement in performance. This can be done by increasing training loads in weeks in which the team has a low possibility of winning or losing the game [ 11 , 66 ]. In a more in-depth analysis, results from the literature indicate that during the first phase of the competitive period, volleyball athletes experience higher internal loads compared to the second phase of the same period [ 31 , 35 , 38 , 40 , 41 ]. The first phase of the competitive period of a professional volleyball season is characterized by a focus on the development of fitness components, while the second phase comprises the most specific training sessions (technical and tactical skills) [ 11 ]. This can explain the differences in internal load levels observed during the competitive period. 
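Since the acute-chronic workload ratio (ACWR) is cited above as one of the values that peak during the pre-season, the sketch below shows one common way it can be derived from weekly sRPE loads; the 1-week acute and 4-week chronic rolling-average windows are a widely used convention rather than a formula reported by the included studies, and the load values are hypothetical.

```python
# Illustrative ACWR: acute load (latest week) divided by chronic load
# (rolling mean of the last `chronic_weeks` weeks). Values are hypothetical.
from statistics import mean
from typing import List, Optional

def acwr(weekly_loads: List[float], chronic_weeks: int = 4) -> Optional[float]:
    if len(weekly_loads) < chronic_weeks:
        return None  # not enough history to form the chronic window
    acute = weekly_loads[-1]
    chronic = mean(weekly_loads[-chronic_weeks:])
    return acute / chronic if chronic > 0 else None

loads = [2400, 2600, 2500, 3200]  # weekly sRPE loads (AU) in a pre-season block
print(round(acwr(loads), 2))  # ~1.2, i.e. an acute spike relative to the chronic load
```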
Moreover, when looking at a single week, it can be observed that higher sRPE values are recorded during the middle of the week and lower values at the end of the week [ 39 , 43 , 48 ]. This is a common strategy in team sports to optimize the adaptation process: training loads are reduced to augment athletes’ recovery status [ 11 ]. There are significant differences in competition and training jump count, jump height and jump load between positions in female [ 27 ] and male volleyball athletes [ 33 , 36 , 46 ]. Outside hitters had the highest jump height, followed by middle blockers and right-side hitters [ 27 ]. Female [ 27 ] and male [ 36 , 46 ] volleyball middle blockers showed a higher jump count and jump rate compared to outside hitters and right-side hitters. This is in line with another study with female volleyball athletes that reported that middle blockers experienced both a higher HR-method internal training load and sRPE than the rest of the players [ 32 ]. Middle blockers are often required to be involved in every defensive blocking aspect of the game [ 69 ], hence their higher values of both external and internal training load. Nevertheless, HR measures of internal training load should be interpreted with caution. While the HR represents a valid means through which to measure exercise intensity in endurance sports, these methods are questionable in team sports, such as volleyball, which are characterized by short but maximal anaerobic efforts [ 70 ]. In fact, one study reported no association between well-being and HR-based internal training load [ 32 ]. Thus, given the limitations inherent in using the HR for monitoring the intensity of volleyball training sessions, coaches are advised not to use HR-based methods to quantify training stress in this sport. Fitness and fatigue in volleyball One study demonstrated that submaximal exercise heart rate (HRex) values decreased over a period of 4 weeks [ 28 ]. Reductions in HRex are generally associated with improved aerobic fitness, while elevations in HRex are related to acute fatigue or loss of fitness [ 71 ]. One study also showed positive associations between seated Ln rMSSD and training load (i.e., sRPE) in female volleyball athletes [ 28 ]. These results must be interpreted carefully, as these positive associations can vary depending on how loads are being tolerated by athletes. If training loads increase in response to increments in fitness and performance, then seated Ln rMSSD will decrease [ 72 ]. On the other hand, if converse cardiac-autonomic responses are stimulated through mechanisms of fatigue resulting from high training loads, then seated Ln rMSSD will increase [ 73 ]. These inconsistencies in the associations between Ln rMSSD and training load show the importance of monitoring various markers of fatigue, fitness, load, and well-being. Previous research showed that HRV values return to baseline 24 h after an intense exercise bout in the supine position [ 74 ]. Therefore, it can be hypothesized that high training loads induce greater fluctuations in the seated Ln rMSSD compared to supine Ln rMSSD. Thus, coaches and practitioners should take this into consideration when monitoring fatigue of volleyball athletes through HRV. In response to high-load exercise, various enzymes and blood markers, such as creatine kinase (CK), increase [ 63 ]. 
This type of exercise induces muscle damage and, since CK is released from muscle cells into the blood, practitioners have been using CK levels to assess the degree of muscle damage [ 75 ]. According to the search conducted, volleyball athletes experience an increase in CK levels during the first weeks of pre-season and a decrease in the final weeks [ 34 , 42 ]. This is in line with what was already mentioned in this manuscript about the levels of sRPE during the pre-season period. A greater increase in CK levels is expected in individuals with lower physical fitness [ 75 ], particularly during initial training periods (i.e., the pre-season), which follow a period with no structured training. This also indicates that CK levels increase in response to high training loads, which is in line with what was previously reported [ 75 ]. However, CK has a large variability [ 76 ] and personnel involved in the collection of this marker must understand the importance of establishing baseline values from many samples over several days. Testosterone and cortisol are two other markers that are associated with cellular catabolism, anabolism, and overreaching [ 62 ]. The literature shows that, during the volleyball pre-season, neither testosterone nor cortisol levels change [ 42 ]. This probably indicates that the volleyball pre-season is not enough to induce disturbances in the balance of the immune system. Results from a study conducted during the pre-season showed that CMJ height, assessed four times during a 6-week period, did not change over this time-window [ 42 ]. Another study revealed that, across a single training week, CMJ height decreased [ 49 ]. Both studies’ methodologies indicated that the best of all jumps was retained for analysis. However, when the comparison between highest and average results is possible, the averaged jump result is more sensitive than the highest jump in detecting fatigue or supercompensation effects [ 77 ]. Therefore, these results should be interpreted with caution, and volleyball coaches should bear in mind that averaged CMJ performance without arm swing should be used to track neuromuscular status. Well-being in volleyball One study reported well-being measures, such as mood, soreness, and sleep duration, as independent predictors of injury in female volleyball athletes [ 29 ]. This is aligned with other non-volleyball studies [ 78 ]. According to the literature, athletes do not get the recommended sleep duration [ 79 ], which is a minimum of 7 h to minimize injury risk [ 80 ]. Therefore, volleyball staff should seek to include these subjective markers in their daily training monitoring routines to identify athletes with higher injury risk. Volleyball athletes’ recovery state is lower in the final stage of the pre-season, compared to other points of the competitive period [ 31 ]. In the last phase of the pre-season, coaches are advised to employ a taper strategy to avoid the undesirable outcomes of fatigue already mentioned at the beginning of the present manuscript, like nonfunctional overreaching [ 8 ]. In fact, the results of a study with professional male volleyball players showed that the odds of injury were inversely proportional to the values of the TQR scale (i.e., the less recovered the player, the greater the odds of sustaining an injury) [ 17 ]. 
Likewise, athletes’ readiness to start the competitive period is important, since athletes’ perception of stress increases whereas their perception of recovery decreases during a volleyball pre-season [ 31 , 34 , 54 ]. The results from other studies suggested that the RESTQ-Sport [ 42 ] and the Hooper index [ 16 , 44 ] are sensitive to an increase in the training load in volleyball athletes, showing promising results as tools to indicate early symptoms of overtraining. Consequently, balancing pre-season training stress and recovery is essential so that athletes’ adaptation process is optimized for match-days. During periods of congested travel and games, volleyball athletes reported poorer well-being responses in questionnaires [ 16 , 33 , 35 , 40 , 48 , 51 , 52 ]. Time lost to travel, and the ensuing disruption of routines and training schedules, may inhibit the use of recovery and medical interventions. Since travel can decrease well-being and increase athletes’ risk of illness, coaches and staff should implement strategies such as providing adequate recovery time after travel, avoiding flying on the same day as match-day, and encouraging athletes to drink water during travel [ 81 ]. By tracking well-being values, coaches can make informed decisions about the demands arising from both in- and out-of-sport activities. During the last stage of the competitive period, higher levels of stress can be observed in professional volleyball athletes [ 38 , 41 ]. Pre-match anxiety seems to affect professional athletes’ perception of stress levels [ 82 ]. This stage is characterized by the decisive matches of the season. On the other hand, stress levels in collegiate volleyball athletes may not be as heavily influenced by athletic events during the season and may be more a consequence of the temporal relation to the academic school year [ 45 ]. Therefore, challenges that occur in social and academic settings appear to underlie the higher stress levels in collegiate athletes. Limitations, strengths, and recommendations for future research Many conclusions can be drawn from the available literature on monitoring strategies in the volleyball context. Studies addressing the responses of the three types of monitoring strategies in volleyball are limited [ 18 , 28 , 42 , 49 ]. Of these four studies, none was conducted during a full season. Thus, future research should examine fitness and fatigue outcomes, internal and external training load data, and well-being questionnaire responses during a longer period (i.e., at least one full season) to better understand the relationships among different monitoring strategies in volleyball athletes. In addition, only five studies analysed fitness and fatigue in this athletic population [ 18 , 28 , 34 , 42 , 49 ]. Moreover, none of these studies was performed during a full season, and future research should point in that direction. More specifically, research on fatigue in female volleyball athletes could be further expanded by analysing menstrual cycle tracking and biochemical markers to develop a deeper understanding of how Ln rMSSD responses influence training adaptations. Although jump analysis is accepted as a reflection of external load, displacements and changes of direction also seem to affect this dimension (especially for the libero position). Therefore, those movements should be considered in future research, as only one study analysed these metrics in a sample of collegiate female volleyball athletes [ 33 ]. 
Furthermore, the simple jump count method is not ideal for measuring external load. Six studies expressed external load by analysing the jump height of each athlete [ 18 , 27 , 33 , 39 , 43 , 46 ]. Still, two volleyball players with different body masses who achieve the same jump height will not experience the same load. Due to gravity, linear velocity at landing increases with jump height, which in turn increases the kinetic energy (i.e., energy determined by body mass and landing velocity) at landing [ 83 ]. So, coaches should consider the vertical displacement of each jump as well as the mass of the athlete to obtain a better external load metric that is more reflective of what the volleyball athlete is experiencing [ 83 ]. Future research should explore the prospective relationship between external load calculated with the parameters mentioned before, the incidence of injury, and the landing mechanics of volleyball players. This would potentially inform training and match-play guidelines by designing thresholds for injury prevention purposes. One notable limitation in the current volleyball literature, and a promising direction for future research, is the exploration of GPS and Local Positioning Systems (LPS) for monitoring external load. While extensively used in outdoor sports, the application of GPS in volleyball, particularly indoors, is less common [ 84 ]. However, advancements in LPS technology now allow for its potential application in indoor environments, such as volleyball courts [ 85 ]. The adoption of these systems could provide detailed insights into player movements, intensity, and workload, which are crucial for training optimization, performance enhancement, and injury prevention [ 5 , 85 ]. This area remains under-researched in volleyball, highlighting a significant gap and an opportunity for future studies. It is recommended that subsequent research investigates the utility and implementation of these technologies in volleyball, offering a comprehensive perspective on managing external load in athletes. Such exploration could substantially contribute to the evolving landscape of volleyball training and competition analysis. The average CMJ height is more sensitive than the highest CMJ height in monitoring the effects of fatigue [ 77 ]. However, three of the four studies that used this test to monitor neuromuscular fatigue opted to use the best of all attempts [ 28 , 42 , 49 ]. So, average CMJ height should be used in future volleyball studies to track neuromuscular status. Additionally, peak power, mean power, peak velocity, peak force, mean impulse, and calculated power would seem worthy of consideration for quantifying supercompensation effects [ 77 ], yet no study has evaluated the impact of these variables in volleyball athletes. Nevertheless, the more useful indicators of readiness and neuromuscular fatigue within the plethora of variables that the CMJ provides are the FT:CT and the reactive strength index modified (RSI mod ) [ 86 ]. The RSI mod is obtained by dividing the jump height by the contraction time and, similarly to FT:CT, both variables emphasize the jumping strategy and force production [ 87 ]. Because time- and contraction-specific measures better reflect the strategy employed by the neuromuscular system compared with jump height, contraction time is more sensitive in detecting adaptations resulting from fatigue [ 88 ]. 
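The jump-load and CMJ-derived indices discussed above can be made explicit with the short sketch below; it shows only one plausible way to operationalize a mass- and height-aware jump load via landing kinetic energy, not necessarily the exact formulation of the cited method, and the athlete values are hypothetical.

```python
# Per-jump external load via landing kinetic energy, plus FT:CT and RSI_mod.
# Illustrative only; athlete values are hypothetical.
G = 9.81  # gravitational acceleration, m/s^2

def landing_kinetic_energy_j(body_mass_kg: float, jump_height_m: float) -> float:
    """KE at touchdown = 0.5*m*v^2 with v = sqrt(2*g*h), which simplifies to m*g*h (J)."""
    return body_mass_kg * G * jump_height_m

def ft_ct_ratio(flight_time_s: float, contraction_time_s: float) -> float:
    """Flight time to contraction time ratio from a force-platform CMJ."""
    return flight_time_s / contraction_time_s

def rsi_mod(jump_height_m: float, contraction_time_s: float) -> float:
    """Modified reactive strength index: jump height divided by contraction time."""
    return jump_height_m / contraction_time_s

# Hypothetical athlete: 85 kg, 0.45 m CMJ (flight time ~0.61 s), 0.80 s contraction
print(round(landing_kinetic_energy_j(85, 0.45), 1))  # ~375.2 J per jump
print(round(ft_ct_ratio(0.61, 0.80), 2))             # ~0.76
print(round(rsi_mod(0.45, 0.80), 2))                 # ~0.56
```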
Since the ability of vertical jump height to reflect fatigue in athletes shows inconsistencies in the literature [ 89 , 90 ], future studies in volleyball should consider the use of RSI mod and FT:CT to monitor neuromuscular fatigue. While the CMJ test is prevalently used in the current literature, exploring alternative assessments could provide a more comprehensive understanding of neuromuscular responses in volleyball athletes. Tests like the Drop Jump, which involves a short-duration stretch–shortening cycle, can offer insights into reactive strength and plyometric capabilities under fatigued conditions [ 87 ]. Additionally, isometric tests, such as isometric mid-thigh pulls or isometric calf raises, could be utilized to assess force in specific joint positions [ 91 ]. These alternative tests could reveal different dimensions of fatigue that may not be fully captured by the CMJ alone. Incorporating a variety of neuromuscular assessments can help in developing a more nuanced understanding of fatigue patterns in volleyball players, which in turn could inform more effective training and recovery protocols. Therefore, it is recommended that future research in volleyball expand the repertoire of fatigue assessment tools to include dynamic, plyometric, and isometric evaluations, providing a broader spectrum of data to optimize athlete performance. Finally, to limit the accumulation of fatigue, relative velocity loss thresholds have recently been implemented in strength training prescription [ 92 ]. Thus, velocity-based training (VBT) can be a good alternative to the more commonly used percentage-based methods, since the latter do not take training-related fatigue into consideration [ 93 ]. Therefore, strength and conditioning coaches should consider monitoring the velocity attained at the start of a training session to help objectively monitor changes in athlete fitness and fatigue. This topic requires further study, and future research should seek to determine whether VBT is a reliable and valid tool to monitor neuromuscular fatigue in volleyball athletes. Due to the heterogeneity of the measures used, it was not possible to conduct a meta-analysis. Moreover, RPE and well-being data can be collected without following specific procedures and across a range of methods (e.g., different RPE scales and/or different operational questions). Therefore, practitioners working in professional volleyball may apply these tools in different ways and with different assessment standards, which this systematic review could not take into consideration. Nevertheless, given the growing interest in topics related to athlete monitoring, this study can help volleyball coaches select which training load measures and fatigue and well-being assessments to use with their athletes.
Conclusions Within the context of team sports such as volleyball, coaches should use a mixed-methods approach when monitoring their athletes. No single measure can fully determine how a player is coping with the demands of training and matches. Therefore, practitioners not only need a range of methods, but must also ensure athletes are familiar with them to improve buy-in and the quality of the data analysis. According to this review, internal training load should be collected daily after training sessions and matches with the sRPE method. External training load should also be measured daily according to the method proposed by Charlton et al. [ 83 ] based on jump height, jump count, and kinetic energy. If force platforms are available, neuromuscular fatigue can be assessed weekly using the FT:CT ratio of a CMJ or, in cases where force platforms are not available, the average jump height can also be used. Finally, the Hooper Index has been shown to be a measure of overall wellness, fatigue, stress, muscle soreness, mood, and sleep quality in volleyball when used daily.
Background Volleyball, with its unique calendar structure, presents distinct challenges in training and competition scheduling. Like many team sports, volleyball features an unconventional schedule with brief off-season and pre-season phases, juxtaposed against an extensive in-season phase characterized by a high density of matches and training. This compact calendar necessitates careful management of training loads and recovery periods. The effectiveness of this management is a critical factor, influencing the overall performance and success of volleyball teams. In this review, we explore the associations between training stress measures, fatigue, and well-being assessments within this context, to better inform future research and practice. Methods A systematic literature search was conducted in databases including PsycINFO, MEDLINE/PubMed, SPORTDiscus, Web of Science, and Scopus. Inclusion criteria were original research papers published in peer-reviewed journals involving volleyball athletes. Results Of the 2535 studies identified, 31 were thoroughly analysed. From these 31 articles, 22 included professional athletes, seven included collegiate-level volleyball athletes, and two included young athletes. Nine studies had female volleyball players, while the remaining 22 had male volleyball athletes. Conclusions Internal training load should be collected daily after training sessions and matches with the session rating of perceived exertion method. External training load should also be measured daily according to the methods based on jump height, jump count, and kinetic energy. If force platforms are available, neuromuscular fatigue can be assessed weekly using the FT:CT ratio of a countermovement jump or, in cases where force platforms are not available, the average jump height can also be used. Finally, the Hooper Index has been shown to be a measure of overall wellness, fatigue, stress, muscle soreness, mood, and sleep quality in volleyball when used daily. Supplementary Information The online version contains supplementary material available at 10.1186/s13102-024-00807-7. Keywords
Supplementary Information
Acknowledgements Not applicable Authors’ contributions A.R., J.R.P., P.C., and J.V-d-S. conceptualized the systematic review. A.R., J.R.P., P.C., M.J.C-e-S., and J.V-d-S. performed the selection of the eligible studies. A.R. extracted data, synthesized the data, prepared tables and figures, and drafted the manuscript. All authors contributed significantly to the interpretation of results. All authors critically reviewed the manuscript. All authors read and approved the final manuscript. Funding This research received no external funding. Availability of data and materials All data are available upon request to the corresponding author. Declarations Ethics approval and consent to participate Not applicable. Consent to publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:47
BMC Sports Sci Med Rehabil. 2024 Jan 13; 16:17
oa_package/84/3c/PMC10788005.tar.gz
PMC10788006
0
Background Bacterial polyhydroxyalkanoates (PHAs) are bio-based polymeric materials that show high biodegradability not only in soil but also in marine environments and are produced from renewable resources, making them promising materials that contribute to achieving the Sustainable Development Goals (SDGs). Poly(3-hydroxybutyrate- co -3-hydroxyhexanoate) [P(3HB- co -3HHx)] is a practical and by far the most implementable PHA that can be fabricated into various commercial products owing to its resemblance to conventional plastics such as low-density polyethylene and polypropylene [ 1 ]. To date, the copolymer has been industrially produced from plant oil under the trademark of Green Planet TM by KANEKA Co. Ltd., at a scale of over 5 thousand tons per year, to meet the growing demand for single-use plastic products such as cutlery, straws, containers, coffee capsules, and films [ 2 ]. In contrast to the poly(3-hydroxybutyrate) [P(3HB)] homopolymer, P(3HB- co -3HHx) is characterized by reduced crystallinity and melting temperature, as well as improved flexibility attributed to the longer side chain in the 3HHx (C 6 ) comonomer [ 3 ]. The copolymer was initially found to be synthesized from plant oils and fatty acids by Aeromonas caviae FA440, which carries the biosynthetic genes clustered as phaP-C-J Ac , encoding a phasin, a unique class I PHA synthase accepting C 4 -C 7 monomers, and an ( R )-specific enoyl-CoA hydratase, respectively [ 4 – 6 ]. Efficient production of this copolymer from plant oils has been achieved by engineering the high PHA-performing Ralstonia eutropha ( Cupriavidus necator ) [ 7 , 8 ], namely through modification of the β -oxidation and ( R )-3HB-CoA-formation pathways as well as introduction of the double mutant (N149S/D171G) of PhaC Ac (PhaC NSDG ) [ 5 , 9 – 12 ]. The catalytic properties of the ( R )-specific enoyl-CoA hydratase PhaJ, linking β -oxidation and PHA biosynthesis, are among the key factors regulating the 3HHx composition in the resulting PHA polymers. Metabolic engineering for biosynthesis of P(3HB- co -3HHx) from structurally unrelated sugars is another important technology for cost-effective bioproduction, considering that sugars are relatively inexpensive and can serve as an alternative to plant oil-based bioprocesses, which usually cause severe foaming and complicate downstream processing [ 13 ]. The intracellular formation of ( R )-3HHx-CoA from sugars has been achieved by an artificial reverse β -oxidation (rBOX) pathway in which the key enzymes are a bacterial NADPH-dependent crotonyl-CoA carboxylase/reductase (Ccr) and a mammalian ethylmalonyl-CoA decarboxylase (designated as Emd) [ 14 – 16 ]. The Ccr-Emd combination connects crotonyl-CoA, formed from acetyl-CoA, to butyryl-CoA, which is then elongated and converted to ( R )-3HHx-CoA. Namely, the bifunctional Ccr catalyzes the reduction of crotonyl-CoA to butyryl-CoA as well as the reductive carboxylation of crotonyl-CoA to ethylmalonyl-CoA, and Emd converts ethylmalonyl-CoA back into butyryl-CoA. The R. eutropha strains equipped with the artificial pathway effectively produced P(3HB- co -3HHx) from fructose or glucose [ 14 , 15 ]. A recent study demonstrated that the artificial pathway driven by Ccr-Emd is also functional chemolithoautotrophically in the engineered R. eutropha , enabling gas fermentation of P(3HB- co -3HHx) using CO 2 and H 2 as carbon and energy sources, respectively [ 17 ]. 
The progress of P(3HB- co -3HHx) production from structurally unrelated carbon sources and the relevant enzymes are summarized in Additional file 1 : Table S1. Here, we discovered that R. eutropha possesses a native de novo biosynthesis pathway for ( R )-3HHx-CoA and a pathway for the provision of ( R )-3HB-CoA that is independent of the NADPH-acetoacetyl-CoA reductases (PhaBs), both functional under microaerobic conditions. Despite the numerous studies on R. eutropha as a useful host for microbial cell factories, there has been limited information on bioproduction using this bacterium under low-aerobic or microaerobic conditions. These results reflect the metabolic versatility of R. eutropha and highlight the high potential of this bacterium as a valuable platform from an industrial biomanufacturing point of view.
Materials and methods Bacterial strains and plasmids The bacterial strains and plasmids used in this study are listed in Table 1 . R. eutropha strains were cultivated at 30 °C in a nutrient-rich (NR) medium containing 10 g of bonito extract (Kyokuto, Tokyo, Japan), 10 g of polypeptone, and 2 g of yeast extract in 1 L of tap water. E. coli strains were grown at 37 °C in Lysogeny broth (LB) medium for general gene manipulation and transconjugation. Kanamycin (100 mg/L) was added to the medium when necessary. Construction of recombinant R. eutropha strains The glucose-utilizable strain NSDG-GG having phaC NSDG was used as the parent strain to construct various deletion mutants in this study. Gene deletions in the R. eutropha chromosome were carried out through homologous recombination using pk18mobsacB-based suicide vectors, where the targeted genes were phaB1 ( h16_A1439 ), phaB2-C2 ( h16_A2002-A2003 ), phaB3 ( h16_A2171 ), had ( h16_A0602 ), paaH1 ( h16_A0282 ), crt2 ( h16_A3307 ), bktB ( h16_A1445 ), phaJ4a ( h16_A1070 ), h16_A3330 , and fadB’ ( h16_A0461 ) (Additional file 1 : Table S2). The deletion vectors for bktB and h16_A3330 were constructed by inserting the respective fragments connecting the upstream and downstream regions of the target gene by inverse PCR. The details of the construction are described in the supplementary text, and the sequences of the oligonucleotide primers used for PCR amplification are shown in Additional file 1 : Table S3. Deletions of other genes were conducted using vectors that had been constructed previously [ 14 , 20 , 25 , 36 ]. The previously constructed pBPP-ccr Me -phaJ4a-emd and pBPP-ccr Me -phaJ Ac -emd [ 15 , 34 ] were used to overexpress the genes for P(3HB- co -3HHx) synthesis. Transconjugation of the mobilizable plasmids into R. eutropha strains was performed using E. coli S17-1 as the donor strain, as previously described [ 37 ]. In the case of chromosomal modifications, transconjugants into which the pk18mobsacB-based suicide vector of interest was integrated into the chromosome (pop-in strains) were selected on a Simmons Citrate Agar (BD diagnostics, Franklin Lakes, NJ, USA) plate medium containing 250 mg/L kanamycin. The integrants were plated on an NR agar medium containing 10% (w/v) sucrose for the second recombination event (pop-out strains). Sucrose-resistant isolates were selected based on PCR analysis to confirm the deleted allele. The transconjugants of R. eutropha harboring the mobilizable expression vectors were selected on the Simmons Citrate Agar plate medium containing 250 mg/L kanamycin. Production and analyses of PHA PHA production by R. eutropha strains was carried out at 30 °C in a 500 mL Sakaguchi flask with 100 mL of a nitrogen-limited mineral salts (MB) medium, which was composed of 9 g of Na 2 HPO 4 ·12H 2 O, 1.5 g of KH 2 PO 4 , 0.5 g of NH 4 Cl, 0.2 g of MgSO 4 ·7H 2 O, and 1 mL of trace element solution in 1 L of deionized water [ 37 ]. A filter-sterilized solution of glucose was added to the medium to a final concentration of 1% (w/v). In this study, the aerobic condition was defined as reciprocal shaking at 120 strokes/min, whereas low-aerated cultivation was conducted at a low shaking speed of 60 strokes/min. Unless otherwise stated, the medium composition and fermentation conditions were fixed throughout the study. After cultivation for 120 h, the cells were harvested, washed with cold deionized water, and then lyophilized. 
The content and composition of intracellular PHA were determined by gas chromatography after methanolysis of the dried cells in the presence of 15% (v/v) sulfuric acid, as previously described [ 38 ].
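For orientation, the sketch below illustrates the arithmetic commonly used to convert such GC-quantified monomer amounts into the PHA content (wt%) and 3HHx composition (mol%) values reported in this study; the monomer-unit molar masses are standard values, the sample figures are hypothetical, and the GC calibration itself is not shown.

```python
# Hypothetical example of converting GC-derived monomer masses into
# PHA content (wt% of dry cells) and 3HHx composition (mol%).
M_3HB = 86.09    # g/mol, 3-hydroxybutyrate repeating unit (C4H6O2)
M_3HHX = 114.14  # g/mol, 3-hydroxyhexanoate repeating unit (C6H10O2)

def pha_content_wt_percent(m_3hb_mg: float, m_3hhx_mg: float, dry_cell_mg: float) -> float:
    return 100.0 * (m_3hb_mg + m_3hhx_mg) / dry_cell_mg

def hhx_mol_percent(m_3hb_mg: float, m_3hhx_mg: float) -> float:
    mol_3hb = m_3hb_mg / M_3HB
    mol_3hhx = m_3hhx_mg / M_3HHX
    return 100.0 * mol_3hhx / (mol_3hb + mol_3hhx)

# Example: 30 mg of dried cells containing 18.0 mg 3HB and 1.0 mg 3HHx units
print(round(pha_content_wt_percent(18.0, 1.0, 30.0), 1))  # 63.3 wt%
print(round(hhx_mol_percent(18.0, 1.0), 1))               # ~4.0 mol%
```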
Results Unusual P(3HB- co -3HHx) biosynthesis profile of R. eutropha under low-aerobic condition Among the three PhaB paralogs in R. eutropha H16 [ 18 ], it has been reported that the highly expressed PhaB1 is the major reductase and that the weakly expressed PhaB3 partly compensated for P(3HB) synthesis when PhaB1 was absent [ 19 ]. During our investigation of PHA copolymer biosynthesis by engineered strains of R. eutropha , we noticed that phaB -deleted strains showed altered PHA biosynthesis properties when the shaking rate was reduced from the usual 120 strokes/min to 60 strokes/min (Fig. 1 , Additional file 1 : Table S4). The left panel in Fig. 1 A shows PHA biosynthesis from glucose by the glucose-assimilating R. eutropha strain NSDG-GG harboring phaC NSDG and its respective phaB -deleted variants, denoted as parent (NSDG-GG), ΔB1 (∆ phaB1 ), ΔB1ΔB3 (∆ phaB1 ∆ phaB3 ) and ΔB1ΔB3ΔB2-C2 (∆ phaB1 ∆ phaB3 ∆ phaB2-C2 ). The triple mutant ΔB1ΔB3ΔB2-C2 was constructed by deletion of phaB2 along with phaC2 , encoding the second PHA synthase with unknown physiological function, since they are adjacent to each other. Under the aerobic condition (120 strokes/min), the cellular PHA content was reduced from 85 wt% to 64 wt% by deletion of phaB1 , and further decreased to 26 wt% by double deletion of phaB1 and phaB3 . Additional deletion of phaB2-phaC2 did not affect PHA synthesis. These results were consistent with those reported by Budde et al. [ 19 ]. Interestingly, relatively high amounts of PHA (63 and 53 wt%) were produced under the slow-shaking condition (60 strokes/min) even by the double and triple phaB -deletants, respectively (right panel in Fig. 1 A, B). The single deletion mutant ΔB1 showed slow PHA formation under this condition but outperformed the aerobic condition after 96 h; meanwhile, the double and triple deletants could still produce significant amounts of PHA under the low-aerobic cultivation (Fig. 1 C, Additional file 1 : Table S5), with 4.5-fold production (> 60 wt%) by the double phaB -deleted mutant and 3.0-fold (> 50 wt%) by the triple phaB -deletant when compared to the aerobic counterparts. Under the usual aerobic condition, PHA produced by the ΔB1 strain was nearly a P(3HB) homopolymer containing a negligible fraction of 3HHx (< 0.1 mol%), as observed so far, and the additional deletion of phaB3 led to a trace but stagnant 3HHx composition (~ 1 mol%). We found here that, under the low-aerated condition, the 3HHx fraction in PHA became significant (1.8 mol%) for the ΔB1 strain and was further increased to 2.9 and 3.9 mol% by the double and triple deletion of the phaB isologs, respectively (Fig. 1 D). These results indicated the presence of a native pathway in R. eutropha for the formation of ( R )-3HHx-CoA from the acetyl-CoA precursor, which was independent of PhaB and activated during low-aerobic cultivation. Revisiting PHA induction mode in R. eutropha : the effects of nitrogen and oxygen limitation While PHA biosynthesis is usually induced under unbalanced growth lacking a nitrogen source [ 4 , 9 , 10 , 12 , 20 ], the present microaerobic PHA production by R. eutropha occurred under dual nitrogen and oxygen limitation. We thus investigated the production behavior of the single phaB1 -deleted ΔB1 strain under aerobic (O-excess) and low-aerobic (O-limiting) conditions at varying concentrations of the nitrogen source (N-excess and N-limiting) (Fig. 2 , Additional file 1 : Table S6). 
Under the aerobic condition, nitrogen limitation was necessary to induce PHA production, and an increased nitrogen supply supported balanced growth that resulted in reduced PHA accumulation, as observed so far (Fig. 2 A). In contrast, under the low-aerated condition, increasing NH 4 Cl from 0.5 to 2.0 g/L markedly increased bacterial growth (0.72–2.11 g/L) but only slightly decreased the PHA concentration (1.65–1.53 g/L) (Fig. 2 B). This implied that PHA synthesis was still induced regardless of the nitrogen concentration when oxygen was restricted. It is also worthwhile to mention that the total biomass obtained during the low-shaking cultivation tended to be higher than that obtained by the aerobic cultivation, notably under the nitrogen-excess condition (Fig. 2 A–C). The PHA yield Y P/S (g-PHA/g-glucose) and cell yield Y X/S (g-residual cell/g-glucose) in Fig. 2 D, E indicated a higher magnitude of PHA production than under the aerobic condition. The 3HHx compositions remained significant (~ 1.6 mol%) across the range of nitrogen amounts under the low-aerobic condition (Fig. 2 F). Biosynthesis of PHA based on oxygen limitation has rarely been examined for R. eutropha but has been investigated under chemolithoautotrophic conditions using H 2 and CO 2 [ 21 – 23 ]. The present results in Fig. 2 coincided with those reported by Ishizaki and Tanaka [ 22 ], in that an O-limiting–N-excess condition yielded high cell growth with moderate PHA content and consequently high PHA production, suggesting a shared PHA production mechanism in the different trophic modes. Identification of genes responsible for the native rBOX of ( R )-3HHx-CoA de novo biosynthesis from glucose under low-aerobic conditions A series of mutants were constructed based on the ∆B1 strain by disrupting endogenous genes potentially responsible for ( R )-3HHx-CoA formation, and were subjected to low-shaking cultivation (Fig. 3 , Additional file 1 : Table S7). The two ( S )-3HB-CoA dehydrogenases PaaH1 (H16_A0282) and Had (H16_A0602), as well as the ( S )-specific crotonase Crt2 (H16_A3307) in R. eutropha , were reported to have broad substrate specificity [ 20 ], and thus can potentially function in the conversion of 3-oxoacyl-CoA to trans -2-enoyl-CoA via ( S )-3-hydroxyacyl-CoA of C 4 and C 6 . The gene deletion analyses indicated the crucial roles of the two dehydrogenases in ( R )-3HHx-CoA formation, as the C 6 composition was decreased to 0.6 mol% by the single deletion of paaH1 and to 0.2 mol% by the double deletion of paaH1 and had . Unexpectedly, neither cell growth nor PHA synthesis was changed by the deletion of Crt2. Unlike under the usual aerobic condition [ 15 ], the introduction of a had-crt2 tandem, either using a broad-host-range expression vector or by replacement of phaB1 in the chromosomal pha operon, did not affect the 3HHx composition in the resulting PHA (data not shown). Considering the similar catalytic properties of Had and PaaH1, the dehydrogenase-mediated conversion of 3-oxohexanoyl-CoA was thought not to be the rate-limiting step in ( R )-3HHx-CoA formation under the low-aerobic condition. FadB’ (H16_A0461), a bifunctional ( S )-3-hydroxyacyl-CoA dehydrogenase/( S )-crotonase in β -oxidation [ 14 ], likewise showed no changes upon gene deletion. BktB is a β -ketothiolase homolog with broad substrate specificity and has been reported to be important for the condensation of acetyl-CoA with propionyl-CoA/ n -butyryl-CoA to form 3-oxoacyl-CoAs of C 5 -C 6 in the biosynthesis of PHA copolymers [ 6 , 24 ]. 
Under the microaerobic condition, the deletion of bktB markedly decreased the 3HHx incorporation (0.6 mol%), implying the partial significance of bktB in the pathway as well as the function of other thiolase paralog(s) in the condensation. Kawashima et al. [ 25 ] demonstrated that PhaJ4a (H16_A1070) was the major ( R )-enoyl-CoA hydratase in R. eutropha that supplies ( R )-3HHx-CoA through aerobic β -oxidation on soybean oil. Here, the disruption of phaJ4a resulted in a complete block of ( R )-3HHx-CoA formation under the glucose-fed low-aerobic condition. The ability of R. eutropha to supply ( R )-3HHx-CoA from glucose under the microaerobic condition indicated the presence of unknown enzyme(s) catalyzing the conversion of crotonyl-CoA to butyryl-CoA prior to chain elongation. In the KEGG database, H16_A3330 is annotated as an acryloyl-CoA reductase (NADPH), possibly catalyzing the reduction of the double bond in short-chain 2-enoyl-CoAs. Nevertheless, the h16_A3330 -deleted strain ΔB1ΔA3330 showed a slight decrease in the 3HHx fraction to 1.5 mol%, suggesting only partial participation of this reductase in the formation of butyryl-CoA. Concerted effect of the exogenous Ccr-PhaJ-Emd with native rBOX on P(3HB-co-3HHx) synthesis under low-aerobic cultivation The effects of the artificial rBOX, driven by Ccr Me , PhaJ and Emd Mm , on P(3HB- co -3HHx) biosynthesis by R. eutropha under the low-aerobic condition were then investigated (Fig. 4 , Additional file 1 : Table S8). The expression plasmids pBPP-ccr Me -phaJ4a-emd and pBPP-ccr Me -phaJ Ac -emd were used for this purpose, in which phaJ4a Re and phaJ Ac encode ( R )-specific enoyl-CoA hydratases specific to medium-chain-length and short-chain-length substrates, respectively. In addition to the significant increase in the 3HHx composition, up to 6.4 ~ 9.8 mol%, in the parental NSDG-GG carrying the vectors, the compositional change was more pronounced in all the phaB -deleted mutants, as shown by the increase in 3HHx composition up to 32 ~ 38 mol% and 18 mol% by introduction of the vectors harboring phaJ4a and phaJ Ac , respectively. The results demonstrated the concerted action of the artificial and the native rBOX for the formation of ( R )-3HHx-CoA under the low-aerobic condition. The high 3HHx composition in the resulting copolyester upon co-expression of PhaJ4a agreed with the preference of PhaJ4a towards medium-chain-length 2-enoyl-CoA substrates. The introduction of either expression vector restored the PHA production capability in all the phaB -deletants under both aerobic and microaerobic cultivation. This is most probably due to the PhaJ-catalyzed conversion of crotonyl-CoA to ( R )-3HB-CoA in addition to the conversion of 2-hexenoyl-CoA to ( R )-3HHx-CoA (Fig. 5 ). The improvement was drastic in the double (ΔB1ΔB3) and triple phaB -deleted (ΔB1ΔB3ΔB2-C2) strains, with overall PHA concentrations comparable to that of the ΔB1 strain (1.9–2.4 g/L).
Discussion This study demonstrated that a low-aerobic or microaerobic condition with slow shaking of the media promoted PHA biosynthesis in R. eutropha regardless of nitrogen limitation, and moreover led to conditional activation of a native reverse β -oxidation (rBOX) pathway. This condition was still able to support bacterial growth, as shown by the similar residual cell weights under the two shaking conditions. It has been known that P(3HB) functions as an electron sink to maintain cellular redox balance under anaerobic conditions in several facultative anaerobes [ 26 ]. Usually, when oxygen availability is restricted, the cells need other pathway(s) to regenerate oxidative cofactors from the reductive form; thus, the NADPH-dependent reduction of acetoacetyl-CoA to ( R )-3HB-CoA plays this role in cofactor regeneration in P(3HB)-producing anaerobes. In the present case of R. eutropha , oxygen respiration was not fully retarded, because the cell yield (Y x/s ) under the low-shaking conditions was only slightly lower when compared with that under the usual aerobic conditions (Fig. 2 E). The excess reducing equivalents not reoxidized by respiration under the limited oxygen availability would promote PHA biosynthesis to balance the cellular redox state. In fact, NADH accumulated in R. eutropha when the terminal electron acceptor O 2 was limited under the anoxic condition [ 27 ]. Improved PHA biosynthesis under oxygen limitation has also been seen for Azotobacter beijerinckii [ 28 ], Azotobacter vinelandii [ 29 ], Allochromatium vinosum [ 30 ] and the halophilic bacterium Halomonas bluephagenesis [ 31 ]. Our study also demonstrated a striking difference in PHA accumulation trends between aerobic and low-aerobic cultivation. PhaB1 is the major acetoacetyl-CoA reductase for ( R )-3HB-CoA formation under both conditions. PhaB3 is a minor reductase under the usual aerobic conditions, as reported previously [ 19 ]; however, this does not apply under the low-aerobic condition, since the disruption of phaB3 resulted in only a slight reduction in PHA production. Given the fact that the double and triple phaB -deletants could still produce significant amounts of PHA (Fig. 1 ), it was suggested that R. eutropha possesses other enzyme(s) for the formation of ( R )-3HB-CoA from acetoacetyl-CoA that are functional under the low-oxygen condition. rBOX for the C 4 -intermediates potentially participated in the PhaB-independent formation of ( R )-3HB-CoA via ( S )-3HB-CoA with the aid of ( R )-2-enoyl-CoA hydratase(s). Nevertheless, PaaH1, Had, and Crt2 were not the major enzymes contributing to the ( R )-3HB-CoA formation. PhaJ4a seemed to partially play this role, as the gene deletion of phaJ4a slightly reduced the PHA production in the low-shaking cultivation. Alternatively, an unidentified ( R )-specific reductase, such as some isologs of FabG [NADPH 3-oxoacyl-acyl carrier protein (ACP) reductase] [ 32 ] or PhaG (3-hydroxyacyl-ACP thioesterase) along with a CoA-ligase [ 32 ], or an enigmatic NADH-dependent ( R )-reductase, may function in providing ( R )-3HB-CoA from acetoacetyl-CoA in R. eutropha under such conditions. Investigations such as comparative transcriptomic analysis and gene deletion studies might be required to identify the possible pathways contributing to the microaerobically mediated ( R )-3HB-CoA formation. So far, R. 
eutropha has been believed to lack a pathway for the formation of ( R )-3HHx-CoA from sugar-derived acetyl-CoA molecules, because this bacterium produced only the P(3HB) homopolymer from sugars even when a heterologous PHA synthase exhibiting broad substrate specificity (such as PhaC NSDG ) was expressed within the cells. The present results indicated that the rBOX for the formation of ( R )-3HHx-CoA from C 4 -acyl-CoA intermediates was functional specifically under the low-shaking condition. It is supposed that the robust β -oxidation machinery of R. eutropha , with its multiple isologs, enables rBOX to function and confers the native ability to form ( R )-3HHx-CoA directed to copolyester biosynthesis when needed. The gene disruption analysis of several known enzymes demonstrated the actual functions of BktB, PaaH1/Had, and PhaJ4a in the rBOX pathway for the C 6 -intermediates under the low-aerated condition, whereas Crt2 and the bifunctional FadB’ did not contribute to it. The conditional activation of rBOX would also be related to the reduced availability of oxygen. Namely, the reduction of crotonyl-CoA and 3-oxohexanoyl-CoA in rBOX would play a role in redox homeostasis, in addition to the reduction of acetoacetyl-CoA to ( R )-3HB-CoA, as described above (Fig. 5 ). As shown in the previous artificial pathway for biosynthesis of P(3HB- co -3HHx) from structurally unrelated sugars [ 14 , 15 ], the key reaction is the reduction of crotonyl-CoA to butyryl-CoA for the thiolase-mediated elongation of C 4 to C 6 . However, the native enzyme(s) responsible for the reduction of crotonyl-CoA in R. eutropha have remained unclear. Although a putative acryloyl-CoA reductase (H16_A3330) had been one candidate for the unidentified reductase, the gene disruption ruled out the participation of H16_A3330 as a major enzyme in the rBOX pathway. Further investigation is required to identify the missing reductase in R. eutropha . With the aid of the artificial pathway driven by heterologous Ccr Me and Emd Mm along with PhaJ expressed from a plasmid, copolyesters with higher 3HHx monomer compositions could be obtained under the low-aerated condition when compared to the corresponding aerobic cultivation (Fig. 4 ). The results suggested that the crotonyl-CoA reduction step mediated by the unknown native reductase was not sufficient, and indicated the importance of the chain-length specificity of PhaJ in regulating the 3HHx composition in the resulting copolyesters. A similar trend has also been observed in a recent report focusing on autotrophic production of P(3HB- co -3HHx) by engineered R. eutropha . Tanaka et al. [ 17 ] conducted autotrophic cultivation of R. eutropha harboring pBPP-ccr Me -phaJ 4a -emd Mm on a gas mixture of H 2 /O 2 /CO 2 (8:1:1), where the oxygen concentration was set low both to induce PHA synthesis and to avoid the risk of hydrogen explosion. They achieved efficient production of P(3HB- co -3HHx) from CO 2 and H 2 with 3HHx monomer compositions of 44–48 mol%, higher than those obtained by the same strain cultivated on fructose. This was probably due to a synergistic interaction between the native rBOX pathway activated under the low-aerobic environment and the artificial rBOX, demonstrating one example of the usefulness of PHA production under microaerobic conditions.
Conclusions Conventional bioprocesses using aerobic microbes demand large amounts of oxygen, which has low solubility in aqueous media, to support efficient cell growth and bioconversion; a high oxygen transfer coefficient is therefore achieved by vigorous aeration and/or agitation, which requires much energy. This work demonstrated that low-aerobic cultivation could promote PHA biosynthesis in R. eutropha H16-derived strains, particularly in the phaB -lacking mutants. Moreover, it was found that the low-aerobic condition enabled P(3HB- co -3HHx) biosynthesis mediated by the native rBOX, and exogenous Ccr-PhaJ-Emd (artificial rBOX) showed a synergistic effect on ( R )-3HHx-CoA formation. The knowledge obtained in the current study is expected to be useful for the compositional regulation of PHA copolyesters produced not only from sugars but also from CO 2 , considering the natural property of R. eutropha as a knallgas bacterium.
Background Ralstonia eutropha H16, a facultative chemolithoautotroph, is an important workhorse for the bioindustrial production of useful compounds such as polyhydroxyalkanoates (PHAs). Despite the extensive studies to date, some of its physiological properties remain not fully understood. Results This study demonstrated that the knallgas bacterium exhibited altered PHA production behaviors under a slow-shaking condition, as compared to its usual aerobic condition. One of them was a notable increase in PHA accumulation, ranging from 3.0- to 4.5-fold in the mutants lacking at least two NADPH-acetoacetyl-CoA reductases (PhaB1, PhaB3 and/or PhaB2) when compared to their respective aerobic counterparts, suggesting the probable existence of ( R )-3HB-CoA-providing route(s) independent of PhaBs. Interestingly, PHA production was still considerably high even with an excess nitrogen source under this regime. The present study further uncovered the conditional activation of native reverse β -oxidation (rBOX) allowing formation of ( R )-3HHx-CoA, a crucial precursor for poly(3-hydroxybutyrate- co -3-hydroxyhexanoate) [P(3HB- co -3HHx)], solely from glucose. This native rBOX led to the natural incorporation of 3.9 mol% 3HHx in a triple phaB -deleted mutant (∆ phaB1 ∆ phaB3 ∆ phaB2-C2 ). Gene deletion experiments elucidated that the native rBOX was mediated by the previously characterized ( S )-3HB-CoA dehydrogenases (PaaH1/Had), β-ketothiolase (BktB), and ( R )-2-enoyl-CoA hydratase (PhaJ4a), together with unknown crotonase(s) and reductase(s) for the crotonyl-CoA to butyryl-CoA conversion prior to elongation. The introduction of the heterologous enzymes crotonyl-CoA carboxylase/reductase (Ccr) and ethylmalonyl-CoA decarboxylase (Emd), along with ( R )-2-enoyl-CoA hydratase (PhaJ), aided the native rBOX, resulting in a remarkably high 3HHx composition (up to 37.9 mol%) in the polyester chains under the low-aerated condition. Conclusion These findings shed new light on the robust characteristics of Ralstonia eutropha H16 and could support the development of new strategies for practical P(3HB- co -3HHx) copolyester production from sugars under low-aerated conditions. Graphical Abstract Supplementary Information The online version contains supplementary material available at 10.1186/s12934-024-02294-4. Keywords
Supplementary Information
Acknowledgements KH thanks the Japan Society for the Promotion of Science (JSPS) for Postdoctoral Fellowships for Research in Japan. The authors thank Biomaterials Analysis Division, Open Facility Center, Tokyo Institute of Technology for DNA sequencing. We are also grateful to our colleagues, Mari Nakagawa and Dr. Allan Devanadera, for construction of the pk18msΔbktB and pk18msΔA3330 suicide vectors. Author contributions KH: Conceptualization, Methodology, Investigation, Validation, Writing- original draft, review & editing, Visualization. IO: Methodology, Writing-review & editing. TF: Project administration, Conceptualization, Methodology, Supervision, Writing-review & editing. Funding This research was supported by JSPS KAKENHI Grant-in-Aid for JSPS Fellows (20F40100) and NEDO Moonshot R&D program (JPNP18016). Availability of data and materials All data generated and analyzed during this study were included in this manuscript. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-15 23:43:47
Microb Cell Fact. 2024 Jan 14; 23:21
oa_package/be/51/PMC10788006.tar.gz
PMC10788007
38218781
Background Uncertainty is a significant phenomenon in the illness experience of persons with an oncological disease during their illness trajectory [1]. It is not limited to the phase of diagnosis and treatment; the experience of uncertainty can persist after oncological therapy has finished and affected persons have reached the phase of survivorship. Diagnosis, treatment, and medical follow-ups, as well as personal and social issues such as work, relationships, and identity, are associated with the experience of uncertainty [2, 3]. It can negatively affect physical, psychological, and existential outcomes [4]. Greater uncertainty is associated with increased fatigue, insomnia [5], emotional distress [4], anxiety, depression [6] and lower quality of life [7] and can also influence psychosocial adjustment to the diagnosis of cancer [8]. A variety of studies have investigated uncertainty in individuals with several types of cancer at various stages of the disease trajectory [9–11]. In a longitudinal study, Raphaelis, Mayer [12] showed that uncertainty was highly prevalent in women with vulvar neoplasia throughout the course of six months after diagnosis. Vulvar neoplasia includes vulvar cancer and vulvar intraepithelial neoplasia (VIN), a precancerous cellular change in the external female genitalia [13]. Although vulvar neoplasia is a rare disease, its incidence has increased globally over the past decade, especially in younger women [14]. In most cases, surgery is the first-choice treatment, since there is only a limited role for primary radio- or chemotherapy [13]. Across all stages, treatment for vulvar neoplasia is associated with significant morbidity and impact on quality of life. Symptoms commonly reported after treatment include bleeding, pain, odour, pruritus, sexual dysfunction, urinary incontinence, constipation, and lower extremity oedema [15]. Women have also reported diminished emotional and social functioning, compromised body image and sexuality, and emotional and interpersonal distress, particularly when treatment requires extensive resection of the labia or clitoris [12, 16–18]. According to Senn, Eicher [17], uncertainty is one of the most prevalent psychosocial symptoms, occurring in about 83% of women with vulvar neoplasia [17]. Their experience of uncertainty refers to the risk of disease transmission, progression, and recurrence. Affected women reported uncertainty regarding their reproductive and sexual capacities after treatment completion. In addition, vulvar neoplasia remains a stigmatised condition associated with poor hygiene or promiscuity [16]. Affected women felt isolated and ashamed to speak about their condition and their experienced uncertainty [19, 20]. This tendency not to talk about the disease because of its location and societal associations may reinforce illness-related uncertainty [19, 21, 22]. The phenomenon of uncertainty in illness was theoretically framed by the work of Mishel [23–25]. She first developed the Uncertainty in Illness Theory (UIT) [25] and defined uncertainty in illness as the inability to cognitively structure the meaning of illness-related events because of insufficient information. The Reconceptualized Uncertainty in Illness Theory (RUIT) was developed two years later in awareness of the limitations of the UIT, in which the development of uncertainty was viewed as linear [26].
The theory was reconceptualized through discussions with colleagues and qualitative data from patients with chronic conditions. Finally, the RUIT addresses the experience of continuous uncertainty, such as in a chronic or potentially recurring illness. It is the central theoretical proposition of the RUIT that the appraisal of uncertainty in chronic illness changes over time – from a danger to an opportunity [24]. As a result of this process, Mishel described growth toward a new value system, whereas the outcome in the UIT is a return to the previous level of adaptation [24]. Predominantly qualitative studies have provided empirical support for the RUIT. They affirmed a transformative process characterized by the transition towards a new orientation in which uncertainty is accepted as an inherent aspect of life [27]. This process was described in various ways by researchers, including themes such as “developing a revised life perspective”, “finding new ways to navigate the world”, “experiencing growth through uncertainty”, “achieving new levels of self-organization”, “setting new goals for living”, “devaluing previously important things”, “redefining what is considered normal”, and “creating new dreams” [28]. In all studies, the gradual embrace of uncertainty and the restructuring of one’s own reality were identified as significant phenomena of the process, aligning with the assumptions of the RUIT. However, support for the RUIT differs by population and methodology; more qualitative than quantitative studies have confirmed the RUIT. The samples of these studies included breast cancer survivors [29], women recovering after cardiac disease [30], chronically ill men [31], HIV patients [32], long-term diabetic patients [33], persons with schizophrenia [34], women who had not been diagnosed but were genetically predisposed to hereditary breast and ovarian cancer [35], spouses of heart transplant patients [36], and adolescent survivors of childhood cancer [37]. Although several empirical works supported the propositions of the RUIT, the results have not yet been fed back to the theory in a synthesized form. As a result, it is still not clear, in explanatory terms, how uncertainty develops from a danger into an opportunity over the chronic course of a disease. Nevertheless, Mishel’s theoretical considerations opened a new perspective on the phenomenon of uncertainty in chronic illness, such as cancer, and potential opportunities for the discipline of nursing to intervene therapeutically in the illness trajectory. This is especially relevant for women with vulvar neoplasia as a group characterized by a high recurrence rate [38] and by taboo-related communicative difficulties [20]. While it is already known that women with vulvar neoplasia experience uncertainty up to six months after diagnosis [12], it is unclear whether and how their experience of uncertainty changes during the chronic illness trajectory and how the findings can inform the further development of the RUIT.
Methods We aimed to explore the development of the uncertainty experience in women with vulvar neoplasia over time and to discuss the significance of the results for Mishel’s RUIT [24]. Design We conducted a longitudinal qualitative study, since we intended to inductively explore the as yet unexplored development of uncertainty over time. For the purposive sample, we included women aged 18 years and older with vulvar neoplasia (initial diagnosis or recurrence) who were about to receive surgical treatment. Between May 2019 and January 2021, gynaecologic oncology nurses at four Swiss and one Austrian women’s clinics invited women to participate in this study. Data collection Data collection took place via qualitative interviews conducted, depending on the participants’ preferences, face-to-face in the hospital or at home, or via phone or video call. We recorded the interviews digitally. In addition, notes were taken during and after each interview, which were included in the analysis. Participants were invited to bring a trusted person to the interview. None of the women made use of this option. The first author, a female PhD candidate with a nursing background working as a research associate at a university of health sciences, conducted the interviews at the following points in time: (1) at diagnosis or before surgical treatment, (2) one week later, (3) six months later, (4) nine months later and (5) one year later. The interviewer and the participants met for the first time at the first interview. The first three points in time were chosen for reasons of explanatory power according to the results of Raphaelis et al. [12]. We chose the other points in time with an exploratory intent. We developed a semi-structured interview guide consisting of four central subjects, including both backward- and forward-looking questions, to explore processes and change over time [39]. After the first three interviews, we adjusted the degree of abstraction of the narrative stimuli. The central topics were: (1) current status related to the vulvar neoplasia, (2) situations of uncertainty, (3) retrospective reflections on developments over time, and (4) outlook for further therapy or the recovery phase. Data analysis We first conducted within-case analyses for the trajectory of each participant. Afterwards, we performed a cross-case analysis for reasons of comparison, thereby intending to reach a higher level of abstraction and to develop a theoretical model. For data management and analysis, we used MAXQDA22© software [40]. To explore the individual temporal trajectories of the participants, each individual interview was analyzed separately by the first author. Since we were also interested in changes in participants’ uncertainty experience on a theoretical level, the coding strategy of Grounded Theory was followed [41]. In a first step, the data were openly coded, followed by axial coding in order to develop initial concepts. To systematically identify changes over time for each participant, we conducted a longitudinal analysis using Saldaña’s [42] framework for longitudinal qualitative research. Framing, descriptive, analytic, and interpretative questions guided the identification of changes over time. To identify similarities and differences, we performed cross-case analyses. By means of a second coding cycle, the single cases were merged into generic sub-categories and broader categories in order to compare them. Thereby, axial and selective coding were used [41].
Finally, we synthesized these results in a central model in order to explain the common development of the uncertainty experience over time. Trustworthiness To enhance the trustworthiness of our findings, we adhered to the criteria of adequacy, empirical saturation, and theoretical pervasiveness for qualitative social research [43]. Therefore, we identified the research question from the field of interest and established it against the background of the theoretical work by Mishel [24] (adequacy and theoretical pervasiveness). We empirically collected data and analyzed them inductively (empirical saturation). Ethics Participation was voluntary and could be withdrawn at any time without giving reasons. Informed consent was obtained on an ongoing basis at each interview. In addition to the study information, the researcher’s role as a PhD student in the study was disclosed.
Results We conducted 30 interviews between November 2019 and November 2021. Each of the seven participants completed three to five interviews. Four of seven participants completed all five interviews. One participant passed away during the study period due to postsurgical bleeding; one participant could no longer be reached by telephone after the third interview and another after the fourth. The length of the interviews ranged from 13 to 75 min (mean = 40). Characteristics of the participants Five women from Austria and two from Switzerland participated in our study. Their ages ranged between 28 and 85 years. Four participants were diagnosed with vulvar cancer, three with vulvar intraepithelial neoplasia. Four had an initial diagnosis (Table 1). Development of the uncertainty experience in women with vulvar neoplasia The experience of uncertainty developed in three stages within one year: (1) uncertainty as an existential threat, (2) uncertainty as an inherent part of illness, and (3) uncertainty as a certainty. The analysis revealed that the experience of uncertainty continuously developed back and forth during the study period of one year. Participants developed different coping strategies in dealing with uncertainty: weighing up potential consequences, avoiding or handling uncertainty, and reframing uncertainty. This fluctuating development of the uncertainty experience is visualized as a cyclical model (Fig. 1). Uncertainty as an existential threat: The Sword of Damocles The unknown meaning of a new symptom, having to wait for the result of an examination, not understanding the meaning of the diagnosis and its consequences, and realizing an increased risk of developing vulvar cancer or a recurrence were stimuli for uncertainty. Participants reacted to the unknown meanings of these uncertainties by creating an explanation based on their existing knowledge or previous experiences, e.g., from a pre-existing chronic condition or a previous vulvar neoplasia. Against this background, they implicitly judged their individual risk of experiencing existential consequences. Participants 3–7, who assessed a high risk of potential consequences from the experienced uncertainty, felt threatened by the possibility of serious health deterioration or death. Symptoms triggered existential uncertainty throughout the different health and recovery phases. They played a significant role in the diagnostic process, when participants first noticed a changed appearance of the vulva. Symptoms continued to re-stimulate existential uncertainty after cancer treatment was completed and their experience of uncertainty had already developed positively. In this phase, all the participants again judged their risk of existential consequences and perceived uncertainty as a threat due to the possibility of having a recurrence. This alarm resulted from the awareness of the increased risk of cancer in participants with a precancerous stage and of recurrence in participants with vulvar cancer. By again weighing up their risk of existential consequences, they once again perceived uncertainty as a threat due to possible cancer recurrence. Having to wait for the (still) unknown result of an examination occurred several times in the course of the illness trajectory as a source of uncertainty. This concerned the results of the primary clarification of the diagnosis and, after surgery, the histological findings regarding the complete removal of the carcinoma.
Uncertain consequences included the necessity of further surgery, which would involve, e.g., the removal of the lymph nodes. The possibility of needing an ostomy or a full resection of the vulva, arising from the uncertain necessity of radical surgical oncological treatment, threatened participant 2 with the risk of experiencing a serious physical change, and participant 3 with a “disfigured” intimate area and with not being able to have children in the future. Furthermore, the fear that the existing cancer might have metastasized underlay the existential uncertainty (participants 2, 3, 7). In the longer-term course of the disease or recovery, having to wait for the results of the routine gynecological oncological check-ups was again an uncertainty stimulus for all the participants, even if the women experienced no symptoms. However, available findings were no guarantee that participants fully understood and comprehended their meaning. Older women in particular (participants 5, 6) experienced uncertainty about the meaning of the diagnosis and the prospects for their treatment and recovery, but did not dare to ask the physician to explain. One of the participants associated uncertainty with the phrase “Sword of Damocles” (interview 1, participant 7). Living under a “Sword of Damocles” was a recurring experience. At a later stage of disease or recovery, women again had to wait for the results of their routine gynecological oncological check-ups. This recurring experience was always a stimulus triggering uncertainty, even if the participants did not experience symptoms. This uncertainty shaped their experience on an affective level. It was a significant stressor which manifested itself in fear, insecurity, sadness, anger, and a feeling of powerlessness. Uncertainty as an inherent part of the illness: An accepted companion The enduring experience of threatening uncertainty was a starting point for employing coping strategies, either dealing with uncertainty, such as by reducing it and mentally processing the negative experience of it, or avoiding uncertainty. Reducing uncertainty involved the acquisition of information to support informed decision-making. However, participants reduced not only uncertainty based on a lack of information, but also uncertainty based on the (still) uncertain outcome of an investigation or a treatment. Therefore, they adopted health-promoting behaviors to minimize the probability that the uncertain adverse event, such as having metastases, would occur. Participants 1, 2, 3 and 7 coped with the threatening uncertainty experience by thinking positively, by practicing self-care, and by reflecting on their emotional responses. By thinking positively, they encouraged themselves to hope for the best, to think in a constructive manner and to calm themselves. They found strength in taking uncertainties with a sense of humor and focusing on meaningful things. Their practice of self-care consisted of not letting the stress of the threatening uncertainty get them down, e.g., by not letting themselves go (participants 1, 2, 3, 7). Therefore, they promoted their health, not only in relation to the vulvar neoplasia, by paying attention to their needs, exercising regularly, eating a balanced diet, and reducing other stress factors, e.g., by sharing the care of a mother in need of care (participant 1).
To cope with the psychological stress of uncertainty, they paid more attention to clearing their minds by engaging in meaningful activities, talking about their fears to people they trusted and spending most of their time in familiar surroundings. If the interviewees were able to overcome existential uncertainty, e.g., by completing cancer treatment, or if a symptom turned out to be harmless, this triggered a change in their experience of uncertainty. Participants 1–3 and 7 no longer experienced uncertainty as an existential threat. Instead, they accepted uncertainty as an inherent part of illness and opened up to the concept of it. They accepted that certainty in illness will probably never exist – despite all the information and expert knowledge of professionals as well as their own coping strategies. The analysis revealed that the remaining uncertainty no longer referred to a specific external stimulus, such as surgery or pending findings. From then on, the women’s uncertainty experience mainly concerned the irreducible unpredictability regarding the disease course and the prognosis. Other participants (4–6), in whom we did not find acceptance of uncertainty, reported repressing uncertainty and its existential threat. This was the case if the interviewees were not able to reduce uncertainty or to cope with it. We found that the aim of this avoidance strategy was to restore normality – as if nothing had ever happened. These participants did not want, at any price, to face uncertainty as a part of their life. They kept a mental distance by distracting themselves in order not to have to think about uncertainty. Participant 5 rejected new information to avoid getting bad news. They furthermore constructed a negative certainty, i.e., being convinced that the uncertain event would in fact occur. Unlike the other participants, they could not accept uncertainty as a result of their management strategies, but rather gave up under the feeling of having no choice and resigned themselves to the uncertainty and the threat that came with it. Participants implementing the avoidance strategy consequently reported feeling depressed and powerless. They had little sense of control, as they felt they had no choice. Uncertainty as a certainty in illness: A mindset to promote recovery As participants 1, 2, 3 and 7 had accepted uncertainty, they increasingly observed a positive impact on their recovery and health. As a consequence, they developed a new mindset – with uncertainty in the background and their awareness of it in the foreground. They were convinced that an altered cognitive focus had a positive impact on their recovery and would reduce their risk of cancer recurrence. The new mindset regarding uncertainty was characterized by the realization of the universal nature of the phenomenon. They no longer felt alone with uncertainty in their illness as soon as they became aware of the certainty of uncertainty as a natural part of life that concerns all aspects of human existence. In their new mindset, women gained trust in their psychological coping strategies. They concluded that their own perspective made a difference and improved their sense of control. This mindset allowed them to experience increased self-confidence in being able to beat cancer. They felt relieved and reported more serenity, mental closure, mental health, and resilience.
Discussion This longitudinal qualitative study explored how the uncertainty experience developed in women with vulvar neoplasia over the course of one year. The findings were not only of phenomenological but also of theoretical interest, as the study was conducted with sensitivity to the Reconceptualized Uncertainty in Illness Theory [24]. They contribute to a deepened understanding of the uncertainty experience of women with vulvar neoplasia in the illness trajectory but also inform the further development of Mishel’s theory itself. In women with vulvar neoplasia, the development of uncertainty was never complete but oscillated over the chronic course of the disease. Change in the uncertainty experience was inhibited by existential uncertainty and promoted by the acceptance of uncertainty. According to the RUIT [24], the uncertainty experience changes in a positive way when someone is at the peak level of instability due to uncertainty. In this qualitative longitudinal study, we also identified participants experiencing instability due to threatening uncertainty. Over time, however, we did not observe a development in the experience of uncertainty as long as uncertainty was still perceived as existentially threatening. However, we found commonalities regarding the re-appraisal of uncertainty under other circumstances. In a similar vein to Mishel [24], the participants described a new view of life allowing a change of perspective with regard to the evaluation of uncertainty. However, this development did not lead to perceiving uncertainty as an opportunity. In our study, the development of uncertainty occurred in individuals who were able to reduce the threat of uncertainty or in whom uncertainty was dissolved by external circumstances. Subsequently, they no longer associated uncertainty with existential consequences but still experienced uncertainty, which now concerned the unpredictability of the further illness trajectory. Accordingly, Han et al. [44] suggest that existential uncertainty may be a bigger threat for patients than more information-related aspects of uncertainty, i.e., uncertainty associated with diagnosis, prognosis, causal explanations, and treatment. Existential uncertainty encompasses an awareness of the fact that one’s own existence is undetermined but finite. Being existentially uncertain means living with a constant threat to one’s own existence – a threat reaching beyond the physical domain and affecting the social, personal, and spiritual domains. Dwan and Willig [45] outlined key distinctions between existential uncertainty and other aspects of uncertainty in the experience of persons with cancer. Thereby, the focus is on meaning rather than on facts, on the person rather than on the disease, and on the fundamental nature of the human being in the world. Another comparable characterization of existential uncertainty was made by Karlsson, Friberg [46], who described it as living with an unpredictable future, being confronted with one’s own impending mortality, and undergoing personal development. Against this background, Penrod [47] cautioned that providing ever more information to reduce existential uncertainty may be counterproductive for changing the negative experience of uncertainty. However, existential uncertainty that can be overcome may result in existential well-being, which similarly depends on a person’s meaning and purpose in life and their feelings regarding death and suffering [48].
A bidirectional relationship between existential well-being and health-promoting behaviours is assumed [49]. This would explain why individuals changed their perspective on uncertainty as a health-promoting behaviour as soon as uncertainty was no longer experienced as existentially threatening but could be accepted, perhaps leading to existential well-being. Greater manifestation of existential well-being is associated with a reduced incidence of depression and an improved overall health condition. Existential well-being is furthermore connected with emotional well-being, which manifests as social engagement, health-promoting behaviours, and positive affect and optimism [50]. A positive mindset and proactive living promote the relationship between existential well-being and health-promoting behaviours. In particular, having a positive mindset to foster health was influenced by a positive self-image and a sense of control [51]. Having a positive mindset and a sense of purpose in life were directly associated with the health-promoting behaviours of proactive living [52]. Especially in individuals living with chronic illness, searching for meaning and having meaning are positively correlated. Consequently, the promotion of finding meaning in life should have high priority in the management of chronic disease [53] in order to overcome existential uncertainty and achieve existential well-being. Limitations and strengths of the study A limitation concerns the short duration of some interviews. Longer interviews would have contributed to an in-depth understanding of individual experiences and allowed conclusions to be drawn over a longer period of time. Furthermore, due to the peculiarities of the group of interest, women with vulvar neoplasia, the transferability of the results to persons with another oncological condition or chronic disease is limited. However, the longitudinal study design helps to uncover dynamic processes as they occur and offers insights into changes and continuities within the life course [54]. Implications The results of this study show that the experience of uncertainty changes over time in women with vulvar neoplasia, since different types of uncertainty occurred during the illness trajectory. Uncertainty associated with existential consequences did not develop in a positive way until participants were able to cope with it. It is therefore important to differentiate between several types of uncertainty. The role of existential uncertainty should be considered as a potential inhibitor of change in interactions with women with vulvar neoplasia and with regard to intervention planning. In the context of cancer, there is growing evidence that meaning-oriented uncertainty interventions might be most useful [55–57]. The combination of existential uncertainty and the identified possibility of change in the experience of uncertainty emphasizes the need for professionals to develop their own language and understanding in order to anticipate and address different aspects of patients’ uncertainty experience [58].
Conclusions The findings provide health care practitioners, especially in the field of psycho-oncology, with a deeper understanding of the development of the uncertainty experience in the disease trajectory of women with vulvar neoplasia. Our results may inform practice, in particular interactions with affected individuals. Furthermore, the findings strengthen the theoretical basis of uncertainty in chronic illness. They can provide orientation for developing theory-based measurements and interventions. Finally, we reflected on the results against the background of the RUIT [24]. In this way, the results can contribute to theory dynamics in nursing by informing further theory development and adding to the body of existing theories.
Background Women with vulvar neoplasia continue to experience uncertainty up to six months post-surgery. Uncertainty in illness is considered a significant psychosocial stressor that negatively influences symptom distress, self-management strategies and quality of life. According to the Reconceptualized Uncertainty in Illness Theory, the appraisal of uncertainty changes positively over time in chronic illness. We aimed to explore whether and how the experience of uncertainty develops in women with vulvar neoplasia. Methods We selected a purposive sample of seven women diagnosed with vulvar neoplasia in four Swiss and one Austrian women’s clinics. By means of a qualitative longitudinal study, we conducted 30 individual interviews at five points in time during the year after diagnosis. We applied Saldaña’s analytical questions for longitudinal qualitative research. Results First, participants experienced uncertainty as an existential threat, then as an inherent part of their illness, and finally as a certainty. Women initially associated the existential threat with a high risk of suffering severe health deterioration. Participants who could reduce their individually assessed risk by adopting health-promoting behaviors accepted the remaining uncertainty. From then on, they reframed uncertainty as a certainty. This new mindset was based on the belief that it would promote recovery and reduce the risk of recurrence. Conclusions The long-lasting and oscillating nature of uncertainty should receive attention in supportive oncology care. Uncertainty concerning existential issues is of special importance since it can inhibit a positive development of the uncertainty experience. Keywords
Acknowledgements Many thanks to all participants for their openness to share their experiences and the recruiting nurses for their support. Author contributions All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Jasmin Eppel-Meichlinger, Hanna Mayer, Enikö Steiner, and Andrea Kobleder. The first draft of the manuscript was written by Jasmin Eppel-Meichlinger and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. Funding We acknowledge support by Open Access Publishing Fund of Karl Landsteiner University of Health Sciences, Krems, Austria. Furthermore, we would like to thank the Nursing Science Foundation Switzerland for funding this study (ID 2143 − 2017). Data availability The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate The study received ethical approval from the Cantonal Ethics Committee Bern, Ethics Committee Northwest and Central Switzerland, Ethics Committee Eastern Switzerland, Ethics Committee Ticino, and Ethics Committee of the University Hospital Vienna. Informed consent was obtained from all the participants. The study was conducted in accordance to the Declaration of Helsinki. Consent to publish Not applicable. Competing interests The authors have no relevant financial or non-financial interests to disclose.
CC BY
no
2024-01-15 23:43:47
BMC Womens Health. 2024 Jan 13; 24:35
oa_package/80/bb/PMC10788007.tar.gz
PMC10788008
38218855
Introduction Osteosarcoma (OS) is the most common primary bone tumor in adolescents [1, 2]. OS is highly malignant, and most patients develop lung metastasis within one year, so the prognosis is poor [3, 4]. At present, surgery combined with chemotherapy is still the main treatment for OS, but its effect is limited [5, 6]. It is important to elucidate the molecular mechanisms underlying OS progression in order to develop potential therapeutic targets for OS. Circular RNAs (circRNAs) are RNA molecules characterized by covalently closed loops that are widely present in eukaryotes and are mainly formed by back-splicing of exons or introns of genes [7, 8]. Mechanistically, circRNAs have been confirmed to act as microRNA (miRNA) sponges to mediate gene expression [9, 10]. A large body of evidence shows that abnormal circRNA expression is often related to the occurrence of human diseases [11, 12]. Importantly, studies have confirmed that circRNAs are associated with the malignant progression of tumors, including OS [13, 14]. Studies have suggested that circTADA2A promotes OS cell proliferation and metastasis by sponging miR-203a-3p to upregulate CREB3 [15]. Circ_0001721, which enhances OS glycolysis, proliferation and metastasis through regulation of miR-372-3p/MAPK7, has been considered a potential target for OS treatment [16]. Circ_0000376 is located at chr12: 11199618-11248400, spans 48,782 bp, and is derived from the PRH1-PRR4 gene. In this study, we screened circRNAs differentially expressed between OS tissues and normal tissues using the GEO database and found that circ_0000376 was overexpressed in OS tissues. Previous studies have shown that decreased circ_0000376 expression leads to decreased OS cell viability and metastatic ability [17]. Therefore, we have reason to believe that circ_0000376 may be a potential target for OS therapy. To further confirm this, we conducted this study and revealed a novel downstream miRNA/mRNA regulatory axis of circ_0000376.
Materials and methods Sample collection OS tumor tissues and adjacent normal tissues were collected from 33 OS patients at The Third Hospital of Mianyang and stored at -80 °C. Written informed consent was obtained from each patient, and our research was approved by The Third Hospital of Mianyang. Cell culture and transfection OS cells (143B, HOS, MG63 and U2OS) and osteoblast cells (hFOB1.19) were purchased from ATCC (Manassas, VA, USA) and cultured in DMEM medium (Solarbio, Beijing, China) containing 10% FBS and 1% penicillin–streptomycin. Circ_0000376 small interfering RNA (si-circ_0000376), pCD5 overexpression vector, lentiviral short hairpin RNA (sh-circ_0000376), miR-577 mimic, miR-577 inhibitor (anti-miR-577), pcDNA hexokinase 2 (HK2) overexpression vector, pcDNA lactate dehydrogenase-A (LDHA) overexpression vector, and negative controls were synthesized by RiboBio (Guangzhou, China). They were transfected into OS cells with Lipofectamine 3000 (Invitrogen, Carlsbad, CA, USA). Quantitative real-time PCR (qRT-PCR) Total RNA was isolated with TRIzol reagent (Invitrogen) and reverse-transcribed into cDNA using a Reverse Transcription Kit (Takara, Dalian, China). The PCR reaction was conducted with SYBR Green (Takara) and specific primers (Table 1). Relative expression was normalized to β-actin or U6 and calculated using the 2^−ΔΔCT method. In addition, RNA was treated with RNase R solution and then used for qRT-PCR. Cell proliferation detection In the cell counting kit-8 (CCK8) assay, OS cells seeded into 96-well plates were cultured for 48 h. CCK8 reagent (Beyotime, Shanghai, China) was added to each well, and the absorbance at 450 nm was measured with a microplate reader to assess cell viability. In the colony formation assay, OS cells seeded in 12-well plates were cultured for 2 weeks. After that, the colonies were fixed with paraformaldehyde and stained with crystal violet, and the number of colonies was counted under a microscope. In the EdU assay, OS cells seeded into 96-well plates were stained with EdU solution and DAPI solution (RiboBio). Fluorescence images were captured under a fluorescence microscope, and the EdU-positive cell rate was calculated with ImageJ software. Flow cytometry An Annexin V-FITC Apoptosis Detection Kit (Beyotime) was used. OS cells suspended in binding buffer were stained with Annexin V-FITC and propidium iodide. The cell apoptosis rate was analyzed with a flow cytometer and CellQuest software. Transwell assay A transwell chamber pre-coated with Matrigel was used. Serum-containing medium was added to the lower chamber, and OS cells suspended in DMEM medium were seeded into the upper chamber. After 24 h, the cells were fixed and stained, and the number of invasive cells in 5 fields was counted under a microscope. Cell glycolysis detection After transfection, the supernatants of OS cells were collected for measuring glucose consumption, lactate production and the ATP/ADP level with a Glucose Assay Kit, a Lactate Assay Kit and the ApoSENSOR ADP/ATP Ratio Assay (BioVision, Milpitas, CA, USA). The ECAR and OCR of cells were analyzed using an XF96 Extracellular Flux Analyzer (Seahorse Bioscience, Chicopee, MA, USA). Western blot (WB) analysis RIPA buffer (Abcam, Cambridge, MA, USA) was used to obtain total protein. Protein samples were separated on SDS-PAGE gels and transferred onto PVDF membranes.
Primary antibodies, including anti-CyclinD1 (1:200, ab16663), anti-MMP9 (1:1000, ab38898), anti-HK2 (1:10000, ab227198), anti-LDHA (1:5000, ab52488), and anti-β-actin (1:1000, ab8227), were used to incubate the membranes, which were then incubated with a secondary antibody (1:50,000, ab205718). Protein bands were visualized using ECL reagent (Beyotime), and Image Lab software was used for grayscale analysis. Dual-luciferase reporter assay The wild-type binding sequence and the mutant sequence of miR-577 in circ_0000376, the HK2 3’UTR or the LDHA 3’UTR were designed and inserted into the pmirGLO reporter vector, generating the corresponding wild-type and mutant vectors. OS cells were co-transfected with the vectors and miRNA. Cells were then harvested to detect luciferase activity using a Dual-Luciferase Reporter Gene Assay Kit (Beyotime). Xenograft models U2OS cells transfected with sh-NC or sh-circ_0000376 were subcutaneously injected into BALB/c nude mice (6 weeks old, Vital River, Beijing, China) to construct xenograft tumor models (n = 6/group). Beginning 7 days post-injection, tumor volume was recorded every 3 days. After 22 days, tumor tissues were excised from the euthanized mice and used to prepare paraffin sections. Immunohistochemical (IHC) staining was carried out using an SP Kit (Solarbio) with anti-HK2 (1:500, ab227198), anti-LDHA (1:2000, ab52488) and anti-Ki67 (1:1000, ab15580). The animal experiments were approved by The Third Hospital of Mianyang. Statistical analysis Data are shown as means ± SD. GraphPad Prism 7.0 was used to perform statistical analyses. Significant differences were assessed using Student’s t-test or ANOVA. P < 0.05 was considered statistically significant.
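The 2^−ΔΔCT relative-expression calculation named in the qRT-PCR subsection can be illustrated with a short, hedged sketch. This is not the authors' analysis code; the Ct values and sample labels below are hypothetical and only show how ΔΔCt normalization to a reference gene (β-actin or U6) and to a control sample is typically computed.

```python
# Minimal sketch of the 2^(-ΔΔCt) relative-expression calculation,
# assuming hypothetical Ct values; the study's real data were normalized
# to beta-actin or U6 as stated in the methods.

def ddct_fold_change(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Fold change of a target RNA in a sample relative to a control."""
    delta_ct_sample = ct_target_sample - ct_ref_sample      # ΔCt, e.g. tumor
    delta_ct_control = ct_target_control - ct_ref_control   # ΔCt, e.g. normal
    delta_delta_ct = delta_ct_sample - delta_ct_control     # ΔΔCt
    return 2 ** (-delta_delta_ct)

# Example with made-up Ct values: circ_0000376 in a tumor sample vs. an
# adjacent normal sample, both normalized to beta-actin.
fold = ddct_fold_change(ct_target_sample=24.1, ct_ref_sample=16.0,
                        ct_target_control=26.3, ct_ref_control=16.2)
print(f"Relative circ_0000376 expression (tumor vs. normal): {fold:.2f}-fold")
```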
Results Circ_0000376 expression was increased in OS patients and cells Figure 1A shows 10 circRNAs differentially expressed between OS tumor tissues and normal tissues in the GEO database (accession: GSE96964), among which circ_0000376 (chip ID: hsa_circRNA_000554) was significantly overexpressed in OS tumor tissues. Through qRT-PCR, circ_0000376 was confirmed to be upregulated in OS tumor tissues compared to adjacent normal tissues (Fig. 1B), as well as in 4 OS cell lines compared to hFOB1.19 cells (Fig. 1C). After RNA was treated with RNase R, circ_0000376 expression was not significantly affected, while expression of the linear GAPDH mRNA was markedly reduced (Fig. 1D, E). These data confirmed that circ_0000376 could resist RNase R digestion. Knockdown of circ_0000376 inhibited OS cell growth, invasion and glycolysis After si-circ_0000376 was transfected into MG63 and U2OS cells, circ_0000376 expression was remarkably decreased (Fig. 2A). Then, we evaluated OS cell proliferation, apoptosis, invasion and glycolysis to explore the effect of circ_0000376 knockdown on OS cell progression. As shown in Fig. 2B–F, downregulation of circ_0000376 suppressed cell viability, the number of colonies and the EdU-positive cell rate, while increasing the cell apoptosis rate. Additionally, circ_0000376 knockdown reduced the number of invasive cells, glucose consumption, lactate production and the ATP/ADP ratio (Fig. 2G–J). Moreover, circ_0000376 knockdown resulted in a decrease in ECAR and an increase in OCR in MG63 cells (Additional file 1: Fig. S1A-B), suggesting that circ_0000376 might promote the Warburg effect in OS cells. WB analysis indicated that silencing of circ_0000376 also decreased expression of the cell cycle protein CyclinD1 and the invasion-related protein MMP9 in OS cells (Fig. 2K–L). These results indicated that circ_0000376 enhanced OS cell proliferation, invasion and glycolysis and inhibited apoptosis. Circ_0000376 interacted with miR-577 The starBase and circInteractome tools were used to jointly predict miRNAs complementary to circ_0000376, and we then focused on miR-577 (Fig. 3A). According to their binding sites, we designed the WT/MUT-circ_0000376 reporter vectors (Fig. 3B). In addition, miR-577 mimic was used to overexpress miR-577 in MG63 and U2OS cells (Fig. 3C). In the dual-luciferase reporter assay, we observed that the luciferase activity of the WT-circ_0000376 vector, but not of the MUT-circ_0000376 vector, was reduced by miR-577 mimic, confirming the interaction between circ_0000376 and miR-577 (Fig. 3D, E). In OS tumor tissues, miR-577 expression was decreased and negatively correlated with circ_0000376 expression (Fig. 3F–G). Also, miR-577 was expressed at low levels in OS cells (MG63 and U2OS) compared to hFOB1.19 cells (Fig. 3H). The above data confirmed that circ_0000376 could sponge miR-577. The regulation of si-circ_0000376 on OS cell progression was eliminated by anti-miR-577 To explore whether circ_0000376 regulated OS progression via sponging miR-577, rescue experiments were performed. After co-transfection of si-circ_0000376 and anti-miR-577 into MG63 and U2OS cells, we detected miR-577 expression and confirmed that the miR-577 upregulation induced by si-circ_0000376 could be reversed by anti-miR-577 (Fig. 4A). The results showed that the inhibitory effects of si-circ_0000376 on cell viability, the number of colonies and the EdU-positive cell rate were reversed by the miR-577 inhibitor (Fig. 4B–D and Additional file 2: Fig. S2A-B).
The cell apoptosis induced by circ_0000376 knockdown could also be abolished by the miR-577 inhibitor (Fig. 4E and Additional file 2: Fig. S2C). Furthermore, the addition of anti-miR-577 overturned the suppressive effects of si-circ_0000376 on the number of invasive cells, glucose consumption, lactate production, the ATP/ADP ratio, and the protein expression of CyclinD1 and MMP9 (Fig. 4F–K and Additional file 2: Fig. S2D). Therefore, we confirmed that circ_0000376 might contribute to OS progression via targeting miR-577. MiR-577 interacted with HK2 and LDHA TargetScan software was used to predict the downstream targets of miR-577. The 3’UTRs of HK2 and LDHA were found to contain binding sites for miR-577 (Fig. 5A, B). MiR-577 mimic reduced the luciferase activities of the HK2 3’UTR-WT and LDHA 3’UTR-WT vectors, confirming the interaction between miR-577 and HK2 or LDHA (Fig. 5C, D). HK2 and LDHA mRNA expression levels were upregulated in OS tumor tissues, and their expression levels were negatively correlated with miR-577 expression (Fig. 5E–H). In OS tumor tissues and cells, we also observed high HK2 and LDHA expression at the protein level (Fig. 5I–L). MiR-577 hindered OS cell progression by targeting HK2 and LDHA To further confirm that miR-577 mediated OS progression by regulating HK2 and LDHA, we conducted rescue experiments. In MG63 and U2OS cells co-transfected with miR-577 mimic and the pcDNA HK2 overexpression vector, we found that miR-577 reduced HK2 protein expression, and this effect was reversed by the pcDNA HK2 overexpression vector (Fig. 6A). MiR-577 inhibited cell viability, the number of colonies and the EdU-positive cell rate, while enhancing the apoptosis rate; these effects were reversed by HK2 overexpression (Fig. 6B–E and Additional file 3: Fig. S3A-C). Moreover, HK2 overexpression also eliminated the inhibitory effects of miR-577 on the number of invasive cells, glucose consumption, lactate production, the ATP/ADP ratio, and the protein expression of CyclinD1 and MMP9 (Fig. 6F–K and Additional file 3: Fig. S3D). Similarly, the pcDNA LDHA overexpression vector was transfected into MG63 and U2OS cells together with miR-577 mimic. As shown in Fig. 7A, the pcDNA LDHA overexpression vector restored the LDHA protein expression reduced by miR-577. Functional experiments suggested that LDHA overexpression overturned the effects of miR-577 on OS cell proliferation, apoptosis, invasion, and glycolysis (Fig. 7B–I and Additional file 4: Fig. S4A-D). Also, the decreasing effect of miR-577 on the protein expression of CyclinD1 and MMP9 was abolished by overexpressing LDHA (Fig. 7J, K). Taken together, these results suggested that miR-577 targeted HK2/LDHA to suppress OS progression. Interference of circ_0000376 inhibited OS tumor growth To determine the role of circ_0000376 in vivo, we constructed U2OS cells with stable circ_0000376 knockdown using sh-circ_0000376 (Fig. 8A). After that, U2OS cells transfected with sh-NC/sh-circ_0000376 were injected into nude mice. After 22 days, we found that tumor volume and weight were reduced in the sh-circ_0000376 group (Fig. 8B, C). In the mouse tumor tissues of the sh-circ_0000376 group, circ_0000376 expression was inhibited and miR-577 expression was increased (Fig. 8D). Also, the HK2 and LDHA protein expression levels were repressed in the sh-circ_0000376 group (Fig. 8E, F). In addition, HK2-, LDHA- and Ki67-positive cells were also decreased in the tumor tissues of the sh-circ_0000376 group (Fig. 8G).
These results showed that circ_0000376 sponged miR-577 to promote HK2/LDHA-mediated glycolysis, thus accelerating OS tumor growth in vivo.
Discussion Circ_0000376 acts as an oncogene in many tumors. For example, circ_0000376 is considered a tumor promoter in lung cancer, where it enhances proliferation, glycolysis and metastasis through miRNA/mRNA networks [18–20]. Also, circ_0000376 has been shown to play an active role in the malignant progression of gastric cancer and breast cancer [21, 22]. Here, we investigated the role of circ_0000376 in OS. The present results suggested that circ_0000376 was overexpressed in OS, and that its knockdown restrained OS cell proliferation, invasion and glycolysis and accelerated apoptosis. Animal experiments further showed that circ_0000376 knockdown reduced OS tumorigenesis in vivo. These results provide new evidence that circ_0000376 is a potential therapeutic target for OS. We conclude that circ_0000376 promotes OS malignant progression, which is consistent with previous reports [17]. MiRNAs and siRNAs have been confirmed to play vital roles in human diseases [23–27]. According to reported studies, circ_0000376 might be involved in regulating OS development through sponging miR-432-5p [17]. Here, we explored a new molecular mechanism of circ_0000376 and confirmed that circ_0000376 sponges miR-577. In many tumors, such as breast cancer [28] and glioblastoma [29], miR-577 plays a suppressive role in the malignant phenotype. MiR-577 suppressed the proliferation and metastasis of papillary thyroid carcinoma cells [30] and could inhibit cervical cancer cell growth and glycolysis [31]. In previous research, miR-577 was found to be expressed at low levels in OS and to reduce OS cell proliferation and migration [32]. Consistent with these reports, we also found in this study that miR-577 inhibited OS progression. In functional experiments, miR-577 suppressed OS cell growth, invasion and glycolysis. Circ_0000376 negatively regulated the miR-577 level, and the miR-577 inhibitor reversed the effects of si-circ_0000376 on OS cell functions. These results provided evidence that circ_0000376 targets miR-577 to regulate OS progression. Glycolysis is one of the prominent features of malignant tumors and is the main source of energy during tumor growth [33, 34]. HK2, a member of the HK family, is a key rate-limiting enzyme in the glycolysis pathway, mainly responsible for catalyzing glucose phosphorylation [35]. LDHA is also a key enzyme in the glycolysis pathway that converts pyruvate to lactic acid [36]. Many studies have confirmed that increased expression of HK2 and LDHA promotes glycolysis in tumor cells, thus accelerating the malignant phenotype of tumors such as hepatocellular carcinoma [37] and bladder cancer [38]. Research has suggested that HK2 is overexpressed in OS and that its overexpression promotes OS cell proliferation and invasion [39, 40]. Besides, LDHA has been shown to be upregulated in OS, where it enhances cell growth and metastasis to promote OS progression [41, 42]. Here, we showed that miR-577 targets HK2 and LDHA. Overexpression of HK2 or LDHA reversed the miR-577-mediated inhibition of OS cell growth, invasion and glycolysis, confirming that miR-577 indeed suppresses OS development by targeting HK2 and LDHA. Importantly, circ_0000376 positively regulated HK2 and LDHA expression, completing the circ_0000376/miR-577/HK2/LDHA regulatory axis.
In summary, we provided strong evidence that circ_0000376 plays a key role in OS development, promoting OS growth, invasion and glycolysis through the miR-577/HK2/LDHA pathway (Fig. 9). Inhibition of circ_0000376 might be an effective treatment approach for OS, further supporting circ_0000376 as a potential therapeutic target for OS.
Background Many studies have confirmed that circular RNAs (circRNAs) mediate the malignant progression of various tumors, including osteosarcoma (OS). Our study aims to uncover novel molecular mechanisms by which circ_0000376 regulates OS progression. Methods The expression of circ_0000376, microRNA (miR)-577, hexokinase 2 (HK2) and lactate dehydrogenase-A (LDHA) was determined by quantitative real-time PCR. OS cell proliferation, apoptosis and invasion were measured using cell counting kit 8 assay, colony formation assay, EdU assay, flow cytometry and transwell assay. Besides, cell glycolysis was assessed by testing glucose consumption, lactate production, and ATP/ADP ratios. Protein expression was examined by western blot analysis. The interaction between miR-577 and circ_0000376 or HK2/LDHA was verified by dual-luciferase reporter assay. The role of circ_0000376 in OS tumor growth was explored by constructing mouse xenograft models. Results Circ_0000376 was found to be upregulated in OS tissues and cells. Functional experiments revealed that circ_0000376 interference hindered OS cell growth, invasion and glycolysis. Circ_0000376 sponged miR-577 to reduce its expression. In rescue experiments, the miR-577 inhibitor abolished the effects of circ_0000376 knockdown on OS cell functions. MiR-577 could target HK2 and LDHA in OS cells. MiR-577 suppressed OS cell growth, invasion and glycolysis, and these effects were reversed by HK2 and LDHA overexpression. Also, HK2 and LDHA expression could be regulated by circ_0000376. In vivo experiments showed that circ_0000376 knockdown inhibited OS tumorigenesis. Conclusion Circ_0000376 contributes to OS growth, invasion and glycolysis through regulation of the miR-577/HK2/LDHA axis, providing a potential target for OS treatment. Graphical Abstract Supplementary Information The online version contains supplementary material available at 10.1186/s13018-023-04520-y. Highlights Circ_0000376 knockdown inhibits OS development and tumor growth. Circ_0000376 sponges miR-577. MiR-577 targets HK2 and LDHA. Supplementary Information The online version contains supplementary material available at 10.1186/s13018-023-04520-y. Keywords
Supplementary Information Below is the link to the electronic supplementary material.
Acknowledgements Not applicable. Authors’ contribution All authors made substantial contribution to conception and design, acquisition of the data, or analysis and interpretation of the data; take part in drafting the article or revising it critically for important intellectual content; gave final approval of the revision to be published; and agree to be accountable for all aspect of the work. Funding No funding was received. Availability of data and materials The analyzed data sets generated during the present study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate The present study was approved by the ethical review committee of The Third Hospital of Mianyang. Written informed consent was obtained from all enrolled patients. Consent for publication Patients agree to participate in this work Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-15 23:43:48
J Orthop Surg Res. 2024 Jan 13; 19:67
oa_package/d4/55/PMC10788008.tar.gz
PMC10788009
38218909
Background Primary liver cancer stands as a pervasive and lethal malignancy worldwide, posing grave threats to human life and health [1, 2]. Hepatocellular carcinoma (HCC) accounts for approximately 75–85% of primary liver cancers [3]. Currently, early surgical resection is still considered the first-line treatment to decrease mortality in patients with HCC [4, 5]. With continuous medical advances, new therapeutic options, such as interventional therapy, targeted therapy, and immunotherapy, have been proposed [5, 6]. However, the prognoses of HCC patients remain unfavorable, with a persistently poor 5-year survival rate [4]. The main factors leading to the poor prognosis are the insidious onset and the high heterogeneity of the tumors, making it difficult to find a therapeutic target for HCC. Additionally, the infiltrative and disseminated nature of HCC tumors makes it practically impossible to completely remove the tumor by surgery, and rapid drug resistance, together with drug side effects, also limits treatment efficacy [2, 7]. Therefore, an in-depth exploration and understanding of the biological processes involved in the occurrence and progression of HCC is essential for the improvement of clinical diagnosis and treatment in patients with HCC. Recent investigations have shed light on a distinctive form of programmed cell death known as disulfidptosis, which is triggered by the accumulation of reactive oxygen species and relentless lipid peroxidation induced by disulfide-dependent mechanisms [8, 9]. This disulfidptosis process leads to disulfide stress and ultimately culminates in cell death. Moreover, accumulating evidence shows that disulfidptosis is associated with the progression and prognosis of cancer [10]. For instance, Liu et al. demonstrated that susceptibility of the actin cytoskeleton to disulfide stress leads to disulfidptosis, proposing a therapeutic avenue targeting disulfidptosis for cancer treatment [8, 10]. Chen et al. constructed a disulfidptosis-related lncRNA signature for predicting the prognosis and immunotherapy response of glioma [11]. However, novel biomarkers linked to disulfidptosis for HCC prognosis and therapy remain elusive. Thus, we are dedicated to pinpointing new biomarkers to advance targeted therapies for HCC patients through this innovative mode of cell death. Long non-coding RNAs (lncRNAs) are non-coding RNAs of more than 200 nucleotides [12]. Recent studies suggest that lncRNAs are related to multiple biological processes in HCC, including cell proliferation, angiogenesis, and invasion, and thus are emerging as new targets for the diagnosis, treatment, and prognosis of HCC [13–15]. Additionally, the construction of lncRNA signatures has proven valuable in predicting the prognosis of HCC patients, offering novel clinical insights for guiding targeted treatment approaches [14]. For example, Xu et al. demonstrated that a ferroptosis-related nine-lncRNA signature can effectively predict prognosis and immune response in HCC [15]. However, the involvement of lncRNAs in the disulfidptosis process of HCC remains obscure. The potential of disulfidptosis-related lncRNA (DRL) signatures as prognostic biomarkers for HCC patients has yet to be systematically evaluated. In this study, we established a novel DRL signature designed to predict the overall survival (OS) of HCC patients.
Subsequently, we delved into the immune microenvironment of HCC, examined the participation of tumorigenesis pathways, and identified potential drugs for HCC treatment based on the prognostic signature. Furthermore, our findings underscored the functional relevance of TMCC1-AS1 in HCC progression, revealing that its inhibition resulted in suppressed cell proliferation, migration, and invasion. Collectively, this study enhances our comprehension of HCC prognosis and lays the groundwork for developing individualized therapeutic strategies.
Methods Data acquisition and determination of prognostic DRLs The RNA sequencing transcriptome data and clinical information of patients with HCC were retrieved from The Cancer Genome Atlas (TCGA) dataset ( https://portal.gdc.cancer.gov/ ). To obviate statistical bias in our study, individuals lacking complete clinical information were excluded. Ultimately, 374 patients with HCC and 50 healthy individuals were included in subsequent analyses (last accessed: 6 May 2023). Ten disulfidptosis-related genes (GYS1, LRPPRC, NCKAP1, NDUFA11, NDUFS1, NUBPL, OXSM, RPN1, SLC3A2, and SLC7A11) were collected based on previously published studies [ 8 – 11 , 16 ]. We performed Pearson correlation analysis with a threshold of |Pearson’s R| > 0.4 and p < 0.001 to assess the relationship between disulfidptosis-related genes and lncRNAs. Subsequently, univariate Cox regression analysis was performed to evaluate the prognostic significance of the DRLs ( p < 0.001). Construction and validation of the DRL prognostic signature The entire TCGA set was randomly divided into training and testing sets. The training set was used to establish the DRL signature, and the testing set along with the entire TCGA set was employed to validate the reliability of the signature. Subsequently, the R package “glmnet” was used to fit the Least Absolute Shrinkage and Selection Operator (LASSO) Cox regression, incorporating a penalty parameter determined through 10-fold cross-validation and a significance threshold of 0.05. The computation formula for the risk score is expressed as follows: Risk score = Σ [Exp (lncRNA) × coef (lncRNA)]. Herein, Exp (lncRNA) signifies the expression levels of the included lncRNAs, while coef (lncRNA) denotes their respective regression coefficients. Based on the risk scores (with the median risk score used as a cutoff), all the HCC samples were separated into the low- and high-risk groups. The prognosis of patients with HCC was assessed by Kaplan-Meier (K-M) curves and receiver operating characteristic (ROC) curves. Independent prognostic analysis and establishment of a nomogram Univariate and multivariate ( p < 0.05) Cox regression analyses were conducted to confirm whether the prognostic signature can be used as a clinical prognostic predictor independent of other clinicopathological characteristics (age, gender, grade, and stage) in the patients with HCC using the R package “survival.” Additionally, a nomogram was established to predict the survival of patients with HCC via the R package “survival” and “regplot.” The accuracy of the nomogram was estimated using the concordance index (C-index) and calibration curves. PCA and functional enrichment analysis Principal component analysis (PCA) was performed using the R package “scatterplot3d” to reduce the dimensionality, assess group separation, and visualize the high-dimensional data of the entire gene expression profiles, disulfidptosis-related genes (DRGs), DRLs, and risk model. The differentially expressed genes (DEGs) between the high- and low-risk groups were identified (|log2 fold-change (FC)| > 1 and adjusted p < 0.05). 
Gene Ontology (GO) functional analyses, covering cellular component (CC), molecular function (MF), and biological process (BP) terms, and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses were performed on the DEGs using the R packages “clusterProfiler,” “org.Hs.eg.db,” and “enrichplot.” Immune-related functional analysis and tumor mutation burden (TMB) analysis The immune infiltration statuses were analyzed via the tools XCELL, TIMER, QUANTISEQ, MCPCOUNTER, EPIC, CIBERSORT-ABS, and CIBERSORT according to the profile of infiltration estimation for all TCGA tumors [ 17 ]. The differences in immune-related functions, infiltrating immune cells, and immune checkpoints between the low- and high-risk groups were analyzed using the R packages “ggpubr,” “reshape2,” and “ggplot2.” Additionally, we utilized the “maftools” package to examine and integrate the TCGA data and analyzed the difference in TMB between high- and low-risk groups. TIDE analysis and drug efficacy evaluation for HCC treatment We utilized the tumor immunity dysfunction and exclusion (TIDE) algorithm to assess the differences in immunotherapy response between the low-risk and high-risk groups ( http://tide.dfci.harvard.edu/ ) [ 18 ]. Furthermore, the half-maximal inhibitory concentration (IC50) was used to predict the sensitivity of patients with HCC to chemotherapeutic and targeted therapeutic agents. Therapeutic drugs were screened and drug sensitivity was assessed using the R packages “pRRophetic,” “limma,” “ggpubr,” and “ggplot2,” with pFilter = 0.0001. Tumor sample collection A total of eight HCC tissue specimens and eight corresponding normal liver samples were obtained from individuals undergoing surgical resection during the period spanning November 2022 to April 2023 at the First Affiliated Hospital of Zhengzhou University, situated in Henan, China. Following the surgical excision of tissue, the samples were promptly subjected to freezing in liquid nitrogen. The study garnered approval from the Ethics Committee of the First Affiliated Hospital of Zhengzhou University, aligning with the principles set forth in the Declaration of Helsinki. Cell culture and reverse transcription quantitative PCR (RT-qPCR) The hepatocellular carcinoma cell lines (HEP3B and HEPG2) and a normal liver control cell line (NC) were procured from the National Collection of Authenticated Cell Cultures (Shanghai, China). HEP3B and HEPG2 cells underwent cultivation in RPMI-1640 medium supplemented with 2 mM l-glutamine and 10% Fetal Bovine Serum (FBS) within a humidified incubator set at 37 °C with 5% CO2. Total cellular RNA was extracted using TRIzol reagent (Invitrogen, Carlsbad, CA, United States). Data normalization was achieved through glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA expression, and calculations were executed using the 2^(-ΔΔCT) method. The primer sequences for RT-qPCR analysis are provided in Supplementary Table S1 . Cell transfection Two siRNAs targeting TMCC1-AS1 (si-TMCC1-AS1) and a negative control (si-NC) were synthesized by GenePharma (Shanghai, China). Transfection of HEP3B and HEPG2 cells was carried out using si-TMCC1-AS1#1, si-TMCC1-AS1#2, and si-NC with Lipofectamine® 3000 (Invitrogen, USA). After 24 h, the transfection efficiency was evaluated using RT-qPCR. The sequences of the siRNAs can be found in Supplementary Table S2 . Cell counting kit-8 (CCK-8) assay The HCC cells were seeded into 96-well plates at a density of 3 × 10³ cells per well. 
Subsequently, 10 μL of CCK-8 solution (Dojindo, Tokyo, Japan) was added to each well at 0, 24, 48, and 72 h, followed by a 2-hour incubation period. The absorbance of the cells at 450 nm was then measured using a SpectraMax i3x instrument (Molecular Devices, USA). After 72 h, the proliferation curve of the cells was constructed based on the absorbance values. Transwell migration and invasion assays The migratory and invasive capacities of HCC cells were assessed using 24-well Transwell chambers with an 8 μm pore size (Corning, NY, USA). For the migration assay, 3 × 10⁴ HCC cells were placed in the top compartment containing 250 μL of serum-free medium, while the bottom compartment received 500 μL of medium with 10% FBS. After 48 h of culture, cotton swabs were employed to eliminate cells in the upper compartment. The cells traversing the filter were fixed with 95% ethanol, stained with a 0.5% crystal violet solution, and subsequently imaged and counted using a microscope (Olympus, Tokyo, Japan). In the invasion assay, prior to cell inoculation, the filter was coated with a layer of Matrigel (BD Biosciences, San Jose, CA, USA). The remaining procedures were analogous to those of the migration assay. Wound healing assay Wound healing assays were executed following previously delineated protocols [ 19 ]. Briefly, cells were seeded in 6-well plates and incubated at 37 °C. Once the cells were completely attached, we scraped the middle of the plate to create a wound and replaced the medium with serum-free medium. After 48 h, the extent of wound closure was measured. Statistical analysis The R software (version 4.1.3) was used for all statistical analyses and graph visualization. The classification variables in the training and testing sets were compared using the chi-square test. Student’s t-test or one-way ANOVA was used to determine the differences between the high- and low-risk groups. The links between clinicopathological factors, risk score, immune checkpoint inhibitors, and immune infiltration levels were assessed using the Pearson correlation test. P < 0.05 was considered statistically significant.
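To make the signature-construction steps described above concrete, the following is a minimal R sketch of the workflow (correlation screen, univariate Cox filter, LASSO Cox, risk scoring, median split). It is not the authors' code: the object names lnc_expr, drg_expr, and clin are hypothetical placeholders for TCGA-derived expression matrices (patients in rows) and a clinical table with survival time and status.

```r
library(survival)
library(glmnet)

## 1. Correlation screen: keep lncRNAs correlated with at least one disulfidptosis gene
keep <- sapply(colnames(lnc_expr), function(ln) {
  any(sapply(colnames(drg_expr), function(g) {
    ct <- cor.test(lnc_expr[, ln], drg_expr[, g], method = "pearson")
    abs(ct$estimate) > 0.4 && ct$p.value < 0.001
  }))
})
drl_expr <- lnc_expr[, keep, drop = FALSE]

## 2. Univariate Cox filter (p < 0.001) on each candidate DRL
uni_p <- apply(drl_expr, 2, function(x) {
  summary(coxph(Surv(clin$time, clin$status) ~ x))$coefficients[, "Pr(>|z|)"]
})
drl_expr <- drl_expr[, uni_p < 0.001, drop = FALSE]

## 3. LASSO Cox with 10-fold cross-validation to shrink the candidate set
set.seed(1)
cvfit <- cv.glmnet(as.matrix(drl_expr), Surv(clin$time, clin$status),
                   family = "cox", alpha = 1, nfolds = 10)
lasso_coef <- coef(cvfit, s = "lambda.min")
sel <- rownames(lasso_coef)[as.numeric(as.matrix(lasso_coef)) != 0]

## 4. Multivariate Cox on the retained lncRNAs; risk score = sum(expression x coefficient)
mdat <- as.data.frame(drl_expr[, sel, drop = FALSE])
names(mdat) <- make.names(names(mdat))          # guard against "-" in lncRNA names
mfit <- coxph(Surv(clin$time, clin$status) ~ ., data = mdat)
risk <- as.matrix(mdat) %*% coef(mfit)

## 5. Median split and Kaplan-Meier comparison of the two risk groups
group <- ifelse(risk > median(risk), "high", "low")
survdiff(Surv(clin$time, clin$status) ~ group)
```

The LASSO penalty chosen by 10-fold cross-validation is what keeps the final signature sparse; lambda.min is used here, though lambda.1se is an equally common choice.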
Results Identification of DRLs in HCC patients A comprehensive flow diagram is depicted in Fig. 1 . Initially, we gathered a total of 16,876 lncRNAs from the TCGA database’s HCC project and acquired 10 DRGs from previously published studies. Next, 945 DRLs were found by performing Pearson correlation analysis (|Pearson R| > 0.4 and p < 0.001) between lncRNAs and DRGs. Following the criteria of |log2 fold change (FC)| > 1 and p < 0.05, we obtained 750 differentially expressed DRLs. A heatmap was established to visualize the differential expression of DRLs between normal and tumor samples (Fig. S1 A). Construction and validation of the DRLs prognostic signature Upon univariate Cox analysis, we identified 11 DRLs among the 750 differentially expressed DRLs that were associated with OS. The forest plot (Fig. 2 A), heatmap (Fig. 2 B), and Sankey diagram (Fig. S1 B) illustrated that all 11 DRLs were upregulated and considered poor prognostic factors for patients with HCC ( p < 0.001, hazard ratio, HR > 1). In the subsequent LASSO regression analysis, aimed at reducing the risk of overfitting (Fig. 2 C and D), 9 DRLs were found to be associated with OS. Further multivariate Cox regression narrowed this count to 3 DRLs (POLH-AS1, TMCC1-AS1, AC124798.1), which were used to construct the OS prognostic signature. The risk score for each HCC patient was calculated using the following formula: Risk score = (0.413458729998944 × POLH-AS1 expression) + (0.818274047598138 × TMCC1-AS1 expression) + (0.248268992114983 × AC124798.1 expression). The correlation heatmap depicted the relationship between DRGs and the three selected DRLs (Fig. S1 C). Patients were stratified into low- and high-risk groups based on the median value of risk scores. As depicted in Fig. S2 A-C, the low-risk group exhibited significantly extended survival times compared to the high-risk group across the training set, testing set, and the entire set ( P < 0.01). Furthermore, the distribution plot of risk score and survival status revealed a positive correlation: higher risk scores corresponded to a higher number of deaths in HCC patients (Fig. S2 D-I). The heatmap highlighted elevated expression levels of three DRLs in the high-risk group relative to the low-risk group (Fig. S2 J-L). Overall, these findings indicated that patients in the high-risk group experienced worse prognoses. Independent prognostic analysis and establishment of a nomogram To assess the independent prognostic utility of the DRLs signature, we conducted both univariate and multivariate Cox regression analyses. As shown in Fig. 3 A, univariate Cox regression analysis demonstrated that the prognostic signature of the three DRLs could predict OS outcomes in HCC patients (HR = 1.324; 95% CI, 1.211–1.448; p = 0.001). Multivariate Cox regression analysis further affirmed that the prognostic signature of the three DRLs remained an independent prognostic factor for HCC (HR = 1.277, 95% CI, 1.155–1.412, p < 0.001) after adjusting for gender, age, grade, and stage (Fig. 3 B). Subsequently, a nomogram was established employing these independent prognostic factors (stage and risk score) to predict 1-, 3-, and 5-year survival rates for HCC patients (Fig. 3 C). Calibration curves were developed to validate the nomogram’s effectiveness in predicting survival rates at 1, 3, and 5 years, demonstrating optimal agreement between nomogram predictions and actual survival outcomes (Fig. 3 D). 
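As an illustration of how the published risk formula is applied, the short R snippet below plugs hypothetical expression values into the three coefficients reported above; only the coefficients come from the Results, and the expression values are invented for the example.

```r
coefs <- c("POLH-AS1"   = 0.413458729998944,
           "TMCC1-AS1"  = 0.818274047598138,
           "AC124798.1" = 0.248268992114983)

risk_score <- function(expr) sum(expr[names(coefs)] * coefs)

## Hypothetical expression values for two patients (e.g., log-transformed FPKM)
p1 <- c("POLH-AS1" = 0.8, "TMCC1-AS1" = 2.1, "AC124798.1" = 1.3)
p2 <- c("POLH-AS1" = 0.2, "TMCC1-AS1" = 0.4, "AC124798.1" = 0.5)
risk_score(p1)  # ~2.37
risk_score(p2)  # ~0.53

## In the study, patients above the cohort's median risk score form the high-risk
## group and those at or below it the low-risk group.
```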
Correlation analysis between DRLs signature and clinical characteristics To investigate the correlation between the prognostic signature of DRLs and the clinical characteristics of patients with HCC, we examined the relationship between the survival probability and the risk score in different subgroups based on age, grade, and stage. As shown in Fig. 4 , the results revealed that patients in the low-risk group had a much higher OS rate than patients in the high-risk group. Furthermore, the concordance index (C-index) of the risk score surpassed that of clinical characteristics, including age, gender, grade, and stage (Fig. S3 A). Additionally, the high-risk group showed a significantly shorter progression-free survival compared to the low-risk group (Fig. S3 B). Moreover, the AUC value for the risk grade was 0.754, markedly outperforming the predictive accuracy of individual clinical characteristics, such as age (0.531), gender (0.509), grade (0.499), and stage (0.671) (Fig. S3 C). The AUCs of the novel DRL signature for 1-, 3-, and 5-year survival were 0.754, 0.699, and 0.671, respectively (Fig. S3 D). Overall, these findings affirm the reliability of the prognostic signature based on the three DRLs for patients with HCC. PCA and functional enrichment analysis To discern differences between the low- and high-risk groups, we conducted PCA using four expression profiles (entire gene expression profiles, DRGs, DRLs, and the three DRLs risk signature). The results illustrated that the three DRLs exhibited robust discriminatory ability, effectively distinguishing between the low- and high-risk groups (Fig. S4 A-D). Then, we identified 2397 DEGs between the low- and high-risk groups in the TCGA set, comprising 2300 upregulated genes and 97 downregulated genes (|log2 fold change (FC)| > 1 and p < 0.05) (Fig. S4 E). Functional enrichment analysis was performed to unravel the biological functions of these DEGs. GO analysis revealed significant enrichment for terms such as organelle fission, chromosomal region, and tubulin binding (Fig. 5 A). KEGG analysis unveiled enrichment in pathways associated with carcinogenesis, including the PI3K-Akt signaling pathway, cytokine − cytokine receptor interaction, and the cell cycle (Fig. 5 B). These results strongly suggest the involvement of DRLs in the development and progression of HCC. Evaluation of the immune microenvironment using the DRLs signature Immune infiltration stands as a pivotal determinant in countering HCC progression, wielding significant influence over the survival rates of afflicted patients [ 3 , 15 ]. The heatmap depicting immune responses unveiled substantial correlations between DRLs-scores and various immune cells, encompassing B cells, T cells CD4+, macrophages, and NK cells (Fig. 6 A). Employing the ssGSEA method, we delved into the association between DRLs-scores and immune cell subpopulations, unraveling distinct patterns of immune cell infiltrations in the high-risk group characterized by elevated abundance of activated dendritic cells (aDCs), immature dendritic cells (iDCs), and regulatory T cells (Tregs), juxtaposed with diminished levels of B cells, neutrophils, and NK cells (Fig. 6 B). Functional disparities in immune cell subpopulations, including cytolytic activity, major histocompatibility complex (MHC) class I, type I interferon (IFN) response, and type II IFN response, were pronounced between the high- and low-risk groups (Fig. 6 C). 
Moreover, immune checkpoint analysis unveiled heightened activation of numerous checkpoints in the high-risk group (Fig. 6 D). Collectively, these findings underscored the predictive capability of the DRLs signature regarding the immune microenvironment in HCC patients, holding potential utility in steering individualized immunotherapeutic strategies. TMB, TIDE and drug susceptibility analysis Accumulating evidence suggests a linkage between TMB status and the clinical responsiveness to immunotherapy in HCC [ 13 , 20 ]. Notably, our findings demonstrated a heightened frequency of mutations in the high-risk group compared to the low-risk group, particularly among the top 15 genes exhibiting the highest mutation rates (Fig. 7 A-B). Subsequent categorization of patients into high and low TMB groups based on TMB scores unveiled a superior survival rate in the low TMB group (Fig. 7 C). An assessment of the synergistic impact of TMB and DRLs-score groups in prognostic stratification revealed that the high-TMB and high-risk subgroup exhibited the poorest prognosis, while the low-TMB and low-risk subgroup displayed a more favorable prognosis. Importantly, even in instances of high or low TMB, the high-risk subgroup consistently manifested a worse prognosis compared to the low-risk counterpart (Fig. 7 D). Moreover, TIDE analysis was conducted to scrutinize the sensitivity to immunotherapy among HCC patients. Intriguingly, the low-risk group exhibited a higher TIDE score, indicative of a more favorable response to immunotherapy (Fig. 7 E). Subsequently, drug susceptibility analysis aimed to discern potential therapeutic agents for HCC treatment based on the IC50 of each drug. The outcomes underscored that patients in the low-score group demonstrated lower IC50 values for anti-cancer drugs such as sorafenib, 5-Fluorouracil, and doxorubicin (Fig. 7 F-H). This implies that individuals in the low-risk group might harbor a heightened sensitivity to these three drugs. Collectively, these results advocate for the utility of the DRLs signature as a promising predictor for treatment efficacy in the context of HCC. Identifying TMCC1-AS1 as a diagnostic and prognostic biomarker for HCC In our pursuit of a prognostic biomarker pertinent to DRLs for HCC patients, we initially scrutinized the expression levels of three DRLs (POLH-AS1, TMCC1-AS1, and AC124798.1) in HCC tissues sourced from the TCGA dataset. The findings illuminated a pronounced upregulation of these three DRLs in HCC tissues relative to normal tissues (Fig. S5 A-C). Furthermore, diminished expression levels of POLH-AS1, TMCC1-AS1, and AC124798.1 exhibited a significant association with extended overall survival (Fig. S5 D-F). Then, our exploration delved into the assessment of the Area Under the Curve (AUC) values for the three DRLs, revealing that TMCC1-AS1 displayed commendable discriminatory prowess for diagnosing patients with HCC (Fig. S5 G-I). This underscores the potential of TMCC1-AS1 as a valuable prognostic and diagnostic biomarker for individuals afflicted with HCC. Knockdown of TMCC1-AS1 prevented cell proliferation, migration, and invasion in HCC To further substantiate the functional role of TMCC1-AS1 in HCC, we initially examined its expression levels in both HCC tissues and cell lines (Fig. 8 A-B). Notably, TMCC1-AS1 exhibited heightened expression in both HCC tissues and cell lines, namely HEP3B and HEPG2. Subsequent to confirming the elevated expression, we sought to elucidate the impact of TMCC1-AS1 on HCC cell proliferation. 
Employing siRNA-mediated knockdown of TMCC1-AS1 in HEP3B and HEPG2 cells, we achieved effective silencing, as evidenced by RT-qPCR results (Fig. 8 C-D). The growth curves further underscored that the depletion of TMCC1-AS1 significantly impeded the growth of HCC cells, implicating its role in promoting cell proliferation (Fig. 8 E-F). We also investigated the biological functions of POLH-AS1 and AC124798.1, and the results were consistent with the functions of TMCC1-AS1 described above (Figs. S6 and S7). Moving beyond proliferation, our investigations extended to migration and invasion capabilities. The Transwell assay unveiled that the inhibition of TMCC1-AS1 markedly curtailed cell migration and invasion in both HEP3B and HEPG2 cells (Fig. 9 A-D). Furthermore, the wound healing assay demonstrated that the depletion of TMCC1-AS1 hampered the speed of wound closure in both cell lines (Fig. 9 E-H). In summation, these findings strongly suggest that TMCC1-AS1 plays a pivotal role in fueling hepatocellular carcinoma cell proliferation, migration, and invasion in vitro, establishing TMCC1-AS1 as a promising target for therapeutic intervention in HCC.
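Because the knockdown efficiency above is quantified by RT-qPCR with GAPDH normalization and the 2^(-ΔΔCt) method described in the Methods, a tiny R sketch of that calculation is shown here; all Ct values are hypothetical and chosen only to illustrate the arithmetic.

```r
## Hypothetical Ct values for one HCC cell line, si-NC vs. si-TMCC1-AS1
ct <- data.frame(
  group  = c("si-NC", "si-TMCC1-AS1"),
  target = c(26.0, 28.4),   # TMCC1-AS1 Ct
  ref    = c(18.0, 18.1)    # GAPDH Ct
)

dct  <- ct$target - ct$ref                  # deltaCt = Ct(target) - Ct(GAPDH)
ddct <- dct - dct[ct$group == "si-NC"]      # deltadeltaCt relative to the si-NC control
rel  <- 2^(-ddct)                           # relative expression (si-NC = 1)
data.frame(group = ct$group, relative_expression = round(rel, 2))
## si-TMCC1-AS1 comes out at roughly 0.2, i.e. about 80% knockdown in this toy example.
```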
Discussion As the predominant form of primary liver cancer, hepatocellular carcinoma (HCC) significantly jeopardizes the well-being and survival of afflicted individuals due to its elevated morbidity and mortality rates [ 6 ]. Recent years have witnessed substantial progress in HCC treatment with the advent of targeted agents like sorafenib and immune checkpoint inhibitors (ICIs) [ 21 ]. Nevertheless, the inherent heterogeneity of HCC results in variable treatment outcomes, with only a subset of patients deriving benefit from ICIs and other targeted drugs [ 22 ]. Therefore, the identification of innovative biomarkers for prognostication and predicting therapeutic responses holds paramount clinical significance for those grappling with HCC. Disulfidptosis has recently garnered extensive attention in tumorigenesis and cancer therapies [ 23 ]. It has been proposed that disulfidptosis-related biomarkers serve as robust prognostic indicators and predictors of antitumor efficacy in various cancers [ 24 ]. Additionally, several studies have highlighted the pivotal role of long non-coding RNAs (lncRNAs) in the transport and metabolism of disulfide during tumorigenesis and subsequent tumor progression [ 17 , 25 , 26 ]. Nonetheless, the precise involvement of disulfidptosis-related lncRNAs (DRLs) in HCC remains elusive, necessitating a comprehensive evaluation of their prognostic significance. In the current study, we identified 11 prognostically significant DRLs from the TCGA dataset, three of which were selected to construct the prognostic DRLs signature. Regardless of training or testing sets, the DRL signature demonstrated robust efficacy in predicting survival outcomes for HCC patients. Subsequently, we examined the relationship between survival probability and risk score across various clinical characteristics. The results revealed a significantly higher overall survival rate in the low-risk group, irrespective of gender, age, grade, or stage, substantiating the validity of the prognostic DRL signature. Furthermore, we delved into tumorigenesis pathways, the immune microenvironment of HCC, and potential drugs for HCC treatment based on the prognostic signature. Lastly, our investigation unveiled that the inhibition of TMCC1-AS1 suppressed the proliferation, migration, and invasion of hepatocellular carcinoma cells. This study provides valuable insights into the molecular mechanisms underpinning HCC progression and offers potential avenues for personalized therapeutic strategies. Immunotherapy, an advancing and effective anti-tumor treatment, strengthens the therapeutic effect by regulating the tumor immune microenvironment (TIME) [ 6 ]. Presently, TIME is acknowledged for its profound intricacy [ 14 , 27 ]. Numerous studies have confirmed that TIME is involved in the process of tumor metastasis, immune escape, and immunotherapy resistance by altering the immune response [ 27 , 28 ]. In our study, DEGs between different risk groups were enriched in some immune-related biological processes and pathways. Our results unveiled that many immune cells (including B cells, neutrophils, and NK cells) and many functions of immune cell subpopulations (such as cytolytic activity, MHC class I, type I IFN response, and type II IFN response) were significantly different between high- and low-risk groups. Additionally, immune checkpoint-related genes exhibited higher expression levels in the high-risk group compared to the low-risk group. 
This provides a foundation for discerning responsive patients for immunotherapy. In brief, these results indicated that the DRLs signature could reflect the TIME of HCC, which may contribute to personalized immunotherapy and targeted therapy for patients with HCC. TMB is currently recognized as a valuable biomarker across various cancers, believed to be linked with the efficacy of immunotherapy for HCC [ 27 , 29 , 30 ]. We observed that the proportion of gene mutations differed significantly between the two groups and that the high-risk group had a higher frequency of mutations than the low-risk group in the top 15 genes with the highest mutation rates. Specifically, it was found that patients in the high-risk group had a significantly higher frequency of TP53 mutation (35% vs. 17%). TP53 is a typical tumor suppressor, and its mutation leads to the development and progression of many types of tumors, including HCC [ 29 , 31 ]. This is consistent with our results, where the low TMB group had a higher survival rate than the high TMB group. Recent studies have elucidated that epigenetics, transport processes, regulated cell death, and the tumor microenvironment are involved in the development of drug resistance in HCC [ 32 , 33 ]. To enhance the treatment of patients with HCC, we evaluated the drug sensitivity of different anticancer drugs in the treatment of patients with HCC in different DRL-score groups. Based on IC50 values, sorafenib, 5-Fluorouracil, and doxorubicin showed better responses in the low-score group than in the high-score group. These findings indicated that the DRLs signature could be used as a potential predictor for the efficacy of medical treatment of HCC. Moreover, the occurrence of drug resistance may be reduced by regulating the DRLs; this offers new possibilities for the choice of individual therapeutic strategies. The study outcomes revealed 11 DRLs influencing the survival of HCC patients, with POLH-AS1, TMCC1-AS1, and AC124798.1 selected to compose the prognostic signature. Among them, the expression of POLH-AS1 was confirmed to be upregulated in HCC tissues based on RT-qPCR [ 27 ]. Fang et al. investigated a novel risk model with POLH-AS1 for predicting the prognosis of HCC [ 6 ]. In addition, Cui et al. identified TMCC1-AS1 as a valuable resource for novel biomarker and therapeutic target identification in HCC [ 34 ]. Furthermore, Zhu et al. constructed a prognostic signature with AC124798.1 to predict the prognosis of pancreatic adenocarcinoma [ 35 ]. However, few studies have investigated whether these three DRLs contribute to the progression of HCC. To substantiate the prognostic potential of the identified DRLs, we conducted further investigations using the TCGA dataset. Our findings indicated elevated expression of these DRLs in HCC tissues, correlating with poorer survival outcomes. Notably, TMCC1-AS1 exhibited a higher AUC compared to POLH-AS1 and AC124798.1, suggesting its potential as a more promising biomarker for HCC diagnosis and prognosis. Subsequently, we elucidated the biological roles of TMCC1-AS1 in HCC, revealing significantly lower expression in NC compared to HEP3B and HEPG2 cells. Inhibition of TMCC1-AS1 effectively impeded HCC cell growth, migration, and invasion. These results align with Zhao et al., who identified TMCC1-AS1 as a prognostic biomarker for HCC patients [ 36 ]. In summary, TMCC1-AS1 appears to play a role in promoting HCC cell growth and migration in vitro, suggesting its potential as a therapeutic target. 
Nevertheless, the study has inevitable limitations. Firstly, the sample data solely originated from TCGA databases, lacking clinical information from external cohorts. Secondly, the absence of comprehensive clinical follow-up data hinders thorough validation and assessment of the prognostic model’s clinical value. Finally, the precise mechanisms through which TMCC1-AS1 influences HCC growth, invasion, and migration remain incompletely understood, necessitating further comprehensive experimental investigations.
Conclusions Conclusively, the DRLs signature demonstrated promising prognostic value, offering insights into the immune microenvironment and potential therapeutic avenues for HCC. Particularly, TMCC1-AS1 showed potential as a novel prognostic biomarker and therapeutic target for HCC.
Background Hepatocellular carcinoma (HCC) stands as a prevalent malignancy globally, characterized by significant morbidity and mortality. Despite continuous advancements in the treatment of HCC, the prognosis of patients with this cancer remains unsatisfactory. This study aims to construct a disulfidptosis-related long noncoding RNA (lncRNA) signature to probe the prognosis and personalized treatment of patients with HCC. Methods The data of patients with HCC were extracted from The Cancer Genome Atlas (TCGA) databases. Univariate, multivariate, and least absolute selection operator Cox regression analyses were performed to build a disulfidptosis-related lncRNAs (DRLs) signature. Kaplan–Meier plots were used to evaluate the prognosis of the patients with HCC. Functional enrichment analysis was used to identify key DRLs-associated signaling pathways. Spearman’s rank correlation was used to elucidate the association between the DRLs signature and immune microenvironment. The function of TMCC1-AS1 in HCC was validated in two HCC cell lines (HEP3B and HEPG2). Results We identified 11 prognostic DRLs from the TCGA dataset, three of which were selected to construct the prognostic signature of DRLs. We found that the survival time of low-risk patients was considerably longer than that of high-risk patients. We further observed that the composition and the function of immune cell subpopulations were significantly different between high- and low-risk groups. Additionally, we identified that sorafenib, 5-Fluorouracil, and doxorubicin displayed better responses in the low-score group than those in the high-score group, based on IC50 values. Finally, we confirmed that inhibition of TMCC1-AS1 impeded the proliferation, migration, and invasion of hepatocellular carcinoma cells. Conclusions The DRLs signature has been shown to be a reliable prognostic and treatment-response indicator in HCC patients. TMCC1-AS1 showed potential as a novel prognostic biomarker and therapeutic target for HCC. Supplementary Information The online version contains supplementary material available at 10.1186/s12935-023-03208-x. Keywords
Abbreviations AUC: Area under the receiver operating characteristic; DEGs: Differentially expressed genes; DRGs: Disulfidptosis-related genes; DRLs: Disulfidptosis-related lncRNAs; GO: Gene Ontology; GSEA: Gene set enrichment analysis; KEGG: Kyoto Encyclopedia of Genes and Genomes; LASSO: Least absolute selection operator; OS: Overall survival; PCA: Principal component analysis; ROC: Receiver operating characteristic; TCGA: The Cancer Genome Atlas; TIME: Tumor immune microenvironment; TMB: Tumor mutation burden. Acknowledgements Not applicable. Author contributions LXX, SC, QQL and DC designed the project, analyzed the data and drafted manuscript. XYC, YX and YJZ downloaded and collated the data. JL, ZXG and JYX analyzed the data. All authors reviewed the manuscript. Funding This work was supported by the Henan Medical Science and Technology Joint Building Program (no. LHGJ20190255, LHGJ20190262, LHGJ20230239). Data availability Data from this study can be found in the TCGA databases ( http://cancergenome.nih.gov ). Declarations Ethics approval and consent to participate The First Affiliated Hospital of Zhengzhou University’s Ethics Committee approved this study in accordance with the Declaration of Helsinki. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:48
Cancer Cell Int. 2024 Jan 13; 24:30
oa_package/48/5b/PMC10788009.tar.gz
PMC10788010
38218775
Introduction Ovarian cancer (OC) is the seventh most common cancer in women and the eighth leading cause of cancer-related death worldwide [ 1 ]. At the time of initial diagnosis, over 70% of patients present with advanced disease due to the presence of atypical early symptoms [ 2 ]. Currently, for patients with a new diagnosis, the standard first-line treatment involves cytoreductive surgery combined with platinum-based systematic chemotherapy, with or without the addition of bevacizumab. However, at first relapse, approximately 25% of patients develop platinum-resistant ovarian cancer (PROC), and nearly all patients will experience relapse and eventually develop platinum resistance [ 3 ]. PROC is associated with a poor prognosis and an overall survival (OS) of less than 12 months, presenting a significant therapeutic challenge [ 4 ]. In the platinum-resistant setting, monotherapy with docetaxel, paclitaxel, topotecan or pegylated liposomal doxorubicin (PLD) remains the primary therapeutic option, but it results in a remarkably short survival, highlighting the urgent need for better treatment options. Furthermore, several trials have demonstrated that combining chemotherapy agents leads to increased adverse events without improving clinical benefit for PROC [ 5 – 7 ]. Tumor angiogenesis has been established as a hallmark of tumor development, growth, and metastasis. This complex process involves multiple signaling pathways. Vascular endothelial growth factor (VEGF), an important driver of angiogenesis in solid tumors, binds to VEGF receptor-1 or -2 (VEGFR-1/VEGFR-2) on target cells [ 8 ], thereby activating intracellular tyrosine kinase signaling. VEGF promotes the recruitment of circulating endothelial progenitor cells from the bone marrow and facilitates endothelial cell survival, differentiation, and proliferation during angiogenesis. Angiogenesis also plays a crucial role in the pathogenesis of OC by promoting tumor proliferation and metastasis [ 9 , 10 ]. The presence of extensive neovascularization is closely associated with a poor prognosis in OC. Anti-VEGF therapy has emerged as a promising therapeutic approach with potential clinical benefits for patients with OC, including those with platinum-resistant disease [ 11 – 14 ]. Recently, various anti-VEGF therapies, such as anti-VEGF monoclonal antibodies (e.g., bevacizumab) and VEGF-R tyrosine kinase inhibitors (e.g., sorafenib, pazopanib, apatinib, cediranib, anlotinib), have been evaluated in OC patients [ 15 ]. The AURELIA trial, a randomized phase III trial, demonstrated a significant improvement in progression-free survival (PFS) in PROC patients when treated with a combination of bevacizumab and chemotherapy compared to monochemotherapy (hazard ratio (HR) = 0.48; 95% CI: 0.38–0.60). The median PFS was 6.7 months with the combined regimen versus 3.4 months with monochemotherapy. The objective response rate (ORR) also increased by 15.5% compared to chemotherapy alone. However, there was no statistically significant improvement in OS when bevacizumab was combined with chemotherapy (HR = 0.85; 95% CI: 0.66–1.08, p < 0.17) [ 16 ]. Bevacizumab has been approved by the Food and Drug Administration (FDA) for PROC. Other anti-VEGF agents, such as apatinib, have also shown preliminary evidence of efficacy when combined with chemotherapy for PROC. Wang et al. reported that treatment with apatinib plus PLD resulted in a clinically meaningful improvement in PFS (HR = 0.44; 95% CI: 0.28–0.71, p < 0.001). 
The median PFS was 5.8 months for apatinib plus PLD versus 3.3 months for PLD alone. The median OS was 23.0 months versus 14.4 months for apatinib plus PLD and PLD alone, respectively (HR = 0.66; 95% CI: 0.40–1.09) [ 17 ]. Previous meta-analyses have demonstrated that combination therapy offers improved survival benefits compared to chemotherapy alone in ovarian cancer patients [ 18 – 21 ]. However, there is a lack of meta-analyses focusing specifically on platinum-resistant patients. Given the clinical uncertainty and inconsistent efficacy related to VEGF/VEGFR inhibitors in PROC, a systematic review and meta-analysis was conducted to overcome the limitations of individual studies and provide a more accurate estimation of the efficacy and safety of VEGF/VEGFR inhibitors in PROC.
Materials and methods This systematic review and meta-analysis adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The study was registered with the International Prospective Register of Systematic Reviews (PROSPERO CRD42023402050). Data source and search strategy Eligible studies were identified by searching databases including Cochrane Library, PubMed, Embase, and Web of Science. The search covered the period from inception to December 2022. The main search terms associated with therapy included (anti-angiogenic OR targeted therapy OR molecular targeted therapy OR bevacizumab OR nintedanib OR pazopanib OR cediranib OR sorafenib OR apatinib OR anlotinib OR lenvatinib OR ramucirumab OR VEGF OR VEGFR OR vascular endothelial growth factor). The terms related to the disease included ovarian cancer OR ovarian neoplasm. Subsequently, the reference lists of all relevant articles were also screened. Study selection The following criteria were used to screen potential trials: (1) prospective phase II and phase III randomized controlled trials (RCTs); (2) patients with OC, peritoneal cancer (PC), or fallopian tube cancers (FTC) that had progressed during platinum therapy (platinum-refractory) or within 6 months of platinum-containing therapy (platinum-resistant); (3) comparison of therapy combining VEGF/VEGFR inhibitors with other drugs (chemotherapy or poly (ADP-ribose) polymerase (PARP) inhibitors) versus chemotherapy alone; (4) the study’s clinical outcomes included at least one of OS, PFS, ORR, and treatment-related adverse events (TRAEs); (5) only studies published in English were included. The following were excluded: reviews, fundamental studies, editorials, animal studies, comments, and case reports. Data extraction and quality assessment For each eligible study, we extracted the following information: (1) general study information (study name, publication year, first author, study design, trial phase, sample size); (2) basic patient information (region, age, Eastern Cooperative Oncology Group (ECOG) performance status, primary tumor site); (3) control and intervention group. The main outcomes assessed were OS, PFS, ORR, and TRAEs. The risk of bias and methodological quality assessment was performed using the Cochrane Collaboration’s tool in RevMan5.4. Statistical analysis Statistical analysis was conducted using Stata 14.0 and RevMan5.4. Pooled odds ratios (ORs) with 95% confidence intervals (CIs) were calculated for ORR and TRAEs, while pooled HRs with 95% CIs were calculated for OS and PFS. With I² > 50% and p < 0.05 indicating statistically significant heterogeneity [ 22 ], a random-effects model was used to calculate the HR and OR; otherwise, a fixed-effects model was employed. Publication bias assessment, sensitivity analysis, and subgroup analysis were conducted to further explore the source of heterogeneity. Begg’s test was performed to evaluate publication bias, with p > 0.05 taken to indicate the absence of publication bias [ 23 ]. The symmetry of the funnel plot was also visually observed to assess publication bias. Additionally, a sensitivity analysis was carried out by excluding each study to observe any changes in the pooled HR and OR. Subgroup analysis took into account factors such as region, combination therapeutic agents, trial phase, ECOG performance status, publication year, and primary tumor site.
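The pooling itself was carried out in Stata 14.0 and RevMan 5.4; purely as an illustration of the same logic, the R sketch below pools hypothetical study-level log-HRs with the metafor package, applies the I²-based choice between fixed- and random-effects models, and runs Begg's rank-correlation test. None of the numbers are taken from the included trials.

```r
library(metafor)

## Hypothetical per-study hazard ratios (log scale) and standard errors
yi  <- log(c(0.48, 0.44, 0.70, 0.85, 0.60))
sei <- c(0.11, 0.24, 0.20, 0.18, 0.25)

fixed  <- rma(yi = yi, sei = sei, method = "FE")    # fixed-effects (inverse variance)
random <- rma(yi = yi, sei = sei, method = "REML")  # random-effects

## Decision rule described above: random effects if I^2 > 50% and Q-test p < 0.05
res <- if (random$I2 > 50 && random$QEp < 0.05) random else fixed
exp(c(HR = res$b, lower = res$ci.lb, upper = res$ci.ub))  # pooled HR with 95% CI

## Begg's rank correlation test for publication bias (p > 0.05 -> no evidence of bias)
ranktest(res)
```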
Results Study selection and characteristics A total of 2408 potentially relevant trials were collected through independent evaluation by two authors. After removing irrelevant and duplicate studies, the initial search yielded 1422 abstracts and articles. Finally, eight studies were included (Fig. 1 ) [ 16 , 17 , 24 – 29 ]. Table 1 summarizes the general information of the studies, therapeutic regimens, and baseline characteristics of the patients. Seven studies were prospective phase II RCTs, and one was a prospective phase III RCT. The studies were published between 2014 and 2022, and a total of 1097 patients were available for the meta-analysis, with a mean age of approximately 61 years. Risk of bias Seven studies were deemed to have a high risk of bias in blinding participants and personnel, while five studies had an unclear risk of bias in blinding outcome assessment, and one study had a high risk. The remaining studies were rated as having a low risk of bias (Figure S1 ). Meta-analysis of OS and PFS The pooled effects of HR for OS and PFS were available for all eight trials. The results demonstrated that combination therapy with VEGF/VEGFR inhibitors had a significantly better OS than chemotherapy (HR = 0.72; 95% CI: 0.62–0.84, p < 0.0001) (Fig. 2 A). Compared to chemotherapy, combination therapy with VEGF/VEGFR inhibitors resulted in a significant improvement in PFS (HR = 0.52; 95% CI: 0.45–0.59, p < 0.0001) (Fig. 2 B). Additionally, no significant heterogeneity was observed in the OS and PFS results among the included studies (I² = 0% and 22.2%, respectively). Meta-analysis of ORR All eight trials with PROC reported ORR. Interestingly, the combination therapy group exhibited a significantly higher ORR than the chemotherapy group (OR = 2.34; 95%CI: 1.27–4.32, p < 0.0001). There was a high degree of heterogeneity among different studies for ORR (I² = 69.3%, p = 0.002). Subgroup analyses were conducted to determine the source of heterogeneity. A pooled analysis of ORR in patients with PROC is presented in Fig. 3 . Subgroup analysis for OS Subgroup analyses were conducted based on stratification factors including region, combination therapeutic agents, trial phase, ECOG performance status, publication year, and primary tumor site. The results are displayed in Table 2 and Figure S2 . In the subgroup of combination therapeutic agents, a greater OS benefit was observed for combination treatment with chemotherapy (HR = 0.71; 95% CI: 0.61–0.84, p < 0.0001). Patients with an ECOG performance status of 0 to 2 showed greater OS benefit in the combination treatment group compared to monochemotherapy (HR = 0.72; 95% CI: 0.61–0.85, p < 0.0001). Furthermore, no significant heterogeneity was observed in any of the subgroups. Subgroup analysis for PFS The subgroups of region, trial phase, ECOG performance status, publication year, and primary tumor site suggested that combination therapy yielded better PFS than chemotherapy alone (Table 3 and Figure S3 ). Compared to the chemotherapy group, only the subgroup of combination treatment with PARP inhibitors exhibited no significant difference (HR = 0.76, 95% CI: 0.50–1.15, p = 0.192). The heterogeneity within each subgroup was not significant ( p > 0.05). Subgroup analysis for ORR The results are presented in Table 4 and Figure S4 . In the subgroup analysis of combination therapeutic agents, the combination therapy with chemotherapy showed a greater benefit in terms of ORR (OR = 2.97; 95% CI: 1.89–4.67, p < 0.0001). 
In the subgroup analysis of ECOG performance status, a significant ORR benefit was observed in patients with ECOG scores of 0 to 2 (OR = 3.14; 95% CI: 1.87–5.27, p < 0.0001). Heterogeneity was reduced in both subgroups. Meta-analysis of TRAEs Six trials reported the incidences of any grade TRAEs and four trials reported grade 3–4 TRAEs. For both any grade TRAEs (OR = 2.06; 95% CI: 1.47–2.89, p < 0.0001) and grade 3–4 TRAEs (OR = 2.53; 95% CI: 1.64–3.90, p < 0.0001), the combination therapy with VEGF/VEGFR inhibitors was associated with significantly higher incidences compared to chemotherapy (Fig. 4 ). The meta-analysis indicated that compared to chemotherapy, combination therapy had a higher incidence of any grade hypertension (OR = 4.38, 95%CI 1.28–14.93, p = 0.018), mucositis (OR = 3.20, 95%CI 1.25–8.16, p = 0.015), proteinuria (OR = 6.15, 95%CI 1.75–21.59, p = 0.005), diarrhea (OR = 3.14, 95%CI 1.36–7.25, p = 0.007), and hand-foot syndrome (OR = 6.52, 95%CI 1.02–41.70, p = 0.048). There was no statistical difference in the incidence of fatigue (OR = 1.64, 95%CI 0.87–3.10, p = 0.124), nausea (OR = 1.36, 95%CI 0.72–2.54, p = 0.341), and vomiting (OR = 1.74, 95%CI 0.76–4.02, p = 0.192) (Table 5 and Figure S5 ). Sensitivity analysis and publication bias Only minor variation was observed in the sensitivity analysis when each trial was removed in turn (Figure S6 ). There was no evidence of publication bias according to Begg’s test (OS, p = 0.107; PFS, p = 0.998; ORR, p = 0.617), and the funnel plots were largely symmetric (Figure S7 ).
Discussion OC is often asymptomatic until it reaches an advanced stage, resulting in delayed diagnosis and poor prognosis. The current screening programs for OC diagnosis are inadequate [ 31 ]. PROC remains a significant challenge for clinical diagnosis and treatment due to the extreme cellular heterogeneity and the expression of various resistance and immune evasion mechanisms in this advanced stage of tumor complexity [ 25 ]. Combination therapy with VEGF/VEGFR inhibitors has shown a higher likelihood of being the most effective treatment compared to chemotherapy. Recent studies have reported encouraging results, particularly in terms of PFS, for several combination strategies involving VEGF/VEGFR inhibitors in PROC. However, the OS outcomes have been uncertain and inconsistent [ 16 , 17 ]. To address this, a meta-analysis was conducted, which included eight randomized controlled trials in PROC, and demonstrated better OS, PFS, and ORR outcomes with VEGF/VEGFR inhibitors compared to monochemotherapy. Furthermore, heterogeneity was observed in terms of ORR among the included studies. Subgroup analyses were performed for OS, PFS, and ORR, considering various stratification factors such as region, combination therapeutic agents, trial phase, ECOG performance status, publication year, and primary tumor site. Regardless of OS, PFS, or ORR, combination therapy with chemotherapy showed greater benefits in the subgroup analysis of combination therapeutic agents. Only one trial included combined PARP inhibitors therapy (cediranib plus olaparib), but it failed to demonstrate any superiority in efficacy compared to the standard treatment for patients with PROC [ 25 ]. Some studies have reported that cediranib induces the down-regulation of certain genes in the homologous recombination system, which synergistically enhances the effect of olaparib [ 32 , 33 ]. Liu et al. demonstrated that the combination of cediranib and olaparib significantly prolonged PFS compared to olaparib alone in platinum-sensitive OC patients (HR = 0.50). Additionally, in the gBRCA/unknown-subset, the combination therapy showed significantly improved OS compared to olaparib alone (37.8 versus 23.0 months, p = 0.047) [ 34 ]. However, disappointing results were observed for both OS and PFS in the platinum resistance trials included in our analysis [ 25 ]. It should be noted that due to the limited number of trials, the accuracy of subgroup analysis may be insufficient. It is necessary to explore randomized controlled trials of new combinations of PARP inhibitors with various drugs, such as anti-angiogenesis agents, immune checkpoint inhibitors, or other inhibitors of DNA damage response pathways [ 35 ]. The analysis of TRAEs revealed that the combination therapy had significantly higher incidences of both any grade TRAEs and grade 3–4 TRAEs compared to monochemotherapy. These findings were consistent with the previously published safety profile of VEGF/VEGFR inhibitors in OC and other solid tumors [ 36 – 41 ], and no new safety concerns were identified. Most of the TRAEs reported were of grade 1–2, indicating that the adverse events were manageable. Only four trials reported the incidence of grade 3–4 TRAEs. Among them, paclitaxel plus pazopanib treatment had a higher incidence (OR = 3.33, 95% CI: 1.27–8.76), while bevacizumab plus chemotherapy had a lower incidence (OR = 1.68, 95% CI: 0.76–3.69). 
Combination therapy was associated with a higher incidence of any grade hypertension, mucositis, proteinuria, diarrhea, and hand-foot syndrome. Hypertension is a common adverse effect of VEGF inhibitors, with an incidence of approximately 30% in various clinical trials, and moderate hypertension occurring in 3–16% of cases. Mucositis is another common adverse effect of anti-VEGF therapy, characterized by symptoms such as pain and difficulty with swallowing and speaking. Mucositis typically manifests 7–10 days after the initiation of treatment, and in the absence of concurrent bacterial, viral, or fungal infections, it is self-limiting and resolves spontaneously within 2–4 weeks. The mechanism underlying proteinuria production involves the regulation of glomerular vascular permeability by the VEGF signaling pathway. Inhibition of VEGF can result in the destruction of glomerular endothelial cells and epithelial cells (podocytes), leading to proteinuria. The use of VEGF-R tyrosine kinase inhibitors can induce hand-foot syndrome, characterized by red spots, swelling, and pain on the extremities, particularly the palms or soles of the feet. This syndrome typically emerges within the first 6 weeks of treatment. A meta-analysis has demonstrated that combination therapy with VEGF/VEGFR inhibitors yields superior survival benefits compared to chemotherapy for patients with PROC [ 42 ]. However, the trials included in that analysis encompassed recurrent OC rather than exclusively focusing on platinum-resistant disease, and they also included a subset of patients with platinum-sensitive disease. Moreover, the most recent clinical trials were not incorporated. Therefore, our study serves as a supplement to previous meta-analyses, offering more comprehensive content and considering more stratification factors. It also addresses the limitations of previous meta-analyses and provides additional treatment options for patients with PROC. Several limitations were encountered in this meta-analysis. Firstly, the RCTs employed various therapeutic agents and had different baseline characteristics, resulting in a high degree of heterogeneity in the data analysis for ORR. In an attempt to stratify based on baseline characteristics to mitigate heterogeneity, subgroup analyses were conducted. In the future, network meta-analysis can be employed to further investigate the efficacy and safety of combination therapy. Secondly, this study only included eight RCTs comparing VEGF/VEGFR inhibitors in combination therapy with chemotherapy in patients with PROC, and the majority of these trials were phase II trials. More reliable data are expected from phase III clinical trials, especially those combining VEGF/VEGFR inhibitors with PARP inhibitors, and such trials should be included in future analyses. Additionally, it is important to note that this meta-analysis lacks sufficient subgroup analyses, and the inclusion of more stratification factors would be crucial in demonstrating the efficacy of VEGF/VEGFR inhibitors for PROC.
Conclusions The combination therapy of VEGF/VEGFR inhibitors for PROC has shown superior OS, PFS, and ORR compared to monochemotherapy, particularly when VEGF/VEGFR inhibitors are combined with chemotherapy. However, it is worth mentioning that combination therapy is associated with a higher incidence of certain adverse events, such as hypertension, mucositis, proteinuria, diarrhea, and hand-foot syndrome. Nevertheless, the safety profile of combination therapy remains manageable. The present study provides more treatment options for PROC patients.
Background Almost all patients with ovarian cancer will experience relapse and eventually develop platinum resistance. The poor prognosis and limited treatment options have prompted the search for novel approaches in managing platinum-resistant ovarian cancer (PROC). Therefore, a meta-analysis was conducted to evaluate the efficacy and safety of combination therapy with vascular endothelial growth factor (VEGF) /VEGF receptor (VEGFR) inhibitors for PROC. Methods A comprehensive search of online databases was conducted to identify randomized clinical trials published until December 31, 2022. Pooled hazard ratios (HRs) were calculated for overall survival (OS) and progression-free survival (PFS), while pooled odds ratios (ORs) were calculated for objective response rate (ORR) and treatment-related adverse events (TRAEs). Subgroup analysis was further performed to investigate the source of heterogeneity. Results In total, 1097 patients from eight randomized clinical trials were included in this meta-analysis. The pooled HRs of OS (HR = 0.72; 95% CI: 0.62–0.84, p < 0.0001) and PFS (HR = 0.52; 95% CI: 0.45–0.59, p < 0.0001) demonstrated a significant prolongation in the combination group compared to chemotherapy alone for PROC. In addition, combination therapy demonstrated a superior ORR compared to monotherapy (OR = 2.34; 95%CI: 1.27–4.32, p < 0.0001). Subgroup analysis indicated that the combination treatment of VEGF/VEGFR inhibitors and chemotherapy was significantly more effective than monochemotherapy in terms of OS (HR = 0.71; 95% CI: 0.61–0.84, p < 0.0001), PFS (HR = 0.49; 95% CI: 0.42–0.57, p < 0.0001), and ORR (OR = 2.97; 95% CI: 1.89–4.67, p < 0.0001). Although the combination therapy was associated with higher incidences of hypertension, mucositis, proteinuria, diarrhea, and hand-foot syndrome compared to monochemotherapy, these toxicities were manageable and well-tolerated. Conclusions The meta-analysis demonstrated that combination therapy with VEGF/VEGFR inhibitors yielded better clinical outcomes for patients with PROC compared to monochemotherapy, especially when combined with chemotherapy. This analysis provides more treatment options for patients with PROC. Systematic review registration [ https://www.crd.york.ac.uk/PROSPERO ], Prospective Register of Systematic Reviews (PROSPERO), identifier: CRD42023402050. Supplementary Information The online version contains supplementary material available at 10.1186/s12905-023-02879-y. Keywords
Acknowledgements We would like to thank all authors who provided published data for our meta-analysis. Author contributions HDX and SFL contributed the study concept and design. HDX and LS contributed to the data acquisition. HDX and KLY were responsible for data analysis and editing the manuscript. LS and CHX contributed to critical revision of the manuscript. All authors approved the final version of the manuscript. Funding This study was funded by Mass spectrometry project of Liaoning Cancer Hospital & Institute, Project number: ZP202019. Data availability The original datasets for this study are included in the article/Supplementary Material. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that the research was conducted in the absence of any commercial or financial relationships, and they have no competing interests. Disclosure The authors have nothing to disclose. Abbreviations OC: ovarian cancer; PROC: platinum-resistant ovarian cancer; PARP: poly (ADP-ribose) polymerase; PLD: pegylated liposomal doxorubicin; HR: hazard ratio; CI: confidence interval; OS: overall survival; PFS: progression-free survival; OR: odds ratio; ORR: objective response rate; TRAEs: treatment-related adverse events; VEGF: vascular endothelial growth factor; ECOG: Eastern Cooperative Oncology Group; ASCO: American Society of Clinical Oncology; ESMO: European Society for Medical Oncology; SGO: Society of Gynecologic Oncology; RCT: randomized controlled trial; RECIST 1.1: Response Evaluation Criteria in Solid Tumors version 1.1; GCIG: Gynecological Cancer Intergroup; CA125: cancer antigen 125
CC BY
no
2024-01-15 23:43:48
BMC Womens Health. 2024 Jan 13; 24:34
oa_package/8d/f0/PMC10788010.tar.gz
PMC10788011
38218755
Introduction Type 2 Diabetes Mellitus (T2DM) is a complex condition associated with impaired glucose tolerance, insulin resistance and hyperglycemia; its increasing prevalence has become a serious global health challenge. It accounts for 11.3% of deaths worldwide and is estimated to affect approximately 10.9% of the global population [ 1 ]. T2DM is accompanied by debilitating chronic complications such as kidney disease, retinopathy, neuropathy, microvascular impairment, and cardiovascular complications [ 1 , 2 ]. Cardiovascular complications are responsible for up to 68% of all diabetes-related mortalities. Several studies have revealed that patients with T2DM are at increased risk of coronary disease [ 3 ], myocardial infarction [ 4 ], heart failure [ 5 ], cardiomyopathy [ 6 ], and thrombotic events [ 7 ]. It has been shown that diabetic patients have a two- to three-fold increase in cardiovascular disease (CVD) development [ 8 ]. Various mechanisms have been proposed to explain the increased CVD rates among diabetic patients. Higher incidence of dyslipidemia [ 9 ], chronic inflammatory states [ 10 – 12 ], enhanced oxidative stress and reactive oxygen species [ 13 ], and hypercoagulability [ 14 ] are some of the key findings in patients with T2DM that can potentially increase atherosclerosis, plaque formation, and consequently result in increased rates of CVD [ 10 , 15 ]. Thus, it is of great importance to investigate sufficient early detection methods and effective therapeutic approaches for CVD among diabetic patients. An electrocardiogram (ECG) is a useful and non-invasive assessment that has been utilized for several biomedical purposes, such as the determination of arrhythmias, fibrillations, heart rates, premature contractions, and ischemia [ 16 – 19 ]. The T-wave on the ECG represents ventricular repolarization. T-wave abnormalities (TWA) can be an indicator of a variety of conditions such as cardiomyopathy, pulmonary embolism, peri- and myocarditis, and ischemia [ 20 – 23 ]. Given the importance of T2DM and its complications, especially those affecting the cardiovascular system, as well as considering the ease of accessibility and practicality of ECG in medical practice, this cross-sectional study was designed to investigate the prevalence of T-wave abnormalities and their association with T2DM.
Method Study design and participants The current cross-sectional study was conducted on the population of the Mashhad stroke and heart atherosclerotic disorder (MASHAD) cohort study [ 24 ]. A total of 9704 individuals aged 35 to 65 years were enrolled in this cohort study. A checklist containing participants’ demographic data including age, sex, educational level, and marital status was recorded. Patients whose systolic blood pressure was at or above 140 mmHg and/or diastolic blood pressure was at or above 90 mmHg—measured using a mercury sphygmomanometer—were considered hypertensive [ 25 ]. Participants with a fasting blood glucose (FBG) > 126 mg/dl or those taking anti-hyperglycemic medication were classified as diabetic [ 26 ]. The FBG was measured in a peripheral blood sample following 14 h of fasting [ 27 ]. The study was approved by the Human Research ethics committee of Mashhad University of Medical Sciences, Mashhad, Iran, and all participants provided informed consent prior to data collection. ECG analysis A standard resting 12-lead ECG was taken from each participant of the study. These ECGs were interpreted by trained medical students in accordance with the Minnesota coding system [ 28 ]. Five percent of all ECGs were also read by certified cardiologists. Among the 9704 participants, the ECGs of 9035 participants were available and readable according to the Minnesota coding system [ 28 ]. Four different T-wave abnormalities are described within the coding system, namely codes 5–1, 5–2, 5–3 and 5–4. Code 5–1 was defined as a negative T amplitude of 5.0 mm or more in lead I or V6, or in lead aVL when the R amplitude is ≥ 5.0 mm, and code 5–2 was defined as a negative or diphasic T amplitude (positive–negative or negative–positive type) with a negative phase of at least 1.0 mm but not as deep as 5.0 mm in lead I or V6, or in lead aVL when the R amplitude is ≥ 5.0 mm. Code 5–3 was described as a flat, negative or diphasic T-wave with less than 1 mm negative phase in any of leads I, II, or V3 to V6, or in lead aVL when the R amplitude is ≥ 5.0 mm. Lastly, code 5–4 was defined as a positive T amplitude and a T/R amplitude ratio < 1:20 in any of leads I, II, aVL, or V3 through V6; the R-wave amplitude must be ≥ 10.0 mm [ 28 ]. Statistical analysis Quantitative and qualitative variables were summarized as mean ± SD and frequency (%), respectively. An independent t-test was used to compare the means of quantitative variables between the two groups. In addition, the association between qualitative variables was evaluated using the Chi-square and Fisher's exact tests. Further analyses were performed to investigate the association between T-wave impairments and T2DM after adjusting for potential confounders (variables with P < 0.25 in the univariate logistic regression model) using the multiple logistic regression (LR) model. Furthermore, receiver operating characteristic (ROC) curves were used to evaluate the ability of the multiple LR model to predict the occurrence of TWA and T2DM. All statistical analyses were carried out using SPSS version 20 and the statistical significance level was set at 0.05.
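To make the coding rules above easier to follow, the sketch below translates the four T-wave definitions into a small Python function that assigns codes 5–1 to 5–4 from per-lead T- and R-wave amplitudes. This is only an illustrative sketch of the stated criteria: the input dictionaries, lead handling, and function name are assumptions, it ignores any exclusion or hierarchy rules of the full Minnesota manual, and it is not the procedure used in the study, where trained readers coded the ECGs manually.

# Illustrative sketch of the Minnesota T-wave criteria quoted above (codes 5-1 to 5-4).
# Inputs are hypothetical per-lead measurements in millimetres:
#   t_amp        - signed T-wave amplitude (negative = inverted)
#   t_neg_phase  - depth of the negative phase of the T wave (0 if none)
#   r_amp        - R-wave amplitude
def minnesota_t_codes(t_amp, t_neg_phase, r_amp):
    codes = set()
    avl_ok = r_amp.get("aVL", 0.0) >= 5.0  # aVL only counts when its R amplitude is >= 5.0 mm

    # Code 5-1: T negative 5.0 mm or more in I, V6, or eligible aVL.
    for lead in ["I", "V6"] + (["aVL"] if avl_ok else []):
        if t_amp.get(lead, 0.0) <= -5.0:
            codes.add("5-1")

    # Code 5-2: negative or diphasic T, negative phase >= 1.0 mm but < 5.0 mm,
    # in I, V6, or eligible aVL.
    for lead in ["I", "V6"] + (["aVL"] if avl_ok else []):
        if 1.0 <= t_neg_phase.get(lead, 0.0) < 5.0:
            codes.add("5-2")

    # Code 5-3: flat, negative or diphasic T with < 1.0 mm negative phase in
    # I, II, V3-V6, or eligible aVL (flatness simplified here as T <= 0).
    for lead in ["I", "II", "V3", "V4", "V5", "V6"] + (["aVL"] if avl_ok else []):
        if t_amp.get(lead, 0.0) <= 0.0 and t_neg_phase.get(lead, 0.0) < 1.0:
            codes.add("5-3")

    # Code 5-4: positive T with T/R ratio < 1:20 in I, II, aVL, or V3-V6,
    # provided the R amplitude in that lead is >= 10.0 mm.
    for lead in ["I", "II", "aVL", "V3", "V4", "V5", "V6"]:
        t, r = t_amp.get(lead, 0.0), r_amp.get(lead, 0.0)
        if r >= 10.0 and t > 0.0 and t / r < 1.0 / 20.0:
            codes.add("5-4")

    return sorted(codes)

# Example: a -5.5 mm T wave in V6 yields ['5-1'].
# minnesota_t_codes({"V6": -5.5}, {"V6": 5.5}, {"V6": 12.0})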
Results Study population characteristics A total of 9035 subjects were included in this study, comprising 1273 diabetic patients and 7762 non-diabetic individuals (Fig. 1 ). The average age was 47.45 ± 8.17 years in non-diabetic participants and 51.77 ± 7.73 years in diabetic patients, a significant difference ( p < 0.001). Diabetic patients were found to have a higher body mass index (BMI), as well as higher rates of hypertension (50.3 vs 27.9%, p < 0.001). Marital status and educational level also showed a significantly different distribution between the diabetic and non-diabetic groups, with married being the most prevalent status in both groups ( P < 0.001). Table 1 presents the distribution of patients’ demographic data. T-wave abnormality frequency A total of 1246 T-wave abnormalities were reported in the study sample, corresponding to approximately 13.79% of all participants. The most frequent TWA in both groups were code 5–2 (4.9% in diabetics and 3.6% in the control group) and major T-wave abnormalities (5% in diabetics and 3.7% in the control group). Different TWA yielded varying associations with T2DM. While T-wave abnormality codes 5–1 and 5–4 failed to show a significantly different distribution between diabetic and non-diabetic participants ( P = 0.24 and 0.92, respectively), codes 5–2 and 5–3 were significantly more frequent among diabetic patients compared to non-diabetic individuals ( P = 0.02 and 0.01, respectively). Overall, both major and minor T-wave abnormalities were significantly more frequent among patients with T2DM compared to the control group ( p = 0.02 and 0.008, respectively). Figures 2 and 3 compare the distribution of T-wave impairments and T2DM. T2DM predictive factors Results from the multiple logistic regression models indicated a significant association between age (OR = 1.05, 95%CI = 1.04–1.05) and BMI (OR = 1.03, 95%CI = 1.02–1.05) and having T2DM. Gender, marital status, and educational level did not show a significant association with having T2DM (all P > 0.05) (Tables 2 and 3 ). Hypertension increased the odds of T2DM by 86 percent (OR = 1.86, 95%CI = 1.63–2.12, p < 0.001). According to Tables 2 and 3 , only major and minor T-wave impairments as well as impairment codes 5–2 and 5–3 were reported to be more frequent among diabetic patients, and thus only these items were further analyzed by inclusion in the multiple LR model. A model analyzing T-wave abnormality codes 5–2 and 5–3 showed that the odds of having T2DM among patients with T-wave code 5–2 and 5–3 abnormalities were 1.07 and 1.31 times those of patients without these abnormalities, respectively. This observed difference between patients with and without T-wave abnormalities regarding having T2DM failed to reach statistical significance ( P = 0.63 and 0.12, respectively). The area under the ROC curve (AUC) of the final multiple LR model was 0.6847, which indicates a good predictive power of the final model, as shown in Fig. 4 A. Also, the results of our model for major and minor T-wave abnormalities revealed that the odds of having T2DM in patients who had major and minor T-wave abnormalities were 1.06 and 1.30 times those of patients without these abnormalities, respectively. However, this difference did not reach statistical significance within the logistic regression model ( P = 0.65 and 0.11, respectively). The AUC for this model was 0.6846, which suggests a good predictive power of the final model, as shown in Fig. 4 B. Tables 2 and 3 present the results of the regression model analyses.
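As a rough companion to the modelling results above, the snippet below sketches the workflow reported in the methods: univariate screening of candidate variables at P < 0.25, a multiple logistic regression for T2DM with odds ratios and confidence intervals, and the ROC area of the fitted model. The study itself ran these analyses in SPSS; the DataFrame and column names used here (diabetes, age, bmi, hypertension, twa_5_2, twa_5_3) are hypothetical placeholders, not the study's actual variable names.

# Hypothetical sketch of the reported analysis; the study used SPSS, not Python.
# df is assumed to be a pandas DataFrame with one row per participant.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def univariate_screen(df, outcome, candidates, p_threshold=0.25):
    """Keep candidate confounders with a univariate logistic-regression p-value < 0.25."""
    kept = []
    for var in candidates:
        fit = sm.Logit(df[outcome], sm.add_constant(df[[var]])).fit(disp=0)
        if fit.pvalues[var] < p_threshold:
            kept.append(var)
    return kept

def multiple_lr_with_auc(df, outcome, predictors):
    """Fit the multiple LR model and report ORs, 95% CIs, and the ROC AUC."""
    X = sm.add_constant(df[predictors])
    fit = sm.Logit(df[outcome], X).fit(disp=0)
    odds_ratios = np.exp(fit.params)      # e.g. roughly 1.05 per year of age, 1.86 for hypertension
    conf_int = np.exp(fit.conf_int())     # 95% confidence intervals on the OR scale
    auc = roc_auc_score(df[outcome], fit.predict(X))  # cf. the reported AUC of about 0.68
    return fit, odds_ratios, conf_int, auc

# Example call with assumed column names:
# candidates = ["age", "bmi", "hypertension", "gender", "twa_5_2", "twa_5_3"]
# kept = univariate_screen(df, "diabetes", candidates)
# model, ors, cis, auc = multiple_lr_with_auc(df, "diabetes", kept)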
Discussion The current study aimed to investigate the distribution of T-wave impairment among diabetic patients and its association with T2DM according to the Minnesota coding system. The primary results showed significantly higher rates of code 5–2 and 5–3 T-wave impairment among diabetic patients. Both minor and major T-wave abnormalities were also significantly more frequent among diabetics. However, upon adjusting for several factors such as age, gender, and hypertension within the regression model, none of the mentioned T-wave abnormalities showed a significant association with T2DM. Myocardial ischemia is a relatively frequent finding among diabetic patients and can potentially lead to coronary artery disease [ 29 , 30 ]. Patients with myocardial ischemia can present either symptomatically or asymptomatically, with or without previous cardiovascular events. The rates of silent asymptomatic myocardial ischemia have been shown to be three to six times higher among diabetic patients [ 29 ]. Atherosclerosis and endothelial damage of vessels have been shown to be strong risk factors for ischemic heart disease (IHD). On the other hand, the formation of plaque and thrombi can lead to acute forms of myocardial ischemia and coronary syndromes [ 31 , 32 ]. T2DM can contribute substantially to atherogenesis [ 33 ], thrombosis [ 34 ], and vascular damage [ 35 ], therefore leading to increased risks of IHD [ 36 ]. Hyperglycemia, increased levels of free fatty acids, and insulin resistance can lead to several destructive mechanisms such as inflammation, oxidative stress, and the production of advanced glycation products (AGE) [ 36 , 37 ]. Following the increase in AGE production, inflammatory responses are triggered and pro-inflammatory transcription factors such as NF-kB are upregulated [ 38 , 39 ]. Vascular motion is also affected via the reduction in nitric oxide synthesis and enhanced endothelin-1 release. Upregulated pro-thrombotic tissue factor and plasminogen activator inhibitor-1 levels, as well as decreased tissue plasminogen activator within T2DM, can lead to thrombus formation [ 36 , 40 ]. The result of these various mechanisms is endothelial dysfunction, vasoconstriction, and enhanced plaque formation, which, as mentioned before, are key components in the development and progression of IHD [ 36 , 40 ]. Several studies have shown TWA among diabetic patients and their utilization as risk predictors. A 2021 study by Molud et al. examined the relationship between TWA and cardiovascular events among diabetic patients [ 41 ]. Minnesota codes 5–1 and 5–2 were considered major TWAs and codes 5–3 and 5–4 were considered minor TWAs. Their results indicated that patients with TWA had increased risks of both cardiovascular and all-cause mortality, and major TWAs were associated with higher risk than minor TWAs [ 41 ]. They also highlighted the usefulness of TWA in the prognostication of diabetic patients in long-term settings. According to a prospective longitudinal study by Harms et al. [ 42 ], 45% of diabetic patients had or developed ECG abnormalities and 7.5% developed major adverse cardiac events within a 6.6-year follow-up period. Upon grading ECG abnormalities using the Minnesota coding system, 6 and 5% of the diabetic population had minor and major ST-segment/T-wave abnormalities, respectively. They also concluded that ST-segment/T-wave abnormalities were associated with heart failure and coronary heart disease.
Thus, T-wave modifications can be used as risk predictors for cardiovascular events and mortality among diabetic patients. In addition to ST/T-wave changes, which reflect ischemic disorders, signs of decreased conductivity such as PR and QRS prolongation, and of hypertrophy such as tall R-waves, were also observed in diabetics and were associated with chronic heart disease [ 42 ]. T-wave variation and abnormalities have also been shown within several diabetes-related pathologies other than IHD. T-wave inversion in some diabetic patients can be explained by hyperkalemia. Diabetic ketoacidosis is a state of hyperkalemia and can result in a variety of ECG modifications affecting the T-wave, QT interval, and ST segment [ 43 ]. T-wave inversion is also associated with left ventricular hypertrophy findings on ECG among diabetic patients, which might indicate myocardial injury but not coronary disease [ 44 ]. This finding is contradicted by another study, in which ST-T changes—defined as elevated, depressed, or inverted T waves—were significant predictors of coronary artery disease [ 45 ]. The observed difference can be due to sample size or the ECG coding and grading system. Some novel ECG parameters such as the QRS-T angle and the T-wave axis of the frontal plane have also been investigated in diabetic patients by other studies [ 46 ]. It has been shown that 20.9% of diabetic patients have an abnormal T-wave axis while 14% of them have an increased QRS-T angle. These two ECG parameters are associated with some atherosclerotic disease markers among type II diabetic patients [ 46 ]. Studies on the relationship between T2DM and T-wave abnormalities have reported inconsistent results. A Chinese study investigated ECG abnormalities within several disorders such as hypertension, smoking, obesity, and so forth [ 47 ]. T2DM was found to be associated with ST elevation but failed to show a significant correlation with other electrocardiogram findings such as ST depression, T-wave and Q-wave impairment, tall R waves, atrial hypertrophy, and axial deviations. Unlike T2DM, hypertension and hypercholesterolemia were significantly associated with ST depression and T-wave abnormalities. These findings are in line with the results of our study, since upon adjustment, none of the T-wave abnormalities were associated with T2DM. However, two studies showed a contrary result. Flatter and asymmetric T-waves were observed in patients with type I diabetes, according to the study by Isaksen et al. [ 48 ]. This association was also confirmed by a regression model corrected for age, gender, BMI, blood pressure, potassium, and cholesterol. Interestingly, asymmetrical T-waves were significantly associated with both macro- and microalbuminuria among type I diabetic patients. An Italian cross-sectional study [ 49 ] also confirmed this finding and suggested higher rates of T-wave axis abnormalities – described as T-wave rotation in the frontal plane – in diabetic patients compared to non-diabetic individuals [ 49 ]. These differences could be due to a lack of differentiation between diabetes types, as well as the ethnicity of the study population. Even though our analysis showed no significant association between T2DM and T-wave changes in the ECG, several other factors such as hypertension, age, and BMI were found to be significantly associated with T2DM. A meta-analysis of a total of 452,584 patients also showed similar results regarding the association between T2DM and hypertension (pooled OR: 8.32, 95%CI: 3.05–22.71) [ 50 ].
The mechanisms by which diabetes increases the risk of hypertension can be explained through disturbed sodium homeostasis, insulin resistance, enhanced volume expansion, and increased peripheral vascular resistance [ 51 ]. Our results indicated no significant relationship between gender and T2DM, whereas some studies show a significant association between sex and T2DM [ 52 ]. A longitudinal study in Iran showed significantly higher rates of T2DM among females, while the global prevalence is higher in men [ 53 , 54 ]. These differences in findings can be due to sample size as well as the lack of differentiation of diabetes type among studies. It has also been shown that gender differences pose varied risks of diabetes development among different races [ 55 ]. This study is one of very few to differentiate T-wave abnormalities into six categories, while most studies only summarize them in two. Second, a large population ( n = 9035) drawn from the MASHAD cohort study was examined in this cross-sectional study. Third, some of the interpretations were also checked by certified cardiologists, which reduces the chance of error. However, our study faced several limitations which need to be considered for future studies. First, available documentation did not differentiate between type I and type II diabetes, and thus exact conclusions cannot be made for each type. Second, the age group of the study was limited to 35–65 years old, and variation might exist in ages above or below the cutoff used in our study. Third, only T-waves were used to assess ischemic changes of the heart; future studies could use other modalities, such as additional ECG findings and other paraclinical values, to further confirm ischemic heart disease due to T2DM. We also highly encourage future researchers to perform multi-center cohort studies in order to precisely evaluate the relationship between the two. High-quality meta-analyses are needed to confirm our findings.
Conclusion The results of this study show a significantly higher prevalence of Minnesota codes 5–2 and 5–3 and of major and minor T-wave abnormalities in diabetic patients compared to non-diabetic individuals. However, the association between these abnormalities and T2DM was not significant in regression models adjusting for age, gender, and BMI. Considering the burden of T2DM complications, especially cardiovascular ones, it is highly important to investigate CVD diagnostic tools among diabetics. Given the conflicting results of other studies, large-scale studies on the use of T-wave abnormalities as markers of ischemic pathology resulting from T2DM are needed to identify adequate indicative and predictive tools.
Background Type 2 Diabetes Mellitus (T2DM) has become a major health concern with an increasing prevalence and is now one of the leading attributable causes of death globally. T2DM and cardiovascular disease are strongly associated and T2DM is an important independent risk factor for ischemic heart disease. T-wave abnormalities (TWA) on electrocardiogram (ECG) can indicate several pathologies including ischemia. In this study, we aimed to investigate the association between T2DM and T-wave changes using the Minnesota coding system. Methods A cross-sectional study was conducted on the MASHAD cohort study population. All participants of the cohort population were enrolled in the study. 12-lead ECG and Minnesota coding system (codes 5–1 to 5–4) were utilized for T-wave observation and interpretation. Regression models were used for the final evaluation with a level of significance being considered at p < 0.05. Results A total of 9035 participants aged 35–65 years old were included in the study, of whom 1273 were diabetic. The prevalence of code 5–2, 5–3, major and minor TWA were significantly higher in diabetics ( p < 0.05). However, following adjustment for age, gender, and hypertension, the presence of TWAs was not significantly associated with T2DM ( p > 0.05). Hypertension, age, and body mass index were significantly associated with T2DM ( p < 0.05). Conclusions Although some T-wave abnormalities were more frequent in diabetics, they were not statistically associated with the presence of T2DM in our study. Keywords
Abbreviations T2DM: Type 2 Diabetes Mellitus; TWA: T-wave abnormalities; ECG: Electrocardiogram; CVD: Cardiovascular disease; MASHAD: Mashhad stroke and heart atherosclerotic disorder; FBG: Fasting blood glucose; LR: Logistic regression; ROC: Receiver operating characteristic; BMI: Body mass index; AUC: Area under the ROC curve; IHD: Ischemic heart disease; AGE: Advanced glycation products. Acknowledgements We would like to thank Mashhad University of Medical Sciences for supporting this study. Authors’ contributions All authors have read and approved the manuscript. Study concept and design: SSS and AI; data collection: FF, ME, HA, BS, AG, SM, and MT; Analysis and interpretation of data: EN and HE; Drafting of the manuscript: TS and AM; Critical revision of the manuscript for important intellectual content: GF, MG, and MM. Funding The collection of clinical data was financially supported by Mashhad University of Medical Sciences. Availability of data and materials The authors confirm that the data supporting the findings of this study are available from the corresponding author on request. Declarations Ethics approval and consent to participate The study protocol was given approval by the Ethics Committee of Mashhad University of Medical Sciences and written informed consent was obtained from participants. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:48
BMC Cardiovasc Disord. 2024 Jan 13; 24:48
oa_package/9a/dc/PMC10788011.tar.gz
PMC10788012
38218847
Background After the outbreak of the Russo-Ukrainian war on 24 February 2022, 8.2 million Ukrainians have been displaced or led to flee all over Europe, as of May 2023 [ 1 ]. Data on the consequences of the current war in Ukraine on the psychological well-being of refugees is still limited. Preliminary data on resettled Ukrainian refugees have only been reported from a study conducted in Germany. It found a prevalence rate of depressive and anxiety symptoms of 44.7% and 51.0%, respectively [ 2 ]. Refugee mental health assessment is particularly challenging since it may require cultural mediators and/or interpreters to facilitate communication and dialogue, and to fully understand the health status and the underlying needs. If the refugee feels that he/she is heard and understood, he/she may show an enhanced help-seeking behavior when in need [ 3 ]. This is even more important for mental health conditions, especially those at risk of self-harm and suicide. In recent years, there was a growing awareness of the role of mental health in global health outcomes, premature deaths, and economic losses [ 4 ]. Research has shown that the prevalence of mental disorders, such as post-traumatic stress disorder (PTSD) and depression, is higher in the refugees than in the general population. A prevalence rate of 22.7%, 13.8%, and 15.8% for PTSD, depression, and anxiety disorders, respectively, was found in child and adolescents refugees resettled in Europe [ 5 ]. Risk factors for mental health are many and diverse and can change depending on the moment and the migration context. They can be distinguished into risk factors of the pre-migratory context, i.e. when the person is in the country of origin, where he or she may be directly or indirectly exposed to war and suffer trauma; during migration, as the journey itself may expose refugees to further traumatic events; and third, the post-migratory context may be a source of further stress for the refugee due to social isolation, unemployment and difficult cultural integration [ 6 ]. The importance of mental health and well-being as factors influencing the overall health status of refugees during migration and in the resettlement country has been widely recognized [ 5 ]. In order to assess the health status and needs of this vulnerable population, it is crucial to provide primary health workers with reliable and easy-to-use tools that allow a multicultural approach, such as short and simple questionnaires. These can reach large numbers of people and help health workers identify individuals at risk and provide timely assistance. The General Health Questionnaire (GHQ) is a widely used assessment instrument of current psychological distress developed by Goldberg in 1970. In the following decades, different shortened versions of the original 60-items tool, such as the GHQ-30, GHQ-28, and the GHQ-12, have been proposed [ 7 ]. The questionnaire assesses the presence and severity of some psychological and psychosomatic symptoms over the previous few weeks using a self-reported four-point scale expressing whether a particular symptom or behaviour has recently been experienced by the respondent from less to much more than usual. The GHQ-12 most common scoring methods are bimodal (0–0–1-1) and Likert (0–1–2-3) resulting in a total score of 12 or 36 points, respectively [ 8 ]. 
The GHQ-12, due to its ease of use and brevity, has been extensively used to screen psychological distress in primary health care, outpatient settings, and in different cultures and populations [ 9 , 10 ]. The GHQ-12 has also proved to be a consistent and reliable instrument when used in the refugee population [ 11 ]. Therefore, this study aims to translate the 12-item General Health Questionnaire (GHQ-12) into Ukrainian and to test its psychometric features (i.e. construct validity, internal consistency, and concurrent validity).
Methods Ethical approval The research was performed following the ethical standards of the 1964 Declaration of Helsinki and was approved by the Ethical Committee of the University Hospital of Verona on 24/10/2022 (protocol number 63939). Study design, setting, and population This is a cross-sectional validation study. It was carried out in the province of Verona. The reception system in Italy for Ukrainian refugees is built on two different services provided by the governmental authorities, under the Home Office: the Reception and Integration System (RIS), managed at the local level, and the Special Reception Centres (SRC), centrally managed [ 12 ]. Alongside these systems is the extended network of reception consisting of nonprofit organizations, social service centers, religious organizations, and co-housing measures with families or accommodation provided by other private entities. In Verona, the reception network supporting Ukrainian refugees is coordinated among all 98 municipalities in the province and includes about 117 SRC and four projects related to the RIS [ 13 , 14 ]. As of April 2023, the number of Ukrainian refugees in the province of Verona reached 2265, of whom 1623 (71.7%) were females [ 15 ]. All persons who arrived in Italy from Ukraine after 24 February 2022, following the outbreak of the Russian-Ukrainian conflict, were considered eligible for this study. Refugees older than 14 years whose native language was Ukrainian were included. Sample size According to Mundfrom and colleagues [ 16 ], considering a ratio of variables to factors (p/f) of 6 and a two-factor solution, as in the original questionnaire [ 17 ], with the level of communality set as low, the minimum sample size to obtain an excellent-level criterion (0.98) was 120. Accounting for a drop-out rate of 15%, the target sample of participants was set at 146 for this study. Data collection Data were collected between November 2022 and February 2023, progressively including all persons meeting the inclusion criteria until the computed sample size was reached. Ukrainian refugees were recruited in the province of Verona through the local refugee reception network (i.e., regional and local authorities, SRC, RIS, and non-profit organizations). A written disclosure about the study was first given, and those who agreed to participate signed an informed consent form. Both documents were written in Ukrainian, the participants’ mother language. For those under the age of 18, informed consent was signed by their parents or legal guardian. Each participant was asked to complete the Ukrainian translation of the GHQ-12 together with a short sociodemographic questionnaire (i.e., age, sex, education level, and marital status) and the subscale for PTSD of the International Trauma Questionnaire (ITQ) to serve as external validation. At all phases of the study, the research team was supported by a cultural mediator. Instruments The original GHQ-12 consists of 12 items to be answered by the participant according to the variation, compared to his or her habitual standard, in the frequency of the scenarios or behaviors described in the specific statement of the items (Table 1 ). The GHQ-12 has 6 positive items (answer options: “Better than usual”, “Same as usual”, “Less than usual”, “Much less than usual”) and 6 negative items (answer options: “Not at all”, “No more than usual”, “Rather more than usual”, “Much more than usual”). In the present study, both scoring methods, bimodal and Likert, were evaluated.
In the bimodal scoring method, the response categories have a score of 0, 0, 1, 1 for the positive items, while the negative items are scored the other way round (1,1,0,0). Therefore, the score ranges from 0 to 12 points. In the Likert scoring method, the positive items scored from 0 to 3 and the negative ones from 3 to 0, with a score range between 0 and 36 [ 18 ]. The most used cut-offs are between 2 and 4 for the bimodal method and ranged between 10 and 15 for the Likert one [ 18 ]. The ITQ is a self-report measure that allows a simple and concise assessment of key aspects of PTSD, according to the ICD-11 diagnostic criteria. The ITQ has two main subscales: the first (9 items), concerns PTSD and assesses three symptom domains, namely re-experiencing, avoidance, and sense of threat; the second (9 items), used to assess the complex PTSD, investigates the symptoms of self-organization disorder and the functional impairment caused by them. Each item is answered on a Likert scale from 0 (not at all) to 4 (very much). The cut-off for PTSD is given by a score > 2 in at least one of the two items of each of the three symptom domains (re-experiencing, items 1 and 2; avoidance, items 3 and 4; hyperarousal, items 5 and 6) plus at least one of the three indicators of functional impairment (items 7, 8 and 9). The ITQ is available in the Ukrainian language-validated version [ 19 ]. The PTSD subscale was used in the present study. Previous studies have analyzed psychological distress by combining the PTSD symptom score from the ITQ and the mental health problem risk score from the GHQ-12 to test the links between mental health, well-being, and conflict exposure [ 20 ]. Translation and pilot testing The translation process followed the WHO guidelines, which include a forward translation into the target language, i.e. Ukrainian, followed by a backward translation into the original language, i.e., English (Fig. 1 ) [ 21 ]. After obtaining permission from the Author to translate and the license to use the questionnaire, a professional translator provided the first Ukrainian version of the GHQ-12 from the original English questionnaire. This version was then revised with a third party fluent in both languages. The back-translation was carried out independently by a second professional translator who had not seen the original questionnaire in English. Both the authors and a third person reviewed the translation and revised it consensually. To avoid any conceptual losses during the translation process, the consensual retranslation was then compared with the original GHQ-12. The translated questionnaire was initially administered to a sample of 28 refugees to test the acceptability and comprehensibility of the Ukrainian version. After completing the questionnaire, a cognitive interview was conducted to assess the clarity of the questions, any problems or difficulties in answering, and possible improvement actions. The pilot-sample was recruited based on sociodemographic criteria in order to be representative of both genders and different age groups (adolescents, adults, and elderly). Refugees who participated in the pre-test were not included in the final study sample. The original English GHQ-12 and the Ukrainian GHQ-12 are available in the Supplementary material . 
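The two scoring schemes and the ITQ cut-off rule described above can be made concrete with a short sketch. It assumes each GHQ-12 response has already been recoded as an integer from 0 to 3 ordered from least to most distress (so the bimodal and Likert totals fall in the 0–12 and 0–36 ranges given above), and that the nine ITQ PTSD items are supplied in questionnaire order; this recoding convention and the function names are assumptions of the illustration, not a description of the study's data files.

# Illustrative sketch; assumes responses are pre-recoded as 0-3 integers ordered
# from least to most distress (an assumption of this example, not the study).

def ghq12_bimodal(items):
    """Bimodal (0-0-1-1) scoring: the two most symptomatic response categories
    count 1 point each, the others 0. Total range 0-12."""
    return sum(1 for x in items if x >= 2)

def ghq12_likert(items):
    """Likert (0-1-2-3) scoring: sum of the recoded responses. Total range 0-36."""
    return sum(items)

def itq_ptsd_positive(itq):
    """ITQ PTSD rule as described above: a score > 2 in at least one item of each
    symptom pair (items 1-2, 3-4, 5-6) plus at least one endorsed functional-
    impairment item (items 7-9; the same > 2 threshold is assumed here)."""
    re_experiencing = max(itq[0], itq[1]) > 2
    avoidance = max(itq[2], itq[3]) > 2
    sense_of_threat = max(itq[4], itq[5]) > 2
    impairment = max(itq[6], itq[7], itq[8]) > 2
    return re_experiencing and avoidance and sense_of_threat and impairment

# Example: endorsing "rather more than usual" (recoded 2) on five GHQ-12 items and
# the lowest category elsewhere gives a bimodal score of 5 (positive at the > 4
# cut-off) and a Likert score of 10.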
Statistical analysis Descriptive statistics were first computed for sociodemographic data using frequencies and proportions for categorical variables and means and standard deviations (SD) or medians and interquartile ranges (IQRs) for continuous ones. Sample distribution was tested via the χ2 test and Fisher's exact test, or the non-parametric Mann-Whitney U test, as appropriate. GHQ-12 internal consistency was assessed through Cronbach's alpha and McDonald's omega coefficients, with a coefficient greater than 0.70 considered satisfactory. A tetrachoric correlation matrix was generated to assess the correlation between all the items of the GHQ-12 scored with the bimodal method. A confirmatory factor analysis (CFA) was carried out to examine the factor structure of the Ukrainian version of the GHQ-12. First, a single-factor structure that contained all the GHQ-12 items was assessed. Secondly, a two-factor structure was tested encompassing two correlated latent factors: “Anxiety/Depression” (items: q1, q3, q4, q7, q8, q12) and “Social Dysfunction” (items: q2, q5, q6, q9, q10, q11). The two-factor structure was the one suggested by the author of the original English version of the GHQ-12 [ 16 ]. The models were tested for both scoring methods: for the bimodal method, the diagonally weighted least squares estimator was used and all variables were treated as ordered (ordinal) variables; for the Likert method, the maximum likelihood estimator was used with the Satorra-Bentler adjustment accounting for non-normality and heteroscedasticity of the data [ 22 ]. Model fit was evaluated using the χ2 test, the comparative fit index (CFI), the Tucker-Lewis index (TLI), the root-mean-square error of approximation (RMSEA), and the standardized root-mean-square residual (SRMR). Variance explained by latent variables was assessed through the Average Variance Extracted (AVE). Criteria for acceptable model fit indices were based on Hooper et al. [ 23 ]. The Pearson product-moment statistic (Pearson's correlation coefficient, “ρ”) was used to assess the concurrent validity of the GHQ-12 as its correlation with the ITQ subscale for PTSD. It was expected that the GHQ-12 would positively correlate with the ITQ subscale. A coefficient “ρ” above 0.40 was considered satisfactory. The association between single-item GHQ-12 scores and screening positive for PTSD on the ITQ was tested via z-test and t-test for the bimodal and Likert scoring methods, respectively. A p -value < 0.05 was considered significant. All analyses were performed using the R software (version 4.3.0).
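For readers less familiar with these quantities, the following is a minimal sketch of how the reliability and concurrent-validity figures could be computed from an item-by-respondent matrix, assuming complete Likert-scored data. It covers only Cronbach's alpha, the Pearson correlation with the ITQ PTSD subscale, and the two-factor specification written out as a comment; the McDonald's omega, tetrachoric, and CFA estimation steps are not reproduced, and the study itself ran all analyses in R rather than Python.

# Simplified illustration only; the study used R (version 4.3.0).
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array of shape (n_respondents, n_items) with Likert-scored data."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

def concurrent_validity(ghq_totals, itq_totals):
    """Pearson correlation between GHQ-12 and ITQ PTSD subscale total scores."""
    return np.corrcoef(ghq_totals, itq_totals)[0, 1]

# Two-factor structure tested in the CFA, in lavaan-style syntax (for reference only):
#   Anxiety/Depression =~ q1 + q3 + q4 + q7 + q8 + q12
#   Social Dysfunction =~ q2 + q5 + q6 + q9 + q10 + q11
# An alpha near the reported 0.84 and a correlation above the pre-specified 0.40
# threshold would be read as satisfactory internal consistency and concurrent validity.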
Results Sample characteristics A total of 150 participants were recruited and 141 (94%) completed the questionnaire. The majority were females ( n = 111, 78.7%), and the median age was 36 years (IQR 23–43). The level of education of the majority of the sample was a university degree or higher ( n = 77, 54.6%), followed by a high school diploma ( n = 32, 22.7%). Concerning marital status, 76 (53.9%) were married or in a de facto union, 39 (27.7%) were single, and 18 (12.8%) were divorced. The mean GHQ-12 score with the bimodal method was 4.8 points (SD 3.4). Using two of the most commonly used cut-offs in the literature for the bimodal scoring method, i.e., > 3 and > 4, the number of people screened positive was 97 (68.8%) and 85 (60.3%), respectively. Those with a score equal to or higher than the mean GHQ-12 score for the whole study sample were 72 (51.1%). Table 2 shows descriptive statistics for the single items of the GHQ-12 based on both scoring methods (bimodal and Likert). The mean score on the ITQ subscale for PTSD was 14.0 points (SD 8.3). People with an ITQ score suggestive of PTSD were 59 (41.8%). Concurrent validity Validity was assessed through the Pearson correlation coefficient between the total score on the GHQ-12 and the ITQ subscale for PTSD. A positive significant correlation was found, with a coefficient “ρ” equal to 0.53 (0.95CI 0.40–0.64, p < 0.001). When looking at the association between the single items and a score suggestive of PTSD on the ITQ, eight items showed a positive significant association (Table 2 ). The items more frequently associated with PTSD and with the highest difference between positive and negative PTSD proportions were item 7 (74.6%), item 5 (83.1%), and item 1 (55.9%). Construct validity The results of the CFA are shown in Table 3 . The bimodal scoring method had good indices for both the single- (model B1, TLI = 0.98, RMSEA = 0.05[0.90CI 0.00–0.07]) and two-factor models (model B2, TLI = 0.98, RMSEA = 0.04[0.90CI 0.00–0.07]). In model B2 the two subscales had a high correlation index, equal to 0.88. Both the B1 and B2 models achieved a satisfactory AVE above 0.50. In the Likert scoring method, the single-factor model (model L1) did not fit the data well (TLI = 0.77, RMSEA = 0.11[0.90CI 0.09–0.13]). The two-factor model (model L2) showed better and acceptable indices (TLI = 0.58, RMSEA = 0.09[0.90CI 0.06–0.11]). Model L2 had a correlation of 0.75 between the two subscales. Figure 2 shows the standardized parameter estimates for all four models. Internal consistency The mean score of the GHQ-12 items was 0.40 (SD = 0.29). The items with the highest frequency of positive results (i.e., a score equal to 1) were item 5 (66%), item 2 (55%), and item 7 (53%) (Table 2 ). Reliability was tested with Cronbach's alpha and McDonald's omega coefficients, which were found to be 0.84 (0.95CI 0.80–0.88) and 0.85 (0.95CI 0.81–0.88) in the whole sample, respectively. The alpha and omega coefficients in the two subscales were 0.78 [0.95CI 0.71–0.83] and 0.78 [0.95CI 0.72–0.83] for ‘anxiety/depression’ and 0.72 [0.95CI 0.64–0.79] and 0.73 [0.95CI 0.66–0.79] for ‘social dysfunction’. Stratifying by sex, both alpha and omega coefficients remained consistent with those in the whole sample (alpha: female = 0.84[0.95CI 0.80–0.88], male = 0.85[0.95CI 0.76–0.92]; omega: female = 0.84[0.95CI 0.75–0.92], male = 0.84[0.95CI 0.79–0.88]). The items with the highest correlation were q7 and q12 (0.695[0.95CI 0.502;0.835]), while those with the lowest were q1 and q11 (0.114[0.95CI -0.242;0.468]) (Fig. 3 ).
Discussion The present study showed that the Ukrainian translation of GHQ-12 had good reliability and validity and a two-factor structure consistent with the original English version. The GHQ-12 is a well-known instrument to assess the general well-being and mental health, used in different populations and settings, including low- and middle-income countries [10]. It was widely used in several study designs (cross-sectional, RCT, and longitudinal) among migrants and refugees to screen for mental health disorders [ 24 – 26 ]. Internal reliability of the Ukrainian translation of the GHQ-12 was overall satisfactory in our study (alpha = 0.84). The Ukrainian GHQ-12 also showed a good level of concurrent validity through the correlation with the ITQ (ρ = 0.53). The GHQ-12 has previously been used with satisfactory results for screening refugees for PTSD [ 27 ]. This mental disorder is one of those that most affect refugees and one of the main ones examined in the literature on this population [ 28 ]. PTSD seriously endangers both the mental and general health of persons, as it can lead to self-harm and suicidal ideation and attempts. Only one study has previously used the GHQ-12 in Ukrainian refugees, although it only evaluated its internal reliability, finding an alpha of 0.83, as in the present study. It didn’t explore the validity and factorial structure of the Ukrainian translation of the GHQ-12 [ 2 ]. In the confirmatory factor analysis, both single- (model B1) and two-factor (model B2) structures with bimodal scoring methods fitted data well. The bimodal scoring system has previously proven its validity as a screening tool, as in the case of the present study, whereas the Likert method may be more useful for the follow-up of patients over time [ 29 ]. The GHQ-12 was originally developed as a unitary screening measure and the high correlation found in our sample between the two subscales in model B2 and L2 supports this structure. Several multidimensional factor constructions comprising two to three factors have been proposed and tested [ 30 ]. A multicentric study of psychological disorders in general health by WHO found a substantial factor variation between the 15 centres involved. However, after rotation two factors expressing “Anxiety/Depression” and “Social Dysfunction” were found for the GHQ-12 [ 17 ]. Another study comparing different factorial structures for the GHQ-12 found that a unidimensional model, with a general factor representing the commonality between all items and two orthogonal specific factors reflecting the common variance due to wording effects (negatively and positively worded items) and representing the two previously identified factors, was the best fit [ 31 ]. The present study showed that the Ukrainian translation of GHQ-12 is consistent with the factor structures proposed in the literature and very similar to that of the original English version. Using a binary scoring method, as the original Goldberg version of the GHQ-12, we found a mean score of 4.8 points. Different cut-offs have been proposed in the literature depending on the population involved, mainly ranging between 2 and 4 [ 18 ]. As a rule of thumb, it has been proposed to use the mean score for the overall population of respondents as a rough guide to the best threshold [ 32 ]. The cut-off of screening tools is also driven by the prevalence of a specific disorder in a given population [ 32 ]. In the present study, the sample consisted of Ukrainian refugees. 
This is a well-known at-risk population for mental health disorders, and we therefore found a higher threshold than that proposed in the literature. Adopting a 5-point cut-off, 51% of the sample showed a suggestive score for mental distress. The GHQ, even in its short 12-item form, is therefore a robust self-report tool for screening people who may be at risk for mental health disorders, especially adolescent and young people [ 33 ]. For this reason, it could be particularly useful in the Ukrainian refugee population, made up mainly of young women and children. Simple tools to investigate the prevalence of people at risk of mental health problems are widely used such as the Refugee Health Screener-15 (RHS-15) as a general measure of emotional distress and the Primary Care PTSD Screen for DSM-5 (PC-PTSD-5). They have the advantage of being rapid and easy to be administered, allowing even non-specialized personnel to use them [ 34 ]. These questionnaires were used in a school setting to screen Ukrainian refugee adolescents, finding a prevalence of 57.1% and 45.2% above the critical cut-off of RHS-15 and PC-PTSD-5, respectively [ 35 ]. The GHQ in its short 12-item form can therefore complement these instruments and be used not only by clinicians but also by schools, nonprofit organizations, or social service personnel as a self-report tool to identify persons at risk for mental health at an early stage and to provide them with timely assistance and support. This study has some limitations. First of all, it was conducted only in the province of Verona, so it may not be representative of the entire population of Ukrainian refugees. Likewise, it involved a particularly high-risk category, so it may not be generalizable to the entire Ukrainian population. Our sample was also unbalanced between males and females, with the latter being the most represented. This sample however reflects the composition of the study population. It would be useful to repeat this in a larger and more general sample of people in Ukraine to see if the results are confirmed. Moreover, a larger sample would have offered the possibility of conducting an analysis based on the item response theory to assess the invariance of the results concerning the characteristics of the participants. Secondly, the validation was assessed on a specific mental disorder, and this could be restrictive compared to the general health explored by the GHQ-12. Future studies, across different regions, should explore how the different cultural contexts may influence the responses and thus the validation of the questionnaire. Furthermore, the use of emerging techniques, such as clinimetric analysis, would be important to apply to verify the clinical properties of the Ukrainian version of the GHQ-12 [ 36 ].
Conclusions The present study showed that the Ukrainian translation of the GHQ-12 had good internal reliability and concurrent validity and showed a factor structure consistent with the original version. It provides a useful tool for assessing general well-being in an at-risk population such as Ukrainian refugees. To the best of our knowledge, this is the first study to provide a comprehensive validation of the Ukrainian translation of the GHQ-12. Future studies may use it on larger population samples both as a screening tool and to study factors associated with general and mental well-being in the resettlement country to improve reception and integration services for this vulnerable population.
Following the Russian-Ukrainian conflict, the well-being of millions of Ukrainians has been jeopardised. This study aims to translate and test the psychometric features of the Ukrainian version of the General Health Questionnaire 12 (GHQ-12). The study included Ukrainian refugees housed in Verona (Italy) between November/2022 and February/2023. The Ukrainian translation was obtained through a ‘forward-backward’ translation. Questionnaire was completed by 141 refugees (females: 78.7%). Median age was 36 years (IQR 23–43). Individuals with a score suggestive of psychological distress were 97 (68.8%). Cronbach’s coefficient was 0.84 (0.95CI 0.80–0.88). According to confirmatory factor analysis, both single- (modelB1) and two-factor (model B2) structures with bimodal scoring method fitted the data satisfactorily. The two factors of model B2 had a 0.88 correlation. Pearson coefficient showed a positive significant correlation between the GHQ-12 and International Trauma Questionnaire scores (ρ = 0.53, 0.95CI 0.40–0.64, p < 0.001). The GHQ-12 Ukrainian translation showed good psychometric features being a reliable and valid instrument to assess Ukrainian refugees’ general well-being. Supplementary Information The online version contains supplementary material available at 10.1186/s12955-024-02226-1. Keywords
Supplementary Information
Abbreviations CFA: Confirmatory Factor Analysis; GHQ: General Health Questionnaire; IQR: Inter Quartile Range; ITQ: International Trauma Questionnaire; PTSD: Post-Traumatic Stress Disorder; SD: Standard Deviation; WHO: World Health Organization. Authors’ contributions RB conceptualized and designed the study and made substantial contributions to original writing. AS contributed to conceptualization, data collection and made substantial contributions to original writing. RB and MM were responsible for data analysis. MS was responsible for the translation process and contributed to data collection. EP and LB contributed to conceptualization and to data collection. GV, ST and MR reviewed the study critically. FM conceptualized and designed the study and reviewed it critically. Funding None. Availability of data and materials The datasets generated and/or analysed during the current study are available from the corresponding author upon reasonable request. Declarations Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:48
Health Qual Life Outcomes. 2024 Jan 13; 22:6
oa_package/be/c3/PMC10788012.tar.gz
PMC10788013
38221621
Introduction The human gastrointestinal tract is home to billions of microorganisms that interact symbiotically with their hosts and play a critical role in both health and illness. H. pylori, a gastrointestinal microorganism, is one of the most studied bacteria. The network of interactions that H. pylori have constituted with its host is closely linked to all systems of the organism [ 1 ]. Numerous systemic illnesses, including neural, hematological, cardiovascular, dermatological, and allergic diseases are linked to H. pylori [ 2 , 3 ]. Among them, the relationship between H. pylori infection and the risk of allergic diseases is becoming better known and is of some concern to the general public. The interaction of the human immune system and environmental factors leads to allergic diseases, and given the substantial regional heterogeneity of these diseases, it is likely that environmental factors play a significant role in their etiology [ 4 ]. As a result, growing evidence from research demonstrating an association between early H. pylori exposure and allergic diseases suggests that early life exposure to H. pylori may act as a preventative factor in the development of allergic disease [ 5 , 6 ]. However, only a small number of studies have described the immune response to H. pylori and the relationship between the bacteria and the gut microbiota. This paper explored the relationship between H. pylori infection and asthma in terms of immunity and gut microbiota, as well as the use of H. pylori and its related components in the treatment of asthma. It also introduced the most recent developments in the correlation between H. pylori infection and allergic diseases.
Conclusions Many domestic and international scholars have made significant progress in recent years by conducting multi-dimensional and multi-angle discussions and studies on the relationship between extra gastric disorders and H. pylori. In terms of microbiota and immunity, this review summarizes recent developments in H. pylori infection and asthma. Topics covered include the relevance of H. pylori to allergic disease, potential mechanisms by which H. pylori infection exerts a protective effect on asthma, and the use of H. pylori in the treatment of asthma. According to the majority of studies, H. pylori infection has a strong negative correlation with the risk of a number of allergic disorders, including asthma and eosinophilic esophagitis. The hygiene hypothesis suggests that exposure to certain infectious agents may prevent the development of allergic diseases such as asthma, and therefore it is hypothesized that H. pylori infection would exert a protective effect against asthma by promoting immune tolerance. Through a variety of mechanisms, H. pylori infection alters the composition and abundance of the gut microbiota, which in turn exerts a preventive and protective effect against asthma through the gut-pulmonary axis. Dendritic cells can be reprogrammed by H. pylori to become tolerogenic dendritic cells, and tolerogenic dendritic cells promote the production of Treg with high inhibitory activity. Both Th1/Th2 balance and Th17/Treg balance play a significant role in the onset and persistence of asthma and can prevent and protect against asthma when Th1 and Treg are dominant in the ratio. Many studies have demonstrated the great potential of H. pylori neutrophil-activating protein(NAP)in the prevention and treatment of allergic diseases such as asthma. H. pylori, its components, or extracts have certain preventive and therapeutic effects on asthma. It may represent a new way to treat asthma in the future, but it is not widely known by clinical staff. The eradication of H. pylori in asthmatic patients remains to be discussed. There are still some unanswered questions despite the fact that the studies mentioned above showed an association between H. pylori infection and the risk of allergic disease. The detailed mechanisms that give rise to these correlations are not clear. The mechanisms may be closely interconnected. The hygiene hypothesis is a significant theory rooted in epidemiology. This hypothesis not only explains the negative correlation between H. pylori and asthma from an epidemiological perspective but may also account for other mechanisms, such as alterations in the gut microbiota. Changes in the gut microbiota can affect the balance of Th1/Th2 and Treg/Th17. Tolerogenic dendritic cells can promote the differentiation of T cells into regulatory T cells. Regulatory T cells can not only directly protect against asthma but also influence the balance of Th1/Th2, which is crucial in the onset and progression of asthma. It’s also unknown if there are any confounding variables besides H. pylori that affect this correlation. Large-scale cohort studies are needed to determine whether the effect of H. pylori on allergic disease is through mediating variables. Further fundamental experimental investigations will be required in the future to investigate and assess these problems and to develop effective strategies for the prevention and treatment of allergic diseases.
H. pylori is a gram-negative bacterium that is usually acquired in childhood and can persistently colonize the gastric mucosa of humans, affecting approximately half of the world’s population. In recent years, the prevalence of H. pylori infection has steadily declined while the risk of allergic diseases has steadily climbed. As a result, epidemiological research indicates a strong negative association between the two. Moreover, numerous experimental studies have demonstrated that eradicating H. pylori increases the risk of allergic diseases. Hence, it is hypothesized that H. pylori infection may act as a safeguard against allergic diseases. The hygiene hypothesis, alterations in the gut microbiota, the development of tolerogenic dendritic cells, and helper T cells could all be involved in H. pylori’s ability to protect against asthma. Furthermore, studies in mouse models have indicated that H. pylori and its extracts are crucial in the management of asthma. We reviewed the in-depth studies on the most recent developments in the relationship between H. pylori infection and allergic diseases, and we discussed potential mechanisms of the infection’s protective effect on asthma in terms of microbiota and immunity. We also investigated the prospect of the application of H. pylori and its related components in asthma, so as to provide a new perspective for the prevention or treatment of allergic diseases. Keywords
Association between H. Pylori infection and the risk of allergic diseases Association between H. pylori and asthma Asthma is a heterogeneous disease with chronic airway inflammation, bronchial hyperresponsiveness and airway remodeling, and its pathogenesis is very complex [ 7 ]. In recent years there have been many studies on the association of H. pylori infection with the risk of asthma. Epidemiological studies have shown a decline in the prevalence of H. pylori infection in the Western World and in some developing countries in contrast to an increase in the incidence of asthma and allergic diseases [ 8 ]. Studies have demonstrated that H. pylori infection can prevent asthma [ 9 , 10 ], and it has been noted that CagA-positive H. pylori infection is significantly negatively associated with the risk of asthma [ 11 , 12 ] and may even be negatively associated with the severity of asthma [ 11 ]. A meta-analysis of 18 cross-sectional studies found that H. pylori infection, especially CagA-positive H. pylori infection, was inversely associated with the prevalence of asthma [ 13 ]. Another meta-analysis of 24 studies (8 case-control studies and 16 cross-sectional studies) reached the same conclusion [ 12 ]. However, there are questions about the negative association between H. pylori infection and the risk of asthma. Several studies suggest no correlation between H. pylori infection and asthma risk and do not support the notion that H. pylori infection has a protective effect against asthma [ 14 – 16 ]. The aforementioned study analyzed the correlation between H. pylori IgG antibody positivity and the incidence of asthma. A positive H. pylori IgG antibody indicates a previous H. pylori infection but does not necessarily imply a current infection. Therefore, we believe that further studies and experiments are necessary to support and confirm this discovery. Research by Wang et al. pointed out that H. pylori infection was significantly associated with a 1.38-fold increased risk of asthma. This indicates that the risk of asthma is significantly higher in patients with H. pylori infection than in subjects without H. pylori infection [ 17 ]. However, the methods of detecting H. pylori and possible H. pylori treatment during the follow-up were not fully addressed. Socioeconomic factors, as potential confounding factors, had not been taken into account in the study. We noticed that a relevant article raised doubts about the conclusion of the study [ 18 ]. Although the findings are slightly controversial to some extent, the negative association of H. pylori infection with asthma risk is supported by most scholars. Association between H. pylori and eosinophilic esophagitis Eosinophilic esophagitis (EoE) is a chronic, immune-mediated inflammatory disease whose pathogenesis is not fully understood. The histology is characterized by eosinophil-dominated inflammation with clinical symptoms associated with esophageal dysfunction [ 19 , 20 ]. Emerging evidence suggests that modifiable host factors and environmental allergen exposure may play a key role in the pathogenesis of eosinophilic esophagitis [ 21 ]. The gradual increase in the incidence of eosinophilic esophagitis and the decrease in the rate of H. pylori infection in recent years have given rise to speculation and discussion about the relationship between the two. A strong negative correlation between the presence of H. pylori and esophageal eosinophilia has been demonstrated [ 22 ]. 
The results of case-control studies and meta-analyses suggest that H. pylori infection is associated with a reduced risk of eosinophilic esophagitis [ 23 , 24 ], but the protective effect of H. pylori infection against eosinophilic esophagitis has also been questioned as an uncritical claim that requires the exclusion of associated confounding factors and the demonstration of a causal rather than a coincidental trend relationship [ 25 , 26 ]. Association between H. pylori and food allergies or allergic rhinitis The relationship between H. pylori infection and allergic rhinitis has rarely been studied. A study in Japan indicated a negative correlation between H. pylori infection and the incidence of allergic rhinitis in young people [ 27 ]. However, there is no further evidence to support this conclusion. Similarly, there has been limited discussion about the relationship between H. pylori infection and food allergies. A systematic review described the relationship between them but did not come to a conclusive result [ 28 ]. However, subsequent studies have shown that H. pylori infection has a protective effect against food allergies, including ovalbumin allergy and peanut allergy [ 29 , 30 ]. Further research is needed to fully understand the mechanisms behind this relationship and to determine whether H. pylori infection could potentially be used as a treatment or preventative measure for food allergies or allergic rhinitis. Mechanism of H. pylori protection against asthma Genetics and environment are two factors essential for the development of asthma. Genetics determines a patient's particular allergies and susceptibility to asthma, while whether such patients develop the disease is highly related to environmental factors. H. pylori infection showed a significant negative association with asthma risk, but as an environmental factor, the specific pathophysiological mechanism by which it exerts a protective effect on asthma remains unclear. From the analysis of some previously published articles on the subject, it is hypothesized that H. pylori may exert its protective effect against asthma through several pathways (Fig. 1 ). Application of the hygiene hypothesis to the protective effect of H. pylori on asthma The “hygiene hypothesis,” which has been adopted by the infectious and chronic disease research community since the early 1990s, proposes that exposure to certain infectious agents may prevent the development of allergic diseases [ 12 ]. Poor hygiene and lower socioeconomic status increase the risk of exposure to bacteria or other antigens, and therefore to H. pylori infection [ 2 , 31 ]. In recent years, with the improvement of people’s quality of life, hygiene conditions, and socioeconomic status, the rate of H. pylori infection has gradually decreased, and the low prevalence of H. pylori infection could explain the recent high prevalence of allergic diseases [ 28 ]. Lack of exposure to infection early in life leads to defective immune tolerance, which in turn leads to increased susceptibility to allergic diseases such as asthma [ 21 , 32 ], supporting the hypothesis that H. pylori infection exerts a protective effect against allergic diseases such as asthma by promoting immune tolerance. It has been pointed out that the hygiene hypothesis can explain the negative correlation between H. pylori infection and allergic diseases. However, it only applies to IgE-mediated allergic diseases and not to non-IgE-mediated allergic diseases [ 33 ].
IgE-mediated allergic diseases are caused by immunoglobulin E (IgE)-mediated allergic reactions and are the most common type of allergy. Non-IgE-mediated allergic diseases are mediated by other immune cells; their pathogenesis is very complex, but their incidence is low. Asthma is an IgE-mediated allergic disease, so the hygiene hypothesis may explain the negative association between H. pylori infection and asthma. Alterations in the gut microbiota The composition of the gut microbiota may regulate the onset and development of H. pylori-associated diseases. The composition of the gut microbiota influences the immune regulation of the body, and microbial drivers have significant effects on immune development, asthma susceptibility, and asthma pathogenesis [ 34 ]. It is known that H. pylori strictly colonizes the human gastric mucosa; theoretically, H. pylori in the stomach can affect the intestinal microbiota by interacting with the body’s immune system and by altering the local gastric environment. Alterations in the local gastric environment include reduced gastric acid secretion and hypergastrinemia during H. pylori infection; the low-acid environment, which allows acid-sensitive bacteria to reach the distal intestine, is probably the most important pathway of effect, leading to alterations in the composition and abundance of the gut microbiota [ 1 ]. Even perinatal H. pylori exposure can have a significant impact on the composition and diversity of the neonatal gastrointestinal microbiota [ 35 ]. Accordingly, it can be concluded that H. pylori infection affects the composition and abundance of the gut microbiota. Ecological dysregulation caused by alterations in the composition and abundance of the gut microbiota plays a role in asthma [ 36 , 37 ], especially in the development and progression of asthma in children [ 38 – 40 ]. The gut microbiota exerts its influence on asthma through several known pathways. The gut-pulmonary axis is an important link between the gut microbiota and the respiratory tract [ 36 ], and the metabolites produced by the gut microbiota may have an impact on the development of asthma through the gut-pulmonary axis pathway [ 41 , 42 ]. The gut microbiota is a key regulator of the intestinal epithelial barrier and the immune response [ 43 ], which can act on asthma through the induction of tolerance and allergen penetration through the epithelial barrier [ 44 ]. In addition, short-chain fatty acids (SCFA) produced by dietary fiber metabolism by the gut microbiota can prevent asthma by affecting the host G protein-coupled receptor GPR41, shaping pulmonary immune cell differentiation, and improving allergic airway inflammation [ 45 ]. Studies on the relationship between gut microbiota and asthma development in mothers and infants have shown that alterations in maternal gut microbiota composition affect the risk of asthma in infants [ 46 ]. Based on the conclusion that gut microecological dysbiosis has an impact on the development of asthma, it can be hypothesized that the gut microbiota could serve as a target for asthma treatment: altering its composition and abundance could exert a therapeutic effect. Clinically used probiotics can have a preventive or therapeutic effect on asthma by regulating the gut microbiota [ 47 , 48 ]. 
Some studies have also shown that alterations in the composition and abundance of the gut microbiota are not associated with the development of asthma. In a mouse experiment, the composition of the gut microbiota was found to be unrelated to airway hyperresponsiveness as reflected by Penh values [ 49 ]. In a cohort study of adults, no significant differences were found in the composition of the fecal microbiota between asthmatic and non-asthmatic patients [ 39 ]. These results may reflect the fact that the fecal microbiota underrepresents the gut microbiota, and that the adult immune system is fully developed, so alterations in the gut microbiota affect it only slightly, if at all. H. pylori infection can affect the composition and abundance of the gut microbiota through interactions with the body’s immune system and changes in the local gastric environment. The gut microbiota uses the gut-pulmonary axis as an important linkage pathway to exert a protective effect against asthma, either through metabolites or by modulating immunity (Fig. 2 ). However, a limitation of this research area is that in most of the relevant studies, the fecal microbiota is used instead of the gut microbiota, ignoring the microorganisms remaining in the gut, which may cause bias in the results. In the H. pylori-gut microbiota-asthma liaison pathway, setting aside this possible bias, the gut microbiota can serve as an emerging target for the prevention and treatment of asthma: modification of the gut microbiota by certain drugs or treatments may in turn exert a protective effect against asthma. The critical role of tolerogenic dendritic cells in the protection of asthma by H. pylori H. pylori inhibits lipopolysaccharide-induced dendritic cell (DC) maturation and is able to recode dendritic cells into tolerogenic dendritic cells [ 50 , 51 ]. Some findings show that tolerogenic dendritic cells do not induce effector functions of T cells, but rather convert naive T cells into FoxP3 + Treg with high suppressive activity. FoxP3 + Treg can prevent airway inflammation and hyperresponsiveness, thus exerting a protective effect against asthma [ 50 ]. H. pylori can produce urease, which activates NLRP3, a component of the cytoplasmic inflammasome, and stimulates the TLR2/NLRP3/IL-18 axis [ 52 ]. IL-18 in this axis is a key cytokine for Treg function: IL-18 produced by dendritic cells is the basis not only for the conversion of CD4 + T cells into Treg but also for Treg to perform their function [ 2 ]. γ-glutamyl transpeptidase (GGT) and vacuolating cytotoxin (VacA) are virulence factors of H. pylori, and it has been demonstrated that isogenic H. pylori mutants lacking GGT or VacA cannot prevent LPS-induced dendritic cell maturation or drive dendritic cell tolerance; thus, these two virulence factors play a key role in dendritic cell tolerization [ 53 ]. Based on the promoting effect of tolerogenic dendritic cells on Treg formation and the protective effect of Treg in asthma, it can be inferred that transforming sufficient numbers of dendritic cells into tolerogenic dendritic cells and maintaining their tolerant status is key for H. pylori to exert a protective effect against asthma. The immune balance of Th1/Th2 and Treg/Th17 cells A large number of cells, such as eosinophils, neutrophils, mast cells, and T lymphocytes, are involved in the airway inflammation of asthma [ 54 ]. 
Among them, CD4 + T cells are the main lymphocytes that infiltrate the airways and play a crucial role in controlling asthma-related inflammation. Naive CD4 + T cells can differentiate into Th1, Th2, Th17, and Treg. Th1 cells produce IFN-γ, while Th2 cells produce IL-4, IL-5, and IL-13 [ 55 ]. Th2-biased immune responses in genetically susceptible individuals may cause allergic diseases such as asthma [ 56 ]. It has been claimed that H. pylori infection affects the Th1/Th2 balance by influencing gastric hormones: when somatostatin (growth-inhibiting hormone) levels decrease and gastrin production increases, the Th2 response is suppressed and the Th1 response is promoted [ 11 ]. The mechanism by which H. pylori prevents and protects against asthma may be to drive the Th1 inflammatory response and inhibit the Th2-mediated allergic asthmatic response [ 4 , 5 , 14 ]. Upregulation of the Th1 response or downregulation of the Th2 response therefore seems to be a potential target for the treatment of asthma, but it still needs to be explored and tested clinically. Treg and Th17 cells are functionally antagonistic to each other, and the balance of Treg and Th17 cells plays an important role in the development and progression of H. pylori infection and its associated diseases [ 57 ]. Excess IL-17 has been found in sputum, bronchoalveolar lavage fluid (BALF), and lung tissue in chronic allergic airway inflammation [ 54 ]. It is hypothesized that both the Th1/Th2 balance and the Th17/Treg balance play a key role in the onset and persistence of asthma, and that asthma can be prevented and protected against when Th1 and Treg are dominant in the ratio. One study examined the relationship between Th1 and Treg responses to H. pylori and allergen-specific IgE levels. The results showed a significant increase in IL-10(+) Treg in the peripheral blood of H. pylori-infected individuals, which correlated with a decrease in plasma IgE concentrations [ 58 ]. Th2 cells and their cytokines are the basis of inflammation in asthma pathogenesis, and H. pylori exerts a protective effect against asthma by promoting the Th1 response and inhibiting the Th2 response. Th17 cells and their cytokines are also important in controlling asthma-associated inflammation, and Treg not only antagonize Th17 but also directly suppress airway inflammation and hyperresponsiveness in asthma. Enhancing the Treg response is therefore a currently available and very promising target for asthma treatment. H. pylori affects the onset and development of asthma by influencing the balance of Th1/Th2 and Treg/Th17. This is one of the potential mechanisms, but it is still in the developmental stage, and the exact mechanism remains to be determined. Several factors influence the balance between Th1/Th2 and Treg/Th17. The Th1 response is mainly associated with autoimmune reactions, while the Th2 response is primarily linked to allergic reactions. Bacterial or viral infections can disturb these balances, and H. pylori may be no exception. Further experiments are needed to explore the distinctiveness and dependability of this mechanism. Helicobacter pylori in the treatment of asthma It has been shown that the protective effect of H. pylori infection against allergic airway disease does not require live bacteria and that treatment with H. pylori extracts is also effective in suppressing allergic airway disease [ 59 ]. Even perinatal exposure to H. 
pylori extract or its immunomodulator VacA can exert a protective effect against allergic airway disease, and this powerful protective effect occurs not only in the first but even in the second generation of offspring [ 35 ]. This shows the great scope for the development of H. pylori and its extracts in the prevention and treatment of allergic airway diseases such as asthma, and we may try to intervene in suspected asthma in newborns through perinatal exposure. Helicobacter pylori neutrophil-activating protein (Hp-NAP), the main virulence factor of H. pylori, is a modulator with anti-Th2 inflammatory activity for the prevention of IgE-mediated allergic reactions [ 60 ]. Hp-NAP is a member of an extensive superfamily of ferritin-like proteins, which are homopolymers of 12 tetrahelical bundle subunits containing iron ligands, and whose members mostly have DNA-protective functions under starvation conditions [ 60 ]. Hp-NAP plays an important role in the protection conferred by H. pylori infection against allergic diseases and is one of the candidates for a new strategy of prevention and protection against allergic diseases. H. pylori neutrophil-activating protein was shown to prevent allergic asthma in mice. Experimental mice exposed to purified rNAP by intraperitoneal injection or inhalation showed a significant reduction of eosinophils in lung tissue and bronchoalveolar lavage fluid (BALF) after sensitization and challenge with nebulized ovalbumin (OVA), as well as a significant reduction of inflammatory infiltration in lung tissue. In addition, the treatment group showed lower levels of IL-4 and IL-13, higher levels of IL-10 and IFN-γ, and lower levels of serum IgE compared to the control group [ 61 ]. A similar study showed the same results: a fusion protein of cholera toxin B (CTB) and neutrophil-activating protein (NAP), CTB-NAP, was expressed on the surface of Bacillus subtilis spores, and oral administration of the recombinant CTB-NAP spores was effective in preventing asthma in mice [ 60 ]. The prevention and treatment of asthma are systematic undertakings: treatment focuses not only on the acute onset of symptoms but also on preventing recurrence during the clinical remission stage. Therefore, the above studies show the great potential of NAP in the prevention and treatment of allergic diseases such as asthma, but future experiments are still needed to verify whether NAP causes side effects, toxic effects, or other adverse reactions in humans. Another substance, human protein S, shifts the Th1/Th2 balance toward Th1 and promotes Th1 cytokine secretion, exerting a powerful protective effect against the development of allergic asthma [ 62 ]. It is clinically recognized that H. pylori eradication reduces the risk of gastric cancer, but based on its preventive and protective effects on allergic diseases such as asthma and other systemic diseases, the issue of H. pylori eradication should be considered with caution. Some studies have shown that eradication of H. pylori can restore the intestinal flora to a state similar to that of uninfected individuals [ 63 – 65 ], and others have shown that eradication treatment leads to short-term disruption of the intestinal flora, but that this disruption is restored within weeks to months [ 66 – 68 ]. The use of H. pylori in the treatment of asthma broadens the scope of research on the association between H. pylori infection and asthma risk and offers a novel perspective on the importance of H. pylori infection in asthma. However, the application of H. 
pylori and its extracts in the treatment of asthma still requires a large number of clinical trials to verify its safety and effectiveness and to exclude its possible adverse reactions.
CC BY
no
2024-01-15 23:43:48
Allergy Asthma Clin Immunol. 2024 Jan 14; 20:4
oa_package/c5/bd/PMC10788013.tar.gz
PMC10788014
38218850
Introduction Background and rationale Submucosal tumors (SMTs) histologically include both epithelial and nonepithelial tumors. Nonepithelial tumors typically present as protruding lesions or masses covered with intact mucosa [ 1 ]. Large SMTs (≥2 cm) in the stomach may lead to early-stage complications such as bleeding or perforation, resulting in symptoms such as abdominal bloating, pain, hematemesis, or melena, which prompt patients to seek medical attention. In contrast, small gastric SMTs (<2 cm) are typically discovered incidentally during endoscopy without any apparent symptoms [ 2 , 3 ]. The risk of small gastric SMTs originating from the muscularis propria layer (SMT-MPs) has been underestimated [ 4 ]. Studies suggest that 60–70% of SMT-MPs are pathologically identified as gastrointestinal stromal tumors (GISTs) and categorized as potential malignancies regardless of their size [ 2 , 3 ]. However, although surgeons propose resection for large gastric SMT-MPs, clinical controversy persists [ 1 , 5 , 6 ]. In a retrospective study conducted by Ge QC et al. [ 7 ], a cutoff value of 1.48 cm was established to predict the malignant potential of GISTs. Tumors larger than 1.48 cm were associated with greater malignant potential, warranting intensive surveillance or endoscopic surgery. According to the modified National Institutes of Health (NIH) criteria, the risk of small GISTs varies only with the mitotic count. The classifications include very-low risk (mitotic count ≤5), intermediate risk (mitotic count between 5 and 10), and high risk (mitotic count >10). Some advocate for imaging surveillance as the primary approach, suggesting resection only when tumor progression is confirmed. This includes cases where the tumor shows signs of increasing size or irregular borders, or is pathologically confirmed as malignant [ 8 ]. Although endoscopic ultrasound (EUS) is a common method for diagnosing gastrointestinal superficial lesions, its role in diagnosing SMT-MPs has not been determined. Additionally, consistent observation of dynamic changes in tumor size and border for patients with SMT-MPs < 16 mm is challenging. Although EUS-guided fine needle aspiration (EUS-FNA) is often employed for pathology, it may not fully reveal the pathological features of GISTs due to heterogeneity. In conclusion, en bloc resection is crucial for both diagnosis and prognosis [ 9 ]. Endoscopic resection, in comparison to open or laparoscopic surgery, yields a shorter operation duration, reduced blood loss, and a shorter average hospitalization duration [ 9 – 14 ]. Endoscopic submucosal dissection (ESD) has been demonstrated to be feasible for treating gastric SMTs. Guidelines from the European Society of Gastrointestinal Endoscopy (ESGE) and the American Society for Gastrointestinal Endoscopy (ASGE) recommend ESD as the preferred treatment for most gastric superficial neoplastic lesions [ 15 , 16 ]. However, its effectiveness is limited for lesions originating from deeper layers such as the muscularis propria, increasing the complexity of the operation and the risk of complications. A systematic review by Ichiro Oda et al., encompassing more than 300 patients with early gastric cancer treated with ESD, identified several complications associated with the procedure. These complications included perforation (1.2–5.2%), bleeding (7% for immediate bleeding, up to 15.6% for delayed bleeding), stenosis (0.7–1.9%), aspiration pneumonia (0.8–1.6%), and air embolism, among others [ 17 ]. 
Although management strategies exist for these adverse events, they demand a higher level of technical expertise, adding to the financial burden and psychological stress on patients. Furthermore, ESD may not always achieve R0 resection, posing challenges for diagnosis and prognosis [ 18 , 19 ]. According to an analysis of 733 patients with upper gastrointestinal SMT-MPs, extensive tumor connection was identified as a risk factor for incomplete resection [ 20 ]. In a multicenter prospective study by Ye LP et al. involving 692 patients, the R0 resection rate was 84.2% [ 19 ]. Hence, a more judicious treatment approach is imperative. We previously introduced a novel endoscopic treatment termed precutting EBL. In this operation, an electrosurgical snare resection is performed to initially remove the mucosa surrounding the tumor, followed by the use of a transparent ligator to suction the tumor. A long-term, single-center study has substantiated its safety and efficacy. Precutting EBL was associated with a significantly shorter operation duration (16.6 min) and lower cost ($603.3 ± 5.9) than ESD ($2783 ± 601), and it was associated with fewer complications [ 21 ]. However, precutting EBL has two notable drawbacks. First, pathological specimens were not collected since the tumor spontaneously drops off after ligation, necessitating long-term follow-up for eradication verification. Second, like other ligate-and-let-go techniques, there is a risk of delayed perforation after the operation, which warrants careful consideration [ 22 ]. Given that we did not have sufficient samples to assess the possibility of delayed perforation, we opted to perform en bloc resection of lesions after ligation. Although this approach increases the chances of intraoperative perforation, we can promptly address this possibility if it occurs. Consequently, we propose a modified endoscopic operation for small gastric SMT-MPs, termed precutting EBLR. This involves an additional snare resection immediately after ligation. After thorough communication and detailed informed consent, we experimentally performed precutting EBLR on 16 patients. All patients showed rapid postoperative recovery, with no instances of delayed gastric bleeding or perforation. Importantly, subsequent pathological examination confirmed R0 resection in every patient. To further enhance the clinical validation of precutting EBLR, we opted to initiate a randomized controlled trial comparing the efficacy and safety of ESD and precutting EBLR for the treatment of small gastric SMT-MPs. Trial design and objective This was a single-center, open-label, parallel-group, randomized controlled trial. The main objective of this trial was to verify the efficacy and safety of precutting EBLR in the management of small gastric SMT-MPs. The trial began on December 1, 2022. The procedures included recruitment, informed consent, allocation of participants, intervention, data collection, data monitoring, and statistical analysis. All procedures were conducted at The First Affiliated Hospital of Chongqing Medical University (CQMU). A detailed flowchart for this trial is available in the Supplementary Materials . The drafting of this manuscript adheres to the SPIRIT reporting guidelines [ 23 ]. The SPIRIT checklist is attached as Additional file 2 in Supplementary Materials.
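The mitotic-count thresholds quoted in the Introduction above reduce to a simple classification rule. A minimal sketch of that rule, exactly as simplified in the text, is given below; note that the full modified NIH criteria also weigh tumor size and location, which this illustrative function deliberately omits, and its name and the per-50-high-power-field counting convention are assumptions rather than details from the protocol.

```python
def gist_risk_by_mitotic_count(mitotic_count: int) -> str:
    """Simplified GIST risk category using only the mitotic-count thresholds
    quoted in the Introduction (counts are conventionally per 50 high-power
    fields; the full modified NIH criteria also consider size and location)."""
    if mitotic_count <= 5:
        return "very-low risk"
    elif mitotic_count <= 10:
        return "intermediate risk"
    return "high risk"

# Example: a small SMT-MP with 7 mitoses would be flagged as intermediate risk.
print(gist_risk_by_mitotic_count(7))
```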
Methods Definition Several key definitions are outlined below: (1) efficacy: determined based on the operation duration, operation cost, and hospitalization duration; (2) operation cost: the sum of the operational and material expenses, retrievable from the hospital system; (3) operation duration: the time from the administration of preoperative anesthesia to the patient's recovery of consciousness in the postoperative period; (4) safety: the rate of intraoperative and postoperative complications; (5) en bloc resection: complete removal of a lesion without any segmentation or partial lesion remaining; (6) R0 resection: the absence of tumor tissue at the margins of the resected lesion; (7) postoperative gastric bleeding: hematemesis, melena, or an unexplained decrease in hemoglobin levels after the operation; (8) delayed perforation: the occurrence of sudden abdominal pain after the operation, accompanied by the detection of retroperitoneal pneumatosis or free gas through imaging examination; (9) postoperative recurrence: the discovery of a newly identified tumor-like lesion that is eventually proven to have the same pathology as the previously resected tumor; (10) hospitalization duration: the number of days from admission to discharge. Patient and public involvement No patients or members of the public were involved in any way in the design of this trial. Recruitment Patients with SMT-MPs admitted to The First Affiliated Hospital of CQMU were recruited. The inclusion criteria were as follows: (1) age between 18 and 80 years; (2) SMT-MPs with a diameter of less than 1.6 cm confirmed through EUS; (3) preoperative computed tomography (CT) indicating no evidence of tumor metastasis in the liver or other organs; (4) willingness of the patient to undergo treatment with either ESD or precutting EBLR; and (5) informed consent obtained. The exclusion criteria were as follows: (1) EUS data not available from The First Affiliated Hospital of CQMU or any other hospital; (2) contraindications for gastroscopy or endoscopic surgery, such as cardiopulmonary insufficiency rendering the patient unsuitable for endoscopy, shock, or gastrointestinal perforation; inability to cooperate due to psychiatric disorders; acute severe laryngopharyngeal disorders preventing endoscope insertion; acute stage of corrosive esophageal injury; coagulation disorders; or a hemorrhagic tendency; (3) pregnancy or breastfeeding; (4) presence of advanced malignant tumors; (5) allergy to oral lidocaine syrup or dimethicone oil; (6) current participation in other clinical trials; and (7) exercise of the option to withdraw from the trial at any time. The entire recruitment process is managed by postgraduates MfL and RY. All patients with SMT-MP who met the inclusion criteria were approached for potential participation. Despite the absence of specific literature and sample data on enrollment and recruitment rates, achieving the desired sample size is deemed feasible based on the current participant flow. As the principal investigator of this trial, Physician LD assumes the responsibility of conducting comprehensive communication and obtaining informed consent from patients. Each participant received a copy of the informed consent form detailing the trial's potential benefits and risks. After thoughtful consideration, participants are empowered to make independent decisions about their involvement. The recruitment and informed consent process is devoid of inducements or pressures, ensuring voluntary participation and preventing unwarranted termination or loss to follow-up. 
Participants retain the option to withdraw from the trial at any point. Allocation The sample size was determined based on the primary outcome, operation duration, using PASS 2011 software (NCSS, LLC, Kaysville, Utah, USA). Drawing from insights obtained from our previous single-arm retrospective study and a trial investigating ESD [ 24 ], with a power of 90% ( β = 0.1) and a significance level ( α ) of 0.05 [ 25 ], the estimated primary sample size was approximately 34 patients. To account for a potential dropout rate of 10–20%, the final sample size was set at 40 patients. Consequently, each group included 20 patients. Randomization was performed by SL using a random numbers table generated by IBM SPSS Statistics 23. From 1 to 40, each order was randomly assigned either the letter A or B with an equal probability. After the generation was completed, participants with the letter A were assigned to the Precutting EBLR group, while those with the letter B were assigned to the ESD group. Interventions The participating operators were required to meet the following criteria: Possess more than 5 years of experience in medicine Demonstrated ability to independently conduct endoscopic operations Operators with a history of performing no fewer than 300 endoscopic operations annually and a total of at least 1000 procedures Patients were required to undergo a comprehensive preoperative evaluation to ensure the absence of absolute surgical contraindications. The operation was immediately stopped in the event of an unexpected intraoperative contingency, and appropriate clinical measures were taken accordingly. A detailed analysis and documentation of the possible reasons for such contingencies will be conducted. Intraoperative and postoperative interventions may be adjusted following established guidelines [ 26 ]. Implementing ESD or precutting EBLR will not require alteration to usual care pathways (including the use of any medication), and these steps will continue for both trial arms. Regarding postdischarge interventions (regular intake of esomeprazole), we contacted each participant via phone to provide reminders for consistency in medication adherence. This approach was approved by the participants when they signed the informed consent form. ESD Initially, a high-viscosity solution is employed to elevate the submucosal covering of the tumor. Subsequently, electrocautery knives are used for dissecting the tissue beneath and surrounding the lesion, leaving a resection bed. In the event of a perforation, closure can be facilitated using titanium clips or a purse-string suture [ 18 ]. Following the completion of the operation, patients undergo a 48-h observation period during which they fast and regularly take esomeprazole (40 mg, twice daily). Upon discharge, patients are required to continue taking esomeprazole (40 mg, once daily) for 2 weeks. Precutting EBLR Initially, an electrosurgical snare is positioned on the tumor’s mucosal protuberance, followed by snare resection using an electrosurgical current set at 30 W to precut and remove the covering mucosa. Subsequently, an appropriate ligator is chosen based on the tumor size: a small ligator for tumors within 1 cm, a medium ligator for tumors ranging from 1 to 1.2 cm, and a large ligator for tumors greater than 1.2 cm. After proper ligator installation, the tumor is drawn from the surface and effectively removed using an electrosurgical snare. 
Closure of the perforation caused by ligation is assisted by employing three-armed clips or titanium clips. Finally, the excised tumor is sent for pathological examination. Postsurgery, fast for 12–24 h is required, followed by a liquid diet for 2–3 days and esomeprazole (40 mg, once daily) for 2 weeks. The steps of the operation and postoperative pathology are shown in Fig. 1 . Devices CT, EUS (OLYMPUS EU-M2000, 20 MHz, Japan), and standard endoscopy (AOHUA AQ200L, China) were used for preoperative assessment and follow-up. Standard endoscopes (AOHUA AQ200L, China) and loop snares (MICRO-TECH (NANJING) Co., Ltd., China) were used for mucosal protuberance precutting. Small ligators (TIANJIN TY, Medical Organism Material Research Company Ltd., China) were used for tumors ≤ 10 mm in length; medium ligators (OTSC cap plus ligation band, Ovesco Endoscopy AG, Tubingen, Germany) were used for tumors > 10 mm but ≤12 mm in length; and large ligators (colonoscopy transparent cap plus ligation band, OLYMPUS, Japan) were used for tumors >12 mm in length. All the ligators were disposable. Injectors (OLYMPUS, Japan), an IT knife, a dual knife, and an electronic cutting device (EREB VIO 200S, Germany) were used for ESD. Outcomes and follow-up The primary outcome for the trial was operation duration, and the secondary outcome was operation cost. Both outcomes will be assessed prior to discharge. Additional meaningful indicators, also set as secondary outcomes, include estimated blood loss, intraoperative and postoperative adverse events (such as bleeding, immediate and delayed perforation, infection), tumor recurrence, mortality rates, and hospitalization costs. The trial’s endpoint will be established as 6 months after the operation of the last included patient. Each patient is given a detailed follow-up evaluation via telephone 6 months after discharge to gather information about their postoperative condition. At the 6-month mark, patients are required to undergo an endoscopic re-examination to assess tumor recurrence.
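As a companion to the Allocation section above, the sketch below reproduces the sample-size reasoning with a normal approximation, using the pilot operation durations reported in the Discussion (21.3 ± 4.5 min for precutting EBLR vs. 38.3 ± 21.8 min for ESD), and illustrates the 1:1 allocation. This is only an approximation of what PASS 2011 computes (PASS iterates on the t distribution, yielding the protocol's ≈17 per group), and the code is illustrative rather than the software actually used in the trial.

```python
import random
from scipy.stats import norm

# Pilot operation durations quoted in this protocol (minutes):
# precutting EBLR pilot (n = 16) vs. a retrospective ESD series.
mean_eblr, sd_eblr = 21.3, 4.5
mean_esd, sd_esd = 38.3, 21.8

alpha, power = 0.05, 0.90
z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96
z_beta = norm.ppf(power)           # ~1.28

delta = abs(mean_esd - mean_eblr)
# Normal-approximation sample size per group for comparing two means with
# unequal variances; PASS 2011 iterates on the t distribution, so the
# protocol's figure (~17 per group, 34 in total) differs slightly.
n_per_group = (z_alpha + z_beta) ** 2 * (sd_eblr ** 2 + sd_esd ** 2) / delta ** 2
print(round(n_per_group, 1))  # ~18 per group before dropout inflation

# The protocol then allows for a 10-20% dropout rate and rounds up to
# 20 patients per group (40 in total).

# 1:1 allocation: shuffle 20 A's (precutting EBLR) and 20 B's (ESD),
# analogous to the SPSS random-number-table assignment described above.
allocation = ["A"] * 20 + ["B"] * 20
random.shuffle(allocation)
print(allocation[:5])
```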
Discussion Initially, precutting EBL was designed to address the current challenge of treating small gastric SMT-MPs, and a previous study demonstrated its clinical feasibility. However, before we could extend its application to other areas or institutions, notable shortcomings emerged. In response, we promptly started to further modify the operation; this is why precutting EBL was not tested on a larger scale. Simultaneously, a case of delayed perforation heightened our concern. Forty-eight hours after receiving precutting EBL, a middle-aged male patient suddenly complained of severe abdominal pain. A CT scan revealed a gastric perforation at the site of the lesion. Fortunately, the patient soon recovered and was discharged after immediate closure of the perforation. This case indicated a direction for further modifying precutting EBL, and precutting EBLR was proposed. Its performance in 16 patients revealed its advantages in terms of a shorter operation duration and lower expenses. Encouraged by these findings, we decided to gradually expand the scale of the study in anticipation of promoting precutting EBLR. The current trial is specifically designed to compare precutting EBLR and ESD. The operations were conducted at The First Affiliated Hospital of CQMU, a large-scale 3A general teaching hospital renowned for its high level of clinical and academic research. Located in southwestern China, the hospital attracts a substantial amount of patient flow, mainly from the surrounding regions and provinces. The Department of Gastroenterology at this hospital handles an extensive patient population, both in terms of quantity and variety, providing the necessary conditions to achieve the planned sample size. Prior to conducting this RCT, we collected primary data from 16 patients who underwent precutting EBLR. The data showed a mean operation duration of 21.3 ± 4.5 min (Table 2 ). Moreover, we performed a retrospective study involving 537 patients in whom the use of endoscopic resection for the treatment of small gastric SMTs was analyzed. The study revealed a mean operation duration of 38.3 ± 21.8 min [ 24 ]. The shorter operation duration of precutting EBLR was evident. In this RCT, we designated the operation duration as the primary outcome. Based on sample size estimation guidelines for clinical studies, we set α to 0.05 and the power (1 − β) to 0.9. With the above values, 17 patients were required in each group; in other words, 34 participants were necessary in total for a 1:1 group ratio. We considered a 10–20% drop-out rate. The final sample size was determined to be 40 patients in total. The primary outcome was set as the operation duration, with the objective of showcasing the main advantage of precutting EBLR. The other outcomes also help to demonstrate the safety and efficacy of the treatment, such as reduced hospitalization costs when the duration is equal or a shorter hospitalization duration when the costs are equal. Precutting EBLR holds the potential to emerge as a creative and promising endoscopic approach for treating SMT-MPs, offering a more practical, simpler, and safer alternative. Moreover, this approach has the potential to alleviate the economic burden on both patients and health insurance companies, leading to substantial societal benefits. These advantages also foster the prospect of transforming the resection of small gastric SMT-MPs from a hospitalized operation to an ambulatory operation. This trial has several limitations. 
The relatively small sample size may introduce bias if patients are lost to follow-up, and conducting further multicenter studies could address this issue. Additionally, the 6-month follow-up duration might be insufficient to thoroughly observe tumor recurrence. Currently, there is a lack of a specific method for investigating tumor recurrence in a timely manner. In other words, if tumor recurrence occurs at 1 or 6 months after the operation, it is ultimately identified during the re-examination 6 months after discharge. This may lead to an underestimation of the impact of different operations on tumor recurrence.
Background The management of small gastric submucosal tumors (SMTs) originating from the muscularis propria layer (SMT-MPs) remains a subject of debate. Endoscopic submucosal dissection (ESD) is currently considered the optimal treatment for resection. However, high expenses, complex procedures, and the risk of complications have limited its application. Our previously proposed novel operation, precutting endoscopic band ligation (precutting EBL), has been demonstrated in a long-term, single-arm study to be an effective and safe technique for removing small gastric SMTs. However, the absence of a pathological examination and the potential for delayed perforation have raised concerns. Thus, we modified the precutting EBL by adding endoscopic resection to the snare after ligation and closure, yielding the precutting endoscopic band ligation-assisted resection (precutting EBLR). Moreover, the initial pilot study confirmed the safety and efficacy of the proposed approach and we planned a randomized controlled trial (RCT) to further validate its clinical feasibility. Methods This was a prospective, single-center, open-label, parallel group, and randomized controlled trial. Approximately 40 patients with SMT-MPs will be included in this trial. The patients included were allocated to two groups: ESD and precutting EBLR. The basic clinical data of the patients were collected in detail. To better quantify the difference between ESD and precutting EBLR, the primary outcome was set as the operation duration. The secondary outcomes included total operation cost and hospitalization, intraoperative adverse events, and postoperative recurrence. The primary outcome was tested for superiority, while the secondary outcomes were tested for noninferiority. SPSS is commonly used for statistical analysis. Discussion This study was designed to validate the feasibility of a novel operation for removing gastric SMT-MPs. To intuitively assess this phenomenon, the operation durations of precutting EBLR and ESD were compared, and other outcomes were also recorded comprehensively. Trial registration Chinese Clinical Trial Registry ChiCTR2200065473 . Registered on November 5, 2022. Supplementary Information The online version contains supplementary material available at 10.1186/s13063-024-07902-7. Keywords
Data management Collection All the data were collected and verified by two statisticians simultaneously using a spreadsheet (Microsoft Excel 2016) in accordance with each patient’s personal information and medical images. Patients are assigned numerical codes instead of their names to ensure the confidentiality of personal information. To minimize statistical errors, any controversial data are reviewed and discussed by a third person. Preoperative, intraoperative, and postoperative data are collected. Missing data will be declared in the appendix, and the corresponding participant will be considered withdrawn. Preoperative data included demographic information (age, sex, date of admission) and tumor characteristics (size, layer, location, shape, and density of the echo site investigated via EUS). Intraoperative data included the operation date, duration, estimated blood loss, and details related to intraoperative perforation (size, duration, and number of titanium clips). Postoperative data included the size of the resected tumor (assessed by ruler), tumor pathology (tumor type, mitotic count, achievement of R0 resection or not, and immunohistochemistry), postoperative management (duration of fasting and liquid diet, use of medications), postoperative symptoms and adverse events, hospitalization duration and cost, operation cost, and 6-month follow-up outcome. Monitoring The data were monitored by The First Affiliated Hospital of CQMU. In this trial, the platform is exclusively utilized for hospitalization purposes and remains independent of any competing interests. Monthly trial audits will be conducted without the presence of funders or sponsors to assess the progress of each participant. LD will conduct an interim analysis around June 2024, and the trial may be terminated earlier than planned if the data are sufficiently convincing to draw a final conclusion or if a significant proportion of precutting EBLR patients develop unexpected postoperative complications. Adverse events (AEs) or severe adverse events (SAEs) will be promptly reported to the clinical trial team. Relevant information will also be recorded locally for further analysis. Statistical analysis For statistical analysis, commercial software, specifically IBM SPSS Statistics 23, will be used. Normally distributed data are presented as the means and standard deviations (X±S). Student’s t test was used to analyze significant differences between groups. Data that conform to a skewed distribution are expressed as the median and range, and statistical differences between groups were analyzed using the Mann–Whitney U test. Categorical data are presented as numbers and percentages and were analyzed using Fisher’s exact test or the chi-square test. To explore potential risk factors, participants were allocated to two subgroups based on tumor recurrence. Relevant data, including age, operation duration, tumor size, tumor layer, pathology, and mitotic count (if there was a GIST), were collected again. Univariate analysis will be conducted to identify candidate risk factors that differ between the subgroups. Multiple regression analysis will subsequently be conducted on these indicators. P < 0.05 indicated statistical significance. Participant timeline See Table 1 . Supplementary Information
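The Statistical analysis paragraph above names Student's t test, the Mann–Whitney U test, and Fisher's exact/chi-square tests for the group comparisons. A minimal SciPy sketch of those three comparisons is shown below; all values are placeholders for illustration, not trial data, and the trial itself will use IBM SPSS Statistics 23 rather than Python.

```python
import numpy as np
from scipy import stats

# Hypothetical operation durations in minutes (placeholders, not trial data).
eblr = np.array([18, 22, 25, 19, 24, 21, 20, 23])
esd = np.array([35, 42, 28, 55, 31, 47, 39, 60])

# Normally distributed data: Student's (Welch's) t test.
t_stat, p_t = stats.ttest_ind(eblr, esd, equal_var=False)

# Skewed data: Mann-Whitney U test on the same values.
u_stat, p_u = stats.mannwhitneyu(eblr, esd, alternative="two-sided")

# Categorical data, e.g. postoperative adverse events per group
# (rows: EBLR, ESD; columns: event, no event; placeholder counts).
table = np.array([[1, 19],
                  [4, 16]])
odds_ratio, p_fisher = stats.fisher_exact(table)

print(f"t test p={p_t:.3f}, Mann-Whitney p={p_u:.3f}, Fisher p={p_fisher:.3f}")
```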
Abbreviations SMT: Submucosal tumor; SMT-MP: Submucosal tumor originating from the muscularis propria layer; ESD: Endoscopic submucosal dissection; Precutting EBL: Precutting endoscopic band ligation; Precutting EBLR: Precutting endoscopic band ligation-assisted resection; RCT: Randomized controlled trial; GIST: Gastrointestinal stromal tumor; EUS: Endoscopic ultrasound; CT: Computed tomography; EUS-FNA: EUS-guided fine needle aspiration; AE: Adverse event; SAE: Severe adverse event. Acknowledgements Not applicable. Protocol amendments Any protocol changes that could result in deviations will be thoroughly documented in a separate spreadsheet. The protocol update history will be reported to the clinical trial registry in the future. Trial deviations, violations, AEs, and SAEs will also be sent to the registry. Authors’ contributions The authors read and approved the final manuscript. Availability of data and materials No identifying images or other personal or clinical details of the participants are presented here or will be presented in reports of the trial results. The datasets used and/or analyzed during the current study, the participant information materials and the informed consent form are available from the corresponding author upon request. The trial results will be available via publication. Declarations Ethics approval and consent to participate This study was approved by the Ethics Committee of the First Affiliated Hospital of Chongqing Medical University (approval number: 2022-161) and successfully registered in the Chinese Clinical Trial Registry (registration number: ChiCTR2200065473). Written informed consent to participate will be obtained from all participants. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-15 23:43:48
Trials. 2024 Jan 13; 25:49
oa_package/1f/bf/PMC10788014.tar.gz
PMC10788015
38218838
Introduction Cigarette smoking is the leading cause of preventable death worldwide, and thus smoking cessation is a critical step for improving global health [ 1 , 2 ]. Nicotine is a psychoactive chemical found in tobacco that causes neurobehavioral responses such as arousal, pleasure, mood/cognitive changes, appetite suppression, and physical signs [ 3 ]. Nicotine acts on neuronal nicotinic acetylcholine receptors (nAChRs) in the brain, which are ligand-gated cation channels that are activated and desensitized in response to nicotine binding. Although our understanding of nicotine physiology is still limited, through acting on various nAChR subtypes in the mesolimbic reward pathway (i.e., ventral tegmental area to nucleus accumbens) and the habenulo-interpeduncular pathway, nicotine is thought to exert its reinforcing and aversive effects, respectively, that ultimately contribute to nicotine addiction and continued cigarette smoking [ 4 , 5 ]. Interestingly, while the chronic intake of nicotine is required to develop addiction, both clinical and preclinical studies have shown that “acute dependence”-like symptoms from nicotine (i.e., signs of nicotine withdrawal and tolerance) emerge even with a low level of nicotine intake [ 6 – 13 ]. In both DSM-5 and DSM-5-TR [ 14 ], it has been acknowledged that nicotine withdrawal can occur in adolescent smokers even prior to daily tobacco use, and that significant symptoms of nicotine withdrawal can occur in nondaily smokers. In the clinic [ 7 , 15 ], people frequently report symptoms of withdrawal after their first cigarette, and most smokers report the experience of withdrawal symptoms even before progressing to daily smoking. These findings collectively indicate that nicotine can induce acute dependence even after limited intake. However, while the existence of acute dependence appears indisputable, the behavioral phenotype and pathophysiological significance of acute dependence are still unclear. In prior studies, rat models of early nicotine withdrawal have been characterized [ 8 , 10 , 12 ], in which reward function and somatic signs were assessed. In this paper, we strove to model and characterize the physical, affective, and cognitive functions during early withdrawal from nicotine in mice, thereby supplying a novel preclinical model of acute dependence. To mimic light nicotine intake during the initial experimentation stage of cigarette use in novice smokers [ 7 , 16 ], low-dose nicotine (0.5 mg/kg (-)-nicotine ditartrate, which is nearly equivalent to 0.175 mg/kg free-base nicotine) was systemically administered to mice once daily for three days. The dosage of nicotine was decided based on previous studies showing that intraperitoneal administration of 0.175 mg/kg nicotine should be sufficient to evoke striatal dopamine release and induce behavioral alterations in wild-type mice [ 17 – 19 ]. It has been proven that abrupt pharmacological reversal of a drug’s action through inactivation of the target receptors in drug-dependent animals leads to the rapid and predictable emergence of withdrawal-like behaviors [ 20 , 21 ], such as in the case of naloxone for opioid withdrawal [ 22 – 24 ]. In the case of nicotine withdrawal, administration of the nicotinic antagonist mecamylamine allows experimental control over the onset timing, symptom severity, and replicable measurements of nicotine withdrawal in rodents [ 8 , 25 ]. 
As such, mice were challenged with either saline or mecamylamine to elicit spontaneous or precipitated signs of nicotine withdrawal, respectively [ 21 , 25 , 26 ]. Important validity criteria in modeling precipitated drug withdrawal are that (1) the signs of withdrawal should be precipitated by antagonist administration in drug-exposed animals and not in drug-naïve animals, and that (2) the withdrawal signs should be higher/larger in animals after precipitated drug withdrawal than in animals after spontaneous drug withdrawal [ 21 , 23 ]. We explored these two criteria in our mouse model of early nicotine withdrawal using a battery of behavioral assays that encompass physical, affective, and cognitive domains.
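The Introduction above equates 0.5 mg/kg (-)-nicotine ditartrate with roughly 0.175 mg/kg free-base nicotine; the arithmetic behind that conversion is sketched below, assuming the anhydrous ditartrate salt and standard reference molecular weights (neither is stated in the paper). The final line also derives the working-solution concentration implied by the 10 ml/kg injection volume given in the Methods.

```python
# Reference molecular weights (g/mol), not taken from the paper.
MW_NICOTINE_FREE_BASE = 162.23   # nicotine, C10H14N2
MW_TARTARIC_ACID = 150.09        # tartaric acid, C4H6O6
MW_NICOTINE_DITARTRATE = MW_NICOTINE_FREE_BASE + 2 * MW_TARTARIC_ACID  # ~462.4, anhydrous salt assumed

salt_dose_mg_per_kg = 0.5        # dose of (-)-nicotine ditartrate used in the study
free_base_fraction = MW_NICOTINE_FREE_BASE / MW_NICOTINE_DITARTRATE
free_base_dose = salt_dose_mg_per_kg * free_base_fraction
print(f"{free_base_dose:.3f} mg/kg free-base nicotine")  # ~0.175 mg/kg, matching the text

# With the 10 ml/kg injection volume given in the Methods, the working
# solution is 0.5 / 10 = 0.05 mg/ml of the ditartrate salt in saline.
print(f"{salt_dose_mg_per_kg / 10:.3f} mg/ml ditartrate in saline")
```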
Methods Animals Seven- to eight-week-old male C57BL/6N mice were purchased (Daehan Bio Link, Daejeon, Republic of Korea) 1 week before experimentation. Mice were housed in plastic cages with metal wire grids and were maintained under a 12-h reversed light/dark cycle (lights off at 7:00 AM). Mice had ad libitum access to food and drinking water. Mice were housed in groups of 2 to 4. All animals were randomly assigned to groups. Induction of early nicotine withdrawal (-)-Nicotine ditartrate (Cat. No. 3546; Tocris Bioscience, Abingdon, UK) was dissolved in physiological saline (0.175 mg/kg free-base), and the pH was adjusted to 7.4. Mecamylamine hydrochloride (M9020; Sigma-Aldrich, St. Louis, MO, USA) was dissolved in physiological saline (3.0 mg/kg). Mice were intraperitoneally injected with the nicotine solution (10 ml/kg) once per day for three days and were intraperitoneally injected with the mecamylamine solution on the following day (24 h after the last nicotine injection) to precipitate the behavioral signs of early nicotine withdrawal. The dosing regimen is illustrated in Fig. 1 . Behavioral tests Mice were handled for more than 3 days (10 min/day) prior to behavioral tests. All behavioral tests were video-recorded for analysis. Each behavioral test was performed with an independent batch of animals. All behavioral tests commenced 10 min after the last injection of saline or mecamylamine solution. All experiments were replicated at least once. During analysis, the experimenter was blinded to the groups of mice. Open field test The open field test was conducted to measure general locomotor activity and anxiety-like behavior [ 52 ]. A white open field box with inner dimensions of (in cm; L x W x H) 40 × 40 × 40 was used for the test. The floor luminosity was maintained at 5 lx. Mice were placed facing one side of the wall within the open field box and allowed to freely explore the box for 30 min. The distance moved in the open field, the time spent immobile in the open field, and the time spent in the center zone (20 × 20 cm) were analyzed using EthoVision XT 11.5 (Noldus, Wageningen, Netherlands). Elevated plus maze test The elevated plus maze test was conducted to measure anxiety-like behavior [ 53 ]. An apparatus consisting of an elevated maze with four arms (two white open arms and two black closed arms), each with inner dimensions of (in cm; L x W) 60 × 10, was used for the test. The closed arms were surrounded by 18-cm-high walls. The center of the elevated plus maze was maintained at 5 lx. The maze was elevated 50 cm above the ground. Mice were placed facing the wall at the end of the closed arm and allowed to freely explore the maze for 5 min. The time spent in each compartment (open arms, closed arms, and center zone) and the number of entries to each arm type were manually analyzed using a stopwatch. An entry was defined as the mouse having three paws in an arm or the center zone of the maze. Somatic signs of nicotine withdrawal Somatic signs were analyzed to measure physical withdrawal symptoms in mice [ 25 , 26 ]. A clear plexiglass square column with inner dimensions of (in cm; L x W x H) 7 × 7 × 30 and openings at the top and bottom was used for measurement of the somatic signs of nicotine withdrawal in mice. The floor luminosity was maintained at 100 lx. Mice were confined in the plexiglass column for 20 min to allow a close-up video-examination of paw and body movements. 
The number of events was counted for each sign: paw tremor (rapidly shaking paw(s) two times while the two paws are supported on the ground or columnar wall, or three times while three paws are in support), body shakes (wet-dog shakes; rapidly shaking the body with the anteroposterior axis as the axis of rotation), and freezing (continuous immobility with minimal movement and without paw movement for 60 s). For paw tremors or body shakes, (1) events that occurred within 10 s of each other were counted as a single event (10-s epoch), and (2) events that appeared 3 s before or after grooming were excluded from analysis (counted as part of the innate grooming sequence). Passive avoidance test The passive avoidance test was conducted to measure fear memory [ 53 ]. A two-chambered foot-shock apparatus (Jeungdo Bio & Plant Co., Seoul, Republic of Korea) consisting of light (~ 100 lx) and dark chambers separated by a gate was used for the test. Mice were gently placed in the light chamber, and the gate was opened after 1 min. When mice entered the dark chamber, the gate was closed, and an electrical foot-shock (0.2 mA, 2 s) was delivered through the floor grid. Mice were left in the dark compartment for an additional 1 min and then returned to their home cages. On the following day, mice were placed in the light chamber, the gate was opened after 1 min, and mice were allowed to freely explore the two chambers for 10 min. The latency to enter the dark chamber, the time spent in the dark chamber, and the number of entries into the dark chamber were manually analyzed. An entry was defined as the mouse having all four paws in one chamber. Spatial object recognition test The spatial object recognition test was conducted to measure spatial recognition memory [ 54 ]. The open field box, two identical objects (blue glossy cylinders, 7 cm in height and 4 cm in radius), and a visual cue of (in cm; L × W) 18 × 24 dimensions with a checkered pattern of (in cm) 2 × 2 squares were used for the test. The visual cue was attached to one wall of the open field box. On the training day, the two objects were placed in the corners, 8 cm away from each wall, near the visual cue-attached wall (Fig. 6 A, middle). On the recall day (24 h after training), one of the two objects placed during the training day was moved perpendicularly from its original position toward the wall opposite the visual cue-attached wall (Fig. 6 A, right). For both training and recall, mice were placed facing the side opposite the visual cue-attached wall within the open field box and allowed to freely explore the box for 10 min. The time spent sniffing each object was manually analyzed using a stopwatch, and the recognition index was calculated. The recognition index, defined as in a previous study [ 55 ], is as follows: recognition index (%) = T d / (T d + T f ) × 100. Here, T d is the time spent exploring the displaced object, and T f is the time spent exploring the familiar object. Social interaction test The social interaction test was conducted to measure social behavior [ 56 ]. The open field box and a cylindrical stainless steel cage 15 cm high and 5 cm in radius were used for the test. The cage was placed near one wall of the open field box in the central position. During the first session, the cage remained empty. During the next session, a conspecific weighing ~ 90% of the exploring mouse’s body weight was confined in the cage. The two sessions were carried out consecutively. 
For both sessions, mice were placed facing the side opposite the wall with the stainless steel cage within the open field box and allowed to freely explore the box for 15 min. The time spent sniffing the cage was manually analyzed, and the social interaction ratio was calculated. The social interaction ratio, as in a previous study [ 56 ], was defined as follows: social interaction ratio = T c / T e . Here, T c is the time spent exploring the conspecific-containing cage, and T e is the time spent exploring the empty cage. Statistics One-way analysis of variance (ANOVA) followed by Holm-Sidak’s post-hoc test (Figs. 2 , 4 C, 5 C and D, 6 D and 7 D) and two-way repeated measures (RM) ANOVA followed by Holm-Sidak’s post-hoc test (Figs. 3 , 4 B, 5 B, 6 B and C and 7 B and C) were conducted to identify between-subject differences in behavior. The Wilcoxon signed rank test (Figs. 4 C, 6 D and 7 D) was conducted to identify differences between one sample and a specified hypothetical value. p < 0.05 was considered statistically significant. Exact p values, F values, degrees of freedom, and the sum of signed ranks (W) are provided in the manuscript. Data are displayed as the mean ± standard error of the mean (SEM). Statistical analyses were performed with Prism v6.0 (GraphPad, CA, USA). Study approval All procedures regarding the handling and use of animals in this study were conducted as approved by the Institutional Animal Care and Use Committee (IACUC) of the Korea Institute of Science and Technology (KIST).
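The somatic-sign scoring rules described in the Methods (events within 10 s of each other merged into a single event; events within 3 s of grooming excluded) amount to a small bookkeeping algorithm over time-stamped observations. A minimal sketch is below; the function, its parameters, and the example timestamps are illustrative assumptions, not the scoring procedure actually used in the study.

```python
def count_somatic_events(event_times, grooming_bouts,
                         merge_window=10.0, grooming_margin=3.0):
    """Count somatic signs (e.g. paw tremors) from time-stamped observations,
    applying the two scoring rules in the Methods:
      1) events within `merge_window` seconds of the previous event are
         merged into a single counted event (10-s epoch rule);
      2) events within `grooming_margin` seconds before or after a grooming
         bout are excluded as part of the innate grooming sequence.
    `event_times` is a list of times in seconds; `grooming_bouts` is a list
    of (start, end) tuples in seconds."""
    def near_grooming(t):
        return any(start - grooming_margin <= t <= end + grooming_margin
                   for start, end in grooming_bouts)

    count = 0
    last_event = None
    for t in sorted(event_times):
        if near_grooming(t):
            continue
        if last_event is None or t - last_event > merge_window:
            count += 1           # starts a new 10-s epoch
        last_event = t           # events inside the window extend the epoch
    return count

# Illustrative 20-min session: tremor timestamps (s) and one grooming bout.
tremors = [12.0, 15.0, 30.5, 200.0, 203.0, 401.0]
grooming = [(198.0, 205.0)]
print(count_somatic_events(tremors, grooming))  # -> 3 (12/15 merged; 200/203 excluded)
```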
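The recognition index and social interaction ratio defined in the Methods above, together with the one-sample Wilcoxon comparisons against their chance levels (50% and 1) reported in the Results, can be computed as in the short sketch below; the exploration times are made-up illustrations, not data from this study.

```python
import numpy as np
from scipy.stats import wilcoxon

def recognition_index(t_displaced, t_familiar):
    """Recognition index (%): time on the displaced object over total object time."""
    return 100.0 * t_displaced / (t_displaced + t_familiar)

def social_interaction_ratio(t_conspecific, t_empty):
    """Social interaction ratio: time on the conspecific cage over time on the empty cage."""
    return t_conspecific / t_empty

# Illustrative per-mouse exploration times for one group (seconds).
t_disp = np.array([22.0, 30.5, 18.2, 27.9, 24.3, 19.8])
t_fam = np.array([15.1, 16.0, 17.5, 14.2, 18.9, 16.4])
ri = recognition_index(t_disp, t_fam)

# One-sample Wilcoxon signed rank test against the 50% chance level,
# as done for the recognition index in the Results.
stat, p = wilcoxon(ri - 50.0)
print(ri.round(1), f"p = {p:.3f}")
```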
Results To mimic nicotine exposure from light cigarette use during the initial experimentation stage, C57BL/6 N wild-type mice were treated with nicotine (0.5 mg/kg (-)-nicotine ditartrate in physiological saline, pH adjusted to 7.4) once daily for three days. On the following day, mice were treated with 0.3 mg/kg mecamylamine (MEC) to induce precipitated withdrawal (PW) from nicotine, while other mice were treated with saline to induce spontaneous withdrawal (SW). Mecamylamine or saline was administered 24 h after the last nicotine administration based on previous findings that the somatic signs of nicotine withdrawal intensify 24–48 h after cessation of nicotine administration [ 25 , 26 ]. Behavioral tests were conducted 10 min after the last injection of MEC or saline. For all experiments, different mice were used and the experimenter was blinded to the experimental conditions during analysis. The overall injection scheme and experimental schedule are depicted in Fig. 1 . The open field test was conducted to examine general locomotor function and anxiety-like behavior (Fig. 2 A) ( n = 10–11 mice/group). Precipitated withdrawal from nicotine caused a significant decrease in the distance moved compared to the control and spontaneous withdrawal groups (Fig. 2 B) (Group effect, F (3,37) = 6.542, p = 0.0012; post-hoc analysis, ** p = 0.0092 for Control vs. PW, ** p = 0.0012 for SW vs. PW). In addition, precipitated nicotine withdrawal led to a significant increase in the time spent immobile compared to the control group (Fig. 2 C) (Group effect, F (3,37) = 4.024, p = 0.0142; post-hoc analysis, * p = 0.0167 for Control vs. PW). Lastly, precipitated nicotine withdrawal significantly reduced the time spent in the center zone compared to the control and spontaneous withdrawal groups (Fig. 2 D) (Group effect, F (3,37) = 4.600, p = 0.0078; post-hoc analysis, * p = 0.0265 for Control vs. PW, * p = 0.0110 for SW vs. PW). These findings show that early precipitated withdrawal from nicotine reduces locomotor activity and increases anxiety-like behavior in the open field, but not early spontaneous withdrawal. Next, the elevated plus maze test was conducted to further examine anxiety-like behavior (Fig. 3 A) ( n = 7–12 mice/group). Unexpectedly, mecamylamine challenge and precipitated nicotine withdrawal caused a significant increase in the time spent in the closed arm (Fig. 3 B) (Interaction effect, F (6,72) = 3.039, p = 0.015; post-hoc analysis, * p = 0.0245 for Control vs. MEC, * p = 0.0296 for Control vs. PW, * p = 0.0106 for MEC vs. SW, * p = 0.0120 for SW vs. PW). In addition, mecamylamine challenge and precipitated nicotine withdrawal caused a significant reduction in the number of entries into the closed arm (Fig. 3 C) (Group effect, F (3,36) = 14.04, p < 0.0001; post-hoc analysis, ** p = 0.0034 for Control vs. MEC, **** p < 0.0001 for Control vs. PW, ** p = 0.0033 for MEC vs. SW, **** p < 0.0001 for SW vs. PW). On the other hand, only precipitated nicotine withdrawal caused a significant reduction in the number of entries into the open arm (Fig. 3 C) (post-hoc analysis, ** p = 0.0014 for Control vs. PW, ** p = 0.0018 for SW vs. PW). These findings indicate that mecamylamine acutely increases anxiety-like behavior and reduces movement in the elevated plus maze. Then, the somatic signs of early nicotine withdrawal were assessed to further examine the physical aspects. 
Previous studies have shown that the somatic signs of nicotine withdrawal in rodents include rearing, head shakes, forelimb shakes (paw tremor), body shakes, jumping, abdominal constrictions, teeth chattering/chewing, facial tremor, scratching, grooming, eye blinks, ptosis, genital licking, yawns, immobility, etc. [ 21 , 25 , 26 ]. Previous clinical studies have demonstrated that the reduction in hand steadiness or increased hand tremor is a prominent motor sign of nicotine withdrawal in humans [ 27 ], while macroscopic physical gestures such as head/body shakes and immobility can be readily translated into the clinic. However, most other somatic signs defined in rodents cannot be translated into the physical symptoms of nicotine withdrawal in humans, since those somatic signs are (1) not observed in the clinic, (2) largely rodent-specific, or (3) more appropriate when included in the category of natural rodent behavior. Moreover, preclinical data from pioneering studies have suggested that paw tremor is the single most replicable somatic sign of withdrawal in rodents observed after both low- and high-dose nicotine treatment [ 21 , 25 , 26 ]. Lastly, a seminal study has shown that episodes of locomotor immobility can be observed after precipitated nicotine withdrawal [ 21 ]. Therefore, three replicable and translatable signs of somatic nicotine withdrawal were selected for analysis: paw tremors, body shakes, and immobility. In the analysis of the somatic signs of early nicotine withdrawal (Fig. 4 A) ( n = 10–11 mice/group), precipitated withdrawal from nicotine caused a significant increase specifically in the number of paw tremors compared to all other groups (Fig. 4 B) (Group effect, F (3,39) = 4.540, p = 0.0080; Interaction effect, F (6,78) = 3.643, p = 0.0031; post-hoc comparison, **** p < 0.0001 for Control vs. PW, **** p < 0.0001 for MEC vs. PW, ** p = 0.0042 for SW vs. PW). In addition, precipitated withdrawal from nicotine caused a significant increase in the overall number of somatic signs compared to the control and mecamylamine challenge groups (Fig. 4 C) (Group effect, F (3,39) = 4.540; p = 0.0080; post-hoc comparison, * p = 0.0134 for Control vs. PW, * p = 0.0185 for MEC vs. PW). Additionally, both spontaneous and precipitated withdrawal from nicotine caused a significant increase in the overall number of somatic signs compared to a hypothetical value of 2 (the value was decided as the median of the control group, which was 2) (Fig. 4 C) (SW, sum of signed ranks (W) = 49, †† p = 0.0098; PW, sum of signed ranks (W) = 55, †† p = 0.0020). Furthermore, precipitated nicotine withdrawal showed a significant distancing from other groups in the cumulative distribution plot of somatic signs (Additional file 1 : Fig. S1A). Lastly, precipitated withdrawal from nicotine caused a largely consistent distribution of somatic events throughout time (Additional file 1 : Fig. S1B). These findings show that early precipitated withdrawal from nicotine increases the number of somatic signs, mainly paw tremor. Next, the passive avoidance test was conducted to examine fear memory (Fig. 5 A) ( n = 9–12 mice/group). Early nicotine withdrawal did not alter the latency to enter the dark chamber (Fig. 5 B), the time spent in the dark chamber (Fig. 5 C), or the number of entries into the dark chamber (Fig. 5 D) compared to the other groups. These findings suggest that early withdrawal from nicotine did not affect fear memory. 
Then, the spatial object recognition test was conducted to examine spatial recognition memory (Fig. 6 A) ( n = 6–10 mice/group). Early nicotine withdrawal did not affect the time spent sniffing all objects during either training or recall (Fig. 6 B), the time spent sniffing displaced objects during recall (Fig. 6 C), or the recognition index (Fig. 6 D) compared to other groups. On the other hand, mice after early precipitated withdrawal from nicotine did not differ in the recognition index compared to the hypothetical value of 50% (Fig. 6 D) (Control, Sum of signed ranks (W) = 28, † p = 0.0156; MEC, Sum of signed ranks (W) = 21, † p = 0.0313; SW, Sum of signed ranks (W) = 49, †† p = 0.0098). These findings suggest that early nicotine withdrawal did not grossly affect spatial recognition memory. Finally, the social interaction test was conducted to examine social behavior (Fig. 7 A) ( n = 9–11 mice/group). Early nicotine withdrawal did not affect the time spent sniffing the empty or social object (Fig. 7 B and C), or the social interaction ratio (Fig. 7 D) compared to other groups. In addition, early nicotine withdrawal did not affect the social interaction ratio when compared to the hypothetical value of 1 (Fig. 7 D) (Control, Sum of signed ranks (W) = 45, †† p = 0.0039; MEC, Sum of signed ranks (W) = 64, †† p = 0.0020; SW, Sum of signed ranks (W) = 55, †† p = 0.0020; PW, Sum of signed ranks (W) = 55, †† p = 0.0020). These findings suggest that early nicotine withdrawal did not affect social behavior.
Discussion This study provides evidence that, in mice, early withdrawal from repeated (3 days), low-dose nicotine (0.175 mg/kg free-base) administration induces physical and affective signs of nicotine withdrawal. Novice smokers do not immediately engage in heavy daily smoking; they usually go through the initial experimentation of smoking through "mooching" or "bumming" [ 7 ]. In addition, smokers experience a bolus intake of nicotine, not continuous infusion [ 26 ]. This mouse model is significant in that it mimics the initial experimentation stage in human smokers and displays meaningful withdrawal-like signs from short-term nicotine exposure. Although early spontaneous withdrawal from nicotine was not sufficient to induce notable signs of withdrawal (except for somatic signs), a single dose of the nicotinic antagonist mecamylamine was able to unmask the latent behavioral signs of early nicotine withdrawal. This suggests that short-term, low-dose nicotine exposure increases dependence vulnerability, or drives animals into an acute dependence-like state. Mounting evidence suggests that withdrawal signs can be precipitated upon short-term nicotine exposure. A seminal study demonstrated that precipitated withdrawal can ensue even after a single dose of nicotine [ 8 ]. In that study, mecamylamine was administered 2 h after a single dose of nicotine in rats. The modeling resulted in a significant elevation of the intracranial self-stimulation threshold and of somatic signs, which lasted for 5 days after mecamylamine-induced precipitation of nicotine withdrawal. These results showed that acute dependence is a replicable and prominent component of nicotine physiology. Our study further supports the existence of acute dependence on nicotine by showcasing a novel mouse model of early nicotine withdrawal, in which the physical (or somatic) signs were most prominent.
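For reference, the relationship between the salt dose and the free-base dose quoted above can be reproduced with a short calculation. The sketch below is a minimal, illustrative conversion assuming the anhydrous ditartrate salt; the molecular weights are standard reference values (an assumption), not figures reported in this article.

```python
# Convert a nicotine ditartrate salt dose to its free-base equivalent.
# Molecular weights (g/mol) are standard reference values (assumption).
MW_NICOTINE_FREE_BASE = 162.23          # nicotine free base
MW_TARTARIC_ACID = 150.09               # tartaric acid
MW_NICOTINE_DITARTRATE = MW_NICOTINE_FREE_BASE + 2 * MW_TARTARIC_ACID  # ~462.4

def free_base_dose(salt_dose_mg_per_kg: float) -> float:
    """Return the free-base nicotine dose for a given ditartrate salt dose."""
    return salt_dose_mg_per_kg * MW_NICOTINE_FREE_BASE / MW_NICOTINE_DITARTRATE

print(round(free_base_dose(0.5), 3))  # ~0.175 mg/kg, matching the value in the text
```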
Conclusion In summary, our study demonstrated that early nicotine withdrawal produces behavioral alterations in mice, supporting the preclinical findings [ 6 – 13 ] and clinical observations [ 7 , 15 ] that short-term low-dose nicotine can induce an acute dependence-like state in animals. Although the phenotype of acute dependence on nicotine is clear and its presence might indicate potential vulnerability to the progression toward daily smoking, the pathophysiological significance of acute dependence on nicotine has been neglected. We believe that the phenomenon of early nicotine withdrawal deserves more attention in the field. In the future, (1) the neurobiological mechanisms underlying early nicotine withdrawal could be investigated, (2) the molecular/behavioral differences as well as the progression from acute to chronic dependence on nicotine could be explored in depth, and (3) the potential impact of early nicotine withdrawal on the progression to addiction could be assessed.
Background Clinical and preclinical research have demonstrated that short-term exposure to nicotine during the initial experimentation stage can lead to early manifestation of withdrawal-like signs, indicating the state of “acute dependence”. As drug withdrawal is a major factor driving the progression toward regular drug intake, characterizing and understanding the features of early nicotine withdrawal may be important for the prevention and treatment of drug addiction. In this study, we corroborate the previous studies by showing that withdrawal-like signs can be precipitated after short-term nicotine exposure in mice, providing a potential animal model of acute dependence on nicotine. Results To model nicotine exposure from light tobacco use during the initial experimentation stage, mice were treated with 0.5 mg/kg (-)-nicotine ditartrate once daily for 3 days. On the following day, the behavioral tests were conducted after implementing spontaneous or mecamylamine-precipitated withdrawal. In the open field test, precipitated nicotine withdrawal reduced locomotor activity and time spent in the center zone. In the elevated plus maze test, the mecamylamine challenge increased the time spent in the closed arm and reduced the number of entries irrespective of nicotine experience. In the examination of the somatic aspect, precipitated nicotine withdrawal enhanced the number of somatic signs. Finally, nicotine withdrawal did not affect cognitive functioning or social behavior in the passive avoidance, spatial object recognition, or social interaction test. Conclusions Collectively, our data demonstrate that early nicotine withdrawal-like signs could be precipitated by the nicotinic antagonist mecamylamine in mice, and that early withdrawal from nicotine primarily causes physical symptoms. Supplementary Information The online version contains supplementary material available at 10.1186/s12993-024-00227-0. Keywords
Addiction versus dependence: the timely question on "acute dependence" General theories on the transition to addiction dictate that a pattern of chronic, escalating drug intake is required to develop addiction [ 28 , 29 ]. From an integrative perspective, the hedonic allostasis theory proposes that a spiraling distress cycle takes place during the progression towards drug addiction, in which drug-dependent subjects experience three distinct stages in repetition: preoccupation/anticipation, binge/intoxication, and withdrawal/negative affect [ 30 ]. These theories suggest that the term "addiction" refers to a relapsing disease defined by long-term drug taking and seeking. In comparison to addiction, the term "dependence" should be held separate [ 31 , 32 ] as recognized in DSM-V-TR (March 2022) [ 14 ], for consistency and clarity in the terminologies used in the category of substance use disorders. The term "addiction" mainly refers to the pathological condition of compulsive drug-taking that stems from chronic drug use, whereas the term "dependence" traditionally refers to the normal, physical adaptations that result in tolerance and withdrawal symptoms and can stem from any psychoactive drug/medication that affects the CNS. As such, DSM-V-TR described that (1) dependence does not necessarily indicate the presence of addiction, and that (2) withdrawal can ensue without comorbid use disorder in a wide assortment of drugs including tobacco, alcohol, cannabis, sedatives, stimulants, and opioids. Importantly, the hedonic allostasis theory indicates that withdrawal/negative affect is an essential component in the development of drug addiction. Integrating these ideas, it could be inferred that physical dependence precedes, and is an independent driving factor of, drug addiction. The important question is the onset time of physical dependence. The overarching evidence from the 20th century to this date has demonstrated that both tolerance- and withdrawal-like behaviors can develop after nondaily, repeated, or even a single experience with a drug/medication [ 8 , 13 , 23 , 24 , 33 – 38 ], which has been termed "acute dependence". The most noteworthy are the cases of "acute dependence" on opioids, in which a repeated or single dose of an opioid agonist (e.g., morphine) followed by administration of an opioid antagonist (e.g., naloxone) can effectively precipitate the symptoms of opioid withdrawal in both humans and animals [ 22 – 24 , 36 , 39 ]; this has also been acknowledged as a diagnostic criterion for opioid withdrawal from DSM-IV through DSM-V-TR [ 14 ]. Moreover, pioneering studies have suggested that this early manifestation of tolerance/withdrawal symptoms reflects certain initiating factors that may contribute to the development of the full extent of physical dependence [ 13 , 22 , 23 ], which warrants further attention in the field. However, despite the plethora of evidence, the significance of tolerance/withdrawal signs observed during acute dependence has been largely neglected to date. Behavioral signs of early nicotine withdrawal The observed signs of early nicotine withdrawal in this study were mild, which is expected given that the severity of drug withdrawal is correlated with the dose and duration of drug intake. However, the important findings were that (1) short-term nicotine exposure nevertheless induces acute dependence-like signs and that (2) the magnitude of signs from early nicotine withdrawal is comparable to those reported in previous studies.
For example, paw tremors were the most prominent somatic sign after early nicotine withdrawal in mice. The number of paw tremors induced by early precipitated withdrawal from nicotine (mean = 8.545) was comparable to those found in pioneering studies that investigated somatic nicotine withdrawal in rodents (mean = 7–10) [ 21 , 25 , 26 ], in which precipitated withdrawal was induced after chronic nicotine exposure. In the physical aspect, mice displayed decreased locomotor activity in the open field and an increased number of somatic signs after early precipitated withdrawal from nicotine, at levels that were comparable to those observed in the seminal studies by Isola et al. [ 26 ] and Damaj et al. [ 25 ]. The effects were attributable to the interaction between nicotine exposure and mecamylamine, suggesting that nicotinic antagonism unmasks (or precipitates) the latent physical symptoms of early nicotine withdrawal. Body shakes and immobility were minor somatic signs in mice, although immobility was prominent during the open field test. This indicates that immobility in the open field may reflect the affective aspect due to the mild anxiogenicity of the open field environment. Regarding the affective aspect, mice displayed increased anxiety-like behavior in the open field test after early precipitated withdrawal from nicotine, but unexpectedly displayed strong anxiety-like behavior in the elevated plus maze test owing to the mecamylamine challenge. Previous studies have consistently demonstrated that nicotine withdrawal causes anxiety-like behaviors [ 40 – 42 ], but have not reported mecamylamine challenge-induced anxiety-like behavior. The differing phenotypes in the open field and elevated plus maze by mecamylamine challenge might be attributable to the relative anxiogenicity of each environment: The open field is mildly anxiogenic, while the elevated plus maze is more anxiogenic [ 43 ]. Systemic mecamylamine at 3.0 mg/kg induced anxiety-like behavior in mice, but only when exposed to a strongly anxiogenic environment (i.e., elevated plus maze). In nicotine-naïve animals, mecamylamine microinjection into the dorsal hippocampus was found to have an anxiogenic effect in the elevated plus maze test [ 44 ], but subcutaneous mecamylamine injection at 3.0 mg/kg did not affect the time spent in the open arm in the elevated plus maze [ 25 ]. Although the gross lack of literature on mecamylamine’s sole effect on control subjects precludes further insight, these results imply that the route of mecamylamine administration might have differential effects on anxiety-like behaviors. Collectively, caution is necessary in the interpretation of anxiety-like behaviors observed during mecamylamine-precipitated nicotine withdrawal. Regarding cognitive aspects, mice did not display alterations in passive avoidance or spatial object recognition. Previous studies have shown that withdrawal from chronic nicotine treatment impairs learning and memory [ 45 , 46 ], a phenotype that is distinct from the absence of cognitive dysfunction during early nicotine withdrawal in this study. In addition, mice did not display altered social behavior in the social interaction test after early nicotine withdrawal. A body of clinical studies has suggested that withdrawal from nicotine seems to impair social functioning [ 47 ], but whether it could be replicated in rodents has not been investigated to date. At the least, during early nicotine withdrawal, mice do not display overt deficits in social behavior. 
The lack of cognitive and social phenotypes in early nicotine withdrawal suggests that acute dependence presents a distinct (or at least a less severe) set of behavioral phenotypes compared to that of chronic dependence. Limitations of the study Three limitations of this study warrant caution in the generalization of the findings. First, although the prevalence of cigarette smoking is nearly four times higher in men [ 48 ], the importance of nicotine withdrawal in women cannot be overlooked, as the burden of nicotine withdrawal seems to be as crucial in women as in men [ 49 , 50 ]. In addition, three translatable and replicable somatic signs were analyzed in this study, but examination of all other somatic-like signs (e.g., teeth chattering/chewing, jumping, scratching, etc.) may yield more information about the impact of early nicotine withdrawal on animals. Lastly, the widely used markers of nicotine withdrawal, i.e. blood nicotine and cotinine levels, were not measured. However, this was because blood nicotine and cotinine are not reliable markers of nicotine withdrawal as stated in DSM-V-TR [ 14 ], and because nicotine pharmacokinetics is abnormally higher in mice than in humans [ 51 ]. Other experimental limitations of this study warrant further investigation. For instance, only a single dose (0.175 mg/kg free-base nicotine) and single duration (three days of daily exposure) regimen was implemented on a single rodent strain, thus further studies should investigate the impacts of nicotine dosage, exposure duration, and genetic influence on early nicotine withdrawal. In addition, the predictive validity of this mouse model has not been explored (i.e. reversal of withdrawal signs by varenicline or bupropion). The main purposes of this study were to demonstrate the existence of early nicotine withdrawal, and to characterize the phenotype of early nicotine withdrawal. Regardless, the therapeutic effect (and lack thereof) of clinically approved drugs on early nicotine withdrawal and its potential difference with withdrawal from chronic nicotine exposure should be confirmed. Also, the attenuation of early withdrawal symptoms by nicotinic agonists was not examined. This was due to the finding that spontaneous early withdrawal did not induce significant withdrawal signs in mice except for somatic signs, which was expected from the short-term low-dose nicotine administration regimen. Supplementary Information
Acknowledgements We cordially thank Tae Kyoo Kim for proofreading. Author contributions BK: project administration, conceptualization, methodology, funding acquisition, investigation, validation, visualization, formal analysis, data curation, writing—original draft, and writing—review and editing. HI: supervision, project administration, conceptualization, funding acquisition, resources, writing-original draft, and writing—review and editing. All authors read and approved the final manuscript. Funding This work was supported by the National Research Foundation of Korea (2020R1A2C2004610, 2022R1A6A3A01087565; Republic of Korea). Availability of data and materials All datasets supporting the findings of this study are available within the article. Source data can be provided from the corresponding author upon request. Declarations Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:48
Behav Brain Funct. 2024 Jan 13; 20:1
oa_package/a1/68/PMC10788015.tar.gz
PMC10788016
38218771
Introduction There are many challenges impeding progress in our understanding of the immune response following sport-related concussion (SRC). Animal model research has been helpful in hypothesis generation for human studies across the severity spectrum of brain injury from concussion [ 1 ] to severe traumatic brain injury (TBI) [ 2 , 3 ]. However, differences in animal and human immune systems make translation challenging [ 4 – 6 ] and it can be difficult to design human experiments to validate animal findings. Human studies, which have relied almost entirely on the evaluation of cytokines and chemokines measured in the systemic circulation [ 7 – 11 ], have been informative but have not yet gone beyond speculative group differences in individual biomarker concentrations between healthy and injured groups, or the identification of correlations between biomarker levels and clinical outcomes (symptoms, recovery, etc). The complexity of the immune response and its pleiotropic and redundant features make interpretation of these findings difficult; it is not clear how elevated/depressed concentrations of individual blood cytokines relate to immune system function or status. While multiple marker panels evaluated with multivariate statistical models can help identify signatures and infer system-level changes, functional interpretation remains difficult when looking at static measures in the blood at a given time. It may be advantageous to assess immune function by stressing or challenging the immune system and quantifying the reactivity to a particular stimulus with a known signaling pathway [ 12 – 14 ]. Here, when group differences are estimated, they more closely approximate differences in function of a specific facet of the immune system. For example, prior studies on immune biomarkers in SRC have focused on a broad suite of inflammatory cytokines and chemokines [ 8 – 11 ] that are common products of Nuclear Factor Kappa B (NF-kB) transcription [ 15 , 16 ]. In the innate immune system, NF-kB-mediated cytokine production is classically linked to toll-like receptor (TLR) signalling [ 15 ]. While this is a likely mechanism at play in the acute phase post SRC, there are also several other potential pathways that may be involved, such as the sympathoadrenal-immune response [ 17 ] or the inflammasome [ 18 ]. Without evaluation of cytokine reactivity to direct stimulation of known pathways, it is difficult to understand the etiology of post-injury immune activity. In addition to shifting the methodological paradigm of assessing immune function using blood biomarkers, statistical considerations may improve the quality of study results. First, a change in focus to effect estimation as opposed to p values and significance testing could improve results interpretation. Arbitrary, historically determined cut points have constrained findings into a false dichotomy of mattering (significant) or not mattering (not significant), which is often incorrect, or at the very least oversimplistic in biological systems [ 19 – 23 ]. Second, causal modelling that provides a priori transparency of scientific beliefs would also provide clarity and simplify efforts at replication [ 22 , 24 ]. Heuristic models like directed acyclic graphs (DAGs) can be useful to advance knowledge and inform future studies because they are explicit in their assumptions and come with a set of simple rules for effect estimation [ 22 , 24 ]. 
Indeed, identifying group differences and correlations in data without explicitly expressed scientific beliefs is potentially misleading; confounding and colliding variables can induce false relationships between an exposure and an outcome, mediating/moderating variables erroneously adjusted for can eliminate real effects, and competing causes can hamper precision [ 22 , 24 – 27 ]. The application of causal modelling to study the immune system following SRC can draw from one of several lines of research. First, early animal models and human studies on TBI have suggested acute inflammation in response to the injury [ 1 , 2 , 28 ], with speculation of chronic persistence [ 28 – 30 ]. Human SRC data from our group and others has shown that individual inflammatory cytokines and chemokines such as monocyte chemoattractant protein (MCP)-4 and macrophage inflammatory protein (MIP)-1β may be elevated within the first week following injury [ 8 ], inflammatory gene expression may be decreased [ 16 ], and elevated cytokines have been observed in healthy individuals with a concussion history [ 10 , 31 ]. Animal model research has also suggested a phenomenon known as ‘microglial priming’ may occur following an initial TBI/concussion [ 29 , 30 , 32 , 33 ], possibly leading to an amplified reaction to subsequent injuries and providing a potential pathway to neurodegeneration [ 29 , 32 ]. Interestingly, we have previously observed an interaction between IL-6 and concussion history in those with an acute SRC [ 7 ]. Furthermore, given the noted differences in recovery trajectories in males and females following SRC [ 34 – 37 ], the general difference in male and female immunity [ 38 ], and some preliminary work by our group showing potentially contrasting biomarker signatures following injury [ 9 ], it seems reasonable that males and females have a different immunological response to SRC. However, and importantly, all the human work, including our own, was done within a null hypothesis significance testing framework that relied upon an arbitrary decision theoretic cut point of p < 0.05, without a causal model. This preliminary study aimed to implement whole blood stimulation within a causal analytical framework to estimate the effect of SRC on immune function. To achieve this, we derived a DAG based on hypotheses generated from prior literature of how SRC and concussion/TBI may alter immunity. Immune function was measured through the stimulation of whole blood ex- vivo using common inflammatory ligands LPS and R848, and subsequent quantitation of a multi marker panel of cytokines and chemokines.
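Because the causal-modelling vocabulary above (confounders, mediators, backdoor paths) drives the analysis that follows, a toy illustration may help. The sketch below is a hypothetical, minimal example in Python using networkx; the node names are generic placeholders and do not reproduce the study's actual DAG (Fig. 1), which is described in the Methods.

```python
import networkx as nx

# A toy causal diagram (hypothetical, for illustration only):
#   Confounder -> Exposure, Confounder -> Outcome   (creates a backdoor path)
#   Exposure   -> Mediator -> Outcome               (part of the causal effect)
dag = nx.DiGraph()
dag.add_edges_from([
    ("Confounder", "Exposure"),
    ("Confounder", "Outcome"),
    ("Exposure", "Mediator"),
    ("Mediator", "Outcome"),
])

# Enumerate all paths between exposure and outcome in the undirected skeleton;
# any path whose first step points *into* the exposure is a backdoor path.
skeleton = dag.to_undirected()
for path in nx.all_simple_paths(skeleton, "Exposure", "Outcome"):
    first_step_into_exposure = dag.has_edge(path[1], path[0])
    kind = "backdoor (needs adjustment)" if first_step_into_exposure else "causal"
    print(" -> ".join(path), "|", kind)

# Adjusting for 'Confounder' blocks the backdoor path and leaves the causal
# path open; adjusting for 'Mediator' would instead block part of the effect,
# so only the total (not the direct) effect would remain identifiable.
```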
Methods Participants Fifty-two athletes from a Canadian university’s sport program participated in this study during the 2018/2019 academic year; this sub-study was part of a larger project conducted by our group from 2013 to 2019. Of the 52-athlete convenience sample, 22 athletes (n = 11 female, n = 11 male) from seven sports were enrolled within a week (median = 4 days, interquartile range [IQR] = 3–5) of being diagnosed with an SRC; 30 healthy athletes (n = 18 female, n = 12 male) from 11 sports were enrolled at the beginning of their competitive season. Concussion diagnosis and medical clearance decisions were made by a staff physician at the university sport medicine clinic in accordance with the Concussion in Sport Group guidelines [ 39 ]. Prior to enrollment, all participants provided written informed consent. All study procedures were in accordance with the declaration of Helsinki, and approved by the Health Science Research Ethics Board, University of Toronto (protocol reference # 27958). Blood collection and stimulation Blood was sampled from athletes at the time of study enrolment. Athletes were excluded if they were currently symptomatic from a known infection, illness, or seasonal allergies, or if they were taking any medications other than birth control; in the sample used for this study, no athletes were excluded. Blood was drawn via standard venipuncture into 4 ml vacutainers coated with heparin. Heparinized blood was then transferred into the TruCulture® system (Rules Based Medicine, Q 2 Solutions, Texas, USA) for stimulation in two separate tubes containing either the toll-like receptor 4 (TLR4) ligand lipopolysaccharide (LPS, 100 ng/mL) or the TLR7/8 ligand resiquimod (R848, 1 µM). Briefly, 1 ml of blood was pipetted into each of the TruCulture® tubes and placed on a benchtop heatblock (VWR, USA) where the tubes were kept at 37 °C for 24 h. Following stimulation, a plunger was inserted into the tube to separate the cells from the cell supernatant. The supernatant was collected and then stored at -80 °C until analysis. Biomarker analysis Stimulated supernatant samples were analyzed by immunoassay using the protein biomarker platform Olink® (Olink, Uppsala, Sweden). The commercially available ‘Target 48’ cytokine panel was run according to the manufacturer’s instructions at a certified clinical research laboratory. Given that the samples were stimulated, a 1:100 dilution was applied before the assays were run. Each stimulated tube was also accompanied by a ‘Null’ control tube without the stimulant present. However, preliminary analyses by our group found that the stimulants used in the current study induced such a substantial level of cytokine production compared to the Null tube (orders of magnitude in most relevant markers) that subtracting the Null cytokine values from the stimulated cytokine values made no difference in the estimates derived from our statistical models. Thus, for simplicity, we only analyzed and reported the results from the stimulated tubes in our sample. The 45 cytokines evaluated using the Target 48 panel can be found at the Olink website using the following link: https://olink.com/content/uploads/2021/09/olink-target-cytokine-48-panel-content-v1.0.pdf . Symptoms Athletes reported their symptoms on the day of the blood draw by completing a 22-item post-concussion symptom scale where questions were answered using a seven-point Likert rating.
This symptom questionnaire is part of the Sport Concussion Assessment Tool (SCAT), the most widely used tool to assist in the diagnosis, management, and prognosis of individuals with concussion [ 40 , 41 ]. A total symptom score was obtained by summing the presence or absence of each symptom irrespective of severity, with a maximum value of 22; symptom severity was evaluated by summing the rated symptom score for each symptom. Data analysis Our aim was to estimate the effect of SRC on immune function in the acute/subacute phase (within 7 days post-injury). Immune function was measured using a panel of cytokines and chemokines commonly associated with inflammation in response to stimulation with two well-characterized inflammatory agents (LPS and R848), which are known to cause the production several cytokines and chemokines through TLR-mediated signalling [ 39 , 40 ]. The analysis plan consisted of three steps: (1) create a heuristic scientific model in the form of a DAG to make explicit modelling assumptions regarding the effect of SRC on immune function, (2) create two latent cytokine variables representing LPS and R848 reactivity, respectively, and (3) employ the rules of causal inference to estimate the effect of SRC on immune function through student-t regression modelling, with the latent variables created in step 2 serving as proxies of immune function. Heuristic directed acyclic graph of concussion and immune function To arrive at a generative statistical model, we first used a heuristic DAG (Fig. 1 ) to map out our scientific beliefs based on our own prior work and that of others. We believed that SRC would influence immune function, and that the effect would be moderated by sex [ 9 ]. Given the historical precedent of ‘priming’ [ 7 , 29 , 33 , 38 ] we believed that prior concussion history would interact with an acute concussion to influence immune function. There were two backdoors into the SRC node in our DAG because sex and concussion history were not equal across groups in our sample. Furthermore, we acknowledge the possibility that due to the initial period of rest commonly observed following SRC, and given the relationship between acute exercise and inflammation [ 42 , 43 ], a potential change in exercise behaviour in an active population may alter immune function,. However, as we did not capture the type and time from exercise in our study, this is an unmeasured mediating variable; hence, given the rules of causal inference [ 24 ] and the DAG in Fig. 1 , we could estimate the total effect of SRC on immune function, but were unable to measure the direct effect. Latent modelling of cytokines LPS and R848 cause the synthesis and release of inflammatory cytokines and chemokines from cells into the systemic circulation in a coordinated fashion. To capture the nature of this process, we employed a Bayesian latent factor model to estimate a single variable comprised of the weighted contributions of each individual cytokine and chemokine in response to either LPS and R848 stimulation. We then used these variables as a proxy of immune function for downstream modelling of the DAG in Fig. 1 . For model explanation, including the statistical notation used, please see the Supplementary Material 1 : Supplementary Methods. For raw circulating cytokine/chemokine concentrations, please see Supplemental Table 1 . Missing data Cytokines and chemokines are often found in low concentrations in the peripheral blood and are frequently below the quantitation range of commercial assays. 
While stimulation helps alleviate this concern by elevating the blood concentration of several mediators by orders of magnitude, in a large panel of markers there will often be some that either do not respond to stimulation or respond to a lesser degree. Hence, missingness is not completely at random (MCAR) nor is it random (MAR), and therefore requires special consideration [ 22 ]. In the present study, for values that were below the quantifiable limits of the assay, we used Bayesian multiple imputation [ 22 ] within a confined range between zero and the lowest quantifiable value found in the sample data for each cytokine. This imputation strategy was validated on its ability to recover the latent structure of simulated data. In our simulations, data structure was preserved when several markers were missing up to 50% of their lowest values. For more information on the imputation strategy, please see the simulated data and code at the GitHub link associated with this publication. A table quantifying the missing data for each of the markers used in this study under each condition can also be found in Supplementary Table 2 . Student-t regression Student-t regressions [ 2 ] were used to estimate the total causal effect of SRC on immune function (y in [ 2 ]). According to the rules of causal inference [ 24 ] applied to our DAG (Fig. 1 ), to estimate the total causal effect of SRC on immune function we had to adjust for sex and concussion history. We also interacted these variables, as we believed that concussion history interacts with an acute concussion to modulate the immune response, and we believed that the effect of SRC on immune function differs in males and females (moderating effect). Because the number of SRC participants in our study was low (n = 22) and subclassification of concussion history and sex were needed, data coverage for all model parameters was a concern. Student-t models were chosen in place of linear models due to the adaptive degrees of freedom parameter (ν) which the model can learn to help put the appropriate amount of weight in the tails of the distribution. This served to stabilize model estimates and protect against leverage points [ 22 ]. Regularizing priors were used for all parameters, and in the interaction term where data coverage was lowest, adaptive priors were used to allow for information sharing and regularization across all interaction term parameter estimates [ 22 ]. As a result, posterior group-level parameter estimates were used to create posterior contrasts to estimate group differences in LPS and R848 reactivity, respectively. All data were z-score transformed prior to modelling. For the notation of the statistical model, please see the Supplementary Material 1 : Supplementary Methods. Algorithm used to provide estimates Posterior distributions for all estimates were derived using Hamiltonian Monte Carlo as implemented in Stan through RStan [ 44 , 45 ] (version 2.21) via R [ 46 ] (version 4.3) and the RStudio Integrated Development Environment [ 47 ] (version 2023.03.1). The R package ‘rethinking’ [ 48 ] was used to aid in the post processing of posterior samples and for the creation of density plotting. Latent factor plots were created using the ‘ggplot2’ [ 49 ], and tidybayes [ 50 ] packages. Tables were made using the gt [ 51 ] and gtsummary [ 52 ] packages. Latent models were validated on simulated data, and all models were assessed for convergence by inspection of trace plots, R-hat values, and effective sample sizes. 
For student-t models, a non-centered parameterization was employed to allow full exploration of the entire parameter space and prevention of divergent transitions. Priors were selected via prior predictive simulation to span a scientifically credible range of outcomes, and to regularize posterior parameter estimates. The prior distributions were included in all results figures for transparency and to show the influence of the sample data on the model. All models were evaluated for out-of-sample performance and leverage points using Pareto-smoothed importance sampling cross-validation via the ‘loo’ package [ 53 ]. Data and code used in this study for latent modelling, student-t modelling, latent model simulations under varying levels of data missingness, model checks, Stan model files, figures, and tables, can found in a public GitHub repository ( https://github.com/dibatti5/Di-Battista-et-al-2023-JNI-Whole-blood-stimulation- ).
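To make the modelling steps above concrete, the following is a minimal, hypothetical sketch of a robust student-t regression of a latent reactivity score on group, sex, and concussion history with an interaction, written in Python with PyMC. The original analysis was implemented in Stan via RStan with adaptive priors and a non-centered parameterization, so this is only an illustrative approximation rather than the authors' code; the variable names, priors, and placeholder data are assumptions, and the hierarchical scale on the cell means stands in for the adaptive/regularizing priors described above.

```python
import numpy as np
import pymc as pm

# Hypothetical inputs: a z-scored latent reactivity score (from the latent
# factor model) and integer codes for group (0 = healthy, 1 = SRC),
# sex (0 = female, 1 = male), and history (0 = none, 1 = prior concussion).
rng = np.random.default_rng(1)
n = 52
group = rng.integers(0, 2, n)
sex = rng.integers(0, 2, n)
history = rng.integers(0, 2, n)
y = rng.normal(0.0, 1.0, n)  # placeholder outcome, z-scored

# Index each observation into one of the 2 x 2 x 2 cells so that a separate
# mean is estimated per group/sex/history combination (the interaction).
cell = group * 4 + sex * 2 + history

with pm.Model() as model:
    # Regularizing priors on the cell means; a shared hierarchical scale
    # lets the eight cells pool information, stabilizing estimates where
    # data coverage is thin.
    tau = pm.Exponential("tau", 1.0)
    mu_cell = pm.Normal("mu_cell", mu=0.0, sigma=tau, shape=8)

    sigma = pm.Exponential("sigma", 1.0)
    nu = pm.Gamma("nu", alpha=2.0, beta=0.1)  # adaptive degrees of freedom

    # Student-t likelihood: heavier tails than a normal, which down-weights
    # leverage points relative to ordinary linear regression.
    pm.StudentT("y_obs", nu=nu, mu=mu_cell[cell], sigma=sigma, observed=y)

    idata = pm.sample(1000, tune=1000, target_accept=0.95, random_seed=1)

# Posterior draws of mu_cell can then be summarized as group-level contrasts,
# as reported in the Results.
draws = idata.posterior["mu_cell"].values.reshape(-1, 8)
```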
Results Participants Participant characteristics can be seen in Table 1 . Age was similar in both groups (median = 21 years), although there were slightly more females in the healthy group (60% vs. 50% in the SRC group), and more athletes in the healthy group without a history of concussion (60% vs. 45% in the SRC group). In those with a history of concussion, both groups had a median time of ~ 2 years from the time of their last concussion to the time of study enrolment. Athletes with SRC presented with a median total 15 symptoms (IQR = 8–23) and a median symptom severity of 36 (IQR 12–67). The median days to recovery was 37 (IQR 21–71). SRC athlete characteristics can be seen in Table 2 . Latent cytokine modelling Two latent variables were derived from stimulated cytokine values: a latent variable of LPS reactivity, and a latent variable of R848 reactivity. The posterior estimates of the cytokine/chemokine correlations to the latent structure for each model can be seen in Fig. 2 . As expected, cytokines Interleukin (IL)-6, tumor necrosis factor (TNF)-α, colony stimulating factor (CSF)-3, and chemokine ligands (CCLs)-3,4, and C-X-C motif chemokine ligand (CXCL)-8, loaded highly on the LPS latent variable, as these are known to be released in response to LPS through the TLR4/Nuclear Factor Kappa B (NF-κB) pathway [ 54 ]. Also as expected, the R848 latent variable had many similar important cytokine loadings [ 55 ], but differed slightly from LPS by inducing a greater chemokine response. Latent modelling for both R848 and LPS stimulated conditions was completed on all 52 samples. Preliminary evidence of an effect of SRC on immune function Student-t derived posterior estimates of the differences (contrasts) in LPS reactivity and R848 reactivity between athletes with SRC and healthy athletes under the modelling assumptions of our DAG (Fig. 1 ) can be seen in the density plots shown in Figs. 3 and 4 . In males with no history of SRC, those with an acute SRC (n = 3) had lower LPS reactivity compared to healthy athletes (n = 8) with 93% posterior probability (pprob) (estimated mean difference (emd) = -0.82 SD units, 90% compatibility interval [CI] -1.15–0.3 SD units); they also had slightly reduced R848 reactivity with 77% pprob (emd = -0.35 SD units, 90% CI = -0.23–0.91 SD units). Conversely, in males with a history of SRC, those with an acute SRC (n = 8) had higher LPS reactivity compared to healthy athletes (n = 4) with 85% pprob (emd = 0.45 SD units, 90% CI -0.16–1.14 SD units), and higher R848 reactivity with 82% pprob (emd = -0.35 SD units, 90% CI = -1.15–0.3 SD units). In females, irrespective of concussion history, there was no effect of SRC on LPS reactivity. However, in females with no concussion history, those with an acute SRC (n = 7) had higher R848 reactivity compared to healthy athletes (n = 10) with 86% pprob (90% CI = -0.18–0.92 SD units).
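For readers unfamiliar with the reporting conventions used here, the estimated mean difference (emd), the 90% compatibility interval, and the posterior probability (pprob) of a contrast can all be read directly off posterior draws. The snippet below is a generic, hypothetical illustration in Python; the arrays are placeholders, not the study's posterior samples.

```python
import numpy as np

# Placeholder posterior draws (SD units) for two cell means, e.g. healthy vs.
# SRC males with no concussion history; in practice these come from the
# fitted Bayesian model.
rng = np.random.default_rng(0)
mu_healthy = rng.normal(0.3, 0.25, 4000)
mu_src = rng.normal(-0.4, 0.30, 4000)

contrast = mu_src - mu_healthy                     # SRC minus healthy, per draw
emd = contrast.mean()                              # estimated mean difference
ci_lo, ci_hi = np.percentile(contrast, [5, 95])    # 90% compatibility interval
pprob_lower = (contrast < 0).mean()                # posterior probability of a decrease

print(f"emd = {emd:.2f} SD, 90% CI [{ci_lo:.2f}, {ci_hi:.2f}], "
      f"pprob(lower) = {100 * pprob_lower:.0f}%")
```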
Discussion In this preliminary study, we utilized ex vivo whole blood stimulation with known cytokine-producing inflammatory agents to better approximate immune function in individuals following SRC. To foster transparency and reproducibility, we made all statistical modelling assumptions explicit using a causal framework in the form of a DAG. Our DAG was constructed from both our own prior work in this space and that of others. Our a priori heuristic model suggested that SRC would influence immune function, that the effect would be different in males and females, and that it may be influenced by prior concussion history. The results of our initial modelling suggest that the effect of an acute SRC in males depends on their concussion history; those with no history of concussion appear to have lower immune reactivity, while those with a concussion history appear to have greater immune reactivity compared to their respective healthy counterparts. This effect was not present in females, although there was evidence that females with no concussion history may have increased reactivity to R848 following SRC. The immune priming hypothesis, derived from animal models of TBI, suggests that microglial cells ‘activated’ from a prior injury may overreact to a subsequent injury [ 33 , 38 , 56 ]. This process may then compound with successive insults over time, leading to aberrant inflammatory signaling in the brain that may cause/expedite neurodegeneration [ 32 , 33 ]. Primed microglia are defined by (1) a higher baseline level of inflammatory mediators, (2) a lower threshold for activation, and (3) an exaggerated response following activation [ 33 ]. We found that males with an acute SRC and with a history of concussion had an elevated cytokine response to stimulation with both LPS and R848 compared to their healthy counterparts, suggesting a potentially overactive or ‘inflamed’ state. If we were to map the priming definition to our proxy of systemic immune function, we found evidence of (3) an exaggerated response following activation – in males. However, we were unable to test (1) and (2), because we did not measure baseline mediators to assess the former, and the current study was not designed to measure the latter. We are encouraged by these findings, and believe the priming hypothesis warrants further investigation in humans. We observed that males with an acute SRC and no history of concussion had comparatively lower stimulated cytokine levels than their healthy counterparts in response to both LPS and R848, suggesting possible immunosuppression. Downregulated inflammatory genes have been observed previously in the days following SRC [ 16 ], although functional interpretation of static gene expression is difficult. For example, IL-6 can be both pro- and anti-inflammatory depending on the context [ 57 ], and even then, an elevated blood level of a known proinflammatory marker like TNF-α does not necessarily reflect the current state of the immune system – it may reflect current activity, or it may reflect a recently active system that is now anergic and suppressed. In the current study, we attempted to make interpretation more intuitive by approximating the current function and state of the immune system through stimulation. The results of our study suggest that male athletes with their first SRC may be immunosuppressed, but validation on a larger sample is needed.
We found an elevated cytokine/chemokine response to R848 stimulation following SRC in females with no concussion history – the opposite of what we found in males with no history of concussion. Of importance, the results reiterate our prior work on sex differences in cytokine signatures following SRC [ 9 ], and further support the need to evaluate males and females separately following injury, particularly when looking at their biology. It is unclear why we observed these sex-disparate findings, although they are wholly unsurprising given the differences in male and female immune function generally [ 58 – 60 ]; indeed, we found that healthy female athletes had a lower cytokine response to LPS compared to healthy male athletes with 86% pprob. However, given the small sample size, and that we found R848 but not LPS reactivity to be altered following SRC in females despite the significant overlap in transcription factor activation between the two stimulants, we caution that further investigation is warranted before these initial findings are generalized. Limitations and future directions We refer to the findings of this study as preliminary because of the limited sample size, relative simplicity of our DAG, and reliance on linear models. The adjustments for sex and concussion history required to estimate the total causal effect of SRC on immune function yielded small effective sample sizes for estimation of the interaction term parameters. However, regularizing priors and pooling of the interaction term helped strengthen the estimates in these low-coverage regions [ 22 ], and out-of-sample testing revealed no leverage points. The simplicity of the DAG in Fig. 1 was intentional, in that we wanted to provide an intuitive example of how causal modelling can be used in the SRC biomarker space to estimate causal effects. Beyond the unmeasured effect of exercise, we acknowledge there are many other possible additions/modifications to our causal model, and we hope that our colleagues build upon this in future studies. For example, the role of sex in immune function in this model may be further nuanced by the implications of the female menstrual cycle. Collision sport participation and exposure to repeated head contact may also interact with an acute concussion similarly to concussion history in our model. Genetic variability, presence of comorbid mental health disorders, time from injury to sample acquisition, and many other factors may be added to the DAG in our study or used for the creation of several other DAGs. Because we were explicit in all our assumptions, this will help in the design of future studies regardless of whether they are building upon, replicating, or refuting the findings of this study. Additionally, while we realize that linear models have been useful and intuitive to interpret across much of scientific research, there is no reason to believe that the effects of SRC on immune function are most closely approximated by a line. We believe that there is utility in the simplicity of linear modelling, and that a student-t regression was useful in this sample because of its flexibility in modelling data points in the tails of the distribution. Nonetheless, we encourage future studies to look for non-linear alternatives, including bespoke models, that may better approximate the data generating process.
And, finally, it is important to consider that we did not evaluate reactivity of the entire immune system, but rather two specific pathways commonly associated with innate immunity in response to bacterial challenge: the TLR4/NF- κB pathway via LPS, and the TLR7/TLR8 pathway through R848. These two stimulants provided a proxy of the ability of study participants to mount an inflammatory response via two mechanisms that impact a broad suite of cytokines and chemokines. We encourage future studies to continue to look at immune stimulation experiments using different ligands; for example, it would be interesting to know the effects of SRC on viral immunity.
Conclusion Whole blood stimulation is a practical and insightful technique that can be used to evaluate immune function post SRC. Moreover, employing an explicit causal framework will facilitate the replication of findings and drive enhancements in subsequent research endeavors. Our preliminary findings indicate that SRC impacts immune function, with a more pronounced effect in male athletes. This effect varies according to concussion history: males without a concussion history tend to exhibit a depressed inflammatory response, while males with a concussion history may have an amplified inflammatory response. Replication of this study in a larger cohort with a more sophisticated causal model is necessary.
Purpose To implement an approach combining whole blood immune stimulation and causal modelling to estimate the impact of sport-related concussion (SRC) on immune function. Methods A prospective, observational cohort study was conducted on athletes participating across 13 university sports at a single academic institute; blood was drawn from 52 athletes, comprising 22 athletes (n = 11 male, n = 11 female) within seven days of a physician-diagnosed SRC, and 30 healthy athletes (n = 18 female, n = 12 male) at the beginning of their competitive season. Blood samples were stimulated for 24 h under two conditions: (1) lipopolysaccharide (LPS, 100 ng/mL) or (2) resiquimod (R848, 1 µM) using the TruCulture® system. The concentrations of 45 cytokines and chemokines were quantitated in stimulated samples by immunoassay using the highly sensitive targeted Proximity Extension Assays (PEA) on the Olink® biomarker platform. A directed acyclic graph (DAG) was used as a heuristic model to make explicit scientific assumptions regarding the effect of SRC on immune function. A latent factor analysis was used to derive two latent cytokine variables representing immune function in response to LPS and R848 stimulation, respectively. The latent variables were then modelled using student-t regressions to estimate the total causal effect of SRC on immune function. Results There was an effect of SRC on immune function in males, and it varied according to prior concussion history. In males with no history of concussion, those with an acute SRC had lower LPS reactivity compared to healthy athletes with 93% posterior probability (pprob), and lower R848 reactivity with 77% pprob. Conversely, in males with a history of SRC, those with an acute SRC had higher LPS reactivity compared to healthy athletes with 85% pprob and higher R848 reactivity with 82% pprob. In females, irrespective of concussion history, SRC had no effect on LPS reactivity. However, in females with no concussion history, those with an acute SRC had higher R848 reactivity compared to healthy athletes with 86% pprob. Conclusion Whole blood stimulation can be used within a causal framework to estimate the effect of SRC on immune function. Preliminary evidence suggests that SRC affects LPS and R848 immunoreactivity, that the effect is stronger in male athletes, and that it differs based on concussion history. Replication of this study in a larger cohort with a more sophisticated causal model is necessary. Supplementary Information The online version contains supplementary material available at 10.1186/s12865-023-00595-8. Keywords
Electronic supplementary material Below is the link to the electronic supplementary material.
Acknowledgements The authors would like to acknowledge Sarah Watling for her help with data collection during the study period. Author contributions APD, MGH, SR & MS helped with the study design and implementation. AD & MGH wrote the main text and prepared all figures and tables. All authors reviewed the manuscript and approved submission for publication. Funding This research was funded by the Canadian Institutes of Military and Veterans Health (CIMVHR) Task 7: Understanding Concussion. Data availability An altered dataset and code used in the study are available in a GitHub repository located at the following link: https://github.com/dibatti5/Di-Battista-et-al-2023-JNI-Whole-blood-stimulation- . Declarations Ethical approval and consent to Participate Prior to enrollment, all participants provided written informed consent. All study procedures were in accordance with the declaration of Helsinki, and approved by the Health Science Research Ethics Board, University of Toronto (protocol reference # 27958). Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:48
BMC Immunol. 2024 Jan 13; 25:6
oa_package/ae/fc/PMC10788016.tar.gz
PMC10788017
38218963
Introduction The cumulative harm caused by pollution and by inadequate management of our remaining natural resources and land is a connecting element of many global concerns. Microalgae, including cyanobacteria, can address some of these issues by decreasing aquatic pollutants and offering a sustainable supply of biomass for product development, as evidenced by the emerging uses of microalgal biotechnology assisting the United Nations’ Sustainable Development Goals (SDGs) [ 1 , 2 ]. Microalgal biomass contains a wide variety of primary and secondary metabolites that are increasingly being recognized for their importance in the production of novel products and in biotechnological applications. These valuable products, which can be produced directly from CO 2 via photosynthesis, include pigments such as carotenoids and chlorophylls, carbon storage compounds such as glycogen and polyhydroxybutyrate (PHB), and macromolecules such as proteins, carbohydrates, and lipids [ 3 – 9 ]. There are several strategies for boosting algal biomass and overcoming the carbon limitation that restricts the yield of biofuels and bioproducts, including nutritional adjustment and gene manipulation through genetic and metabolic engineering. Increasing CO 2 fixation capacity, for example through RuBisCO gene overexpression, and enhancing glucose utilization have improved cyanobacterial growth and photosynthesis [ 6 , 10 – 12 ] and enabled the production of more PHB and lipids [ 4 ]. Under nitrogen or phosphorus deficiency, most cyanobacteria preferentially store carbon in the forms of glycogen and PHB, a consequence of the 2-oxoglutarate balance that controls the carbon/nitrogen status [ 13 – 15 ]. Figure 1 shows the arginine catabolism flux associated with polyamine synthesis, the proline–glutamate reaction, and the GS/GOGAT pathway [ 16 , 17 ]. Previous studies found that a transposon mutant of Synechocystis sp. PCC 6803 lacking the proA gene, encoding gamma-glutamyl phosphate reductase, showed significantly enhanced PHB synthesis. This mutant also had reduced proline formation and increased glutamate accumulation [ 18 ]. Furthermore, in Synechocystis sp. PCC 6803, deletion of the adc1 gene, which encodes the arginine decarboxylase involved in polyamine biosynthesis, drove flux from the arginine–ornithine cycle toward the proline–glutamate reaction. This clearly boosted PHB production, although the precise mechanism is still unknown [ 5 ]. Notably, in Synechocystis sp. PCC 6803, during nitrogen shortage, the activation of an OmpR-type response regulator (Rre37) may stimulate the metabolic flux from glycogen to PHB as well as the hybrid TCA cycle and arginine–ornithine cycle [ 9 ]. In cyanobacteria, glutamate can be synthesized through two alternative systems: the GS/GOGAT pathway and a reaction catalyzed by glutamate dehydrogenase (GDH) [ 19 , 20 ]. The GS/GOGAT pathway is the main ammonium assimilation system in Synechocystis 6803, whereas GDH, encoded by the gdhA gene, functions mainly at the late stage of growth [ 21 , 22 ] and, in Escherichia coli, when the energy supply is limited [ 23 ]. On the other hand, acetyl-CoA formed from pyruvate and acetate is mostly directed to the TCA cycle and fatty acid synthesis rather than to PHB accumulation, except under nutritional constraints (Fig. 1 ). The enzymes involved in PHB biosynthesis are β-ketothiolase (phaA), acetoacetyl-CoA reductase (phaB), and the heterodimeric PHB synthase (phaE and phaC) [ 24 ].
Rather than relying on new CO 2 fixation, nutrient-starved cyanobacteria preferentially produce PHB from internally stored carbon reserves, such as glycogen [ 8 , 25 ]. In this study, to supply more glutamate to the TCA cycle, we overexpressed the proC gene, encoding Δ 1 -pyrroline-5-carboxylate reductase (Fig. 1 ), in the Synechocystis sp. PCC 6803 wild type and the Δ adc1 mutant strain. The two engineered strains were Synechocystis sp. PCC 6803 overexpressing the proC gene (OXP) and Synechocystis sp. PCC 6803 overexpressing the proC gene combined with a knockout of the adc1 gene involved in polyamine synthesis (OXP/Δ adc1 ). Both engineered strains accumulated a higher PHB content, particularly in nitrogen- and phosphorus-deprived BG 11 medium with acetate supplementation (BG 11 -N-P + A). It is important to highlight that, particularly under BG 11 -N-P + A conditions, the acetyl-CoA flux was mainly diverted to the PHB biosynthetic pathway.
Materials and Methods Construction of proC-overexpressing Synechocystis sp. PCC 6803 First, the recombinant plasmid pEERM_proC was constructed (Table 1) and naturally transformed into the Synechocystis sp. PCC 6803 wild type (WT) and the Δadc1 mutant (obtained from [5]), thereby generating a proC-overexpressing Synechocystis sp. PCC 6803 (OXP) and an OXP strain lacking the adc1 gene (OXP/Δadc1), respectively. The pEERM_proC plasmid was constructed by ligating the proC gene fragment, amplified by PCR with the ProC-F and ProC-R primer pair (Additional file 1: Table S1), between the SpeI and PstI restriction sites of the pEERM vector [26]. The verified recombinant plasmid pEERM_proC was then introduced into WT and Δadc1 mutant cells by natural transformation to create the OXP and OXP/Δadc1 strains, respectively. In addition, we constructed the Synechocystis sp. PCC 6803 wild-type control (WTc) and the Δadc1 mutant control (Δadc1c) by transforming the empty pEERM vector into WT and Δadc1 cells; these controls correspond to Synechocystis WT or the Δadc1 mutant carrying the CmR cassette (Fig. 2A). For host cell suspension preparation, the host cells (WT or Δadc1) were cultured in BG11 medium until the OD730 reached about 0.3–0.5. Then, 10 mL of cell culture was harvested by centrifugation at 5500 rpm (3505 × g), 25 °C, for 10 min, and the cell pellets were resuspended in 500 μL of fresh BG11 medium. Next, the host cell suspension was mixed with 10 μL of recombinant plasmid solution, and the mixture was incubated overnight in the culture room under continuous light illumination at 40–50 μE/m2/s and 28–30 °C. The sample mixture was then spread on a BG11 agar plate containing 10 μg/mL chloramphenicol and incubated in the culture room for 2–3 weeks until surviving colonies appeared on the plate. Single colonies were picked and streaked onto new BG11 agar plates containing higher concentrations of chloramphenicol (20 and 30 μg/mL) and incubated under the same conditions until transformant colonies appeared. The obtained transformants were checked for insert size, location, and complete segregation by PCR using several specific primer pairs (Additional file 1: Table S1). Strains and culture conditions Synechocystis sp. PCC 6803 wild type (WT), derived from the Berkeley strain 6803 isolated from fresh water in California, USA [43], the Synechocystis strain lacking the adc1 gene (Δadc1), and all engineered strains (WTc, Δadc1c, OXP, and OXP/Δadc1) were grown in normal BG11 medium for 16 days. The normal growth condition in the culture room was 28–30 °C with continuous white-light illumination at an intensity of 40–50 μE/m2/s. Culture flasks with an initial cell density at 730 nm (OD730) of about 0.05 were placed on a rotary shaker at 160 rpm. Cell growth was monitored as the OD730 measured by spectrophotometer. For nutrient-deprived conditions, all Synechocystis strains were initially grown in normal BG11 medium until the late-log phase of growth before being transferred to nutrient-deprived media under the same growth conditions for 11 days. Two modified media were used: a BG11 medium without nitrogen (N) and phosphorus (P) (BG11-N-P), and a BG11-N-P medium with 0.4% (w/v) acetate (A) added (BG11-N-P + A). The BG11-N-P medium was BG11 lacking NaNO3, with KCl added in place of KH2PO4 and FeSO4 added in place of ferric ammonium citrate at equimolar concentrations [5].
In addition, the initial OD730 of the cultures under nutrient-modified conditions was adjusted to about 0.2. The acetate concentration in the medium was determined according to the method of Ref. [44]. Determinations of intracellular pigments and oxygen evolution rate Cell culture (1 mL) was harvested by centrifugation at 12,000 rpm (14,383 × g) for 10 min. The intracellular pigments, including chlorophyll a and carotenoids, were extracted from the cell pellets with N,N-dimethylformamide (DMF, 1 mL), vortexed, and incubated in darkness for 10 min. After centrifugation at the same speed, the absorbance of the yellowish supernatant was measured spectrophotometrically at 461, 625, and 664 nm. The contents of chlorophyll a and carotenoids were calculated according to Refs. [45, 46]. For the oxygen evolution rate, cell culture (10 mL) was harvested by centrifugation at 5500 rpm (3505 × g), 25 °C, for 10 min. Cell pellets were resuspended in fresh BG11 medium (1 mL) and incubated in darkness for 30 min before measurement. Oxygen evolution was measured under saturating light at 25 °C using a Clark-type oxygen electrode (Hansatech Instruments Ltd., King’s Lynn, UK) and expressed as μmol O2/mg chlorophyll a/h [5]. Total RNA extraction and reverse transcription-polymerase chain reaction (RT-PCR) Total RNA was extracted from Synechocystis cells using TRIzol® Reagent (Invitrogen, Life Technologies Corporation, Carlsbad, CA, USA). Purified RNA (1 μg) was converted to cDNA by reverse transcription using the ReverTra ACE® qPCR RT Master Mix Kit (TOYOBO Co., Ltd., Osaka, Japan). The resulting cDNA was used as a template for PCR with different primer pairs (Additional file 1: Table S1). The PCR conditions were an initial denaturation at 95 °C for 5 min; 30 cycles of 95 °C for 30 s, the gene-specific annealing temperature (Additional file 1: Table S1) for 30 s, and 72 °C for 35 s; and a final extension at 72 °C for 5 min. For the 16S rRNA reference, the PCR conditions were the same except that 19 cycles were used. Prior to the experiment, the optimal cycle number for each gene was determined so that band intensities remained below saturation. The PCR products were checked by 1.5% (w/v) agarose gel electrophoresis, and band intensities were quantified with a Syngene® Gel Documentation system (Syngene, Frederick, MD, USA). HPLC analysis of PHB contents and Nile red staining Cell cultures (50 mL) were harvested by centrifugation at 5500 rpm (3505 × g) for 10 min. To prepare samples for HPLC detection, cell pellets were hydrolyzed by boiling for 60 min with 98% (v/v) sulfuric acid (800 μL) and 20 mg/mL adipic acid (100 μL) as an internal standard [5]. The hydrolyzed sample was then filtered through a 0.45 μm polypropylene membrane filter and analyzed on an HPLC instrument (Shimadzu HPLC LGE System, Kyoto, Japan) equipped with a C18 column (InertSustain, 3 μm; GL Science, Tokyo, Japan) and a UV detector at 210 nm. The running buffer consisted of 30% (v/v) acetonitrile in 10 mM KH2PO4 (pH 7.4), with a flow rate of 1.0 mL/min. Authentic commercial PHB, prepared in the same way as the samples, was used as the standard. Dry cell weight (DCW) was determined by drying cell pellets in an oven at 80 °C for 16–18 h until a constant weight was obtained. For Nile red staining, cell culture (1 mL) was harvested by centrifugation at 5500 rpm (3505 × g) for 10 min.
Cell pellets were resuspended in Nile red staining solution (3 μL). Normal saline (0.9% w/v, 100 μL) was then added, and the mixture was incubated overnight in darkness [5, 40]. Stained cells were visualized with a fluorescence microscope (Carl Zeiss, Oberkochen, Jena, Germany) using a 535 nm excitation filter at 100× magnification. Extraction and determination of glycogen content Cell pellets harvested from liquid culture (15–30 mL) were extracted by alkaline hydrolysis [47, 48]. A 30% potassium hydroxide (KOH) solution (400 μL) was mixed with the cell pellets and boiled for 1 h. After centrifugation at 12,000 rpm (14,383 × g), 4 °C, for 10 min, the supernatant was transferred to a new tube and mixed with 900 μL of cold absolute ethanol before incubation at −20 °C overnight to precipitate glycogen. Next, the mixture was centrifuged at 12,000 rpm (14,383 × g), 4 °C, for 30 min to obtain glycogen pellets, which were subsequently dried at 60 °C overnight. The glycogen pellets were dissolved in 1 mL of 10% H2SO4. Dissolved sample (0.2 mL) was then mixed with 10% H2SO4 (0.2 mL) and anthrone reagent (0.8 mL) before boiling for 10 min. After the samples had cooled to room temperature, their absorbance was measured at 625 nm by spectrophotometer (modified from [4, 7]). Commercial oyster glycogen, prepared in the same way as the samples, was used as the standard. Glycogen content is expressed as %w/DCW. Extraction and determination of polyamine content Total polyamines were extracted from Synechocystis cells with cold 5% HClO4 (modified from [49, 50]). After extraction with 5% cold HClO4 for 1 h on ice, the samples were centrifuged at 12,000 rpm (14,838 × g) for 10 min. The supernatant and pellet fractions contained the free and bound forms of polyamines, respectively, and both fractions were derivatized to quantify total polyamines. Derivatization was performed with benzoyl chloride, using 1,6-hexanediamine as an internal standard: 1 mL of 2 M NaOH was mixed with 500 μL of HClO4 extract and 10 μL of benzoyl chloride. After vigorous mixing, the mixture was incubated for 20 min at room temperature. Saturated NaCl solution (2 mL) was added to terminate the reaction, and the benzoyl polyamines were subsequently extracted with cold diethyl ether (2 mL). The ether phase (1 mL) was evaporated to dryness and redissolved in methanol (1 mL). Authentic polyamine standards were prepared in the same way as the samples. Polyamine content was determined by high-performance liquid chromatography (HPLC; Shimadzu HPLC LGE System, Kyoto, Japan) with an Inertsil® ODS-3 C18 reverse-phase column (5 μm; 4.6 × 150 mm) and a UV–Vis detector at 254 nm. The mobile phase was a gradient of 60–100% methanol at a flow rate of 0.5 mL/min. Quantification of proline, glutamate, and GABA contents HPLC detection of amino acids, including proline, glutamate, and GABA, was performed using o-phthalaldehyde (OPA) and 9-fluorenylmethyl chloroformate (FMOC) derivatives (modified from [51, 52]). Cell pellets obtained from cell culture (50 mL) were washed and resuspended in 10 mM potassium phosphate citrate buffer (pH 7.6). Cell suspensions were homogenized using a SONOPLUS ultrasonic homogenizer (BANDELIN electronic GmbH & Co., Berlin, Germany).
The supernatant was collected after centrifugation at 12,000 rpm (14,838 × g) for 10 min and concentrated in a CentriVap concentrator (Labconco Corporation, MO, USA). The concentrated sample was further extracted with 600 μL of a water:chloroform:methanol mixture (3:5:12, v/v/v), followed by 300 μL of chloroform and 450 μL of distilled water, before centrifugation again at 5500 rpm (3505 × g), 4 °C, for 10 min. The upper water–methanol phase was collected and evaporated before being redissolved in 200 μL of 0.1 N HCl. The sample solution was filtered through a 0.45 μm membrane filter and then diluted (1:4, v/v) with an internal standard solution of norvaline and sarcosine (62.5 mM in 0.1 M HCl). This mixture was filtered again through a 0.45 μm membrane filter before analysis by HPLC with a UV–Vis detector (Shimadzu HPLC LGE System, Kyoto, Japan) using a 4.6 × 150 mm, 3.5 μm Agilent Zorbax Eclipse AAA analytical column and a 4.6 × 12.5 mm, 5.0 μm guard column (Agilent Technologies, CA, USA). For the mobile phase, eluent A was 40 mM Na2HPO4, pH 7.8, and eluent B was acetonitrile:methanol:water (45:45:10, v/v/v), at a flow rate of 2 mL/min. The OPA- and FMOC-derivatized amino acids were monitored at 338 and 262 nm, respectively. Amino acid contents are expressed as nmol/mg protein.
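The PHB and glycogen determinations above both reduce to simple calibration arithmetic: a peak-area ratio against the adipic acid internal standard for PHB, and a standard curve of oyster glycogen for the anthrone assay. The following minimal Python sketch illustrates how such values might be converted to the %w/DCW units reported in this study; the function names, the assumption of linear calibration, and all example numbers are ours for illustration and are not taken from the paper.

```python
import numpy as np

def phb_percent_dcw(area_phb, area_adipic, response_factor, dcw_mg):
    """PHB content as % of dry cell weight (w/DCW).

    area_phb, area_adipic : HPLC peak areas of PHB and the adipic acid internal
        standard (here 2 mg per sample, i.e. 100 uL of 20 mg/mL adipic acid).
    response_factor : slope of a calibration with authentic PHB, expressed as
        mg PHB per unit of (area_phb / area_adipic); a linear single-factor
        calibration is an assumption for illustration.
    dcw_mg : dry cell weight of the harvested pellet (mg).
    """
    phb_mg = response_factor * (area_phb / area_adipic)
    return 100.0 * phb_mg / dcw_mg

def glycogen_percent_dcw(a625_sample, std_conc_mg_ml, std_a625,
                         extract_volume_ml, dilution_factor, dcw_mg):
    """Glycogen content as % of dry cell weight via the anthrone assay.

    A linear standard curve (forced through the origin for simplicity) is
    fitted to oyster glycogen standards read at 625 nm.
    """
    std_conc = np.asarray(std_conc_mg_ml, dtype=float)
    std_abs = np.asarray(std_a625, dtype=float)
    # least-squares slope with zero intercept: conc ~ slope * absorbance
    slope = np.sum(std_abs * std_conc) / np.sum(std_abs ** 2)
    conc_mg_ml = slope * a625_sample * dilution_factor
    glycogen_mg = conc_mg_ml * extract_volume_ml
    return 100.0 * glycogen_mg / dcw_mg

# Hypothetical numbers, for illustration only:
print(phb_percent_dcw(area_phb=1.8e5, area_adipic=9.0e4,
                      response_factor=2.5, dcw_mg=12.0))          # ~41.7 %w/DCW
print(glycogen_percent_dcw(0.42, [0.1, 0.2, 0.4], [0.15, 0.31, 0.60],
                           extract_volume_ml=1.0, dilution_factor=5.0,
                           dcw_mg=12.0))                           # ~11.6 %w/DCW
```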
Results Overexpression of the native proC gene in Synechocystis sp. PCC 6803 wild-type and mutant strains Initially, we constructed four engineered Synechocystis sp. PCC 6803 strains, the wild-type control (WTc), the Δadc1 mutant control (Δadc1c), OXP, and OXP/Δadc1, by double homologous recombination (Table 1, Fig. 2A). The WTc and Δadc1c strains were created by replacing the psbA2 gene with a CmR cassette in the genomes of the Synechocystis sp. PCC 6803 WT and Δadc1 mutant, respectively (Fig. 2A). To create the recombinant plasmid pEERM_proC (Table 1), a native proC (slr0661) gene fragment of 1.0 kb was ligated between the psbA2 flanking regions of the pEERM vector, upstream of the CmR cassette (Fig. 2A). Next, all overexpressing strains were verified by PCR using specific primer pairs (Additional file 1: Table S1) for complete segregation and correct gene location. To confirm complete segregation, PCR with the Up_psbA2-F and Dw_psbA2-R primers yielded products of the expected size of 3.2 kb in the OXP and OXP/Δadc1 strains (Fig. 2B.1 and B.2, respectively), whereas products of 2.3 kb were obtained in the WT and Δadc1 strains and of 2.2 kb in the WTc and Δadc1c strains. PCR with the ProC-F and CmR-R primers confirmed the expected size of 1.9 kb in the OXP and OXP/Δadc1 strains (Fig. 2C1 and C2, respectively), compared with no band in the WT, WTc, Δadc1, and Δadc1c strains. In addition, proC overexpression was verified by RT-PCR in all engineered strains (Fig. 2D). Growth, intracellular pigment contents, oxygen evolution rates, and metabolite accumulation under the normal growth condition We found a slight increase in cell growth of the Δadc1c and OXP/Δadc1 strains in comparison with the WTc and OXP strains (Fig. 3A). All strains had comparable amounts of chlorophyll a; however, proC overexpression in OXP and OXP/Δadc1 led to a decrease in carotenoids (Fig. 3B, C). In comparison with the WTc, all engineered strains showed lower rates of oxygen evolution (Fig. 3D). On the other hand, as anticipated, total polyamines (PAs) in both bound and free forms declined in the Δadc1c and OXP/Δadc1 strains, whereas the OXP strain showed only a minor decrease in total PAs compared with the WTc strain (Fig. 3E). Bound PAs accounted for most of the decrease when the adc1 gene was disrupted. On day 7 under the normal growth condition, the proline levels of the OXP and OXP/Δadc1 strains were much higher, whereas the Δadc1c strain had the lowest proline content (Fig. 3F). Moreover, the WTc strain contained a substantial amount of glutamate, almost ten times more than proline, under normal growth conditions (Fig. 3G). All engineered strains, especially the OXP/Δadc1 strain, showed a greater increase in glutamate content than the WTc. Similarly, on day 7 of culture, the GABA level in the WTc was higher than the proline content but somewhat lower than the glutamate content under the normal condition (Fig. 3H). The GABA content of all engineered strains, especially OXP/Δadc1, was lower than that of the WTc strain. Glutamate appeared to be the preferred compound that Synechocystis cells accumulated, followed by GABA and proline. On the other hand, cells produced only low levels of PHB, about 4–23% w/DCW, under the normal growth condition (Fig. 3I). Compared with the WTc, the PHB content of the OXP and OXP/Δadc1 strains appeared to be higher.
For adaptation to the nutrient-modified media, cells growing on day 11, which represents the late-log phase of growth with the maximum level of PHB accumulation, were subsequently selected. Growth, intracellular pigment contents, and metabolite accumulation under nutrient-modified conditions All Synechocystis strains were grown in normal BG11 medium for 11 days before starting the adaptation phase (Fig. 4). Both nutrient-modified media, BG11 lacking nitrogen and phosphorus (BG11-N-P) and BG11-N-P medium with acetate addition (BG11-N-P + A), caused a clear reduction in growth (Fig. 4A–C) and in the intracellular contents of chlorophyll a and carotenoids (Fig. 4D–I). It is worth noting that all engineered strains had a slightly higher level of cell growth under the BG11-N-P + A condition, in particular Δadc1c (Fig. 4C). In addition, the proC-overexpressing strains, OXP and OXP/Δadc1, accumulated more chlorophyll a than the WTc under the BG11-N-P + A condition (Fig. 4F). Regarding the main carbon storage compounds, glycogen and polyhydroxybutyrate (PHB) (Fig. 5), glycogen rather than PHB accumulated markedly under the normal growth condition, in particular over the longer period of days 9–11 of cultivation (Fig. 5A, D). The OXP strain contained the highest glycogen level among the strains, up to 30–49% of dry cell weight, during days 9–11 under the normal BG11 condition (Fig. 5D). Both the BG11-N-P and BG11-N-P + A conditions specifically induced PHB synthesis in all strains, whereas the glycogen content was comparatively reduced relative to that under the normal condition (Fig. 5B–F). Remarkably, on day 7 of the adaptation phase in both BG11-N-P and BG11-N-P + A media, the OXP/Δadc1 strain accumulated the greatest amount of PHB, around 39.2 and 48.9% w/DCW, respectively (Fig. 5B, C). Moreover, as shown in Fig. 6, PHB in the OXP/Δadc1 strain increased 2.7-fold compared with the WTc at day 7 under the BG11-N-P + A condition. Interestingly, after adaptation to either BG11-N-P or BG11-N-P + A medium, PHB accumulation in the OXP strain reached its maximum later, on day 9. On the other hand, as anticipated, the BG11-N-P and BG11-N-P + A conditions resulted in a reduction in polyamines in all strains, particularly the OXP/Δadc1 strain (Table 2); in this strain, polyamines fell to about 0.8-fold of the WTc level (Fig. 6). Regarding the proline–glutamate–GABA pathway, glutamate production predominated under the normal BG11 condition, particularly in the OXP and OXP/Δadc1 strains, followed by GABA and proline (Table 2). Proline content was apparently increased by the BG11-N-P and BG11-N-P + A conditions, based on the greater fold changes relative to the WTc in the OXP and OXP/Δadc1 strains (Fig. 6). Glutamate accumulation was reduced overall (Table 2), but in the engineered strains, specifically the OXP strain, it was more than 5- to 7-fold higher than in the WTc under both the BG11-N-P and BG11-N-P + A conditions (Fig. 6). Moreover, GABA accumulation was mostly decreased under the nutrient-modified conditions. We also stained cells adapted to the BG11-N-P + A condition for 7 days with Nile red dye and visualized them by fluorescence microscopy (Fig. 7A). Compared with the other strains, the OXP/Δadc1 strain clearly exhibited a high abundance of PHB granules throughout the cells.
Furthermore, RT-PCR was conducted to measure the transcript levels of 15 different genes (Fig. 7B, C). In both OXP and OXP/Δadc1, the proC transcript level increased under both the normal and BG11-N-P + A conditions. It is noteworthy that the OXP and OXP/Δadc1 strains likewise exhibited elevated transcript levels of putA, encoding proline oxidase. Furthermore, the BG11-N-P + A condition increased the transcript levels of the gdhA and gad genes, encoding glutamate dehydrogenase and glutamate decarboxylase, respectively, except in the Δadc1c strain. The transcript levels of the acetate metabolism genes acs, ach, and ackA, encoding acetyl-CoA synthase, acetyl-CoA hydrolase, and acetate kinase, respectively, were increased by the acetate supplementation of the BG11-N-P medium. Remarkably, under the BG11-N-P + A condition, all strains showed an increased transcript level of the gltA gene, encoding citrate synthase of the first step of the TCA cycle, compared with the normal BG11 condition. On the other hand, although the BG11-N-P + A condition raised the level of the accA transcript, encoding acetyl-CoA carboxylase subunit A of fatty acid synthesis, relative to the normal condition, there was little change in the level of the plsX transcript, encoding the fatty acid/phospholipid synthesis protein. Strikingly, the transcript levels of all pha genes, including phaA, phaB, phaC, and phaE, were upregulated by the BG11-N-P + A condition. The glgX transcript, encoding the glycogen debranching enzyme of glycogen degradation, was reduced under the BG11-N-P + A condition relative to the normal BG11 control.
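Because the RT-PCR analysis above is semi-quantitative, the transcript-level comparisons rest on band intensities normalized to the 16S rRNA reference and then expressed relative to the WTc control. A minimal Python sketch of that normalization is given below; the variable names and densitometry values are hypothetical and only illustrate the arithmetic.

```python
def relative_transcript_level(gene_band, rrna_band):
    """Normalize a gene's band intensity to the 16S rRNA reference band."""
    return gene_band / rrna_band

def fold_change_vs_control(sample_gene, sample_rrna, control_gene, control_rrna):
    """Fold change of a normalized transcript level relative to the control strain (e.g., WTc)."""
    return (relative_transcript_level(sample_gene, sample_rrna)
            / relative_transcript_level(control_gene, control_rrna))

# Hypothetical densitometry values (arbitrary units):
print(fold_change_vs_control(sample_gene=5200, sample_rrna=8000,
                             control_gene=2600, control_rrna=7800))  # ~1.95-fold
```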
Discussion The effect of disrupting the polyamine synthesis gene adc1 in Synechocystis sp. PCC 6803 was initially described in a previous study [5], but the underlying metabolic regulation remained unclear. In this work, we highlight the remarkable finding that higher PHB synthesis (up to 48.9% of dry cell weight) was caused by enhanced metabolic flux from arginine to proline and glutamate, which is closely related to nutrient stress. The native proC gene, encoding the pyrroline-5-carboxylate reductase of proline synthesis, was introduced into the Synechocystis sp. PCC 6803 wild type (WT) and the adc1 mutant (Δadc1), thereby creating the OXP and OXP/Δadc1 strains, respectively. A proC mutant of Synechocystis sp. PCC 6803 was previously shown to produce less proline, whereas a putA mutant lacking proline oxidase, the enzyme that breaks down proline to glutamate, accumulated a high amount of proline without producing glutamate from it [16]. Enhanced proline accumulation is a well-documented response to environmental stress [27–29]. In plants, proline accumulation under stress varies among species and organisms: maize seedlings showed increased proline production in response to nitrogen and phosphorus deficiency [30], whereas French bean (Phaseolus vulgaris L. cv Strike) plants showed a decline in proline accumulation under nitrogen deprivation [31], as did Arabidopsis thaliana grown under nitrogen-limiting conditions [32]. In our study, under the BG11-N-P and BG11-N-P + A conditions, we found only a minor change in proline accumulation in the WTc compared with the BG11 control (Table 2). It is worth noting that the adc1 knockout, with its decreased polyamine level, also contained less proline than the WTc, except under the BG11-N-P condition. Our results also indicated that glutamate accumulation in all strains decreased dramatically in response to nitrogen and phosphorus deprivation. Nevertheless, relative to the WTc, the glutamate content was more than twofold higher in all engineered strains (Fig. 8), particularly in the OXP and OXP/Δadc1 strains. Under nitrogen and phosphorus deficiency, glutamate might rapidly take on a key role in amino acid metabolism through pathways including the GS/GOGAT pathway, multispecific aminotransferases, GABA synthesis, and reversible conversions to proline and arginine [33–37]. Interestingly, the transcript level of the gdhA gene, encoding glutamate dehydrogenase (GDH), was upregulated only in the OXP strain, which contained the highest glutamate level under the BG11-N-P + A condition (Fig. 7B, C), with a 1.77-fold increase compared with the WTc (Fig. 8). Our findings demonstrate that proC overexpression combined with adc1 disruption in Synechocystis (OXP/Δadc1) resulted in higher proline and/or glutamate contents than in the WTc, which partially relieved the cells under nitrogen and phosphorus deficiency, as evidenced by the increased accumulation of chlorophyll a, although the stress effect of nutrient deprivation still persisted. In addition, we showed that, under the normal BG11 condition, cells reaching the late phase of growth on day 11 accumulated more glycogen (Fig. 5D), in particular the OXP strain with 49% w/DCW, and contained less PHB (Fig. 5A).
The energy charge of cyanobacterial cells is directly linked to the growth phase: ATP and ADP levels are elevated during the lag phase and then drop during the log phase. Improved glycogen metabolism and storage contribute significantly to maintaining energy homeostasis [38]. Moreover, nitrogen and phosphorus deficiency clearly accelerated glycogen accumulation within 3 days of the adaptation phase (Fig. 5E), as well as PHB production (Fig. 5B). The contribution of glycogen breakdown to PHB synthesis during nitrogen deprivation has been reported previously [8]. Subsequently, to increase the supply of the acetyl-CoA substrate for PHB synthesis, we added a carbon source, here acetate, to the BG11-N-P medium. On day 7 of the adaptation phase, PHB accumulation in OXP/Δadc1 increased noticeably, to around 48.9% w/DCW (Fig. 5C). The increased acetate utilization by the OXP/Δadc1 strain in comparison with the other strains supports this finding (Additional file 1: Fig. S2). Our results indicated that, with the exception of the OXP strain, Synechocystis cells favored converting exogenous acetate to acetyl-CoA over glycogen breakdown, as demonstrated by the reduced amount of glgX transcript under the BG11-N-P + A condition (Fig. 7B, C). Regarding acetate metabolism, cells acclimated to BG11-N-P medium containing acetate exhibited significantly higher levels of the ackA and acs transcripts, encoding acetate kinase and acetyl-CoA synthase, respectively, consistent with their increased fold changes noted in Fig. 8. It is worth noting that the OXP strain also had a higher ackA transcript level than under the normal BG11 medium (Fig. 7B), but its fold change was decreased when compared with the WTc (Figs. 7C and 8). According to these data, the OXP strain may have utilized acetate less than the other strains, which may have contributed to its lower PHB content on day 7 of the BG11-N-P + A treatment compared with the Δadc1c and OXP/Δadc1 strains. This interpretation is supported by the larger amount of acetate remaining in the medium of the OXP strain than of the other strains during treatment (Additional file 1: Fig. S2). It is important to stress that the Acs enzyme functions as the primary route of acetate uptake in Synechocystis sp. PCC 6803, as demonstrated earlier by an acs mutant that did not use external acetate in the medium [39]. Remarkably, we suggest that the flow of acetyl-CoA to citrate in the TCA cycle was induced in all strains by the BG11-N-P + A condition, as supported by the upregulated transcript level of the gltA gene, encoding citrate synthase (Fig. 7B, C). Nonetheless, it is important to note that the gltA transcript levels in the engineered strains, including OXP, Δadc1c, and OXP/Δadc1, were lower than in the WTc (Fig. 8). This result indicates that the lowered flow of acetyl-CoA to the TCA cycle in the engineered strains, relative to the WTc, substantially contributed to driving acetyl-CoA toward other flux directions, such as the PHB biosynthetic pathway and fatty acid synthesis. We further postulate that the increased levels of proline and glutamate in the engineered strains OXP, Δadc1c, and OXP/Δadc1 were substantially related to the flow of acetyl-CoA into the TCA cycle and the interconversion of 2-oxoglutarate and glutamate.
Our results do not include measurements of TCA cycle metabolites; further identification of the pertinent metabolites or the application of integrative bioinformatic approaches might deepen our understanding of the actual mechanism. Regarding PHB synthesis, it is important to note that the transcript levels of all pha genes of the PHB synthetic pathway were increased under the BG11-N-P + A condition, in particular those of the phaC and phaE genes, in comparison with the normal BG11 condition (Fig. 7B, C). Nonetheless, our findings demonstrate a strong correlation between the increased amounts of phaA and phaB transcripts and the improved synthesis of PHB in the modified strains (OXP, Δadc1c, and OXP/Δadc1) (Fig. 8). This is in line with a previous study in Synechocystis sp. PCC 6803, in which increased PHB synthesis during nitrogen deprivation was associated with overexpression of the phaAB genes rather than of phaEC [40]. On the other hand, the direction of acetyl-CoA toward fatty acid synthesis was also induced by the BG11-N-P + A condition, given the strong upregulation of the accA transcript and the slight induction of the plsX transcript (Fig. 7B, C). This may imply that acetate addition contributes to lipid production in cyanobacteria [40, 41]. According to Ref. [42], cyanobacterial cells grown under high C/low N conditions prevent the inhibitory interaction of the PII protein with ACCase, whereas cells grown under high N/low C conditions enhance the PII-ACCase interaction, leading to inhibition of the ACCase enzyme.
Conclusions Nitrogen and phosphorus deprivation efficiently induced the accumulation of glycogen and PHB in Synechocystis sp. PCC 6803. In this study, higher PHB production was attained in three modified Synechocystis sp. PCC 6803 strains, Δadc1c, OXP, and OXP/Δadc1, under the nutrient-deprived treatments, in particular the nitrogen- and phosphorus-deprived BG11 medium with acetate addition (BG11-N-P + A). The proC overexpression and adc1 knockout in Synechocystis apparently induced changes in the intracellular proline and glutamate contents, which partially relieved the cells under nitrogen and phosphorus deprivation. In addition, acetate supplementation, by enhancing the supply of acetyl-CoA, significantly boosted PHB and glycogen storage. These genetically modified Synechocystis strains (Δadc1c, OXP, and OXP/Δadc1) might serve as practicable cell factories for biotechnological applications, including biomaterials and biofuels.
Background Nutrient limitation, in particular of nitrogen and phosphorus, is known to be sensed through 2-oxoglutarate and glutamate metabolism and to subsequently accelerate carbon storage, including glycogen and polyhydroxybutyrate (PHB), in cyanobacteria, but few studies have focused on arginine catabolism. In this study, we demonstrated for the first time that manipulation of the proC and adc1 genes, related to proline and polyamine synthesis in arginine catabolism, has a significant impact on enhancing PHB production during the late growth phase and under nutrient-modified conditions. We constructed Synechocystis sp. PCC 6803 strains overexpressing the proC gene, encoding the Δ1-pyrroline-5-carboxylate reductase of proline production, with or without disruption of adc1, which lowers polyamine synthesis. Results Three engineered Synechocystis sp. PCC 6803 strains, a proC-overexpressing strain (OXP), the adc1 mutant, and an OXP strain lacking the adc1 gene (OXP/Δadc1), showed clearly increased PHB accumulation under nitrogen and phosphorus deficiency. The possible advantages of proC overexpression alone include improved PHB and glycogen storage in the late phase of growth and under long-term stress. However, on day 7 of treatment, the synergistic effect in OXP/Δadc1 increased PHB synthesis to approximately 48.9% of dry cell weight, reflecting a faster response to nutrient stress than in the OXP strain. Notably, changes in proline and glutamate contents in the engineered strains, in particular OXP and OXP/Δadc1, not only partially balanced intracellular C/N metabolism but also helped cells acclimate to nitrogen (N) and phosphorus (P) stress, with a higher chlorophyll a content than the wild-type control. Conclusions In Synechocystis sp. PCC 6803, overexpression of proC markedly stimulated PHB and glycogen accumulation after prolonged nutrient deprivation. When combined with adc1 disruption, PHB production increased notably, particularly under a strong C supply and a lack of N and P. Supplementary Information The online version contains supplementary material available at 10.1186/s13068-024-02458-9.
Supplementary Information
Abbreviations Adc: Arginine decarboxylase; Arg: Arginine; Car: Carotenoids; Chl a: Chlorophyll a; DCW: Dry cell weight; DMF: N,N-Dimethylformamide; FMOC: 9-Fluorenylmethyl chloroformate; GABA: Gamma-aminobutyric acid; h: Hour; μg: Microgram; mL: Milliliter; min: Minute; nm: Nanometer; OD: Optical density; OPA: O-Phthalaldehyde; PCR: Polymerase chain reaction; PHB: Polyhydroxybutyrate; rpm: Revolutions per minute; s: Seconds; WT: Wild type Acknowledgements We gratefully thank Professor Peter Lindblad, Microbial Chemistry, Department of Chemistry–Ångström, Uppsala University, for providing the expression vector pEERM for our work. Author contributions SU was responsible for study conception, experimental work, data collection and analysis, and manuscript preparation. SJ contributed to study conception, supervision, and design, critical revision and manuscript writing, and final approval of the manuscript. All the authors read and approved the final manuscript. Funding This research was supported by the 90th Anniversary of Chulalongkorn University Fund (Ratchadaphiseksomphot Endowment Fund) to S.U. and S.J. This research was also funded by the Thailand Science Research and Innovation Fund, Chulalongkorn University (CU_FRB65_hea(66)_129_23_59), to SJ. Availability of data and materials Data generated and analyzed during this study are included in the published article. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. All the authors agree to the submission and publication of this manuscript. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-15 23:43:48
Biotechnol Biofuels Bioprod. 2024 Jan 13; 17:6
oa_package/71/fb/PMC10788017.tar.gz
PMC10788018
38218927
Introduction Osteoarthritis (OA), among the most prevalent degenerative joint diseases, is caused by a multitude of factors, including aging, obesity, strain, and trauma [1, 2]. This condition affects approximately 240 million individuals worldwide, predominantly middle-aged and elderly people [3]. Crucially, chondrocytes serve as vital protectors of matrix integrity [4]. Presently, the pathogenesis of OA is thought to involve intricate interplay among multiple factors, and effective prevention and treatment methods are lacking. Hence, the search for efficacious OA treatments has become paramount. Emerging evidence implicates various cytokines in cartilage degradation, with IL-1β a prominent player [5]. Notably, Yang et al. investigated the mechanism by which downregulation of microRNA-23b-3p alleviates IL-1β-induced injury in chondrogenic CHON-001 cells [6]. Consequently, we postulate that inhibiting IL-1β expression may be the key to mitigating OA. Long noncoding RNAs (lncRNAs) are a class of RNA molecules exceeding 200 nucleotides in length. Serving as a pivotal layer in biological regulation, lncRNAs significantly influence various biological processes, including regulation of the cell cycle and cell differentiation [7]. LINC00958, in particular, has been studied in numerous contexts across diverse cancers. Zhou et al. [8] described how LINC00958 drives tumour progression through the miR-4306/CEMIP axis in osteosarcoma. Li et al. proposed that the LINC00958/miR-3174/PHF6 axis orchestrates the cell proliferation, migration, and invasion observed in endometrial cancer [9]. However, the specific mechanism of LINC00958 in the context of OA remains to be explored further. MicroRNAs (miRNAs), a subclass of noncoding RNAs, have garnered substantial recognition as pivotal regulators of diverse cellular processes through their binding to target mRNAs [10–12]. Similarly, Ding et al. [13] suggested that miR-93 inhibits chondrocyte apoptosis and inflammation in OA through the TLR4/NF-κB signalling pathway. Wang et al. [14] suggested that circATRNL1 protects against OA by targeting miR-153-3p and KLF5. Concurrently, Fioravanti et al. [15] pinpointed miR-214-3p as a promising therapeutic target in the context of OA pathogenesis. Conversely, downregulation of miR-214-3p has been implicated in activating the NF-κB pathway, exacerbating the progression of OA [16]. However, for a comprehensive understanding, further analysis is warranted to elucidate the complete set of functions and molecular mechanisms governed by miR-214-3p in the context of OA. Hence, our study was structured to elucidate the roles of LINC00958 in the pathogenesis of OA. In this study, we proposed the following hypotheses: (i) stimulation of CHON-001 cells with IL-1β accelerates damage in human chondrocytes, constituting an in vitro model for studying inflammation; (ii) silencing LINC00958 exerts a protective effect against IL-1β-induced, OA-related changes in CHON-001 cells; and (iii) the underlying mechanisms responsible for this protective effect could be intricately linked to the miR-214-3p/FOXM1 axis. These findings could lead to the development of a promising and effective therapeutic approach for OA.
Materials and methods Cell culture CHON-001 cells were purchased from the ATCC and grown in DMEM (Thermo Fisher) supplemented with 10% FBS and 1% penicillin–streptomycin under humidified conditions with 5% CO 2 at 37 °C. Subsequently, CHON-001 cells were stimulated with 10 ng/ml IL-1β for 12 h to establish an in vitro cellular model of inflammatory injury. Dual-luciferase reporter assay To investigate the relationships of miR-214-3p with LINC00958 and FOXM1, StarBase was utilized. We employed the WT-LINC00958 and MUT-LINC00958 3′-UTR luciferase reporter plasmids to clarify the potential interactions between miR-214-3p and LINC00958. Through this approach, LINC00958 was identified as a potential target of miR-214-3p. In the luciferase activity assay, the LINC00958 wild-type or mutant plasmid was co-transfected with the miR-214-3p mimic or mimic control into 293 T cells using Lipofectamine 2000 (Invitrogen) following the provided protocol for a duration of 24 h. Subsequently, the luciferase activity was measured using the dual-luciferase reporter assay system (Promega, USA). Cell transfection To modulate LINC00958 or miR-214-3p expression in CHON-001 cells, we used control-siRNA, LINC00958-siRNA, an inhibitor control, a miR-214-3p inhibitor, a mimic control, or a miR-214-3p mimic. All transfections were performed using Lipofectamine ® 3000 reagent (Thermo), following the manufacturer’s instructions, and the cells were incubated for 48 h. Subsequently, we assessed the cell transfection efficiency by qRT-PCR. qRT-PCR analysis We extracted total RNA from CHON-001 cells using TRIzol reagent (TaKaRa, Shiga, Japan) following the manufacturer’s instructions. Subsequently, total RNA was reverse transcribed into cDNA using the PrimeScript RT Reagent Kit (TaKaRa, China). PCR amplification was conducted on an ABI PRISM 7900 sequence detection system (Applied Biosystems, USA) to measure the levels of LINC00958, miR-214-3p, FOXM1, and GAPDH. The expression of target genes was quantified using the 2 −ΔΔCt method. MTT assay Following treatment, CHON-001 cells were cultured in 96-well plates at 37 °C. Subsequently, the cells were treated with 10 μl of MTT solution (5 mg/ml) and incubated for an additional 4 h. Following this incubation step, the solution was carefully removed, and 100 μl of DMSO was added to each well in the dark to dissolve the formazan crystals. Finally, after 15 min of gentle mixing, the optical density (OD) at 490 nm was measured using a multifunctional plate reader (BioTek, USA) following the manufacturer’s instructions. Flow cytometry (FCM) analysis We assessed apoptosis in CHON-001 cells using an Annexin-V/Propidium Iodide (PI) Apoptosis Detection Kit (BD Biosciences) with incubation at room temperature for 10 min following the provided instructions. Apoptotic cells were subsequently quantified using a flow cytometer (BD Technologies), and the data were analysed with FlowJo software. Western blotting analysis Proteins were extracted from CHON-001 cells using RIPA buffer (Beyotime), and protein concentrations were measured using a BCA Protein Assay Kit (Invitrogen, USA). Subsequently, proteins in the samples were separated on a 10% SDS‒PAGE gel and then transferred onto a PVDF membrane (Millipore, USA). After blocking with 5% skim milk in PBST for 1 h, the membranes were incubated overnight at 4 °C with primary antibodies against β-actin, Bax, Bcl-2, and FOXM1 (1:1000 dilutions). The membranes were subsequently incubated for 1 h with secondary antibodies. 
Finally, protein signals were visualized using the ECL method (Cytiva) following the manufacturer’s instructions. ELISA We collected supernatant samples from CHON-001 cells and measured the concentrations of secreted IL-6, IL-8, and TNF-α using ELISA kits (BD Biosciences) following the manufacturer’s instructions. Subsequently, the OD at 450 nm was measured using a Multiskan Spectrum microplate spectrophotometer (MD, USA). Lactate dehydrogenase (LDH) assay We assessed the release of LDH from CHON-001 cells using an LDH Cytotoxicity Assay Kit (Sigma). Cells were cultured in 12-well plates for 48 h. Following treatment, we collected both the supernatant and total lysate from the CHON-001 cells, and these samples were incubated with the LDH reaction mixture according to the manufacturer’s instructions for 15 min. The absorbance at 490 nm was then measured, and LDH release was quantified using a microplate reader (BioTek, USA). Statistical analysis Statistical analysis was performed using SPSS 20.0 software. The results are presented as the mean ± SD from three independent experiments. The statistical significance of differences among three or more groups and between two groups was assessed using one-way analysis of variance (ANOVA) or Student’s t test, respectively. * P < 0.05 and ** P < 0.01 were considered to indicate statistically significant differences.
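Two of the quantitative read-outs described above are simple transformations of raw instrument values: relative expression from qRT-PCR Ct values via the 2^-ΔΔCt method, and cytotoxicity as the fraction of total cellular LDH released into the supernatant. The short Python sketch below illustrates both calculations under these standard definitions; the function names, the blank-subtraction step, and all example numbers are our assumptions, not values from the study.

```python
def ddct_relative_expression(ct_target_treated, ct_ref_treated,
                             ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt method.

    ΔCt = Ct(target) - Ct(reference, e.g. GAPDH), computed separately for the
    treated and control samples; ΔΔCt = ΔCt(treated) - ΔCt(control).
    """
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(delta_ct_treated - delta_ct_control)

def ldh_release_percent(od_supernatant, od_total_lysate, od_blank=0.0):
    """LDH release as a percentage of the total cellular LDH.

    Blank subtraction is assumed here; the exact formula follows the kit
    manufacturer's instructions.
    """
    return 100.0 * (od_supernatant - od_blank) / (od_total_lysate - od_blank)

# Hypothetical values: LINC00958 in IL-1β-treated vs. untreated cells, GAPDH as reference.
print(ddct_relative_expression(24.0, 18.0, 27.0, 18.2))  # ~7-fold upregulation
print(ldh_release_percent(0.62, 1.40, od_blank=0.05))    # ~42% release
```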
Results MiR-214-3p was identified as a direct target of LINC00958 To investigate whether LINC00958 functions as a competing endogenous RNA by targeting miRNAs, we employed the target prediction tool StarBase to identify potential target genes. Our analysis revealed that LINC00958 and miR-214-3p may have binding sites (Fig. 1 A). To further validate this interaction, we employed a dual luciferase reporter system, which confirmed that The miR-214-3p mimics decreased the activity of LINC00958 reporter gene plasmids while having no effect on mutant plasmids (Fig. 1 B). This evidence strongly suggested that miR-214-3p directly interacts with LINC00958. Expression of LINC00958 and miR-214-3p in articular cartilage tissue samples from OA patients and in IL-1β-stimulated CHON-001 cells Moreover, we assessed the expression levels of LINC00958 and miR-214-3p in articular cartilage tissue samples obtained from OA patients. qRT‒PCR analysis revealed upregulation of LINC00958 in the articular cartilage tissues of OA patients, as shown in Fig. 2 A, in contrast to the normal control group. Furthermore, as demonstrated in Fig. 2 B, the expression level of miR-214-3p was significantly lower in the articular cartilage tissues from OA patients than in those from normal control individuals. Additionally, we examined the expression of LINC00958 and miR-214-3p in an in vitro model of chondrocyte inflammatory injury induced by IL-1β. Our findings revealed upregulation of LINC00958 and downregulation of miR-214-3p in IL-1β-stimulated CHON-001 cells (Fig. 2 C, D). These observations confirmed the involvement of both LINC00958 and miR-214-3p in the progression of OA. LINC00958 negatively regulated the expression of miR-214-3p in CHON-001 cells We then assessed the functional roles of LINC00958 and miR-214-3p in CHON-001 cells. These cells were stimulated with 10 ng/ml IL-1β and transfected with various constructs, including control-siRNA, the miR-214-3p inhibitor, the inhibitor control, and LINC00958-siRNA. The transfection efficiency was determined by qRT‒PCR. As shown in Fig. 3 A, the introduction of LINC00958-siRNA led to a substantial reduction in LINC00958 expression in CHON-001 cells. In contrast, the miR-214-3p level was significantly lower in cells transfected with the miR-214-3p inhibitor than in cells in the control, control-siRNA, and inhibitor control groups (Fig. 3 B). Additionally, LINC00958-siRNA transfection markedly increased the level of miR-214-3p in CHON-001 cells. However, this increase was effectively countered by transfection of the miR-214-3p inhibitor (Fig. 3 C). These findings aligned with our earlier results, which indicated that IL-1β induced an increase in LINC00958 expression and a decrease in miR-214-3p expression and that these effects were reversed by LINC00958-siRNA transfection. Furthermore, we detected the opposite results in cells transfected with the miR-214-3p inhibitor, as evidenced by the upregulation of LINC00958 expression and the downregulation of miR-214-3p expression (Fig. 3 D, E). Taken together, these findings provide strong evidence that LINC00958 exerts a negative regulatory effect on miR-214-3p expression in CHON-001 cells. Downregulation of LINC00958 alleviated the decrease in the viability and increase in the apoptosis in IL-1β-stimulated CHON-001 cells by targeting miR-214-3p To elucidate the roles of LINC00958 and miR-214-3p in regulating the viability and apoptosis of CHON-001 cells, we stimulated these cells with 10 ng/ml IL-1β for 12 h. 
Additionally, we transfected cells with control-siRNA, LINC00958-siRNA, the inhibitor control, or the miR-214-3p inhibitor. Exposure to IL-1β led to a reduction in cell viability (Fig. 4 A), an increase in LDH release (Fig. 4 B), increases in apoptotic cell populations (Fig. 4 C, D), an increase in BAX expression (Fig. 4 E, F), and inhibition of BCL2 expression (Fig. 4 E, G). We observed the opposite effects in cells transfected with LINC00958-siRNA. Importantly, these effects were consistently reversed by transfection of the miR-214-3p inhibitor, highlighting the potential of LINC00958 downregulation to mitigate the IL-1β-induced reduction in cell viability and increase in apoptosis in CHON-001 cells and indicating that LINC00958 achieves this modulatory effect by targeting miR-214-3p. Downregulation of LINC00958 alleviated the IL-1β-induced release of inflammatory factors from CHON-001 cells Furthermore, we elucidated the impacts of LINC00958 and miR-214-3p on the inflammatory response in CHON-001 cells by specifically measuring IL-6 (Fig. 5 A), IL-8 (Fig. 5 B), and TNF-α (Fig. 5 C) concentrations. Our ELISA results indicated significant increases in the concentrations of these inflammatory factors in IL-1β-treated CHON-001 cells. Importantly, introduction of LINC00958-siRNA significantly inhibited this inflammatory response compared to that in the control-siRNA group. However, these inhibitory effects were subsequently reversed following treatment with the miR-214-3p inhibitor. These findings indicated that downregulation of LINC00958 effectively mitigated the IL-1β-induced inflammatory response in CHON-001 cells. MiR-214-3p mimic transfection alleviated the decrease in the viability and increase in the apoptosis of IL-1β-induced CHON-001 cells To gain further insight into the effects of miR-214-3p in IL-1β-stimulated CHON-001 cells, we stimulated cells with 10 ng/ml IL-1β for 12 h. Subsequently, we transfected either the mimic control or the miR-214-3p mimic into the cells. As shown in Figs. 4 A and 6 B, miR-214-3p was markedly upregulated in the miR-214-3p mimic group compared to the control and mimic control groups. As shown by our MTT and LDH release assays, transfection of the miR-214-3p mimic significantly augmented cell viability (Fig. 6 C) while decreasing LDH release (Fig. 6 D). Additionally, upregulation of miR-214-3p resulted in suppression of apoptosis (Fig. 6 E and F), reduced BAX expression (Fig. 6 G and H), and an increase in BCL2 expression (Fig. 6 G and I). These findings indicate that the miR-214-3p mimic effectively alleviated the IL-1β-induced decrease in the viability and increase in the apoptosis of CHON-001 cells. Upregulation of miR-214-3p relieved IL-1β-treated inflammatory response in CHON-001 cells Similarly, we investigated the impact of the miR-214-3p mimic on the release of inflammatory factors from CHON-001 cells. Our ELISA results indicated significant reductions in the secretion of IL-6, IL-8, and TNF-α from CHON-001 cells treated with the miR-214-3p mimic (Fig. 6 J–L). Collectively, these findings strongly suggest that the miR-214-3p mimic alleviated the inflammatory response induced by IL-1β in CHON-001 cells. MiR-214-3p negatively regulated FOXM1 expression in CHON-001 cells by targeting FOXM1 Next, we elucidated the potential mechanisms involving miR-214-3p in CHON-001 cells. Utilizing the online database starBase, we identified a binding site for miR-214-3p in FOXM1 (Fig. 7 A). 
Subsequently, a dual-luciferase reporter system was used to validate the interaction between miR-214-3p and FOXM1. Notably, transfection of the miR-214-3p mimic significantly reduced the luciferase activity of the wild-type FOXM1 reporter, while no evident change was observed for the mutant reporter (Fig. 7B). Furthermore, western blot and qRT-PCR analyses revealed that the FOXM1 level was elevated in cells transfected with the miR-214-3p mimic but markedly reduced upon miR-214-3p inhibition (Fig. 7C–F). Collectively, these results substantiate the hypothesis that silencing LINC00958 impedes OA progression through the miR-214-3p/FOXM1 axis.
Discussion OA is a prevalent joint ailment characterized by articular cartilage degeneration and deterioration, and it is a significant global public health concern. Accumulating evidence implicates various factors, such as mechanical stress, structural abnormalities, and obesity, in the aetiology of OA [2, 17]. As the global population ages, the incidence of OA continues to increase markedly each year. Recently, traditional Chinese medicinal compounds have garnered attention for their potential use in OA treatment, owing to their anti-inflammatory properties and limited side effects [18]. However, a definitive OA treatment remains elusive, prompting us to explore novel and effective therapeutic strategies for OA management. LncRNAs have emerged as pivotal players in the pathogenesis of numerous diseases, including OA [19–21]. Ji et al. [22] revealed the regulatory role of the lncRNA BLACAT1 in modulating the differentiation of bone marrow stromal stem cells by targeting miR-142-5p in OA. Moreover, LINC00958 has been implicated in various cancer types, including bladder cancer [23], breast cancer [24], and hepatocellular carcinoma [25]. Despite these findings, the specific function of LINC00958 in the context of OA remains unclear. Thus, our research focused on revealing the mechanistic role of LINC00958 in OA. Furthermore, accumulating studies have revealed the involvement of lncRNAs in disease pathogenesis through their interactions with miRNAs. Initially, we identified the target miRNA of LINC00958 and confirmed the direct interaction between LINC00958 and miR-214-3p. To shed light on the roles of LINC00958 in OA, we examined the expression levels of LINC00958 and miR-214-3p in articular cartilage tissue samples from OA patients. Our findings revealed upregulation of LINC00958 and downregulation of miR-214-3p in articular cartilage tissues of OA patients compared to those of normal control volunteers. Compelling evidence highlights the pivotal role of excessive IL-1β production in arthritic joints, which is closely linked to the onset and progression of OA through the regulation of chondrocyte apoptosis and inflammatory responses [4]. In our study, we established an in vitro OA model by stimulating CHON-001 cells with 10 ng/ml IL-1β for 12 h. Our data consistently indicated that LINC00958 was upregulated, while miR-214-3p was downregulated, in IL-1β-stimulated CHON-001 cells, consistent with previous reports [26]. These findings collectively suggest that LINC00958 may contribute to OA progression by modulating miR-214-3p expression. Numerous reports have highlighted the important roles played by lncRNAs in various biological functions, including cell viability, apoptosis, and metastasis [27, 28]. Subsequently, we performed functional analyses with LINC00958-siRNA or the miR-214-3p inhibitor to elucidate the mechanism through which they modulate IL-1β’s effects on CHON-001 cells. In our experiments, CHON-001 cells were stimulated with 10 ng/ml IL-1β and subsequently transfected with control-siRNA, the miR-214-3p inhibitor, the inhibitor control, or LINC00958-siRNA. Our results were reproducibly consistent with previous findings, confirming that LINC00958 exerts a negative regulatory effect on miR-214-3p expression in CHON-001 cells. Furthermore, IL-1β exposure led to diminished cell viability and increased LDH release.
Bcl-2, which is localized primarily in the cytoplasm, exerts its anti-apoptotic effect via targeting to the nucleus. Conversely, Bax, another crucial mediator, diminishes the anti-apoptotic effect of Bcl-2, ultimately leading to apoptotic cell death [29]. Our examination of Bax and Bcl-2 expression in CHON-001 cells revealed that IL-1β stimulation amplified BAX expression while concurrently inhibiting BCL-2 expression. Intriguingly, when LINC00958-siRNA was introduced, we observed contrasting results, specifically a decrease in BAX expression and an increase in BCL-2 expression. Notably, these changes were entirely reversed following miR-214-3p inhibitor transfection. Collectively, these findings illustrate that silencing LINC00958 may mitigate the IL-1β-induced reduction in CHON-001 cell viability and the induction of apoptosis by modulating miR-214-3p expression. Inflammatory processes also play a pivotal role in driving the progression of OA, contributing to the degradation of joint tissues. Notably, Fu [30] reported that hesperidin protects against IL-1β-induced inflammation in human OA chondrocytes. Accordingly, we investigated the secretion of the inflammatory cytokines IL-6, IL-8, and TNF-α from IL-1β-stimulated CHON-001 cells. Our ELISA results revealed that downregulation of LINC00958 effectively mitigated the inflammatory response provoked by IL-1β in CHON-001 cells. Consequently, suppressing chondrocyte apoptosis and alleviating the inflammatory response might offer valuable therapeutic benefits in OA. To further investigate the specific roles of miR-214-3p in OA, we stimulated CHON-001 cells with 10 ng/ml IL-1β for 12 h and then transfected them with either the mimic control or the miR-214-3p mimic. Our subsequent functional assays revealed that upregulation of miR-214-3p alleviated the IL-1β-induced decrease in viability and increase in apoptosis of CHON-001 cells, as evidenced by the increased cell viability and reduced LDH release. Furthermore, we observed that transfection of the miR-214-3p mimic led to suppression of apoptosis, a reduction in BAX expression, and an increase in BCL-2 expression. We also assessed the impact of the miR-214-3p mimic on the release of inflammatory cytokines: ELISA demonstrated a significant reduction in the secretion of IL-6, IL-8, and TNF-α from miR-214-3p mimic-treated CHON-001 cells. Taken together, these results strongly suggest that the miR-214-3p mimic effectively alleviated the IL-1β-induced inflammatory response in CHON-001 cells. Finally, we investigated the potential mechanisms involving miR-214-3p in CHON-001 cells. Utilizing the online database StarBase and a dual-luciferase reporter system, we confirmed the interaction between miR-214-3p and FOXM1. Further western blotting and qRT-PCR analyses revealed that miR-214-3p mimic transfection increased the FOXM1 level, whereas inhibition of miR-214-3p had the opposite effect, shedding light on the regulatory role of miR-214-3p in this context. Building on these findings, we conclude that silencing LINC00958 effectively inhibited OA-associated injury, primarily through the mitigation of apoptosis and the suppression of the inflammatory response in IL-1β-stimulated CHON-001 cells, effects mediated mainly through the miR-214-3p/FOXM1 axis.
Consequently, LINC00958 emerged as a promising candidate therapeutic biomarker in OA. To gain a more comprehensive understanding of the precise role of LINC00958 in the development of OA, additional in vivo experiments that can elucidate the exact underlying mechanisms are needed.
Objective We investigated the impact of the long noncoding RNA LINC00958 on cellular activity and oxidative stress in osteoarthritis (OA). Methods We performed bioinformatics analysis via StarBase and luciferase reporter assays to predict and validate the interactions between LINC00958 and miR-214-3p and between miR-214-3p and FOXM1. The expression levels of LINC00958, miR-214-3p, and FOXM1 were measured by qRT-PCR and western blotting. To assess effects on CHON-001 cells, we performed MTT proliferation assays, evaluated cytotoxicity with a lactate dehydrogenase (LDH) assay, and examined apoptosis by flow cytometry. Additionally, we measured the levels of the apoptosis-related proteins BAX and BCL2 using western blotting. The secretion of the inflammatory cytokines IL-6, IL-8, and TNF-α was measured using ELISA. Results Our findings confirmed that LINC00958 is a direct target of miR-214-3p. LINC00958 expression was upregulated and miR-214-3p expression was downregulated in both OA articular cartilage tissues and IL-1β-stimulated CHON-001 cells compared with the corresponding controls. Remarkably, miR-214-3p expression was further reduced after miR-214-3p inhibitor treatment but increased following LINC00958-siRNA transfection. Silencing LINC00958 significantly decreased LINC00958 expression, and this effect was reversed by miR-214-3p inhibitor treatment. Notably, LINC00958-siRNA transfection alleviated IL-1β-induced injury, as evidenced by increased cell viability, reduced LDH release, suppression of apoptosis, downregulated BAX expression, and elevated BCL2 levels. Moreover, LINC00958 silencing led to reduced secretion of inflammatory factors from IL-1β-stimulated CHON-001 cells. The opposite results were observed in the miR-214-3p inhibitor-transfected groups. Furthermore, in CHON-001 cells, miR-214-3p directly targeted FOXM1 and negatively regulated its expression. Conclusion Our findings suggest that downregulating LINC00958 mitigates IL-1β-induced injury in CHON-001 cells through the miR-214-3p/FOXM1 axis. These results imply that LINC00958 plays a role in OA development and may be a valuable therapeutic target for OA.
Author contributions YY designed the study. YY and QH carried out the experiments and wrote the paper. JH, YF and YX analyzed the data and discussed the results. YY and QH revised the manuscript. Funding This study was supported by the Applied Medicine of Hefei Municipal Health Commission (No. Hwk2021yb013) and the Key Project of the Third People’s Hospital of Hefei (No. SYKZ2020001). Availability of data and materials The dataset used and/or analyzed in this study is available from the corresponding author upon reasonable request. Declarations Ethics approval and consent to participate This study was conducted in compliance with ethical standards as outlined in the 1964 Declaration of Helsinki and its subsequent revisions or equivalent ethical guidelines. The Ethics Committee of the Third People’s Hospital of Hefei granted approval for all experiments conducted in this study. Informed consent was subsequently obtained from all the participating patients. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:48
J Orthop Surg Res. 2024 Jan 13; 19:66
oa_package/38/99/PMC10788018.tar.gz
PMC10788019
38218772
Introduction Epicardial adipose tissue (EAT) is a metabolically active tissue that structurally neighbors the myocardium and the coronary arteries [ 1 , 2 ]. EAT volume (EATV), like visceral adipose tissue, increases in obese patients and correlates with the presence and incidence of coronary artery disease (CAD) independent of traditional CAD risk factors [ 3 , 4 ]. EATV has been reported to be an independent predictor of left ventricular (LV) remodeling and LV diastolic dysfunction in patients with CAD or metabolic syndrome [ 5 – 7 ]. Excessive accumulation of EAT might exert a paracrine or mechanical burden on the coronary microcirculation and myocardium [ 5 ]. Previously, we found that the EATV index (EATVI; EATVI = EATV/body surface area, mL/m²) was strongly associated with the prevalence of paroxysmal atrial fibrillation (PAF) and persistent atrial fibrillation (PeAF) in a model adjusted for known atrial fibrillation (AF) risk factors [ 8 ]. An association between EAT and AF prevalence has also been reported [ 8 , 9 ]. Mahabadi et al. reported that EATV was significantly associated with prevalent AF, independent of AF risk factors; however, this effect was considerably reduced when corrected for left atrial (LA) size [ 9 ]. Sex disparities in the association between EATV and cardiovascular disease have been reported. We previously reported that EATV was a discriminator in men but not in women among patients with CAD [ 10 ] or those who underwent coronary artery bypass graft surgery [ 11 ]. Until now, the sex-dependent impact of EATV on LA size has not been elucidated. In addition, there are no reports regarding sex disparities in the association between EATV and AF. This study evaluated sex differences in the association between EATVI and the LA volume index (LAVI) in patients with sinus rhythm (SR) or AF.
Materials and methods Participants and data collection We retrospectively reviewed 267 consecutive Japanese patients who had undergone multi-detector cardiac computed tomography (MDCT) between May 2010 and April 2016 at Tomishiro Central Hospital, Okinawa, Japan, or at Tokushima University Hospital, Tokushima, Japan (Fig. 1 , flowchart of patient recruitment). Patients had undergone MDCT because of suspected symptomatic or asymptomatic coronary artery disease (CAD) in a moderate-to-high CAD risk category [ 12 ] or because of dyspnea suggestive of paroxysmal or chronic AF. The major exclusion criteria were as follows: LV ejection fraction (LVEF) < 50%; serum creatinine levels > 1.5 mg/dL; CAD with stenosis ≥ 50% in ≥ 1 major coronary artery branch; class III or IV heart failure; iodine-based contrast allergy; overt liver disease; hypothyroidism; and moderate to severe valvular disease. Of the 267 patients, 20 were excluded because of hypertrophic cardiomyopathy (n = 7), unmeasured LA volume (n = 9), or unmeasured EATV (n = 4). The remaining 247 patients (165 men and 82 women) were enrolled in the full analysis set. The patients were divided into the SR and AF groups. Clinical data, including CT and echocardiographic datasets, were collected from the electronic records by MM, KO, and GM, and anonymized datasets were analyzed offline by SY and MSh. Ethics approval and consent to participate The ethical committees approved the present study (Fukushima Medical University #2019 − 182, Tomishiro Central Hospital R01R027). The need for informed consent was waived by the Ethics Committee/Institutional Review Board of Fukushima Medical University and Tomishiro Central Hospital because of the retrospective nature of the study and the lack of direct patient contact or intervention. All methods were carried out in accordance with the Declaration of Helsinki. Multi-detector computed tomography Cardiac CT was performed using a 320-slice CT scanner (Aquilion One; Toshiba Medical Systems, Tokyo, Japan) with a 0.275-s rotation and 0.5/320/0.25 collimation. CT images were acquired using a retrospective, nonhelical electrocardiogram-triggered acquisition mode protocol (tube voltage, 120 kV; tube current, 450 mA × 5 ms) with a slice thickness of 5 mm [ 10 , 13 , 14 ]. All reconstructed CT image data were transferred to an offline workstation (Synapse Vincent, ver. 4.4, Fuji Film, Tokyo, Japan). For measurement of EATV, the pericardium was manually traced in each trans-axial slice, and automated processing then detected the voxels with a density of −190 to −30 Hounsfield units beneath the pericardium. The cranial and caudal borders of the epicardial adipose tissue were set at the edge of the left pulmonary artery origin and the left ventricular apex. Transthoracic echocardiography Experienced technicians performed comprehensive transthoracic echocardiography according to the American Society of Echocardiography guidelines [ 15 , 16 ]. Under the guidance of staff cardiologists, the left atrium was traced in the apical 4-chamber and 2-chamber views at the mitral valve level in end-systole, with care taken to exclude the left atrial appendage and pulmonary veins. LA volume (mL) was calculated using the biplane area-length method, and the LAVI (mL/m²) was obtained by dividing the LA volume by the body surface area [ 16 ]. LVEF was measured using the modified Simpson’s biplane method. Transmitral flow (TMF) velocity was recorded from the apical long-axis or 4-chamber view, and the peak early diastolic (E) TMF velocity was measured.
The mitral annular motion velocity pattern was recorded from the apical 4-chamber view with pulsed tissue Doppler echocardiography, with the sample volume located at the lateral or septal side of the mitral annulus. The mean peak early diastolic mitral annular velocity (E’) was measured on the septal and lateral sides, and the E to E’ ratio (E/E’) was calculated as a marker of LV filling pressure, as described previously [ 14 ]. Statistical analysis Continuous variables were expressed as the mean ± standard deviation when normally distributed and as the median [25th, 75th percentiles] when non-normally distributed. Categorical variables were expressed as numbers of patients with percentages. For two-group comparisons, the t-test or Mann-Whitney U test was used for continuous variables, and Fisher's exact test was used for categorical variables. The SR and AF groups were drawn from different clinical populations: the SR group comprised patients with suspected symptomatic or asymptomatic CAD, whereas the AF group comprised patients with dyspnea suggestive of paroxysmal or chronic atrial fibrillation. For this reason, comparisons were made within, but not between, the SR and AF groups. Univariate and multivariate linear regression models were used to identify factors associated with the left atrial volume index in the overall, SR, and AF groups, analyzed separately for men and women. For the multivariate analysis, the selected variables were Model 1 (age, BMI, male sex, and EATVI) and Model 2 (Model 1 + LVEF and antihypertensive drug use). Univariate and multivariate linear regression models to estimate LAVI were also performed in the following subgroups: age ≤ 65 and > 65 years, BMI ≤ 25 and > 25 kg/m², diabetes mellitus (yes or no), and hypertension (yes or no). Statistical analyses were performed using Exploratory 6.9.4.1 (Exploratory Inc., Mill Valley, CA, USA), Prism 9.3.1 (GraphPad Software Inc., La Jolla, CA, USA), and R 4.0.2 (R Foundation for Statistical Computing, Vienna, Austria). All statistical tests were two-tailed, and statistical significance was set at P < 0.05.
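To make the volume indexing and regression modelling described above concrete, the following sketch shows how EATV, EATVI, and the Model 1/Model 2 linear regressions could be computed. It is a minimal illustration rather than the authors' actual pipeline: the Hounsfield-unit window and index definitions follow the text, but the array, file, and column names (hu, pericardial_mask, patients.csv, LAVI, EATVI, antihypertensive, and so on) are hypothetical, and Python with statsmodels stands in for the statistical software listed above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def eat_volume_ml(hu, pericardial_mask, voxel_spacing_mm, lo=-190.0, hi=-30.0):
    """Epicardial adipose tissue volume (mL) from a cardiac CT series.

    hu               : 3D array of Hounsfield units, cropped to the cranial/caudal
                       borders described in the text
    pericardial_mask : 3D boolean array, True for voxels inside the traced pericardium
    voxel_spacing_mm : (dz, dy, dx) voxel dimensions in millimetres
    """
    fat = (hu >= lo) & (hu <= hi) & pericardial_mask    # adipose density window
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return fat.sum() * voxel_mm3 / 1000.0               # mm^3 -> mL (= cm^3)

def indexed(volume_ml, bsa_m2):
    """EATVI or LAVI (mL/m^2): a volume divided by body surface area."""
    return volume_ml / bsa_m2

# One row per patient; all column names are hypothetical.
df = pd.read_csv("patients.csv")

for rhythm in ("SR", "AF"):
    for sex in ("men", "women"):
        sub = df[(df.rhythm == rhythm) & (df.sex == sex)]

        # Univariate model: EATVI as the single predictor of LAVI
        uni = smf.ols("LAVI ~ EATVI", data=sub).fit()

        # Model 1: age, BMI, EATVI (male sex is constant within sex-stratified subsets)
        m1 = smf.ols("LAVI ~ age + BMI + EATVI", data=sub).fit()

        # Model 2: Model 1 + LVEF and antihypertensive drug use
        m2 = smf.ols("LAVI ~ age + BMI + EATVI + LVEF + antihypertensive", data=sub).fit()

        print(rhythm, sex, round(m2.params["EATVI"], 3), round(m2.pvalues["EATVI"], 4))
```

The subgroup analyses (age, BMI, diabetes, and hypertension strata) would follow the same pattern, simply adding the corresponding row filters before fitting the models.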
Results General characteristics Overall The general characteristics are shown for both men and women (Table 1 ). Overall, the 247 patients had a mean age of 65 years; 67% were men, 26% had SR, and 74% had AF. Men vs. women (all rhythm) Men (SR and AF combined, n = 165) were younger than women (Table 1 ). EATV was larger in men than in women (men, 131.8 ± 48.3 mL vs. women, 112.5 ± 42.3 mL; P = 0.002); however, EATVI was comparable between men and women. The use of antihypertensive drugs, angiotensin-converting enzyme (ACE) inhibitors, or angiotensin II receptor antagonists (ARB) was similar between the sexes, whereas the use of calcium channel blockers was lower and the use of beta-blockers was higher in men. The LA volume was higher in men than in women, and the LAVI was comparable between men and women. E/E’ was lower in men than in women. SR vs. AF In men, age, BMI, EATV, and EATVI were comparable between the SR and AF groups. The prevalence of type 2 diabetes mellitus and hypertension was similar between the SR and AF groups; however, the use of antihypertensive drugs was higher in the AF group. LVEF, LA volume, and LAVI were comparable between the SR and AF groups; however, E/E’ was lower in patients with SR. In women, age, BMI, EATV, and EATVI were similar between the SR and AF groups. The prevalence of type 2 diabetes mellitus and hypertension was comparable between the SR and AF groups; however, the use of antihypertensive drugs was higher in the AF group. LVEF and LAVI were comparable between the SR and AF groups; however, E/E’ was lower in patients with SR. Comparison of EATVI and LAVI between the SR and AF groups Overall, EATVI and LAVI were higher in the AF group than in the SR group (Fig. 2 ). In men, EATVI and LAVI were comparable between the SR and AF groups. In women, EATVI was comparable between the SR and AF groups, whereas LAVI was higher in the AF group than in the SR group. The relationship between EATVI and LAVI All rhythm Univariate analysis showed that EATVI was positively correlated with LAVI in men, but not in women (Fig. 3 , left panel). SR In the overall SR group, univariate analysis showed that only age was positively correlated with LAVI (Fig. 3 , middle panel, and Table 2 A). In men, univariate analysis showed that only EATVI was positively correlated with LAVI (Fig. 3 , middle panel, and Table 2 A). Multivariate analysis revealed that EATVI was not associated with LAVI after correction for confounding factors (Models 1 and 2) (Table 2 A). In women, univariate and multivariate analyses showed that no significant factors were correlated with LAVI (Fig. 2 ; Table 2 ). AF In the overall AF group, univariate analysis showed that age and beta-blocker use were positively correlated with LAVI (Table 2 B). Univariate and multivariate analyses showed no significant correlation between EATVI and LAVI (Fig. 3 , right panel, and Table 2 B). In men, univariate analysis showed that the use of antihypertensive drugs was positively correlated with LAVI (Table 2 B). Univariate and multivariate analyses showed no significant correlation between EATVI and LAVI (Fig. 3 , right panel). In women, there was a nonsignificant trend toward a negative correlation between EATVI and LAVI (Fig. 3 , right panel). Univariate analysis showed no significant factors correlated with LAVI (Table 2 B). However, multivariate analysis showed that age was positively correlated and EATVI was negatively correlated with LAVI (Models 1 and 2) (Table 2 B).
Subgroup analysis Overall In the all-rhythm analysis, EATVI was positively correlated with LAVI in the SR and BMI ≤ 25 subgroups (Fig. 4 , upper row in the column of all rhythm). However, EATVI was not correlated with LAVI in any subgroup of the SR or AF groups (Fig. 4 , upper row in the columns of SR and AF). Men In all rhythms (Fig. 4 , middle row in the column of all rhythm), EATVI was positively correlated with LAVI in the subgroups of SR, DM (no), and HT (yes). In SR, EATVI was positively correlated with LAVI in patients aged > 65 years (Fig. 4 , middle row in the column of SR). No significant factors were associated with LAVI in AF (Fig. 4 , middle row in the column of AF). Women In all rhythms (Fig. 4 , lower row in the column of all rhythm), EATVI was negatively correlated with LAVI in the subgroup of patients aged > 65 years. There were no significant factors associated with LAVI in SR (Fig. 4 , lower row in the column of SR). In AF, EATVI was negatively correlated with LAVI in the subgroups of age > 65 years, BMI > 25, and HT (yes) (Fig. 4 , lower row in the column of AF).
Discussion In this study, we evaluated sex differences in the association between EATVI and LAVI in patients with either SR or AF and obtained two major findings. First, in the overall population that included both men and women, EATVI was not significantly correlated with LAVI in either the SR or the AF group. Second, when men and women were analyzed separately, the relationship between EATVI and LAVI differed by sex. In patients with SR, there was a positive relationship between EATVI and LAVI in men, but not in women. In contrast, in patients with AF, a negative relationship was found between EATVI and LAVI in women, whereas no association was found in men. This is the first report to evaluate sex differences in the relationship between EATV and LAVI, suggesting that the effect of EAT on LAVI may differ between men and women. Relationship between EATVI and LAVI in overall patients EATV has been reported to be associated with the incidence and prevalence of AF [ 17 , 18 ]. EATV has also been associated with an increased LA size [ 19 ]. The prevalence and incidence of AF and LA size are closely and mutually related [ 9 , 19 ]. In other words, the larger the LA size, the more likely AF is to develop; conversely, once AF develops, LA size increases [ 17 , 18 ]. This study showed that EATVI was not significantly correlated with LAVI in either the SR or the AF group. These results are not consistent with those of previous reports. However, as discussed below, the relationship between EATVI and LAVI was found to be significant when analyzed separately by sex. Sex differences in the association between EATVI and LAVI in patients with SR The relationship between the EATVI and LAVI differed between men and women in both the SR and AF groups. Sex differences in the degree of EATV and its clinical significance have been reported. The relationship between increased EATV and the presence of CAD [ 1 ] or a history of coronary artery bypass graft surgery [ 11 ] was found in men, but not in women. In our patients with SR, there was a positive relationship between EATVI and LAVI in men, but not in women. Fox et al. showed that in patients with SR, EATV correlated with LA dimension in men but not in women, which is in agreement with our results [ 19 ]. The findings of Fox et al. [ 19 ] and our results support the notion that EATV contributes to LA size or LAVI in men, but not in women. Sex differences in the association between EATVI and LAVI in patients with AF Few studies have reported sex differences in the relationship between EATV and AF. van Rosendael et al. showed that EATV was a factor in the development of AF only in men, even after correcting for AF risk factors [ 20 ]. We also reported that EATV was a factor in the development of both PAF and PeAF in men [ 8 ]. To our knowledge, this is the first report to examine the association between EATV and LA structure, such as LA size or LAVI, in patients with AF. Our results showed a negative relationship between EATVI and LAVI in women with AF, suggesting that a larger EATVI may suppress the increase in LAVI in women with AF. Because this was a cross-sectional study, cause-and-effect relationships could not be determined. However, the sex difference in the EATVI–LAVI relationship in patients with AF is an interesting result and prompts further investigation in future studies. Potential mechanisms of sex differences in the relationship between EATVI and LAVI In men, there was a significant correlation between EATVI and LAVI in the SR group (Table 2 ).
These correlations were found in the SR and DM (no) subgroups for all rhythms and in patients aged > 65 years in the SR group (Fig. 4 ); however, no correlations were found in the AF group. Theoretically, the deleterious effects of EATVI on LA size might be observed more clearly in patients with SR than in those with AF [ 19 ], since AF strongly affects LA function and size [ 21 ]. In contrast, in women, EATVI was not correlated with LAVI in the SR group but was negatively correlated with LAVI in the AF group (Table 2 ). Currently, there is no established explanation for this finding; however, we have attempted to provide hypothetical explanations. In women with AF, EATVI was negatively correlated with LAVI in the subgroups of age > 65 years, BMI > 25, and HT (yes). Therefore, it can be assumed that EATVI inhibits LA enlargement in preobese elderly women. Ovarian estradiol inhibits left ventricular remodeling and protects against LA diastolic dysfunction [ 22 ]. However, the present study found a negative link between EATVI and LAVI in menopausal women, indicating effects other than those of ovarian estradiol. There are two possible mechanisms through which EATVI may inhibit the increase in LAVI. First, EATVI may be linked to the favorable effects of estradiol and protective adipocytokine profiles. Estradiol declines rapidly after the loss of ovarian function at menopause; however, it continues to be produced in subcutaneous (SAT) and visceral adipose tissue (VAT) [ 23 ] as well as in EAT [ 24 ]. We previously showed that anti-inflammatory adiponectin was abundantly produced, and proinflammatory IL1B and NLRP3 were less abundant, in the SAT and VAT of menopausal women compared with men [ 25 ]. The anti-inflammatory and anti-fibrotic patterns of estradiol and adipocytokines in menopausal women with a larger EATVI could be protective of LA function. However, previous reports argue against protective effects of EAT on cardiac function in menopausal women [ 26 , 27 ]. Second, sex differences in heart cells, including myocytes, endothelial cells, smooth muscle cells, macrophages, fibroblasts, and valve cells, may be linked to the association between EATVI and LAVI [ 28 ]. Quantitative and qualitative differences in local EAT and whole-body adiposity may differentially affect LA function in men and women. However, the sex differences and underlying mechanisms observed in the current study should be reconfirmed in future studies. If there are sex differences in the effect of EATV on LA size and LA function, measures to prevent cardiovascular events related to LA abnormalities may need to be considered separately for men and women [ 29 ]. Limitations First, the cross-sectional design of this study limited causal interpretation. Second, the predominantly Japanese patient sample recruited at two locations may be subject to selection bias and limits the generalizability of our findings to a broader population. Third, we did not measure waist circumference or waist-to-hip ratio, which could have added incremental information on local versus systemic adiposity effects. Fourth, AF frequently develops in elderly individuals, who are typically lean. Our study subjects were relatively young and obese, which may have biased the results. Finally, the small subgroup sizes limited the number of adjusted variables in the regression models to avoid overfitting. Furthermore, the small subgroup sizes increase the risk of type II (β) error and tend to yield extreme estimates with poor reproducibility.
This is the first report on sex differences in the relationship between EATVI and LAVI, and the analysis was exploratory, testing a hypothesis that was not specified a priori. The study design is not conclusive, and this finding should be interpreted with caution. Therefore, future large, unbiased, prospective studies, including external validation, are required to confirm these conclusions and to clarify the detailed mechanisms.
Conclusion We evaluated the sex differences in the association between EATV and LAVI in patients with either SR or AF. We found a positive relationship among men with SR, and a negative relationship among women with AF. This is the first report to evaluate the relationship between EATV and LAVI, divided by sex, and may suggest clinical implications of sex differences in the etiology of AF.
Background Sex disparities in the association between epicardial adipose tissue volume (EATV) and cardiovascular disease have been reported. The sex-dependent effects of EATV on left atrial (LA) size have not been elucidated. Methods A total of 247 consecutive subjects (median age 65 [interquartile range 57, 75] years; 67% men) who underwent multi-detector computed tomography and had no significant coronary artery disease or moderate to severe valvular disease were divided into two groups: patients with sinus rhythm (SR) and patients with atrial fibrillation (AF). Sex differences in the association between the EATV index (EATVI) (mL/m²) and LA volume index (LAVI) in 63 SR (28 men and 35 women) and 184 AF (137 men and 47 women) patients were evaluated using univariate and multivariate regression analyses. Results In the overall population that included both men and women, EATVI was not significantly correlated with LAVI in patients with SR or AF. The relationship between EATVI and LAVI differed between men and women in both the SR and AF groups. In SR patients, there was a positive relationship between EATVI and LAVI in men, but not in women. In contrast, in patients with AF, a negative relationship was found between EATVI and LAVI in women, whereas no association was found in men. Conclusions We evaluated sex differences in the association between EATVI and LAVI in patients with either SR or AF, and found a positive relationship in men with SR and a negative relationship in women with AF. This is the first report to evaluate sex differences in the relationship between EATVI and LAVI, suggesting that EAT may play a role, at least in part, in sex differences in the etiology of AF.
Acknowledgements We are deeply grateful to the staff at the Ultrasound Examination Center, Tokushima University Hospital, and Tomishiro Central Hospital for acquiring the echocardiographic parameters. Author Contributions MSh conceptualized the study; SY and MSh analyzed the data and wrote the manuscript; MM, KO, and GM collected and managed the data; OA, SY, KK, TS, HY, DF, HM, and MSa reviewed and approved the final draft. Funding This study was supported by the Japan Society for the Promotion of Science (JSPS) (Grant Nos. JP16K01823 and JP17K00924 to MSh). Data Availability Derived data supporting the findings of this study are available from the corresponding author upon reasonable request. Declarations Competing interests The authors declare no competing interests. Ethics approval and consent to participate The ethical committees approved the present study (Fukushima Medical University #2019 − 182, Tomishiro Central Hospital R01R027). The need for informed consent was waived by the Ethics Committee/Institutional Review Board of Fukushima Medical University and Tomishiro Central Hospital because of the retrospective nature of the study and the lack of direct patient contact or intervention. Consent for publication Not applicable. Conflict of interest All authors declared no conflict of interest.
CC BY
no
2024-01-15 23:43:48
BMC Cardiovasc Disord. 2024 Jan 13; 24:46
oa_package/c8/32/PMC10788019.tar.gz
PMC10788020
38218810
Background The codon represents the fundamental connection between genes and proteins when deciphering genetic information. In the 64 standard genetic codes, there are 61 sense codons encoding 20 types of amino acids, and the remaining three are translation termination signals. Compared to the number of codable amino acids, the excess of possible nucleotide triplets results in a redundancy of the genetic code. Indeed, apart from tryptophan and methionine, which are encoded by a single codon, all other gene products are translated by two to six different triplets, a phenomenon defined as codon degeneracy [ 1 ]. Multiple codons that are decrypted into an identical amino acid are referred to synonymous codons, which are not uniformly utilized during protein synthesis in many organisms [ 2 ]. This species preference for certain codons, termed codon usage bias (CUB), is a consequence of the optimization of the deciphering strategy and plays an imperative role in the gene expression regulation [ 3 , 4 ]. Information on CUB can provide important insights into exogenous gene expression [ 5 ], gene function prediction [ 6 ], genetic divergence assessment [ 7 ], and organism evolution exploration [ 8 ] and can contribute to revealing the molecular mechanisms underlying the environmental adaptation of various species [ 9 ]. The degree of CUB divergence differs widely across species, genes, and even within an individual gene [ 10 , 11 ]. Causes for the existence of CUB in organisms are diverse and complicated. In the process of long-term evolution, CUB deviations are primarily driven by natural selection, directional mutation, and random genetic drift [ 12 ]. With the continuous progress of genome sequencing and bioinformatics, additional factors of complexity involved in CUB have been established over the last few decades, including genome size [ 13 ], gene expression pattern [ 14 ] and degree [ 15 ], gene length [ 16 ], efficient gene translation initiation [ 17 ], tRNA abundance [ 18 ] and interactions [ 19 ], synonymous substitution frequency [ 20 ], and mRNA folding [ 21 ], among others. Moreover, the patterns of CUB appear to be related to phylogenetic relationships, i.e., the more closely phylogenetically related species tend to share a more similar CUB pattern [ 22 ]. Given all of this, CUB is highly complex, and understanding it is challenging when considering the difficulty in determining the relative effect of the various factors. Much more detailed analyses of this fascinating phenomenon are needed to broaden our understanding of its biological implications and applications. Mitochondria (mt) are semiautonomous energy-producing eukaryotic organelles that drive oxidative phosphorylation for energy metabolism [ 23 ]. Ordinarily, plant mt genomes (mitogenomes) exhibit more complex features compared with both their counterparts in animals and the conserved plastid genomes of plants [ 24 ]. Ongoing advances in sequencing and assembly technologies have significantly promoted the complete sequencing of mitogenomes in land plants, but nevertheless, there is a requirement for more available data to gain more refined knowledge of plant mitogenomes. The analysis of codon preference in plant mitogenomes is of great significance for studying the genetic patterns, phylogenetic relationships, and evolution of their mtDNA. 
Although the research of CUB in plant mitogenomes has made continuous progress [ 25 – 27 ], it has not been addressed more extensively and intensively like its equivalent nuclear and plastid genomes. Hemerocallis citrina Baroni belongs to the Asphodelaceae family and is a popular perennial herbaceous plant widely cultivated across Asia for food nutrition [ 28 ], medicinal properties [ 29 ], and landscape beautification [ 30 ]. The immature flower buds are generally processed into dried vegetables with high nutraceutical value. H. citrina , also respected as the mother’s flower, has a long cultivation history and unparalleled cultural significance in China [ 31 ]. Recent studies have demonstrated that H. citrina is rich in flavonoids, polyphenols, alkaloids, and anthraquinones [ 29 , 30 ], making it a potent medicine for anti-inflammatory, antidepressant, and antioxidant uses. The successive acquisition of sequence information for the chloroplast (cp) [ 32 ] and nuclear [ 33 ] genomes symbolizes the considerable progress of H. citrina genomics research in recent years. Our team adopted a strategy of integrating Oxford Nanopore long-read and Illumina short-read sequencing to complete the sequencing, assembly, and annotation of the H. citrina mitogenome [ 34 ]. However, systematical analysis on the CUB of the mitogenome has not been performed in H. citrina. The knowledge gained from CUB research provides useful clues for improving the expression level of exogenous genes and optimizing molecular-assisted breeding programmes in H. citrina . Consequently, it is particularly significant to analyze the CUB patterns and further evaluate the evolution and phylogeny of H. citrina , considering its tremendous economic benefits and various utilities. In this research, we conducted comprehensive analysis of the CUB of mt genes in H. citrina . We investigated the codon composition characteristics and usage patterns and evaluated the factors that influence CUB. Furthermore, relative synonymous codon usage (RSCU)-based cluster and mt protein coding gene (PCG)-based phylogenetic analyses were performed to advance the understanding of the evolution and phylogeny of H. citrina . The results derived from this work may help to facilitate the mt gene utilization, genetic improvement, and molecular breeding of H. citrina .
Materials and methods Sequence retrieval The mitogenome sequences of H. citrina (MZ726801.1, MZ726802.1, and MZ726803.1) were retrieved from the National Center for Biotechnology Information (NCBI) database ( https://www.ncbi.nlm.nih.gov/nuccore/?term=Hemerocallis%20citrina%20mitochondrion ). We extracted the CDS of the mitogenome that started with ATG and ended with TAG, TGA, or TAA. Each CDS was longer than 300 bp, with a length that was an exact multiple of three. In addition, the sequences used for the subsequent analysis were processed by eliminating duplicate sequences and sequences containing ambiguous bases, i.e., bases other than A, C, G, and T. Analysis of codon usage characteristic parameters The codon usage indicators of the selected CDS were analyzed using the CodonW v1.4.2 program ( http://codonw.sourceforge.net/ ), including CAI, CBI, Fop, RSCU, GC3s, A3s, T3s, C3s, and G3s. The other codon composition indices, including ENC, GCall, GC1, GC2, and GC3, were determined using the online Cusp and Chips programs from EMBOSS ( http://www.bioinformatics.nl/emboss-explorer/ ). Then, correlation analysis of the main characteristic parameters was performed using the Correlation Plot tool in Origin 2022 software based on the Pearson correlation coefficient method. ENC-plot analysis ENC is a vital indicator for evaluating the degree of preference in the imbalanced use of synonymous codons [ 52 ]. Usually, the ENC value ranges from 20 to 61 and is negatively correlated with codon preference. A smaller ENC value indicates a gene with a stronger bias, the extreme case being the exclusive use of a single codon to encode each amino acid. Conversely, a gene with an ENC value higher than 35 is considered to have a weak usage preference, and an ENC value of 61 indicates no bias at all [ 49 ]. GC3s represents the average GC content at the ‘silent’ site of synonymous codons and is an important index for revealing nucleotide composition bias. The ENC-plot was compiled using the ENC value of each gene as the ordinate and GC3s as the abscissa to explore the decisive factor influencing CUB. The standard curve was drawn according to the following equation: ENC_expected = 2 + GC3s + 29/[GC3s² + (1 − GC3s)²] [ 52 ]. Under the condition that mutation pressure is the sole determinant of codon usage, the genes are located on or close to the standard curve, whereas when the points fall below and far away from the expected curve, this suggests that natural selection and other factors may greatly affect codon bias [ 53 ]. In order to better evaluate the difference between the expected and actual ENC values, the ENC ratio was calculated as (ENC_expected − ENC_observed)/ENC_expected, following the previously described formula [ 50 ]. Neutrality plot analysis Neutrality plot analysis is commonly applied to study the correlation among bases at the three codon positions, revealing the roles of natural selection and mutation pressure in the CUB patterns [ 54 , 55 ]. In the neutrality plot, each individual mt gene of H. citrina is represented by a discrete point. The mean value of GC1 and GC2 for each gene was denoted by GC12, and GC12 and GC3 serve as the respective ordinate and abscissa of the scatterplot. If a notable correlation exists between GC12 and GC3, that is, if the points are distributed along the diagonal of the plot with a slope close to one, the CUB is considered to be dominated by mutation pressure. In contrast, a regression line with a slope close to zero and no significant correlation between GC12 and GC3 imply that natural selection predominates [ 56 ].
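The ENC-based quantities above can be expressed in a few lines of code. The sketch below is a minimal illustration under the usual reading of the cited formulas (the expected-ENC standard curve and the relative ENC deviation), not a re-implementation of CodonW or EMBOSS; the function names are hypothetical, and scipy's linregress stands in for the neutrality-plot regression.

```python
from scipy.stats import linregress

def expected_enc(gc3s):
    """Expected ENC under mutation pressure alone, as a function of GC3s
    (the standard curve of the ENC-plot)."""
    s = gc3s
    return 2.0 + s + 29.0 / (s ** 2 + (1.0 - s) ** 2)

def enc_ratio(enc_observed, gc3s):
    """(ENC_expected - ENC_observed) / ENC_expected; positive values correspond
    to genes lying below the standard curve."""
    exp = expected_enc(gc3s)
    return (exp - enc_observed) / exp

def neutrality_regression(gc12, gc3):
    """Slope, correlation coefficient, and P value of GC12 regressed on GC3,
    with one value per gene, for the neutrality plot."""
    fit = linregress(gc3, gc12)
    return fit.slope, fit.rvalue, fit.pvalue
```

Applied to the per-gene GC12, GC3, GC3s, and ENC values produced by the programs above, these helpers would yield the ENC ratios and the neutrality-plot slope and correlation of the kind reported in the Results.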
PR2-plot analysis In previous studies, the development of codon usage patterns was confirmed to be associated with the base composition at the ‘silent’ site of the codon [ 57 ]. PR2-plot analysis is extensively applied to evaluate the bias relationship between A/T and C/G at the synonymous site of the codon and, further, to determine the effects of mutation, selection, or other factors on CUB. The analysis is particularly meaningful for amino acids of a coding gene with four synonymous codons [ 58 ]. Consequently, the plan scatter diagram was constructed with A3s/(A3s + T3s) as the ordinate and G3s/(G3s + C3s) as the abscissa. The four-codon amino acids, i.e., valine, proline, threonine, alanine, and glycine, were selected to calculate the composition frequency of the third base position of each gene. The center point of the plot represents A = T and G = C with both coordinates equal to 0.5, presenting that codon bias is entirely caused by mutation; otherwise, natural selection and other factors may act on codon preference. The degree of distribution deviation from the center allows us to determine the direction and degree of the base deviation [ 58 ]. Analysis of RSCU and putative optimal codons The RSCU value of a codon refers to the ratio between the observed usage value and the expectation, reflecting the relative usage preference for specific codon compositions encoding the same amino acid [ 59 ]. When RSCU is equivalent to 1, codon usage is unbiased, and the codon is therefore selected randomly or equally. Codons with RSCU values greater than 1 are taken as high-frequency codons, which illustrates that codon usage is biased with high preference; the converse indicates the specific codon frequency is low [ 60 ]. For high-frequency codons, the codon whose ENC difference exceeds a certain critical value is determined to be an optimal codon [ 61 ]. The optimal codon is the preferred codon identified by calculating and ordering the ENC values of all genes. In general, highly expressed genes represent a large degree of codon preference and thus a small ENC value. On the basis of the above principles, 10% of the genes at the high and low end of the ordered ENC values were selected to establish low- and high-bias gene groups, respectively. The difference between the RSCU values of the codons from the two groups was calculated as ΔRSCU. The codons with RSCU > 1 and ΔRSCU > 0.08 were defined as the optimal codons of the gene [ 62 ]. Clustering of codon usage preference and phylogenetic analyses To explore the degree of divergence in the mitogenome codon usage more accurately, a cluster analysis was conducted between H. citrina and 14 other monocotyledons using SPSS 25.0 software. In the clustering process, each monocotyledon was taken as an object, and the RSCU values corresponding to 59 codons (excluding the codon AUG encoding methionine, UGG encoding tryptophan, and the three stop codons UAA, UAG, and UGA) were used as variables. The cluster pedigree was then established based on the squared Euclidean distance method [ 63 ]. Meanwhile, a contiguous sequence was constructed by lining up the 16 conserved mt PCGs ( atp1 , atp6 , atp9 , ccmB , ccmC , ccmFc , ccmFn , cob , cox3 , matR , nad3 , nad4L , nad6 , nad7 , nad9 , and rps12 ) followed by alignment using MAFFT v.7.4.0 program [ 64 ] for the analyzed species. The maximum likelihood (ML) phylogenetic tree was constructed based on a Tamura-Nei model using MEGA 7 software [ 65 ] with 1,000 bootstrap replicates.
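As a companion to the RSCU definition used above, the sketch below computes RSCU values from an in-frame coding sequence. It is a minimal illustration rather than the CodonW implementation: Biopython's standard codon table is assumed to be acceptable for grouping synonymous codons, the helper names (count_codons, rscu) are hypothetical, stop codons are excluded, and the single-codon families (Met, Trp) simply receive an RSCU of 1.

```python
from collections import Counter, defaultdict
from Bio.Data import CodonTable

def count_codons(cds):
    """Codon counts for an in-frame coding sequence (length a multiple of three)."""
    cds = cds.upper().replace("U", "T")
    return Counter(cds[i:i + 3] for i in range(0, len(cds) - 2, 3))

def rscu(codon_counts):
    """Relative synonymous codon usage: the observed count of each codon divided by
    the count expected if all synonyms of its amino acid were used equally."""
    table = CodonTable.unambiguous_dna_by_name["Standard"]
    families = defaultdict(list)
    for codon, aa in table.forward_table.items():       # 61 sense codons; stops excluded
        families[aa].append(codon)

    values = {}
    for codons in families.values():
        total = sum(codon_counts.get(c, 0) for c in codons)
        if total == 0:
            continue                                    # amino acid absent from the CDS
        expected = total / len(codons)
        for c in codons:
            values[c] = codon_counts.get(c, 0) / expected
    return values

# Codons with RSCU > 1 are the high-frequency codons of a gene or genome.
counts = count_codons("ATGGCTGCAGCTTTAGAACGA" * 10)     # toy sequence, not real data
high_frequency = {c: v for c, v in rscu(counts).items() if v > 1}
```

The RSCU-based clustering described above could then be carried out by assembling one 59-codon RSCU vector per species and applying hierarchical clustering (for example, scipy.cluster.hierarchy.linkage) with squared Euclidean distances, mirroring the SPSS procedure.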
Results Codon composition of the H. citrina mitogenome The final 28 protein coding sequences (CDS) of the mitogenome in H. citrina were available for codon usage analysis. The overall GC content of the whole mitogenome (GCall) was estimated at 43.59%, and the frequency of GC at each codon position (GC1, GC2, and GC3) was lower than 50% without exception (Table 1 ). Although the percentage of the GC composition in each gene was slightly different, the content order ranking of GC1 > GC2 > GC3 was highly consistent (Table 2 ). Furthermore, the average GC composition at the third position of synonymous codons (GC3s) of the CDS was lower than 50%, and the percentage of each individual base at the synonymous site (A3s, C3s, G3s, and T3s) conformed to the order ranking of T3s > A3s > G3s > C3s (Table 1 ), indicating that the codons of the H. citrina mitogenome tend to end in A/T. In the analysis of 28 CDS in the mitogenome, a total of 8850 codons were also obtained (Table 1 ), involving all 64 types of codons. The codon number of the mt genes in H. citrina varies greatly, ranging from 101 in rps14 to 673 in matR (Table 2 ). The effective number of codon (ENC) values range from 39.34 to 60.01, with an average of 53.89, exceeding 50 in the mitogenome. All of the genes had ENC values greater than 35, and up to 75% of them had high (> 50) ENC values, indicating fairly weak CUB in H. citrina . In addition, the codon adaptation index (CAI) values of the mt genes ranged from 0.12 to 0.21, with a mean value of 0.17, far less than 1. The values of codon bias index (CBI) and frequency of optimal codons (Fop) were clustered around − 0.18–0.02 and 0.29–0.42, respectively. In conclusion, the above results suggest that both codon bias and mt gene expression are relatively low in H. citrina . Correlation analysis between CUB parameters To reveal the role of the composition properties in CUB, Pearson’s correlation analysis was conducted between the important indices of codon usage. The results displayed a significantly positive correlation between GCall and GC1, GC2, and GC3 ( P < 0.01, Fig. 1 ), indicating an overall strong correlation of the composition among the three codon bases in the mitogenome. The ENC value had a significantly positive correlation with GC3 ( P < 0.01), implying that the base composition of the synonymous site has a crucial impact on CUB. Simultaneously, ENC positively correlated with the codon counts (CC) ( P < 0.05), which elucidates that gene length also contributes greatly to codon bias. Further, it was found that CBI and Fop were significantly correlated with GCall ( P < 0.01) and with GC3 ( P < 0.05), indicating that GCall is another major factor that affects CUB. Cause analysis of codon usage preference For purpose of understanding whether the G + C mutation bias influences the CUB of H. citrina , the ENC for genes were mapped against the GC3s. The ENC-plot of H. citrina is displayed in Fig. 2 . Only a few genes approached the solid curve, inferring that compositional mutation plays a significant role in CUB. However, most of the genes were scattered on both sides away from the standard curve, implying that natural selection has also shaped the CUB patterns. Besides, to better estimate the difference in ENC values, the ENC frequency distribution of the current genes was analyzed. The ENC ratio varied from − 0.15 to 0.25 (Fig. 3 ). Among the 28 mt genes, 19 (67.86%) had an ENC ratio greater than 0, reflected by these genes being distributed below the standard curve. 
Additionally, 15 genes (53.57%) were distributed within the range of -0.05–0.05 and had slight differences between the actual and expected ENC values. These results further demonstrate that the CUB patterns of the H. citrina mitogenome might be shaped by the joint effects of natural selection and mutation pressure. To determine the relationship among bases at three codon positions, neutrality plot analysis was performed for each mt gene of H. citrina (Fig. 4 ). Narrow ranges of GC3 and GC12 (0.2991–0.5676 and 0.3933–0.5199, respectively) were observed, and only a few genes were diagonally distributed in the plot. Moreover, GC12 displayed no significant correlation with GC3 ( r =-0.1755, P > 0.05), indicating that natural selection might have a considerable influence on the CUB of the H. citrina mitogenome. In addition, the slope of the regression line was − 0.1038, suggesting the mutation pressure effect accounted for only 10.38%. Consequently, the above results infer that natural selection is superior to mutation pressure in affecting the development of CUB in the H. citrina mitogenome. To further estimate the bias relationship of the four bases of mt genes, Parity rule 2 (PR2) - plot analysis was performed on the fourfold degenerate codon families. As depicted in Fig. 5 , the distribution of genes is not uniform in the PR2-plane. Most of the points are in the lower half of the area along the vertical direction, revealing that the use frequency of T is higher than that of A at the synonymous position. However, in the horizontal direction, more genes are obviously distributed on the left side of the plane, so the content of C is higher than that of G. Consequently, higher levels of pyrimidines (T and C) are confirmed at the ‘silent’ site of the codon in the H. citrina mitogenome. The unbalanced usage of bases again illustrates that not only mutation but also selection and other factors determine the CUB patterns of the H. citrina mitogenome. Determination of RSCU values and putative optimal codons In the present study, there were 29 codons with RSCU values greater than 1 defined as high-frequency codons (Fig. 6 ), indicating a high bias in the usage of these codons in the mitogenome of H. citrina. Excluding UUG (leucine), UCC (serine), and ACC (threonine), the remaining preferentially used codons end in A (11 of 29) or T (15 of 29). These results are further evidence that the mt gene of H. citrina is biased toward codons ending in A/T, illustrating that compositional constraints might have an impact on the synonymous CUB patterns of the H. citrina mitogenome. By comparing the RSCU values from the two bias gene groups constructed by the ENC difference, 22 optimal codons were identified whose RSCU values were greater than 1 with ΔRSCU > 0.08 (Table 3 ). In the preferred codons, 19 codons ended with A (7/19) or T (12/19), while only three codons ended with G (2/3) or C (1/3). These results illustrate that both the high-frequency and optimal codons of the mt genes in H. citrina tend to end in A/T. Cluster and phylogenetic analyses In order to gain a more accurate understanding of the divergence in the mitogenome codon usage, RSCU-based cluster analysis was conducted between H. citrina and other relatives. Since H. citrina is the only member of the Asphodelaceae family to have its complete mitogenome sequenced, 14 other monocotyledonous species with published mitogenome data were selected for subsequent comparison, i.e., Asparagus officinalis L. and Chlorophytum comosum (Thunb.) 
Baker of Asparagaceae, Allium cepa L. and Allium fistulosum L. of Amaryllidaceae, Apostasia shenzhenica Z.J.Liu & L.J.Chen, Paphiopedilum micranthum T. Tang & F. T. Wang, Gastrodia elata Blume, and Dendrobium amplum Lindl. of Orchidaceae, Cocos nucifera L. and Phoenix dactylifera L. of Arecaceae, Zea mays L. and Oryza sativa L. of Poaceae, Spirodela polyrrhiza (L.) Schleid. of Araceae, and Butomus umbellatus L. of Butomaceae. The RSCU-based cluster analysis indicated that the analyzed monocotyledons group into two clusters (Fig. 7 ). The first cluster is a separate branch containing only Z. mays, while the second cluster is composed of the remaining 14 monocots. H. citrina, along with Allium cepa, Allium fistulosum, Asparagus officinalis, Chlorophytum comosum, and S. polyrrhiza, is classified into one clade, indicating that these species have similar codon usage patterns. In addition, the phylogenetic tree based on the mt PCGs was also established for validation. As seen in Fig. 8, although the 15 analyzed species are likewise divided into two clades, there are several differences between the topologies of the two graphs, at least when distant taxa are compared. The analyzed Arecaceae and Orchidaceae plants were classified into different clades of the phylogeny, whereas Cocos nucifera and Phoenix dactylifera, which belong to Arecaceae, share a similar RSCU pattern with the Orchidaceae taxa (Paphiopedilum micranthum and Apostasia shenzhenica). Z. mays and O. sativa, both members of the Poaceae family, were more distantly related in the RSCU-based clustering lineage. H. citrina clusters together with Asparagus officinalis, Chlorophytum comosum, Allium cepa, and Allium fistulosum, which strongly indicates their close evolutionary relationships. When more closely related species are considered, such as H. citrina and members of Asparagaceae and Amaryllidaceae, similar codon usage preferences are observed. Consequently, H. citrina is close to Asparagus officinalis, Chlorophytum comosum, Allium cepa, and Allium fistulosum in evolutionary terms, reflecting a certain correlation between CUB and evolutionary relationships. These findings further support the likelihood that species with close evolutionary relationships have more similar codon usage preferences. However, it is worth noting that the position of S. polyrrhiza in the cluster analysis is quite different from its position in the phylogenetic tree. The mt PCG-based phylogenetic tree is closer to the true evolutionary classification of the 15 monocotyledonous species. This discrepancy illustrates that sequence-level mutations at individual loci also play an important role in the evolution of organisms.
Discussion Codon usage bias (CUB) in genomes is inevitable and refers to the uneven use of synonymous codons in gene coding to account for both gene regulation and molecular evolution. Previous studies have focused on the CUB patterns in many prokaryotes and eukaryotes, which was found to differ across various species and genes [ 10 , 11 ]. The ancestors of terrestrial plants are believed to be unicellular algae, which have undergone a prolonged period of selection favoring the enrichment of GC in their nuclear genomes [ 35 ]. However, the CUB of the cp and mt genomes differ from their host cell counterparts in terms of evolutionary rates and patterns [ 36 ]. It has been proposed that organellar genes exhibit AT-richness and bias toward A- or T-ending codons in their genomes [ 37 – 39 ]. Extensive studies on the codon preference of the cp genomes have been published for a wide variety of organisms, for instance, Oryza plants [ 40 ], Elaeagnus plants [ 41 ], Epimedium plants [ 42 ], Euphorbiaceae species [ 39 ], Asteraceae species [ 43 ], and Theaceae species [ 44 ], among others. Nevertheless, the status of plant mitogenomes has not been well surveyed. Here, we conducted comprehensive analysis on the CUB of the mt genes in H. citrina . Composition analysis of codons revealed that the GCall and GC3 of the mt genes were lower than 50%, presenting a preference for A/T-rich nucleotides and A/T-ending codons in H. citrina . Moreover, the high-frequency and optimal codons in the H. citrina mitogenome are predominantly A/T-ending codons. Similar findings have also been recorded in previous studies on the mitogenomes of O. sativa [ 45 ], Triticum aestivum L., Z. mays , Arabidopsis thaliana (L.) Heynh., and Nicotiana tabacum L. [ 37 ]. Our results lend further support to the evidence that the GC composition is the factor that most directly reflects the CUB patterns. Investigations of the factors influencing CUB in genomes have been continuous since striding into the era of genomics research. Various hypotheses have been proposed toward unraveling the reasons for deviations in CUB. Two typically accepted hypotheses explaining the origin of CUB are the selection–mutation–drift model [ 46 ] and neutral theory [ 47 ]. Ultimately, although CUB is determined by various factors, it appears that the evolution of CUB is a primary result of the balance between natural selection and directional mutation pressure. Research on Helianthus annuus L. suggests that mutation pressure is the most dominant evolutionary driving force of the cp genome [ 48 ]. However, in most cp genomes, natural selection would be more prominent in the formation of codon usage patterns [ 39 – 42 ]. With regard to plant mitogenomes, natural selection is considered to be the crucial factor shaping CUB [ 37 , 45 ]. In our present study, only a few genes approached the expected curve, whereas most genes were discretely distributed in the ENC-plot, implying mutation pressure is a minor factor of CUB. Combined neutrality plot and PR2-plot analyses augment the inference that the CUB of the H. citrina mitogenome are attributed to natural selection and mutation pressure, while natural selection is the decisive factor. Moreover, we found significant correlations of ENC with the GC3 and codon counts, suggesting that not only compositional constraints but also gene length contributes greatly to CUB. 
Therefore, we conclude that not only mutation but also selection and other factors, in combination, significantly contribute to framing the CUB patterns of the H. citrina mitogenome, with natural selection being the main determinant. The diversity of CUB among various organisms can provide valuable information for species classification and molecular evolution. Research has indicated that there is a certain correlation between the genetic relatedness of species and their codon usage preferences [ 22 ]. Here, we performed RSCU-based cluster analysis between H. citrina and 14 other monocots. H. citrina, along with Allium cepa, Allium fistulosum, Asparagus officinalis, Chlorophytum comosum, and S. polyrrhiza, was classified into one cluster, indicating that they share similar codon usage patterns. The phylogenetic tree, subsequently established based on the mt PCGs, confirmed that H. citrina is evolutionarily close to Asparagus officinalis, Chlorophytum comosum, Allium cepa, and Allium fistulosum. Our findings are quite consistent with research on the cp genome of Mesona chinensis Benth [ 22 ], displaying a certain correlation between CUB and evolutionary relationships. However, in cotton species, the phylogenetic relationships of the nuclear genomes are not well reflected by taxonomic results based on codon RSCU values [ 49 ]. A likely explanation is the fairly weak codon usage preference in the H. citrina mitogenome, which leaves the mt genes less susceptible to external factors during evolution. Consequently, RSCU-based cluster analysis can complement taxonomic studies of H. citrina. Nevertheless, it is worth noting that the position of S. polyrrhiza in the cluster analysis is quite different from its position in the phylogenetic tree. These results further indicate that evolutionary relationships inferred from codon preference characteristics may miss some useful information, such as that carried by the non-preferred codons in the CDS, which indirectly suggests that non-preferred codons also play an important role in organism evolution and phylogeny. For mitogenomes, although there are tremendous variations in size, structure, and sequence among different species, the products encoded by mt genes are quite conserved [ 24 ]. Codon usage preferences affect gene expression through the preferential use of optimal codons to regulate translational accuracy and efficiency [ 37 ]. Therefore, an investigation of CUB in the mitogenome can provide a basic understanding of mitogenomic evolution and offer deeper insight into improving the expression efficiency of exogenous target genes in host organisms. Typically, optimal codons in the nuclear genome predominantly end in C or G, whereas those in organellar genomes prefer A or T endings [ 37 , 50 , 51 ]. In this study, we identified a total of 29 high-frequency codons and 22 optimal codons, and most of them exhibit a preference for A or T at the synonymous site. Notably, the mitogenomes of higher plants such as T. aestivum, N. tabacum, Arabidopsis thaliana, Z. mays, Physcomitrella patens, and Marchantia polymorpha also tend to have optimal codons that end in A or T [ 37 ]. The optimization of codons will contribute essential information for the genetic transformation and protein expression of mt genes in H. citrina.
Conclusions In this study, mt genes of H. citrina were systematically analyzed to study the CUB patterns as well as the related forces influencing their evolutionary processes. The mitogenome exhibited weaker CUB and a preference for A/T-rich nucleotides and A/T-ending codons. Extensive measures were applied to evaluate the causes of CUB, as illustrated by the estimate of the codon usage characteristic indices, correlation, ENC-plot, neutrality plot, and PR2-plot analyses. Based on these, the formation of the CUB patterns of the H. citrina mitogenome is attributed to the combined effects of multiple factors, with natural selection being the decisive factor. Meanwhile, the RSCU-based cluster analysis and mt PCG-based phylogenetic tree revealed a certain correlation between CUB and evolutionary relationships. The inferred optimal codons also provide essential information for optimizing gene expression in H. citrina . In summary, these findings enrich our knowledge on the codon usage patterns of mitogenomes and serve as a fundamental reference for further studies on genetic modification and phylogenetic evolution in H. citrina .
Background Hemerocallis citrina Baroni is a traditional vegetable crop widely cultivated in eastern Asia for its high edible, medicinal, and ornamental value. The phenomenon of codon usage bias (CUB) is prevalent in various genomes and provides excellent clues for gaining insight into organism evolution and phylogeny. Comprehensive analysis of the CUB of mitochondrial (mt) genes can provide rich genetic information for improving the expression efficiency of exogenous genes and optimizing molecular-assisted breeding programmes in H. citrina . Results Here, the CUB patterns in the mt genome of H. citrina were systematically analyzed, and the possible factors shaping CUB were further evaluated. Composition analysis of codons revealed that the overall GC (GCall) and GC at the third codon position (GC3) contents of mt genes were lower than 50%, presenting a preference for A/T-rich nucleotides and A/T-ending codons in H. citrina . The high values of the effective number of codons (ENC) are indicative of fairly weak CUB. Significant correlations of ENC with the GC3 and codon counts were observed, suggesting that not only compositional constraints but also gene length contributed greatly to CUB. Combined ENC-plot, neutrality plot, and Parity rule 2 (PR2)-plot analyses augmented the inference that the CUB patterns of the H. citrina mitogenome can be attributed to multiple factors. Natural selection, mutation pressure, and other factors might play a major role in shaping the CUB of mt genes, although natural selection is the decisive factor. Moreover, we identified a total of 29 high-frequency codons and 22 optimal codons, which exhibited a consistent preference for ending in A/T. Subsequent relative synonymous codon usage (RSCU)-based cluster and mt protein coding gene (PCG)-based phylogenetic analyses suggested that H. citrina is close to Asparagus officinalis , Chlorophytum comosum , Allium cepa , and Allium fistulosum in evolutionary terms, reflecting a certain correlation between CUB and evolutionary relationships. Conclusions There is weak CUB in the H. citrina mitogenome that is subject to the combined effects of multiple factors, especially natural selection. H. citrina was found to be closely related to Asparagus officinalis , Chlorophytum comosum , Allium cepa , and Allium fistulosum in terms of their evolutionary relationships as well as the CUB patterns of their mitogenomes. Our findings provide a fundamental reference for further studies on genetic modification and phylogenetic evolution in H. citrina . Keywords:
Abbreviations A: Adenine; C: Cytosine; CAI: Codon adaptation index; CBI: Codon bias index; CDS: Coding sequences; cp: Chloroplast; CUB: Codon usage bias; ENC: Effective number of codons; Fop: Frequency of optimal codons; G: Guanine; GCall: The overall GC content of the genome; GC1/GC2/GC3: The GC content at each codon position; GC12: The average value of GC1 and GC2 for each gene; GC3s: The average GC content at the third position of synonymous codons; mt: Mitochondrial; NCBI: National Center for Biotechnology Information; PCGs: Protein coding genes; PR2: Parity rule 2; RSCU: Relative synonymous codon usage; T: Thymine; T3s/A3s/C3s/G3s: The frequency of T, A, C, and G at the third position of synonymous codons; U: Uracil. Acknowledgements The authors thank MDPI for English language editing. Authors’ contributions KZ and YW conceived the study. KZ and XS performed data analysis and drafted the manuscript. YW and YZ supervised the research and revised the manuscript. All authors have read and approved the final manuscript. Funding This work was funded by the Youth Science and Technology Innovation Project of the Tianjin Academy of Agricultural Sciences (Grant No. 2022014) and the Scientific Research Project of Shanxi Datong University (Grant No. 2022CXY22). Availability of data and materials The mitochondrial genome datasets generated and analyzed in this study are available in NCBI: Hemerocallis citrina (MZ726801.1-MZ726803.1, https://www.ncbi.nlm.nih.gov/nuccore/?term=Hemerocallis%20citrina%20mitochondrion ), Allium cepa (KU318712.1, https://www.ncbi.nlm.nih.gov/nuccore/KU318712.1 ), Allium fistulosum (OL347690.1, https://www.ncbi.nlm.nih.gov/nuccore/OL347690.1 ), Apostasia shenzhenica (NC_077647.1, https://www.ncbi.nlm.nih.gov/nuccore/NC_077647.1 ), Asparagus officinalis (NC_053642.1, https://www.ncbi.nlm.nih.gov/nuccore/NC_053642.1 ), Butomus umbellatus (KC208619.1, https://www.ncbi.nlm.nih.gov/nuccore/KC208619.1 ), Chlorophytum comosum (MW411187.1, https://www.ncbi.nlm.nih.gov/nuccore/MW411187.1 ), Cocos nucifera (KX028885.1, https://www.ncbi.nlm.nih.gov/nuccore/KX028885.1 ), Dendrobium amplum (MH591879.1-MH591896.1, https://www.ncbi.nlm.nih.gov/nuccore/?term=Dendrobium+amplum+mitochondrion%2C+complete+genome ), Gastrodia elata (MF070084.1-MF070102.1, https://www.ncbi.nlm.nih.gov/nuccore/?term=Gastrodia%20elata%20chromosome%20mitochondrion%2C%20complete%20sequence ), Paphiopedilum micranthum (OP465200.1-OP465225.1, https://www.ncbi.nlm.nih.gov/nuccore/?term=Paphiopedilum%20micranthum%20chromosome%20mitochondrion%2C%20complete%20sequence ), Phoenix dactylifera (MH176159.1, https://www.ncbi.nlm.nih.gov/nuccore/MH176159.1 ), Spirodela polyrrhiza (JQ804980.1, https://www.ncbi.nlm.nih.gov/nuccore/JQ804980.1 ), Oryza sativa (NC_011033.1, https://www.ncbi.nlm.nih.gov/nuccore/NC_011033.1 ), and Zea mays (NC_007982.1, https://www.ncbi.nlm.nih.gov/nuccore/NC_007982.1 ). Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:48
BMC Genom Data. 2024 Jan 13; 25:6
oa_package/61/6d/PMC10788020.tar.gz
PMC10788021
38218896
Background Prostein (P501S), also termed solute carrier family 45 member 3 (SLC45A3) is a protein composed of 553 amino acids which is coded by the SLC45A3 gene at chromosome 1q32.1 [ 1 ]. Its function is not well known but some data suggest a role in transmembrane transport of sugars [ 2 ]. Prostein is predominantly expressed in the prostate, where its expression is androgen regulated [ 3 ]. Prostein is the second most common 5′ partner gene in ETS Transcription Factor ERG (ERG) rearrangements in prostate cancer after Transmembrane Serine Protease 2 (TMPRSS2) [ 4 , 5 ], another constitutively expressed androgen regulated gene in prostate epithelium [ 6 ]. In the brain, prostein plays a role in regulating the lipid metabolism of oligodendrocytes and myelin [ 7 ]. A high level of prostein expression is a common feature in prostate cancer. Amanda et al. [ 8 ] described prostein positivity in 97% of 59 analyzed prostate cancers. Queisser et al. [ 9 ] found prostein expression in 96% of 79 prostate cancers. Sheridan et al. [ 10 ] reported prostein positivity in 99% of 53 metastatic prostatic carcinomas. Based on these data, prostein immunohistochemistry (IHC) has been suggested as a diagnostic tool for the distinction of prostatic adenocarcinoma from other tumors. This notion is also supported by data describing high specificity of prostein expression for prostate cancer. For example, Garudadri et al. [ 11 ] described a 100% specificity of prostein IHC in a study on 100 prostatic carcinomas and 60 normal and cancerous extra-prostatic tissues. In an analysis of 600 tumors from 20 sites of origin, Mochizuki et al. [ 12 ] found prostein positivity in 30 of 30 prostate adenocarcinomas but in only one tumor each of 30 hepatocellular carcinomas and of 30 invasive breast cancers of no special type (NST). Kalos et al. [ 3 ] did not detect prostein staining in 3,454 samples of more than 130 tumor entities and subentities while 94% of 60 analyzed prostate cancers showed prostein positivity. Osunkoya et al. [ 13 ] did not find prostein positivity in any of 9 colorectal adenocarcinomas infiltrating the prostate. Srinivasan et al. [ 14 ] did not see any prostein positivity in 132 urothelial carcinomas. However, Arnesen et al. [ 15 ] found prostein positivity in 11 of 14 Sertoli-Leydig or Leydig cell tumors of the testis and ovary and Chuang et al. [ 16 ] reported prostein positivity in 7 of 41 invasive urothelial carcinomas. To further corroborate the potential diagnostic utility of prostein IHC, a comprehensive survey of prostein immunostaining in an even broader range of tumor types is desirable. We therefore evaluated prostein expression in more than 19,000 tumor tissue samples from 152 different tumor types and subtypes as well as 76 different non-neoplastic tissue types by IHC in a tissue microarray (TMA) format.
Materials and methods Tissue microarrays (TMAs) Our normal tissue TMA was composed of 8 samples from 8 different donors for each of 76 different normal tissue types (608 samples on one slide). The cancer TMAs contained a total of 19,202 primary tumors from 152 tumor types and subtypes. The composition of both normal and cancer TMAs is described in detail in the “ Results ” section. Clinico-pathological data including pathological tumor stage (pT), grade, lymph node status (pN), lymphatic vessel (L) and blood vessel (V) infiltration were available for 327 gastric, 2,139 breast, and 2,351 colorectal carcinomas. All samples were from the archives of the Institutes of Pathology, University Hospital of Hamburg, Germany, the Institute of Pathology, Clinical Center Osnabrueck, Germany, and the Department of Pathology, Academic Hospital Fuerth, Germany. Tissues were fixed in 4% buffered formalin and then embedded in paraffin. TMA tissue spot diameter was 0.6 mm. The use of archived remnants of diagnostic tissues for manufacturing of TMAs and their analysis for research purposes as well as patient data analysis has been approved by local laws (HmbKHG, § 12) and by the local ethics committee (Ethics commission Hamburg, WF-049/09). All work has been carried out in compliance with the Helsinki Declaration. Immunohistochemistry Freshly cut TMA sections were immunostained on one day and in one experiment. Slides were deparaffinized with xylol, rehydrated through a graded alcohol series and exposed to heat-induced antigen retrieval for 5 min in an autoclave at 121 °C in pH 9.0 DakoTarget Retrieval SolutionTM (Agilent, CA, USA; #S2367). Endogenous peroxidase activity was blocked with Dako Peroxidase Blocking SolutionTM (Agilent, CA, USA; #52,023) for 10 min. Primary antibody specific for prostein (rabbit recombinant monoclonal, MSVA-460R, MS Validated Antibodies, Hamburg, Germany; #5241-460R) was applied at 37 °C for 60 min at a dilution of 1:150. For the purpose of antibody validation, the normal tissue TMA was also analyzed with the rabbit recombinant monoclonal prostein antibody EPR4795(2) (Abcam, Cambridge, UK; #ab137065) at a dilution of 1:150 with an otherwise identical protocol. Bound antibody was then visualized using the EnVision KitTM (Agilent, CA, USA; #K5007) according to the manufacturer’s directions. The sections were counterstained with haemalaun. For normal tissues, the staining intensity of positive cells was semi-quantitatively recorded (+, ++, +++). For tumor tissues, the percentage of prostein positive neoplastic cells was estimated, and the staining intensity was semi-quantitatively recorded (0, 1+, 2+, 3+). For statistical analyses, the staining results were categorized into four groups. Tumors without any staining were considered negative. Tumors with 1+ staining intensity in ≤ 70% of tumor cells or 2+ intensity in ≤ 30% of tumor cells were considered weakly positive. Tumors with 1+ staining intensity in > 70% of tumor cells, 2+ intensity in 31-70%, or 3+ intensity in ≤ 30% of tumor cells were considered moderately positive. Tumors with 2+ intensity in > 70% or 3+ intensity in > 30% of tumor cells were considered strongly positive. Statistics Statistical calculations were performed with JMP 16 software (SAS Institute Inc., NC, USA). Contingency tables and the chi-squared test were used to search for associations between prostein immunostaining and tumor phenotype.
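The four-tier scoring rule described above is essentially a small decision table; the sketch below encodes the stated cut-offs in Python for illustration. It assumes one dominant intensity (0–3) and one percentage of positive cells per tumor spot, as recorded in this study; the function name is ours and not part of any published scoring software.

```python
def prostein_category(intensity: int, pct_positive: float) -> str:
    """Map staining intensity (0-3) and % prostein positive tumor cells to the
    four categories defined in the Methods, using the stated cut-offs."""
    if intensity == 0 or pct_positive == 0:
        return "negative"
    if (intensity == 1 and pct_positive <= 70) or (intensity == 2 and pct_positive <= 30):
        return "weak"
    if (intensity == 1 and pct_positive > 70) or \
       (intensity == 2 and 30 < pct_positive <= 70) or \
       (intensity == 3 and pct_positive <= 30):
        return "moderate"
    # Remaining combinations: 2+ in >70% or 3+ in >30% of tumor cells.
    return "strong"

# Example: prostein_category(2, 45) -> "moderate"
```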
Results Technical issues A total of 17,146 (89.3%) of 19,202 tumor samples were interpretable in our TMA analysis. Non-interpretable samples demonstrated lack of unequivocal tumor cells or loss of the tissue spot during technical procedures. A sufficient number of samples (≥ 4) of each normal tissue type was evaluable. Prostein in normal tissues Prostein staining was always granular, cytoplasmic and predominantly perinuclear (“endoplasmatic reticulum pattern”). The staining was particularly strong in acinar cells of the prostate and occurred at lesser intensity in surface epithelial cells of the stomach, in goblet cells of the respiratory epithelium of the lung and (weaker) in bronchial glands, as well as in a subset of epithelial cells of the adenohypophysis. A weak prostein staining was also seen in a few colorectal epithelial cells (not in all samples) and in a subset of pancreatic islet cells. A perinuclear granular cytoplasmic prostein positivity also occurred in a small fraction of (monocytic) cells in the spleen and in a few cells of lymph nodes. In the brain, some glia cells showed a perinuclear granular cytoplasmic prostein staining. Representative images are shown in Fig. 1 . All these findings were seen with both antibodies, MSVA-460R and EPR4795(2). An additional cytoplasmic staining in the placenta and in testicular cells of the spermatogenesis was only seen with EPR4795(2) (Supplementary Fig. 1 ) and was therefore considered an antibody-specific cross-reactivity of EPR4795(2). Prostein immunostaining was absent in skeletal muscle, heart muscle, smooth muscle, myometrium of the uterus, corpus spongiosum of the penis, ovarian stroma, fat, skin (including hair follicles and sebaceous glands), oral mucosa of the lip, surface epithelium of the oral cavity and the tonsil, transitional mucosa of the anal canal, ectocervix, squamous epithelium of the esophagus, urothelium of the renal pelvis and urinary bladder, decidua, placenta, thymus, tonsil, gall bladder, liver, parotid gland, submandibular gland, sublingual gland, duodenum, small intestine, appendix, colorectum, kidney, seminal vesicle, testis, epididymis, breast, endocervix, endometrium, fallopian tube, adrenal gland, parathyroid gland, and the neurohypophysis. Prostein in cancer tissues As in normal tissues, prostein immunostaining in tumors was typically cytoplasmic, granular and perinuclear. Prostein positivity, and especially strong prostein staining, was predominantly seen in prostatic adenocarcinomas. 93% of primary prostate cancers and 63% of recurrent prostate cancers showed a strong prostein immunostaining, while 98% of primary prostate cancers and 94% of recurrent prostate cancers showed at least a weak positivity. Prostein staining was absent in all 18 small cell neuroendocrine carcinomas of the prostate. Prostein positivity - mostly at a lower level - was also detectable in 1,204 (7.2%) of the 16,709 analyzable extra-prostatic tumors. Of these, 922 (5.5%) showed a weak, 239 (1.4%) a moderate, and only 43 (0.3%) a strong immunostaining. Overall, 50 (34.0%) of 157 extra-prostatic tumor categories showed detectable prostein expression, with 12 (8.2%) tumor categories including at least one strongly positive tumor (Table 1 ). Representative images of prostein positive tumors are shown in Fig. 2 .
Extra-prostatic tumors with the highest rates of prostein positivity included different subtypes of salivary gland tumors (7.6-44.4%), neuroendocrine neoplasms (15.8-44.4%), adenocarcinomas of the gastrointestinal tract (7.3-14.8%), biliopancreatic adenocarcinomas (3.6-38.7%), hepatocellular carcinomas (8.1%), and adenocarcinomas of other organs of origin (up to 21%). A graphical representation of the ranking order of prostein positive and strongly positive cancers is given in Fig. 3 . A comparison between prostein expression and tumor phenotype is shown in Table 2 . Detectable prostein expression was linked to high grade (p = 0.0105), HER2 positivity (p = 0.0312), and estrogen receptor negativity (p = 0.0330) in invasive breast carcinomas of no special type (NST); to V0 status (p = 0.0139), right-sided tumor location (p = 0.0479), and KRAS mutations (p = 0.0133) in colorectal cancer; to pN0 stage (p = 0.0424) in pancreatic ductal adenocarcinoma; and to microsatellite instability (p = 0.0015) in gastric cancer.
Discussion Our successful analysis of more than 17,000 tumors provided a comprehensive overview on the patterns of prostein expression in cancer. The predominant expression of prostein in prostate cancer was expected since studies analyzing 9-220 tumor cases had earlier identified prostein positivity in up to 100% of prostate cancers [ 4 , 11 , 17 , 18 ]. Our positivity rate of 100% in Gleason 3 + 3 = 6, 98% in Gleason 4 + 4 = 8 and 97% in Gleason 5 + 5 = 10 prostate cancers is comparable with results from most previous studies [ 3 , 19 ]. The concept that prostein IHC can be used to corroborate a suspected prostatic origin of a cancer tissue is further supported by the retained prostein expression in at least 80% of prostate cancers that recurred after hormonal therapy [ 19 ]. Sheridan et al. [ 10 ] had previously identified prostein positivity in 99% of 53 analyzed prostatic cancer metastases. Hernandez-Llodra et al. [ 4 ] have previously suggested that the few prostate cancers with reduced or absent prostein expression might harbor SLC45A3:ERG fusions and that these tumors may be characterized by poor prognosis. The extensive analysis of non-prostatic tumors in this study identified a considerable number of tumor entities that can also express prostein. Although prostein expression was less frequent and often at markedly lower level in these tumors than in prostate cancer, the characteristic staining pattern with a distinct granular, perinuclear cytoplasmic prostein staining was always retained. The most commonly prostein positive tumors included salivary gland tumors, neuroendocrine neoplasms, various categories of gastrointestinal or biliopancreatic adenocarcinomas, hepatocellular carcinomas as well as adenocarcinomas of other organs of origin. All these tumor entities represent diagnostic options in case of a prostein positive tumor mass. It is of note that in some tumor entities, a perinuclear prostein expression was also observed in cells of monocytic origin such as for example in epitheloid cells accompanying lymphomas or in giant cells of tendon sheath tumors or in pilomatricoma. These findings fit with our observation of prostein positive monocytic cells in the spleen and the lymph node. Our data in primary and recurrent prostate cancer suggest sensitivity of 94–98% for the identification of a prostatic cancer origin, although these numbers might represent a slight underestimate because of an overrepresentation of Gleason 4 + 4, 5 + 5 and recurrent prostate cancers in our cohort. Accordingly, the sensitivity of PSAP (96.5%) and PSA (99.8%) were slightly higher in previous studies of our group analyzing large consecutive prostate cancer cohorts including much higher proportions of Gleason 3 + 3 and 3 + 4 cancer than in the current set of tumors. The specificity for the distinction of prostate cancer was somewhat lower for prostein (91.7%) as compared to the 100% for PSAP and PSA (99.7%) observed in these earlier studies [ 20 , 21 ]. However, the characteristic granular perinuclear staining pattern that can hardly result from staining artefacts is a major strongpoint of prostein IHC which may thus justify the use of prostein antibodies as a part of a diagnostic panel for the identification of a prostatic cancer origin. The location of the prostein protein in subcellular vesicles in the cytoplasm and co-localization to other compartments, i.e., the endoplasmatic reticulum fits well with the estimated function of prostein as a sucrose transport protein [ 2 , 22 ]. 
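For readers who want to reproduce the kind of sensitivity and specificity figures discussed here, the calculation follows directly from the standard 2×2 definitions. The sketch below is illustrative only: it treats any prostein positivity as a positive test and reuses the raw counts from the Results section, so the resulting specificity (about 92.8%) is close to, but not identical with, the 91.7% quoted above, which was presumably derived on a different basis.

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: prostate cancers correctly flagged by the marker."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: extra-prostatic tumors correctly left unflagged."""
    return tn / (tn + fp)

# Illustrative counts taken from the Results section, with "any prostein
# positivity" treated as a positive test (a simplifying assumption).
extra_prostatic = 16709      # analyzable extra-prostatic tumors
false_pos = 1204             # extra-prostatic tumors with any prostein staining
print(round(specificity(tn=extra_prostatic - false_pos, fp=false_pos), 3))  # -> 0.928
```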
However, many of the extra-prostatic tumor entities that were most commonly prostein positive were adenocarcinomas or neuroendocrine tumors. As these cell types share a secretory or neurosecretory function, it might be speculated that prostein also has a more general role in cell secretion. The comparison of detectable prostein expression with histopathological and molecular tumor parameters in breast, colon, gastric and pancreatic adenocarcinoma revealed only a few statistically significant associations, which do not provide strong evidence for a relevant biological/clinical role of prostein in non-prostatic cancers. It is possible that these findings represent statistical artifacts attributable to the high number of statistical analyses executed in this study. Considering the large scale of our study, our assay was extensively validated by comparing our IHC findings in normal tissues with data obtained with another independent anti-prostein antibody and with RNA data derived from three different publicly accessible databases [ 22 – 25 ]. To test as broad a range of proteins as possible for potential cross-reactivity, 76 different normal tissue categories were included in this analysis. The validity of our assay was supported by the finding of the highest levels of prostein immunostaining in the prostate, the organ with the highest documented RNA expression level, and by the finding of prostein positive cell populations in most other organs with documented low-level RNA expression, such as the stomach, respiratory epithelium, hypophysis, spleen, and brain. Only RNA expression in the liver could not be corroborated by our assay. That all prostein positive cell types detected by MSVA-460R (islet cells of the pancreas, respiratory epithelium, epithelial cells of the adenohypophysis, surface epithelial cells of the stomach, glia cells in the brain, monocytic cells in the spleen and lymph nodes) were also identified by the independent second antibody EPR4795(2) (Supplementary Fig. 1 ) adds further evidence for the validity of our assay. Additional staining of the placenta and the testis, which was only observed with EPR4795(2), was considered an antibody-specific cross-reactivity and suggests that this antibody is less appropriate for prostein assessment.
Conclusion Our data provide a comprehensive overview on prostein expression in human cancers. The data show that prostein is a highly sensitive prostate cancer marker with positive results in at least 98% of primary prostate cancers. Because prostein can also be expressed in various other tumor entities, the classification of a tumor mass as a prostate cancer should not be made based on prostein positivity alone.
Background Prostein (P501S), also termed solute carrier family 45 member 3 (SLC45A3), is an androgen regulated protein which is preferentially expressed in prostate epithelial cells. Because of its frequent expression in prostate cancer, prostein has been suggested as a diagnostic prostate cancer marker. Methods In order to comprehensively assess the diagnostic utility of prostein immunohistochemistry, a tissue microarray containing 19,202 samples from 152 different tumor types and subtypes as well as 608 samples of 76 different normal tissue types was analyzed by immunohistochemistry. Results Prostein immunostaining was typically cytoplasmic, granular and perinuclear. Prostein positivity was seen in 96.7% of 419 prostate cancers, including 78.3% with strong staining. In 16,709 extra-prostatic tumors, prostein positivity was observed in 7.2% of all cases, but only 0.3% had a strong staining. Overall, 50 different extra-prostatic tumor categories were prostein positive, 12 of which included at least one strongly positive case. Extra-prostatic tumors with the highest rates of prostein positivity included different subtypes of salivary gland tumors (7.6-44.4%), neuroendocrine neoplasms (15.8-44.4%), adenocarcinomas of the gastrointestinal tract (7.3-14.8%), biliopancreatic adenocarcinomas (3.6-38.7%), hepatocellular carcinomas (8.1%), and adenocarcinomas of other organs (up to 21%). Conclusions Our data provide a comprehensive overview on prostein expression in human cancers. Prostein is a highly sensitive prostate cancer marker occurring in > 96% of prostate cancers. Because prostein can also be expressed in various other tumor entities, the classification of a tumor mass as prostate cancer should not be based on prostein positivity alone. Supplementary Information The online version contains supplementary material available at 10.1186/s13000-023-01434-5. Keywords
Supplementary Information
Abbreviations ERG: ETS Transcription Factor ERG; IHC: Immunohistochemistry; SLC45A3: Solute carrier family 45 member 3; TMA: Tissue microarray; TMPRSS2: Transmembrane Serine Protease 2. Acknowledgements We are grateful to Laura Behm, Inge Brandt, Maren Eisenberg, and Sünje Seekamp for excellent technical assistance. Authors’ contributions FV, SK, CB, RS, MK, GS: contributed to conception, design, data collection, data analysis and manuscript writing. FV, SW, MF, AM, FB, AML, DP, AH, ML, FL, VR, DH, CF, KM, CB, PL, SS, DD, AHM, TK, TSC, FJ, NG, EB, and SM: participated in pathology data analysis, data interpretation, and collection of samples. RS, MK, CHM: data analysis. SK, RS, GS: study supervision. All authors agree to be accountable for the content of the work. Funding Open Access funding enabled and organized by Projekt DEAL. Availability of data and materials All data generated or analyzed during this study are included in this published article. Declarations Ethics approval and consent to participate The use of archived remnants of diagnostic tissues for manufacturing of TMAs and their analysis for research purposes as well as patient data analysis has been approved by local laws (HmbKHG, § 12) and by the local ethics committee (Ethics commission Hamburg, WF-049/09). All work has been carried out in compliance with the Helsinki Declaration. Consent for publication Not required. Competing interests The rabbit recombinant prostein antibody, clone MSVA-460R, was provided by MS Validated Antibodies GmbH (owned by a family member of GS).
CC BY
no
2024-01-15 23:43:48
Diagn Pathol. 2024 Jan 13; 19:12
oa_package/fd/45/PMC10788021.tar.gz
PMC10788022
38218846
Introduction Histiocytic necrotizing lymphadenitis (HNL) is a self-limiting disease of unknown cause, also known as Kikuchi-Fujimoto disease (KFD). It was first reported in 1972 by Kikuchi and Fujimoto et al. [ 1 ]. HNL is a relatively uncommon, benign, self-limiting cause of lymph node enlargement that usually occurs in Asian women in their 20s and 30s, and some studies have reported a male-to-female ratio of nearly 1:2. The most common symptoms are enlarged cervical lymph nodes with tenderness and fever. The etiology of HNL is unclear, and its association with malignancy is seldom discussed. The coexistence of HNL and tumor is extremely rare. Herein, we report a case of metastatic papillary thyroid carcinoma coexisting with histiocytic necrotizing lymphadenitis in the same lymph node.
Discussion The clinical manifestations of HNL lack specificity, and the disease may resolve spontaneously within 1 to 6 months after diagnosis. The most common manifestation is localized cervical lymph node enlargement with tenderness, often accompanied by fever. Other, rarer symptoms include vomiting, diarrhea, night sweats, and upper respiratory symptoms. Extranodal involvement is rare and is most common in the skin, where it usually presents as rashes, nodules, erythematous papules and erythema multiforme on the face and upper trunk [ 2 ]. It also rarely occurs in the bone marrow, liver, submandibular glands [ 3 ] and parotid glands [ 4 ]. The histological features are patchy, irregular necrotic areas with expansion of the paracortical area of the lymph node. Apoptotic bodies, crescentic histiocytes, and proliferating plasmacytoid monocytes are seen in the necrotic area, accompanied by abundant nuclear fragments but a lack of neutrophils and eosinophils. According to the stage of the disease, HNL is divided into three types: the proliferative type, the necrotizing type and the xanthomatous type. The proliferative type is characterized by the proliferation of histiocytes and plasmacytoid dendritic cells, mixed with small lymphocytes and nuclear fragmentation, while necrosis is rare or absent; the necrotizing type is the most common and is characterized by a significant increase in necrotic components; and the xanthomatous type refers to the predominance of foamy histiocytes in the lesion [ 5 ]. The present case is the necrotizing type. The disease is self-limiting, and no specific treatment is recommended. Treatment is aimed at relieving symptoms (rest, analgesics and antipyretics), and corticosteroids can be used for recurrent disease or for patients with a more severe clinical course [ 4 ]. There are no definitive laboratory tests to diagnose HNL, and lymph node biopsy should be performed in persons suspected of having this disease to avoid misdiagnosis. The pathogenesis of HNL is still unclear. It is assumed that HNL represents a T-cell-mediated immune response of genetically susceptible populations to various antigens, and patients with HNL more often carry specific human leukocyte antigen (HLA) class II alleles, specifically HLA-DPA1 and HLA-DPB1, compared with the general population. These alleles are more prevalent in Asians and extremely rare or absent in whites, which may account for the disease being more common in Asians. Pathogens associated with triggering this response include Epstein‒Barr virus, human herpesviruses, parvovirus B19, and cytomegalovirus. HNL can be associated with systemic lupus erythematosus, mixed connective tissue disease, psoriasis and other autoimmune diseases, suggesting that it may be a potential manifestation of autoimmune disease [ 6 ]. HNL needs to be differentiated from lymphoma, infectious lymphadenitis, systemic lupus erythematosus, infectious mononucleosis, and other diseases. (1) The proliferation of immunoblasts and plasmacytoid dendritic cells at the margins of HNL necrotic foci can mimic infiltration by the T cells or B cells of non-Hodgkin’s lymphoma and is easily confused with it. However, the tumor cells of lymphoma show obvious atypia, increased volume, thickened nuclear membranes, increased and enlarged nucleoli, and pathological mitoses, but generally no necrotic lesions. Immunohistochemical staining shows that the T cells or B cells of lymphoma are clonal.
Focal necrosis, nuclear fragmentation and histiocytes that have engulfed nuclear debris may be present in a small number of lymphomas, especially T-cell lymphoma. However, positive TCR gene rearrangement, few histiocytes, and a long course of disease all support the diagnosis of lymphoma [ 7 ]. (2) Necrotizing lymphadenitis can be caused by a variety of infectious agents and is easily confused with HNL. Epithelioid histiocytes with granuloma formation and scattered giant cells are seen in the necrotizing lymphadenitis of tuberculosis, histoplasmosis, leprosy, and cat-scratch disease. In syphilitic necrotizing lymphadenitis, there is usually a prominent perivascular infiltration of plasma cells, while a large number of neutrophils are often present in bacterial infections [ 5 ]. Special stains and immunohistochemical stains are helpful in identifying the infectious agents. In our case, the patient's blood culture for acid-fast bacilli was negative on admission, and multinucleated giant cells, caseous necrosis and well-formed granulomas were absent, although abundant histiocytes were present. Moreover, Ziehl–Neelsen staining was performed and was negative, helping to rule out tuberculosis. (3) The lymph nodes in systemic lupus erythematosus show varying degrees of cortical necrosis, accompanied by nuclear debris and reactive proliferation of inflammatory cells. Hematoxylin bodies assist in identification; they are usually located in or near the necrotic foci, but may also be found in the lymphatic sinuses, paracortex or vascular walls. Clinically abnormal serum immunology, especially positive antinuclear antibodies, is helpful for the diagnosis of systemic lupus erythematosus. (4) Infectious mononucleosis is characterized by interfollicular enlargement and immunoblast proliferation; single-cell apoptosis and necrotic foci are common, while histiocytes and plasmacytoid dendritic cells are rare [ 8 ]. (5) The identification of histiocytic proliferative lesions is also essential. We focus on the most common histiocytoses among adults: Langerhans cell histiocytosis (LCH), Erdheim-Chester disease (ECD) and Rosai-Dorfman disease (RDD). The primary differential diagnosis is Langerhans cell histiocytosis (LCH); LCH lesions often show histiocytes mixed with a significant infiltration of inflammatory cells. Neoplastic LCH cells are mononucleated, typically with a coffee bean-shaped nucleus, and binucleated or multinucleated cells with the typical Langerhans cell cleft can be identified [ 9 ]. Moreover, abundant eosinophils are often observed. Characteristic immunohistochemical markers such as S-100 and CD1a are helpful for identification; in our case, S-100 and CD1a were negative. ECD mostly occurs in long tubular bones and is distributed symmetrically, and its histology shows infiltration of tissue by small CD1a– mononucleated histiocytes, sometimes associated with Touton cells. The histology of RDD is a massive expansion of histiocytes in the lymph node sinuses with lymphocytes and plasma cells [ 10 ]; abundant plasma cells in the medullary cords and around the venules are typical. In combination with the clinical information, histological patterns and immunohistochemistry, the above lesions were excluded. The coexistence of HNL and papillary thyroid carcinoma in the same lymph node is uncommon and seldom documented; a review of the current literature retrieved only two cases so far [ 11 ].
At present, 10 cases of HNL combined with other tumors have been reported (Table 1 ). Including the present case, most occurred in women (7/11), predominantly in Asia (6/11), at ages of 27–66 years. The tumors combined with HNL were PTC [ 6 , 11 ] (2 cases), gastric carcinoma [ 12 ] (1 case), breast carcinoma [ 13 ] (1 case), squamous cell carcinoma of the tongue [ 14 ] (1 case), malignant melanoma [ 15 ] (1 case), malignant fibrous histiocytoma [ 16 ] (1 case), multiple myeloma [ 17 ] (1 case), and diffuse large B-cell lymphoma in remission [ 18 ] (2 cases). HNL can occur with or without fever and generally requires no specific treatment; steroids and other hormones can be used for symptomatic treatment. Recurrence is rare (1/11), and treatment of the tumor is the main focus when the two coexist. Among the reported cases, cases 1 to 7, similar to our case, were evaluated preoperatively as lymph node metastases, and the HNL occurred on the same side as the tumor. In cases 8–11, lymph nodes became enlarged months or years after tumor treatment, and the biopsy showed KFD. In the reported cases of HNL coexisting with other tumors - PTC [ 6 , 11 ], gastric carcinoma [ 12 ], breast carcinoma [ 13 ], squamous cell carcinoma of the tongue [ 14 ], and malignant melanoma [ 15 ] - the HNL occurred on the same side as the tumor, as in the present case, which may indicate that HNL can be induced by tumor-associated local antigens and raises the possibility of specific immune responses to antigenic stimulation in HNL. Dequante et al. reported a possible correlation between increased cytotoxic activity of T cells stimulated by the tumor and disease transformation [ 17 ]. Apoptosis of target cells is induced by two molecular mechanisms of T-cell-mediated cytotoxicity, one perforin-based and the other Fas-based [ 19 ]. The simultaneous occurrence of the two diseases in this case may be closely related to the patient’s Epstein‒Barr virus infection. We speculate that EBV infection of the lymph nodes activates a variety of cells, including T cells and histiocytes, and promotes massive T-cell proliferation; activated histiocytes then produce various cytokines that, through Fas and FasL interaction, induce apoptosis of T cells. Moreover, FasL is highly expressed in papillary thyroid carcinoma [ 20 ], suggesting that papillary thyroid carcinoma may increase the cytotoxic activity of T cells and the specific immune response underlying HNL; however, there is not enough evidence to establish a causal relationship, and more experiments and data are needed.
Conclusion The coexistence of HNL and papillary thyroid carcinoma is rare, and the reason the two diseases can coexist has not been established. When patients present with enlarged cervical lymph nodes, HNL should be considered in the differential diagnosis. Pathologists should not only focus on tumor metastasis in the lymph nodes but also pay attention to inflammatory lesions such as HNL and Castleman disease, because these lesions may mislead clinical tumor staging and lead to unnecessary treatment.
Histiocytic necrotizing lymphadenitis (HNL) is a benign, self-limiting disease that is clinically rare, and its coexistence with a tumor is rarer still. We report a male patient who was preoperatively diagnosed with papillary thyroid carcinoma with cervical lymph node metastasis; postoperative pathological examination showed histiocytic necrotizing lymphadenitis combined with metastatic papillary thyroid carcinoma within the same lymph node. Interestingly, Epstein‒Barr virus was detected in these lymph nodes by in situ hybridization, which may suggest a trigger for the coexistence of the two diseases. Keywords
Case report A 48-year-old man was admitted to the hospital with a diagnosis of papillary thyroid carcinoma confirmed by fine needle aspiration of the thyroid after 20 days of physical examination. Ultrasound examination of the thyroid showed that a hypoechoic nodule was detected in the upper pole of the right lobe of the thyroid gland, approximately 1.3 × 1.0 cm, with regular morphology, aspect ratio < 1, fuzzy border, uneven internal echogenicity, and multiple dotted strong echogenicity with rear echo attenuation. Color Doppler flow imaging (CDFI): no significant signal was observed. Enlarged lymph nodes were seen in the II-IV region of the right neck. The largest lymph node was about 2.1 × 1.4 cm, with full morphology, clear borders, thickened cortex, and disappearance of lymphatic portal structures, and scattered strong echogenicity was detected in some of the nodes. CDFI: Blood flow signal was visible in the lymph nodes. A slightly larger lymph node was detected in the II-IV area of the left neck, about 1.0 × 0.5 cm. The findings were “nodule in the right lobe of the thyroid: Thyroid Imaging Reporting and Data System (TI-RADS) category 4a nodule in the middle of the right lobe, multiple enlarged lymph nodes in the right side of the neck; slightly enlarged lymph nodes in the left side of the neck” (Fig. 1 ). The diagnosis of fine needle aspiration of the thyroid and lymph node were shown: (right thyroid) Bethesda grade VI, papillary thyroid carcinoma; (right cervical lymph node) metastatic carcinoma, consistent with metastatic papillary thyroid carcinoma. (Figs. 2 and 3 ). For further diagnosis and treatment, he was admitted on August 20, 2022. After completing the laboratory and other related examinations, thyroid surgery was performed. Intraoperative freezing for inspection: Left thyroid gland, about 4.5 × 3 × 2 cm in size, two gray‒white areas with diameters of 0.2 and 0.3 cm were seen on the section, respectively, soft. Right thyroid gland, approximately 4.5 × 3 × 2 cm in size, a grayish white nodule, with a size of 1.2 × 0.9 × 0.8 cm, immediately adjacent to the capsule, and another grayish yellow nodule, 0.2 cm from the capsule, with a diameter of 0. 2 cm, both hard. The frozen section report was given: (left thyroid) benign lesion. (Right thyroid) Papillary thyroid carcinoma. Then, right neck dissection was performed. Postoperative paraffin pathology was shown: (right thyroid) Papillary thyroid carcinoma (diffuse sclerosing variant), invaded with the capsule (Fig. 4 ). Typical metastatic papillary thyroid carcinoma was seen in some lymph nodes, and some lymph nodes showed focal irregular pale pink stained lesion areas in cortical and paracortical areas with numerous nuclear fragments, the proliferation of mononuclear-like histiocytes and plasmacytoid dendritic cells. Coagulative necrosis was seen focally, scattered cellulose deposition, few plasma cells, and no neutrophils were seen (Fig. 5 ). The results of the immunohistochemical staining showed CD3, MPO and CD68 were expressed in most of the cells in the pale pink stained lesion areas. The expression of CD123 was slightly less than that of the previous antibodies. And CD20 was expressed sporadically (Figs. 6 – 10 ). But CD1a was not expressed (Fig. 11 ). CD21 showed a residual follicular dendritic cell network (Fig. 12 ), Ki67 was highly expressed in pale pink stained lesion areas and germinal centers (Fig. 13 ). Epstein‒Barr virus was detected by Epstein‒Barr encoding region (EBER) in situ hybridization. 
EBER was scattered and positive in the pale pink stained areas (Fig. 14 ). Interestingly, the coexistence of pale pink stained lesions and metastatic PTC was found in the same lymph node (Figs. 15 – 17 ). Pathological diagnosis was given: (left) nodular goiter, (right) papillary thyroid carcinoma (diffuse sclerosing variant) (two foci, maximal diameter approximately 1.2 cm and 0.2 cm), the capsule was invaded; metastatic PTC was found in 16 of 57 lymph nodes on the right side of the neck (the maximal diameter of metastatic lesions was 1.8 cm), and some lymph node biopsies showed histiocytic necrotizing lymphadenitis. Three lymph nodes were seen with histiocytic necrotizing lymphadenitis coexisting with metastatic PTC. However, there was no metastatic PTC or HNL in the left cervical or prelaryngeal lymph nodes. Follow-up: The patient recovered well after surgery and survived disease-free for more than 5 months, and the long-term prognosis remains to be observed further.
Abbreviations HNL: Histiocytic necrotizing lymphadenitis; KFD: Kikuchi-Fujimoto disease; CDFI: Color Doppler flow imaging; TI-RADS: Thyroid Imaging Reporting and Data System. Author contributions J.L.: Conception or design of the work, data collection, analysis, and interpretation, drafting the article. L.C.: Data collection, analysis, and interpretation, writing, reviewing and editing. L.J.: Conception or design of the work. G.Y. and G.Q.: Data collection and analysis. All authors reviewed the manuscript. Funding The author(s) received no financial support for the research, authorship, and/or publication of this article. Data availability All data generated or analyzed in this study are included in this article. Declarations Ethical approval Written informed consent for publication of this case report and any accompanying images was obtained from the patient. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:48
Diagn Pathol. 2024 Jan 13; 19:14
oa_package/f7/7e/PMC10788022.tar.gz
PMC10788023
38218981
Background The COVID-19 pandemic has exerted a profound impact on society, affecting employment, schooling, healthcare and many other aspects of everyday life. In terms of mental health, increasing rates of symptoms of depression, anxiety, sleep problems, and other conditions have been reported across different continents, demonstrating the global consequences of the pandemic [ 7 ] According to an estimate based on various data sources worldwide, between January 2020 and January 2021, cases of major depressive disorder increased by 27.6% and cases of anxiety disorders by 25.6%, and this increase was particularly pronounced among women [ 7 ]. With respect to different age groups, results of studies conducted early on in the pandemic as well as studies on previous epidemics such as severe acute respiratory syndrome (SARS) or Ebola suggested that adolescents and young adults aged between 15 and 25 years showed the sharpest increase in mental health problems [ 4 , 15 ]. This finding has been replicated over the further course of the pandemic, with adolescents in the transitional stage between childhood and adulthood showing elevated rates of symptoms of depression and anxiety compared to other age groups [ 7 ]. A recent meta-analysis encompassing 29 studies including children and adolescents reported pooled prevalence estimates of 25.2% for depressive symptoms and 20.5% for anxiety symptoms [ 33 ]. A higher burden of the pandemic on mental health in young people has likewise been reported in Austria. For instance, an online survey conducted between February and March 2021 found higher rates of symptoms of depression and anxiety than in the adult population among adolescent samples in school [ 28 ] and in apprenticeships [ 11 ], with 55% of adolescents showing moderate depressive symptoms and 47% scoring above the cut-off for anxiety symptoms [ 28 ]. Moreover, in a further survey conducted between September and November 2021 using the same methodology, the rate of Austrian adolescents reporting mental health problems remained high, with 58% reporting depressive symptoms and 46% reporting anxiety symptoms [ 10 ]. The increasing rates of mental health problems have also led to more contacts with the mental healthcare sector. A Danish cohort study of young people (age 5–24 years) reported an overall relative increase of incident psychiatric diagnoses of 5% during the COVID-19 pandemic compared to the expected rates [ 3 ]. While suicide rates remained stable during the first and subsequent months of the pandemic until June 2021 [ 29 , 30 ], emergency department visits due to suicidal ideation and suicide attempts increased among adolescents [ 5 , 21 , 23 , 43 ]. Only a small number of studies have explored the impact of the COVID-19 pandemic on rates of prescription of psychotropic drugs. A recent study of the general population conducted using Austrian social insurance data focused on prescriptions in 2020, and found no significant increase in psychopharmacological prescriptions during the first lockdowns in 2020 [ 40 ]. By contrast, focusing on a younger age population, a Danish nationwide cohort study of 5–24 year-old patients reported an increase in incident use of psychopharmacological interventions, which was especially pronounced in the age group between 12 and 17 years [ 3 ]. 
As the COVID-19 pandemic has been associated with an increase in mental health problems throughout society, with a particularly steep increase in adolescents, we sought to assess changes in psychopharmacological medication prescription rates in different age groups before and after the pandemic-related restriction measures. In view of the literature reporting increased rates of symptoms of depression, anxiety and sleep disorders in adolescents, we undertook a detailed analysis of the group of 10–19 year-olds. The specific focus was on medication classes that are likely to be used in the treatment of children and adolescents with depression and anxiety, namely antidepressants and antipsychotics. We further aimed to track the prescription rates for these classes of psychotropic drugs in Austrian minors throughout the pandemic and compare them to expected rates based on former, pre-pandemic prescription patterns.
Methods Data The analysis was performed based on routine data from the umbrella organization of Austrian social insurance institutions (the Federation of Austrian Social Insurance Institutions), which records data for accounting purposes. The dataset comprises data on all people insured under the statutory social insurance, i.e. 98.5% of the Austrian population, corresponding to approximately 8.82 million people [ 8 ]. It includes all public prescriptions in the outpatient sector that were collected in pharmacies or from dispensing doctors nationwide within the given time frame, the first quarter of 2013 (Q1 2013) to the fourth quarter of 2021 (Q4 2021). Medication obtained through private outpatient prescriptions or over-the-counter medication are not included, and data from the hospital sector are also not included as the outpatient and inpatient sectors are run by different entities in the Austrian healthcare system. We used quarterly prescription rates of the following medications and respective Anatomical Therapeutic Chemical Codes (ATC Codes): antidepressants (ATC Code N06A) and antipsychotics (ATC Code N05A) for the analysis. A further examination, which is beyond the scope of the present article, revealed that non-selective monoamine reuptake inhibitors (N06AA), monoamine oxidase A inhibitors (N06AG), and the category other antidepressants (N06AX) had very small prescription rates among adolescents, especially among the younger group of 10–14 year-olds, with below 100 prescriptions per quarter on average in most groups and two medication groups showing 0 prescriptions in all adolescent groups. The prescription rate was defined, based on [ 32 ], as the proportion of insurees who had at least one prescription dispensed in a given quarter within a specific ATC group per 1000 insurees in the same age and sex group (therefore representing an age- and sex-adjusted rate). An insuree receiving the respective medication is counted distinctly per quarter, meaning he or she is only counted once per quarter even if the prescribed medication is received several times. Statistical analysis An interrupted time series analysis (ITS) was performed to test for the influence of the pandemic on prescription rates for the relevant medication groups. Measures to restrict the spread of COVID-19, such as lockdowns, clearly divided the observation period into a pre- and post-period, with an intervention point dividing these periods chosen in the second quarter of 2020. For the population group of interest, Austrian adolescents, the longest period of home schooling/distance learning occurred from the beginning of November 2020 to 8 February 2021, with the first (shorter) lockdown period taking place in March 2020. Since the first lockdown was restricted to a few weeks, we hypothesized an increase in the prescription rate of antidepressants, antipsychotics, and benzodiazepines starting in Q3 of 2020. To account for underlying short- and long-term trends in the data, pre-existing trends in prescription rates must be taken into account, such that a steady or a seasonal increase in rates is not attributed to the pandemic. ITS controls for issues such as trends and seasonality by longitudinally tracking outcomes before and after an intervention [ 35 ]. 
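As a concrete illustration of the prescription-rate definition above (distinct insurees with at least one dispensing per quarter, per 1,000 insurees of the same age group and sex), the following Python/pandas sketch shows how such a rate could be derived from a claims extract. The column names and the use of pandas are our assumptions for illustration; they are not the insurer's actual data model or the tooling used in the study.

```python
import pandas as pd

# Hypothetical columns: claims has insuree_id, quarter (e.g. "2020Q3"), atc,
# age_group, sex; population has quarter, age_group, sex, n_insured.
def prescription_rate(claims: pd.DataFrame, population: pd.DataFrame, atc_prefix: str) -> pd.DataFrame:
    """Distinct insurees with >= 1 dispensing in the ATC group per quarter,
    per 1,000 insurees of the same age group and sex."""
    hits = claims[claims["atc"].str.startswith(atc_prefix)]
    users = (hits.groupby(["quarter", "age_group", "sex"])["insuree_id"]
                 .nunique()                      # each insuree counted once per quarter
                 .rename("users")
                 .reset_index())
    out = users.merge(population, on=["quarter", "age_group", "sex"], how="right")
    out = out.fillna({"users": 0})
    out["rate_per_1000"] = 1000 * out["users"] / out["n_insured"]
    return out

# e.g. antidepressants: prescription_rate(claims, population, "N06A")
#      antipsychotics:  prescription_rate(claims, population, "N05A")
```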
The dataset for each medication was split into three age groups (10–14 years, 15–19 years, all ages combined, comprising every person insured in Austria, regardless of age) and two genders (male, female), resulting in 18 individual age- and sex-stratified time series. The time series data for each group was modelled from the start of the dataset in early 2013 to the pandemic-related restrictions in the second quarter of 2020 (Q2 2020). This was done to forecast confidence intervals at the 97.5% level based on these models for the post-restriction period from the third quarter of 2020 (Q3 2020) until the end of 2021. These confidence intervals represent expected development paths, based on time series developments only prior to the pandemic-related restrictions, and provide the best projection of what would have happened in the absence of the restrictions. The forecasts were subsequently compared to post-restriction observations in order to analyze deviations (see Figs. 1 , 2 ). The forecasts were obtained using seasonal autoregressive integrated moving average (ARIMA) models. The analysis was performed using R [ 31 ], the packages tidyverse, lubridate and tsibble for transformation of the dataset [ 18 , 38 , 39 ], and the package fable to fit and select ARIMA models [ 27 ]. Each model was fit by choosing the optimal model according to the Fable package automatic model selection (with transformations applied according to prespecification), which uses a variation of the Hyndman-Khandakar algorithm selecting for the smallest AICc value [ 19 , 20 ]. As the dataset provided by the Federation of Austrian Social Insurance Institutions consisted of accumulated data without the possibility of identifying individualized data, a waiver was received from the ethical review committee of the Medical University of Vienna.
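The study fitted its models in R with the fable package's automatic ARIMA selection; as a rough illustration of the same ITS logic, the sketch below fits a seasonal ARIMA in Python with statsmodels on the pre-restriction quarters and returns point forecasts with a 97.5% band. The fixed (p, d, q)(P, D, Q)s orders are placeholders, since the original analysis selected orders automatically by AICc.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def forecast_band(rate: pd.Series, n_ahead: int = 6,
                  order=(1, 1, 0), seasonal_order=(0, 1, 1, 4)) -> pd.DataFrame:
    """Fit a seasonal ARIMA on pre-restriction quarterly rates (Q1 2013 - Q2 2020)
    and return point forecasts with a 97.5% confidence band for the post-restriction
    quarters. Orders are illustrative placeholders, not the study's selected models."""
    fit = SARIMAX(rate, order=order, seasonal_order=seasonal_order).fit(disp=False)
    fc = fit.get_forecast(steps=n_ahead)
    band = fc.conf_int(alpha=0.025)          # 97.5% interval, as in the study
    band["forecast"] = fc.predicted_mean
    return band

# `rate` is one age/sex/ATC stratum, e.g. produced by prescription_rate() above.
# Observed post-restriction quarters above the upper band are flagged as excesses.
```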
Results The results revealed an increase in the prescription rate of antidepressants and antipsychotics, which was especially pronounced in the age groups 10–14 and 15–19 years, and a steeper increase among female adolescents (see Table 1 ). Antidepressants Within the group of antidepressants, prescription rates among males in the 10–14- and 15–19-year age groups exceeded the 97.5% confidence intervals in three out of six observed quarters, while prescription rates for females in those age groups significantly exceeded predictions in five (10–14 year-olds) and six (15–19 year-olds) out of six observed quarters, respectively (see Fig. 1 ). This difference was even more pronounced when considering the differences in relative changes. The growth in prescription rates from Q3 2020 to Q4 2021 was 103.5% for 10–14-year-old females and 45.5% for 15–19-year-old females, while the growth in prescription rates for their male counterparts lay at only 34.7% and 22%, respectively. When combining all age groups, prescription rates only differed significantly from the model forecasts in one quarter for male patients and two quarters for female patients. Thus, increases more often exceeded model forecasts in female than in male patient groups, and these excesses were much more common in younger age groups than in the general population. Antipsychotics A similar pattern emerged regarding the development of antipsychotic prescription rates during the observation period Q3 2020–Q4 2021. Here, the gender gap among the younger age groups was even more pronounced than for antidepressants. As can be seen in Fig. 2 , forecasts were exceeded in four out of six quarters in 10–14 year-old females and in five out of six quarters in 15–19-year-old females, while male adolescents of both age groups showed no excesses throughout the observation period. This difference is also visible in the relative growth rates between Q3 2020 and Q4 2021, with prescription rates growing by 74.3% in 10–14 year-old females and by 49.3% in 15–19 year-old females, while their male counterparts showed much smaller growth rates (14.8% and 14.4%, respectively).
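The relative growth figures quoted in this section follow the usual percentage-change formula; a one-line helper makes the arithmetic explicit (the example rates are hypothetical, not values from the dataset).

```python
def relative_growth(rate_start: float, rate_end: float) -> float:
    """Percentage change in a prescription rate between two quarters."""
    return 100 * (rate_end / rate_start - 1)

# Hypothetical example: a rate rising from 8.0 to 16.3 per 1,000 insurees
# between Q3 2020 and Q4 2021 corresponds to growth of about 103.8%.
print(round(relative_growth(8.0, 16.3), 1))
```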
Discussion Our analysis focused on the influence of the COVID-19 pandemic on prescription patterns of common psychotropic drugs with a specific focus on adolescents. Given the reported high burden of mental health problems among adolescents during the pandemic [ 33 , 42 ], we specifically examined the age groups of 10–14-year-olds and 15–19-year-olds regarding prescription rates of antidepressants and antipsychotics. Antidepressants Antidepressant prescription rates showed a steep upward trend among females aged 10–19 years, which were significantly above model predictions in four (10–14 year-olds) and five (15–19 year-olds) out of six quarters, respectively. No comparable trend was found in the general population (all age groups combined), with only individual quarters significantly exceeding model predictions. Pre-pandemic rates showed a high degree of stability when looking at the pattern from 2013 onwards. A sharp decline was observed between 2016 and 2017. This coincided with a debate about the efficacy of SSRIs in adolescents based on a meta-analysis [ 6 ], which received broad media coverage in Austria and Germany [ 22 , 36 ]. Shortly after the start of the pandemic, prescription rates for antidepressants began to increase steadily. Given that SSRIs are recommended as part of the treatment strategy for depression and anxiety in national guidelines [ 1 , 3 ]; (AWMF, 2013), these elevated prescription rates can be interpreted as part of an increased treatment effort to counter the rising mental health problems throughout the COVID-19 pandemic. The increase in prescription rates was more pronounced in female than in male patients, with the steepest increase in antidepressant prescription rates found in female adolescents. This is consistent with findings of increasing rates of depressive symptoms reported worldwide. For instance, in a meta-analysis of 29 studies encompassing data from 80,879 children and adolescents globally, Racine et al., [ 33 ] found a pooled prevalence rate of 25.2% for clinically elevated symptoms of depression, with higher rates of depressive symptoms reported in females. Recently, another meta-analysis based on 53 longitudinal studies from 12 countries also reported an increase in depressive symptoms in children and adolescents, which was stronger in female than in male participants [ 24 ]. Besides data from international samples, studies on the mental health of Austrian adolescents during the pandemic point in the same direction, with an increase in depression and anxiety symptoms that was especially pronounced in female participants [ 10 , 28 ]. Antipsychotics From all evaluated age groups, the strongest increase in prescription rates was observed in 15–19-year-old females. As the use of antipsychotics in minors is only licensed in Austria for the treatment of bipolar disorder, schizophrenia, or major impulsive aggressive behavior, these rising rates, particularly among females, are highly interesting. Despite research demonstrating a particularly severe impact of the COVID-19 pandemic on people with schizophrenia [ 16 ], few studies have explored a potential increase in first episodes of schizophrenia. One study, conducted in Australia, reported an increase in first-episode admissions for schizophrenia among young people following the introduction of lockdown measures [ 26 ]. Additionally, while the pandemic has been linked to first presentations of manic episodes [ 34 ], evidence regarding potential increases in bipolar disorders is lacking. 
Atypical antipsychotics have been mentioned as part of an augmentation regime in the treatment of resistant depression [ 14 , 37 ], but are only suggested for the treatment of psychotic depression in minors [ 25 ]. Therefore, the rise in prescriptions of antipsychotics might also be interpreted as indicating an increased off-label use for the treatment of depression in minors. Furthermore, some antipsychotics are used as an off-label medication for severe anorexia nervosa [ 17 ]. As an increase in eating disorders has been reported throughout the pandemic [ 13 ], increasing prescription rates of antipsychotic prescriptions may echo an increasing clinical demand in this field. This might further explain the higher prescription rates among females found in the present study, as the rise in eating disorders is especially pronounced among female adolescents [ 13 ], potentially accompanied by increased psychopharmacological treatment. Unfortunately, we are unable to link prescriptions to diagnoses, as diagnostic ICD-10 codes are not currently available from the outpatient sector, from which this dataset is derived. Therefore, it can only be hypothesized that the use of antipsychotics in female adolescents corresponds to increased rates of treatment for depression, eating disorders, or other symptoms or disorders such as sleep problems. Overall, we were able to demonstrate an increase in prescriptions of antidepressants and antipsychotics in adolescents, which was especially pronounced in females. The observed trends are in line with increasing levels of symptoms of depression and anxiety during the COVID-19 pandemic as reported globally [ 23 , 33 ] and in Austria [ 10 , 28 ]. Data regarding prescription rates of psychopharmacological agents in children and adolescents during the COVID-19 pandemic are scarce, although a report issued by the German health insurance company DAK described a substantial increase in antidepressant prescriptions in adolescent females between 2019 and 2021 (+ 30% in 10–14-year-olds and + 65% in 15–17-year-olds in those diagnosed with a depressive disorder) [ 41 ] Our findings are consistent with this reported increase in antidepressant prescriptions, but extend further than these datasets in terms of the population covered. For example, while the DAK report includes data from 5.7% of German children and adolescents [ 41 ], our dataset includes approximately 858,000 Austrian adolescents, representing about 99.4% of the Austrian population in this age group. Furthermore, we were able to analyze two different medication groups (antidepressants and antipsychotics). A study from the US using data from the IQVIA health insurance company (encompassing roughly 8.9 million minors aged between 2 and 17 years) analyzed monthly prescription rates of ADHD medication, antidepressants, antipsychotics, and mood stabilizers between January 2019 and September 2020. The results revealed a spike in overall psychopharmacological prescriptions in April 2020, which subsequently returned to normal, pre-pandemic levels [ 2 ]. This trend is in line with the patterns assessed in our data, although it appears that the upward trend in the US sample is more short-lived than in our study. This might be explained by differences in the availability of mental health practitioners, different COVID-19 restrictions, and also different time frames of the respective analyses. 
A cohort study of Danish youth also reported an increase in incident prescriptions of psychotropic medication between March 2020 and June 2022, which was most prominent in the 12–24 years age group [ 3 ]. Interestingly, the rise in prescriptions was seen for all groups of psychotropic drugs, including hypnotics and sedatives, psychostimulants, antidepressants, and antipsychotics, but not in the group of anxiolytics. A recent study from Austria [ 40 ], analyzing a shorter time frame of 2020 only, observed no significant changes in defined daily doses of psychopharmacological drugs between 2019 and 2020 in any age group. The differences from our findings are likely due to the different time intervals examined in the two studies: While the present study observed cumulative effects over six quarters, the study by [ 40 ] focused on medication prescriptions during the national lockdowns in 2020. Nevertheless, it should be noted that in the study by [ 40 ], the age group of 10–20-year-olds likewise showed the largest percentage increase in psychopharmacological prescriptions of all age groups.
Conclusion Data from the Federation of Austrian Social Insurance Institutions show an increase in prescriptions of antidepressants and antipsychotics throughout the pandemic, which was especially pronounced in female adolescents. The increasing rates of depression and anxiety symptoms that have been reported globally and in Austria appear to be associated with increased use of corresponding psychotropic medication. However, the increasing rates of prescriptions of antipsychotics warrant further attention and analysis, given that the evidence for their use in this age group is limited.
Background The COVID-19 pandemic has impacted many aspects of everyday life, including the (mental) healthcare system. An increase in depression and anxiety symptoms has been reported worldwide, and is particularly pronounced in females and young people. We aimed to evaluate changes in prescription rates for psychopharmacological medication, which is often used to treat depression and anxiety. Method Based on data from the Austrian public health insurance institutions, we conducted an interrupted time series analysis of antidepressants and antipsychotics, comparing prescription rate developments before and throughout the COVID-19 pandemic (2013 to 2021), with a special focus on adolescents (10–19 years) in comparison to the general population. Data were based on all public prescriptions in the outpatient sector nationwide. Age- and sex-stratified time-series models were fitted to the pre-COVID period (first quarter (Q1) of 2013 to second quarter (Q2) of 2020). These were used to generate forecasts for the period from the third quarter (Q3) of 2020 to the fourth quarter (Q4) of 2021, which were subsequently compared to observed developments in order to assess significant deviations from the forecasted development paths. Results For the majority of the evaluated period, we found a significant excess of antidepressant prescriptions among both male and female adolescents (10–14 and 15–19 years) compared to the forecasted development path, while the general population was mostly within 97.5% confidence intervals of the forecasts. Regarding antipsychotics, the interrupted time series analysis revealed a significant excess in the group of female adolescents in almost all quarters, which was especially pronounced in the 15–19 age group. Prescription rates of antipsychotics in the general population only showed a significant excess in two quarters. Conclusion Increased rates of adolescents receiving psychopharmacological treatment echo the epidemiological trends of an increase in depression and anxiety symptoms reported in the literature. This increase is especially pronounced in female adolescents.
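The forecast-comparison logic described in the Methods can be sketched as follows. This is an illustrative example, not the authors' code: the quarterly series layout, the SARIMA specification (order (1, 1, 1) with a yearly seasonal term), and the function and column names are assumptions; the published analysis fitted its own age- and sex-stratified models. The general idea, as stated above, is to fit a model to the pre-pandemic quarters, forecast the pandemic quarters, and flag observed rates that exceed the upper forecast bound.

```python
# Illustrative sketch only: the SARIMA order and data handling are assumptions,
# not the authors' actual model specification.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def flag_excess_quarters(rates: pd.Series, n_forecast: int = 6, alpha: float = 0.05) -> pd.DataFrame:
    """rates: quarterly prescription rates, pre-pandemic quarters followed by pandemic quarters."""
    train = rates.iloc[:-n_forecast]              # e.g. 2013 Q1 - 2020 Q2
    observed = rates.iloc[-n_forecast:]           # e.g. 2020 Q3 - 2021 Q4
    model = SARIMAX(train, order=(1, 1, 1), seasonal_order=(1, 0, 0, 4),
                    enforce_stationarity=False, enforce_invertibility=False)
    fitted = model.fit(disp=False)
    forecast = fitted.get_forecast(steps=n_forecast)
    ci = forecast.conf_int(alpha=alpha)           # two-sided interval; upper bound = 97.5% quantile
    result = pd.DataFrame({
        "observed": observed.to_numpy(),
        "predicted": forecast.predicted_mean.to_numpy(),
        "upper_bound": ci.iloc[:, 1].to_numpy(),
    }, index=observed.index)
    result["significant_excess"] = result["observed"] > result["upper_bound"]
    return result
```

With alpha = 0.05, the upper bound of the two-sided interval corresponds to the 97.5% threshold referred to in the Results; the same routine can be rerun per age group and sex, and with alternative interruption quarters as a robustness check.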
Limitations Despite several strengths of the present study, such as a dataset of 8.82 million insured persons in Austria, representing about 98.5% of the Austrian population, several limitations need to be addressed. First, the use of anonymous, aggregated data limited the depth of analysis. However, given the size of the dataset, we feel that this study contributes interesting information for healthcare planning. Second, not all of these 8.82 million people are necessarily living and registered in Austria and therefore part of the Austrian census, while some people who are registered in Austria and therefore part of the Austrian population are insured in neighboring countries. This is mostly the case with cross-border commuters. It is difficult to quantify the exact number of people who are insured but not part of the Austrian census, although the impact on the sample should be minimal given the large scale of the dataset; at most, it should account for two percentage points of the whole dataset and likely even lower for the age groups of interest, namely adolescents. Third, data on medicines below the Austrian prescription fee threshold (€6.50 in 2021 and adjusted annually) are not collected in the central dataset, as patients pay for prescriptions below this threshold themselves (with the exception of individuals or households that are exempted from prescription fees due to low income or high total healthcare costs) [ 9 ]. The medications in our dataset are affected by this effect to different degrees, with antipsychotics and antidepressants being mostly above the threshold (85% and 59%, respectively). With regard to our statistical approach, two points warrant further attention. First, it is difficult to determine the precise timing of the pandemic-related restrictions, as there were several lockdowns on the national level as well as additional federal measures and restrictions, which varied strongly over time. Additionally, there were other restrictions and lockdowns on a regional level which only affected parts of the Austrian population. Second, we expected the effect to occur with a time lag, given that a certain delay can be expected with regard to a possible impact on mental health and subsequent help-seeking followed by prescriptions (as symptoms take time to develop and healthcare professionals need to be sought for treatment). To test the robustness of our results regarding the timing of the pandemic-related restrictions, we used the same modelling procedure but chose two alternative, but in our opinion less fitting, restriction time specifications: the first quarter of 2020 and the third quarter of 2020. In these different specifications, the overall patterns stayed the same. While some quarters lost significance and others gained significance, this did not affect the overall structure of the results.
Author contributions MO and PLP wrote the main manuscript text and developed the study design. LL and SD provided support in data provision. MO, ODK and PLP were involved in statistical analyses. MO, PLP, ODK, LL and SD revised the manuscript. All authors reviewed the manuscript. Funding No funding has been received. Availability of data and materials The data is owned by the Federation of Austrian Social Insurance Institutions. Requests for data can be sent to the first author. Declarations Ethical approval and consent to participate As the dataset provided by the Federation of Austrian Social Insurance Institutions consisted of accumulated data without the possibility of identifying individualized data, a waiver was received from the ethical review committee of the Medical University of Vienna. Competing interests MO, OK, LL, and SD report no conflict of interest. PLP has received research funding from the German Federal Ministry of Education and Research, the German Federal Institute for Drugs and Medical Devices, the Volkswagen Foundation, the Baden-Wuerttemberg Foundation, Servier, Lundbeck, the Vienna Landeszielsteuerungskommission, the Austrian Science Fund, the Hochschuljubiläumsfonds, the Austrian National Fund, and the Austrian Future Fund. He works as an advisor for Boehringer Ingelheim and Delta 4. He has received speaker's fees from Infectopharm, Janssen, GSK, and Oral B.
CC BY
no
2024-01-15 23:43:48
Child Adolesc Psychiatry Ment Health. 2024 Jan 13; 18:10
oa_package/3e/05/PMC10788023.tar.gz
PMC10788024
38218831
Introduction Vaginal squamous intraepithelial lesions (SIL) are a group of diseases characterized by atypical hyperplasia of vaginal squamous cells and carcinoma in situ, excluding invasive carcinoma [ 1 ]. They are rare precancerous lesions of the lower genital tract, accounting for approximately 0.4–1% of epithelial tumors of the lower genital tract, with an incidence 100 times lower than that of cervical SIL [ 2 – 5 ]. Currently, the popularization of cervical cancer screening and improvements in detection technology have increased the detection rate of vaginal SIL [ 1 , 6 ]. In 2014, the World Health Organization (WHO) classified vaginal intraepithelial lesions into vaginal low-grade squamous intraepithelial lesions (LSIL) and vaginal high-grade squamous intraepithelial lesions (HSIL) in the Classification of Tumours of Female Reproductive Organs [ 7 ]. Vaginal LSIL can be treated conservatively due to its high potential for spontaneous regression and low risk of progression to malignancy [ 8 ]. Although vaginal HSIL is benign, active treatment is always recommended, as the risk of malignant transformation can reach 4.6–12% [ 1 , 9 – 11 ]. However, consensus concerning the optimal management of vaginal HSIL is currently lacking. The treatment of vaginal HSIL must be individualized, so current treatment modalities are diverse, including surgical resection, topical pharmaceuticals, photodynamic therapy, laser vaporization, and brachytherapy [ 3 , 12 – 14 ]. In general, surgical resection is the mainstay and preferred method, because it not only provides a specimen for complete histopathological diagnosis to identify occult invasive cancer, but also has a high cure rate [ 10 , 11 , 14 , 15 ]. In clinical practice, vaginectomy is favored by gynecologists in patients with extensive and persistent vaginal HSIL, or with suspected invasive vaginal HSIL [ 14 ]. Anatomically, the vagina is located in the middle of the deep pelvic cavity next to the bladder and rectum, and the vaginal cavity is quite small, leading to limited vision in transvaginal surgery and significantly increasing the difficulty of the procedure. Thus, the application of transvaginal vaginectomy is limited in complex vaginal surgeries that require greater precision because of the restricted space and intricate anatomy of the vagina. In recent decades, minimally invasive laparoscopy, including robotic-assisted laparoscopy, has expanded rapidly and has been widely used in a variety of gynecological operations, for conditions such as endometrial carcinoma, cervical cancer, endometriosis, pelvic retroperitoneal tumors and pelvic organ prolapse [ 16 – 20 ]. Minimally invasive laparoscopy magnifies the surgical field, which helps in identifying blood vessels and finely separating tissue spaces, reducing intraoperative injury. In addition, long-arm instruments with small end-effectors can simplify surgery and increase the flexibility of surgical operation in narrow spaces. Therefore, owing to these technical advantages, conventional laparoscopic vaginectomy (CLV) has gained popularity among gynecological surgeons [ 21 ]. Unlike conventional laparoscopic surgery, the robotic-assisted laparoscopic system provides a high-definition, magnified three-dimensional view and can visualize the surgical area more precisely; it also improves the mobility and increases the range of motion of the instrument's end-effector.
According to previous studies, robotic-assisted surgery can be considered a safer and more effective surgical tool than conventional laparoscopic surgery for women who must undergo complex and challenging gynecological surgery [ 16 ]. With the increasing incidence of vaginal HSIL and the popularization of robotic surgery, the use of robotic-assisted laparoscopic vaginectomy (RALV) has likely increased. However, there is currently no guideline or consensus regarding the optimal surgical approach for vaginectomy, and studies evaluating the safety and efficacy of RALV versus CLV are lacking. Therefore, the purpose of our study was to compare the safety and treatment outcomes between RALV and CLV for selected patients with vaginal HSIL.
Materials and methods Study design This was a retrospective study of patients with vaginal HSIL who underwent either robotic-assisted laparoscopic vaginectomy or conventional laparoscopic vaginectomy in the Department of Gynecology, the First Affiliated Hospital of Zhengzhou University, from December 2013 to May 2022. Vaginal HSIL was diagnosed through colposcopically guided biopsy before vaginectomy. All patients had extensive lesions (extending beyond the upper third of the vagina, or multifocal lesions limited to the upper third of the vagina but concurrent with cervical HSIL), and/or persistent multifocal lesions (failure of conservative treatment), and/or recurrent lesions, and/or suspected invasive lesions. When vaginal HSIL was combined with cervical HSIL, cervical cancer was excluded by cervical conization before vaginectomy. In addition, vaginitis was cured preoperatively, and patients were excluded if they: (1) were diagnosed with vaginal invasive cancer before vaginectomy; (2) had previous hysterectomy for gynecological cancer; (3) had bladder dysfunction (for example, urinary incontinence or urinary retention); or (4) had incomplete follow-up information. Surgical procedures The location and range of preoperative lesions were accurately recorded via careful colposcopic inspection of the entire vagina and/or cervix. Especially for post-hysterectomy vaginal HSIL, more attention needs to be given to examining the folds of the vaginal cuff, as some lesions may hide in the vaginal angles, making them difficult to identify. For each patient, the choice between RALV and CLV was based on the final decision of the patient and their family after being informed by the surgeon about the advantages and disadvantages of the two procedures. RALV was performed using the da Vinci-Si Surgical System (Intuitive Surgical Inc, Sunnyvale CA, USA). Patients were placed in the lithotomy position. After general anesthesia, the surgical area was routinely disinfected and covered with sterile surgical towels, a urethral catheter was inserted, and trocars were placed by the surgeons. In addition, the robotic arms were docked in the RALV group. Lesion areas were confirmed by applying Lugol's iodine solution to the entire vagina and/or cervix and marked with a suture or marking pen approximately 0.5 cm (at least 0.3 cm) below the edge of the lesion (Fig. 1 ). The uterine manipulator was placed in the vagina for patients with a uterus, whereas a gauze roll or the cup of the uterine manipulator was placed in the vagina for patients who had undergone hysterectomy (Fig. 1 ). For patients with post-hysterectomy vaginal HSIL, the vaginal wall was resected from the vaginal stump to 0.5 cm (at least 0.3 cm) below the edge of the lesion. For those who had vaginal HSIL combined with cervical HSIL, hysterectomy ± bilateral salpingo-oophorectomy was performed simultaneously in addition to vaginectomy (Figs. 2 and 3 ). Data collection Demographic and clinical data, such as age, menopause, body mass index (BMI), ASA grade (assessed with the American Society of Anesthesiologists (ASA) Physical Status Classification System), clinical manifestation, comorbidities, previous hysterectomy, status of human papillomavirus (HPV) infection and antecedent cytology, intravaginal estrogen pretreatment, lesion range and treatments of vaginal HSIL before vaginectomy, were extracted from our electronic medical record system.
We also collected operative data, including the total operation time (defined as the time from skin incision to the last closure suture of the skin), estimated blood loss, complications, length of resected vagina, flatus passing time (calculated in days from the end of the operation to the first passage of feces or gas), postoperative catheterization time (calculated in days from the end of the operation until the catheter was removed smoothly without voiding dysfunction), postoperative hospitalization time, postoperative pathology and hospital cost. All procedures were performed by gynecologists with extensive experience in conventional laparoscopic or robotic-assisted surgery; therefore, a learning-curve effect was not considered in the analysis. Intraoperative complications included hemorrhage (estimated blood loss exceeding 500 mL) and bladder, ureter, and bowel injury. Postoperative complications were defined as any new unfavorable episodes occurring during the hospital stay or within 30 days after surgery. All patients were followed up to assess postoperative outcomes, including the status of homogeneous HPV infection and the regression, remission, persistence, recurrence or progression of vaginal HSIL. The status of homogeneous HPV infection was determined by HPV screening at six months after vaginectomy. Regression was defined as a negative colposcopic examination and vaginal biopsy at six months after vaginectomy. Remission was defined as vaginal LSIL diagnosed by vaginal biopsy at six months after vaginectomy. Persistence was defined as vaginal HSIL diagnosed by vaginal biopsy at six months after vaginectomy. The short-term prognosis was defined as the treatment outcome at six months after vaginectomy. Recurrence was defined as the reappearance of vaginal HSIL after remission or regression. Progression was defined as invasive vaginal carcinoma, i.e., a higher-grade lesion than the previous vaginal HSIL. Disease-free survival was defined as the time from vaginectomy to disease progression or recurrence. All patients were first followed up at the third month after the operation, then every 3 months for half a year, every 6 months for 2.5 years, and then once a year after 3 years. A pelvic examination, HPV test and ThinPrep cytologic test (TCT) were conducted as the essential items. Patients were referred for colposcopy when they met the requirements for colposcopy referral, and histopathological examination was performed if necessary. All of the patients were followed up until February 2023. Statistical analysis SPSS (version 21.0, Chicago, IL, USA) software was used to analyze the data. Quantitative variables are presented as the mean (standard deviation) or median (interquartile range) and were compared using Student's t-test or the Mann-Whitney U test, as appropriate. Categorical variables are reported as absolute numbers (percentages) and were compared using the Pearson χ 2 test or the Fisher exact test, as appropriate. Survival curves were generated using the Kaplan–Meier method, and Cox proportional-hazards models were used to estimate the hazard ratios (HR) and 95% confidence intervals (CI) for the effect of treatment on disease-free survival. P < 0.05 (two-tailed) was considered statistically significant.
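As a rough illustration of the survival analysis described above (Kaplan–Meier curves for disease-free survival and a Cox proportional-hazards model for the effect of surgical approach), the sketch below uses the Python lifelines package. The column names (time_months, event, group) and the data layout are assumptions for illustration; the published analysis was performed in SPSS.

```python
# Illustrative sketch only: column names and data layout are assumptions.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

def disease_free_survival(df: pd.DataFrame):
    """df columns: time_months (follow-up time), event (1 = recurrence/progression), group ('RALV' or 'CLV')."""
    # Kaplan-Meier estimate of disease-free survival per surgical group
    km_fitters = {}
    for name, sub in df.groupby("group"):
        km = KaplanMeierFitter(label=str(name))
        km.fit(sub["time_months"], event_observed=sub["event"])
        km_fitters[name] = km

    # Cox proportional-hazards model: hazard ratio for RALV relative to CLV
    cox_data = df.assign(ralv=(df["group"] == "RALV").astype(int))[["time_months", "event", "ralv"]]
    cph = CoxPHFitter()
    cph.fit(cox_data, duration_col="time_months", event_col="event")
    hr_table = cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]]
    return km_fitters, hr_table
```

With very few events (as in this cohort), the hazard ratio and its confidence interval from such a model are unstable, which is consistent with the wide interval reported in the Follow-up section below.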
Results Patient Characteristics We identified 118 patients with vaginal HSIL who underwent either robotic-assisted laparoscopic vaginectomy or conventional laparoscopic vaginectomy from December 2013 to May 2022. As shown in Fig. 4 , nine patients were excluded. The remaining 109 patients were analyzed in our study, including 32 patients who underwent robotic-assisted laparoscopic vaginectomy (RALV group) and 77 patients who underwent conventional laparoscopic vaginectomy (CLV group). Among them, 7 patients (5 in the CLV group and 2 in the RALV group) experienced failure of photodynamic therapy, 2 patients in the CLV group experienced recurrence after photodynamic therapy and 3 patients (2 in the CLV group and 1 in the RALV group) experienced failure of laser ablation. The demographic and clinical characteristics of the patients are summarized in Table 1 . These baseline characteristics were similar between the two groups except for the range of vaginal HSIL. The mean age of the patients was 55.2 years, and the mean BMI was 24.0 kg/m 2 . Most patients (89.0%) were menopausal. One hundred (91.7%) patients had high-risk HPV infection, among which HPV16 infection (66.0%) was the most common type. Forty-three patients underwent previous hysterectomy (32 patients in the CLV group and 11 patients in the RALV group). Indications included cervical HSIL (27 patients in the CLV group and 8 patients in the RALV group), hysteromyoma (1 patient in the CLV group and 3 patients in the RALV group), adenomyosis (1 patient in the CLV group), abnormal uterine bleeding (2 patients in the CLV group), and benign ovarian tumor (1 patient in the CLV group). There was a significant difference between the two groups in the range of vaginal HSIL ( P < 0.001). Operative data Of all patients, eight patients (25.0%) in the RALV group and seven (9.1%) in the CLV group underwent total vaginectomy with or without hysterosalpingo-oophorectomy ( P = 0.059). As shown in Table 2 , the length of the resected vagina measured after the operation was longer in the RALV group than in the CLV group (5.0 (4.3–5.9) vs. 3.5 (3.0–4.5), P < 0.001). The total operation time in the CLV group (118.2 ± 41.0 min) was similar to that in the RALV group (129.9 ± 43.8 min) ( P = 0.186). The estimated blood loss was higher in the CLV group than in the RALV group ( P = 0.017). Details of the operative complications are summarized in Table 2 . Intraoperative complications, including hemorrhage (7.8% vs. 3.1%), bladder injury (13.0% vs. 3.1%), ureteral injury (2.6% vs. 0) and rectal injury (1.3% vs. 0), occurred more frequently in the CLV group than in the RALV group. The postoperative complication rate in the CLV group also appeared to be higher than that in the RALV group, but the difference was not significant ( P = 0.192). Flatus passing time, catheterization time and postoperative hospitalization time were all longer in the CLV group (all P < 0.05). In this study, only one patient (0.9%) who underwent CLV had a positive surgical margin, and four patients (3.7%) were ultimately diagnosed with occult vaginal invasive carcinoma after vaginectomy. In addition, the RALV group was associated with significantly higher hospital costs than the CLV group (53035.1 ± 9539.0 yuan vs. 32706.8 ± 6659.2 yuan, P < 0.001).
Follow-up Regarding the postoperative follow-up, four patients who were diagnosed with occult vaginal invasive carcinoma after vaginectomy were excluded. The median duration of follow-up of the 105 patients after vaginectomy was 33.0 (range 7–109) months. Table 3 shows that a similar prognosis was found in the two groups during long-term follow-up. Ninety-six patients (91.4%) achieved homogeneous HPV infection regression at six months after vaginectomy. A total of 94.3% (99/105) of the patients experienced regression of vaginal HSIL to disease-free status after vaginectomy. Recurrence or progression was observed in six patients (5 patients in the CLV group and 1 patient in the RALV group), but the difference between the two groups was not significant (HR = 0.507; 95% CI, 0.242–17.499) (Fig. 5 ).
Discussion Vaginal squamous intraepithelial lesions are precancerous lesions of invasive vaginal carcinoma and lack specific clinical manifestations. The vast majority of patients are asymptomatic, and only a small number may experience abnormal vaginal secretions or bleeding after sexual intercourse [ 22 ]. Abnormal vaginal secretions are, however, a characteristic clinical symptom of vaginitis rather than of other gynecological diseases. Usyk et al. [ 23 ], based on a prospective longitudinal cohort study, reported that the cervicovaginal microbiome is related to high-risk HPV progression in cervical squamous intraepithelial lesions. Thus, whether vaginal inflammation is associated with vaginal squamous intraepithelial lesions is an intriguing question. In this study, only 26.6% (29/109) of patients visited the doctor because of clinical symptoms; the remaining patients were diagnosed through cervical cancer screening. Thus, the timely detection of vaginal SIL appears to remain difficult. The mean age of patients in our study was 55.2 years, similar to that in Kim's report [ 24 ]. Previous studies have reported that high-risk HPV infection, previous hysterectomy (especially for the indication of cervical HSIL), postmenopause, previous irradiation for gynecological cancer, smoking and immunosuppression are risk factors for vaginal squamous intraepithelial lesions [ 14 , 24 – 29 ]. We noted that 91.7% (100/109) of patients had high-risk HPV infection, among which HPV16 infection was the predominant type, and these findings are consistent with those of previous related studies [ 14 , 30 , 31 ]. In this study, 43 patients (39.4%) had previously undergone hysterectomy, 35 (81.4%) of whom underwent hysterectomy due to cervical HSIL. Although we did not specifically analyze the relationship between vaginal HSIL and the history of previous hysterectomy, it is evident that previous hysterectomy for cervical HSIL was associated with vaginal HSIL. In our current study, 89.0% of patients were postmenopausal, suggesting that vaginal HSIL is more common in postmenopausal women. Li et al. [ 27 ], through a case-control study, observed that postmenopausal women had a 2.09-fold increased risk of developing vaginal SIL compared with premenopausal women ( P = 0.024; 95% CI = 1.10–3.85), indicating that menopause is a risk factor for vaginal SIL. Research has shown that occult invasive vaginal cancer is ultimately discovered in approximately 4.6–12% of cases during the initial management of vaginal HSIL [ 1 , 9 – 11 , 22 ]. In addition, Hodeib et al. [ 32 ] observed that about 12% of vaginal HSIL cases progressed to invasive vaginal carcinoma during close follow-up after active treatment. In this study, 3.7% (4/109) of patients were diagnosed with occult vaginal carcinoma based on postoperative pathology, and three patients progressed to vaginal carcinoma during the long-term follow-up. Unfortunately, the management of vaginal HSIL remains controversial; options include topical pharmaceuticals (such as 5-fluorouracil cream, imiquimod and interferon), laser vaporization, photodynamic therapy, surgery and brachytherapy [ 3 , 12 , 24 , 33 , 34 ]. In fact, the treatment of vaginal HSIL is individualized in the clinic according to the patient's age, disease characteristics, status of HPV infection, previous therapeutic procedures and other factors [ 14 , 33 ]. Topical pharmaceuticals are prevalent in adjuvant therapy, especially in patients with HPV-induced lesions [ 35 ].
Young patients with multifocal and exposure-prone vaginal HSIL can be treated with laser vaporization or photodynamic therapy [ 24 ]. Surgical resection, which includes local resection, partial vaginectomy and total vaginectomy, shortens the time to normalization and offers higher cure rates, reported to be about 80% [ 11 , 14 , 15 ]. However, surgical management can shorten the length of the vagina, which negatively affects the quality of sexual life, and may place patients at risk of vaginal stenosis [ 15 ]. Therefore, surgical treatments should only be considered for selected patients. Unifocal lesions are usually treated by local resection; partial vaginectomy is suitable for selected vaginal HSIL, such as extensive lesions, persistent or recurrent lesions, and suspected invasive lesions. As recommended in the Chinese and European expert consensuses on the management of vaginal SIL, total vaginectomy can be considered when the lesions of postmenopausal vaginal HSIL are extensive and involve the entire vagina, or when lesions are extensive and persistent [ 14 ]. In our study, 94.3% (99/105) of patients had regression of vaginal HSIL to disease-free status after vaginectomy. Brachytherapy also shows distinct efficacy against vaginal HSIL, with a cure rate of 77–96% [ 36 – 38 ]. However, patients may face vaginal mucosal atrophy, stenosis, ulcers and injury to the rectum and bladder after brachytherapy, with a long-term influence on later quality of life [ 13 ]. Therefore, brachytherapy is usually recommended for patients who cannot tolerate surgery or whose disease is resistant to conservative management. This work is the first retrospective study comparing both operative data and patient-centered prognosis between CLV and RALV. We found that RALV was more frequently performed in patients with more extensive vaginal lesions. Indeed, given the anatomy around the vagina, the longer the segment of abnormal vagina that needs to be resected, the more difficult the vaginectomy. However, our study showed that the total operation time did not differ significantly between the two groups ( P = 0.186). Compared with the CLV group, the RALV group had less estimated blood loss, which is consistent with the results of most other studies comparing robotic-assisted surgery and conventional laparoscopic surgery [ 16 , 39 – 41 ]. In addition, the intraoperative complication rate was significantly lower in the RALV group than in the CLV group (6.3% vs. 24.7%, P = 0.026). Among the reported intraoperative complications, bladder injury occurred in 10.1% (11/109) of patients and was the main complication during vaginectomy. Choi et al. [ 21 ] reported four patients with vaginal squamous intraepithelial lesions who underwent laparoscopic upper vaginectomy, one of whom developed bladder injury. There are venous plexuses, the vaginal branch of the uterine artery and the ureter on both sides of the upper vagina. The upper 2/3 of the anterior vaginal wall is adjacent to the bladder through the vesico-vaginal septum, and the venous plexus is densely distributed between them. The lower 1/3 of the anterior vaginal wall is adjacent to the urethra through the urethro-vaginal septum, and the middle part of the posterior vaginal wall is attached to the ampulla of the rectum by a thin layer.
Therefore, during vaginectomy, blood vessels, the ureter, the bladder and the rectum are easily damaged, leading to intraoperative complications. Estrogen levels and vaginal elasticity are especially decreased in postmenopausal patients with post-hysterectomy vaginal HSIL. After hysterectomy, the anatomical structures of the vaginal stump are altered and tissue adhesions form; consequently, the risks of injury to the ureter, bladder and rectum are higher when the bladder and rectum are pushed down during vaginectomy, making the procedure more difficult. However, these challenges can be overcome by robotic surgery. It is well known that the robotic surgical system provides three-dimensional visualization, by which the intraoperative field can be magnified approximately 10–15 times [ 42 ]. Thus, surgeons can more distinctly identify the anatomy around the vagina and avoid surgical damage; in addition, robotic instruments have multiple degrees of freedom of movement and miniature end-effectors, as well as tremor-filtering technology and stable cameras, which provide considerable flexibility and precision for vaginectomy, leading to fewer intraoperative complications. Feng et al. [ 40 ] conducted a multicenter randomized controlled trial of rectal cancer surgery and demonstrated that robotic-assisted surgery is more suitable for operations in the deep, narrow pelvic cavity. We observed that robotic-assisted surgery was associated with faster postoperative recovery in terms of shorter flatus passing time, catheterization time and postoperative hospitalization time, which is consistent with other reports [ 16 , 40 ]. Fifteen patients underwent total vaginectomy in the current study, and none underwent vaginoplasty. Because this study was retrospective, only the preoperative informed consent documents were available; these showed that the patients had been informed about the available vaginoplasty options and the impact of total vaginectomy on their sexual function, but all of them declined vaginoplasty. Undeniably, total vaginectomy can make postoperative sexual intercourse impossible in patients with vaginal HSIL. Although vaginoplasty is a challenging procedure that places high demands on the surgeon's technique, it can significantly improve satisfaction with sexual life [ 43 , 44 ]. Consequently, vaginoplasty can be considered for selected patients who will undergo total vaginectomy. Although the advantages of robotic-assisted vaginectomy are distinct, the hospital costs of robotic surgery are significantly higher than those of conventional laparoscopic surgery, consistent with the findings of other studies [ 45 – 47 ]. Cost remains a limitation for patients who choose the surgical approach primarily on the basis of their economic status. However, robotic surgery has the potential to be used in telemedicine, and robot-based telemedicine has become a reality in some hospitals. Through a telemedicine system platform, medical care can be delivered without restrictions on time and place, and further potential advantages of robotic surgery may emerge. Jang et al. [ 48 ] demonstrated the economic feasibility of a robot-based telemedicine system compared with traditional face-to-face medical services through a cost-benefit analysis. Therefore, the shortcoming of robotic surgery with regard to higher hospital costs may be offset by the use of robot-based telemedicine systems. The limitations of this study must be considered when interpreting its results.
First, our study is limited by its single-center, retrospective design, which may introduce patient selection bias and affect the generalizability and transferability of the results. Second, although our institution, the First Affiliated Hospital of Zhengzhou University, is the largest comprehensive hospital in the Central Plains of China, with a large number of gynecological operations every year, the sample size of our study is still limited due to the low incidence rate of vaginal HSIL. Therefore, multicenter randomized controlled studies should be actively conducted to provide more robust evidence comparing the advantages and disadvantages of robotic-assisted vaginectomy and conventional laparoscopic vaginectomy in the treatment of vaginal HSIL. Third, the conventional laparoscopic approach used in this study employed two-dimensional cameras. The latest generation of conventional laparoscopic systems has been improved with three-dimensional cameras, which overcome the lack of depth perception of two-dimensional cameras. As this technology evolves, conventional laparoscopic surgery will improve, providing better assistance in vaginectomy.
Conclusions Our study is the largest retrospective study of patients with vaginal HSIL who underwent vaginectomy via robotic-assisted or conventional laparoscopic surgery. In both groups, patients achieved similarly satisfactory treatment outcomes, but patients appear to benefit more from robotic-assisted surgery. Apart from higher hospital costs, patients who underwent RALV had less estimated blood loss, a lower intraoperative complication rate and a faster postoperative recovery. When vaginectomy is recommended for a selected patient with vaginal HSIL, robotic-assisted laparoscopic vaginectomy can be considered the better choice.
Background Vaginectomy has been shown to be effective for selected patients with vaginal high-grade squamous intraepithelial lesions (HSIL) and is favored by gynecologists, while there are few reports on robotic-assisted laparoscopic vaginectomy (RALV). The aim of this study was to compare the safety and treatment outcomes of RALV and conventional laparoscopic vaginectomy (CLV) in patients with vaginal HSIL. Methods This retrospective cohort study included 109 patients with vaginal HSIL who underwent either RALV (RALV group) or CLV (CLV group) from December 2013 to May 2022. The operative data, homogeneous HPV infection regression rate and vaginal HSIL regression rate were compared between the two groups. Student's t-test, the Mann-Whitney U test, the Pearson χ 2 test or the Fisher exact test, Kaplan-Meier survival analysis and Cox proportional-hazards models were used for data analysis. Results There were 32 patients in the RALV group and 77 patients in the CLV group. Compared with the CLV group, patients in the RALV group demonstrated less estimated blood loss (41.6 ± 40.3 mL vs. 68.1 ± 56.4 mL, P = 0.017), a lower intraoperative complication rate (6.3% vs. 24.7%, P = 0.026), and shorter flatus passing time (2.0 (1.0–2.0) vs. 2.0 (2.0–2.0), P < 0.001), postoperative catheterization time (2.0 (2.0–3.0) vs. 4.0 (2.0–6.0), P = 0.001) and postoperative hospitalization time (4.0 (4.0–5.0) vs. 5.0 (4.0–6.0), P = 0.020). In addition, the treatment outcomes showed that both the RALV group and the CLV group had a high homogeneous HPV infection regression rate (90.0% vs. 92.0%, P > 0.999) and vaginal HSIL regression rate (96.7% vs. 94.7%, P = 0.805) after vaginectomy. However, the RALV group had significantly higher hospital costs than the CLV group (53035.1 ± 9539.0 yuan vs. 32706.8 ± 6659.2 yuan, P < 0.001). Conclusions Both RALV and CLV can achieve satisfactory treatment outcomes, while RALV has the advantages of less intraoperative blood loss, a lower intraoperative complication rate and faster postoperative recovery. Robotic-assisted surgery has the potential to become a better choice for vaginectomy in patients with vaginal HSIL if the burden of hospital costs is set aside.
Acknowledgements Not applicable. Author contributions Conceptualization, Y.L., R.G., Q.W., J.B. and M.C.; methodology, Y.L., M.M., M.Z. and H.F.; formal analysis, Y.L., R.G., and M.Z.; resources, R.G. and C.W.; data curation, R.G., J.B., Q.W., and C.W.; writing – original draft preparation, Y.L.; writing – review and editing, R.G., M.M., M.C., L.S. and H.F.; funding acquisition, R.G. All authors have read and agreed to the published version of the manuscript. Funding This research was funded by the Young and Middle-aged Health Science and Technology Innovation Leader Training Project (YXKC2020012). Data availability The datasets used for analysis during the current study are available from the corresponding author on request. Declarations Ethics approval and consent to participate The study was approved by the Ethics Committee and Institutional Review Board of the First Affiliated Hospital of Zhengzhou University (No. 2022-KY-0205-002). All methods were performed in accordance with the relevant guidelines and regulations. Informed consent from patients was waived by the Ethics Committee and Institutional Review Board of the First Affiliated Hospital of Zhengzhou University due to the retrospective nature of the study. Consent for publication Not applicable. Conflict of interest The authors declare no conflict of interest. Competing interests The authors declare no competing interests. Abbreviations CI: Confidence intervals; CLV: Conventional laparoscopic vaginectomy; HR: Hazard ratios; HSIL: High-grade squamous intraepithelial lesions; LSIL: Low-grade squamous intraepithelial lesions; RALV: Robotic-assisted laparoscopic vaginectomy; SIL: Squamous intraepithelial lesions
CC BY
no
2024-01-15 23:43:48
BMC Womens Health. 2024 Jan 13; 24:36
oa_package/4c/51/PMC10788024.tar.gz
PMC10788025
38218917
Background Suicide, characterized as a fatality resulting from a purposeful act of self-directed harm, exhibits systematic variations influenced by factors such as age, gender, and the chosen method of self-harm [ 1 ]. Worldwide, suicidal behaviors significantly impact public health, with an estimated incidence of 11.4 suicides per 100,000 people and 804,000 suicide-related fatalities [ 2 ]. Suicidal ideation is a significant predictor of both attempted and completed suicide [ 3 ] and poses a considerable health burden due to its predictive relevance [ 4 ]. Therefore, allocating increased clinical attention to suicidal ideation is imperative. The brain's second messenger system relies heavily on cholesterol, which is closely linked to the actions of mood stabilizers and antidepressants [ 5 ]. This could potentially exert an indirect influence on the emergence of suicidal ideation. The intricate relationship between lipid metabolism and suicidal ideation has been explored in multiple studies. The triglyceride-glucose (TyG) index and suicidal ideation were shown to be significantly associated in a cross-sectional study involving 21,350 participants over the age of nineteen, although no significant relationship was observed in male individuals [ 6 ]. In a cross-sectional investigation, Hee-Young et al. observed a correlation between lower triglyceride levels and a decreased probability of experiencing suicidal ideation in a sample of 4557 Korean adults over the age of 65 [ 7 ]. In a Chinese study including 287 untreated depressed patients, anhedonia was associated with lower LDL levels compared with control groups, whereas suicidal ideation was linked to higher HDL and cholesterol levels [ 8 ]. Furthermore, research has shown that the ratio of non-high-density lipoprotein cholesterol (non-HDL-C) to HDL-C (NHHR) serves as an independent risk indicator of depression in adults in the United States [ 9 ]. Approximately 90% of individuals with suicidal ideation have treatable psychological disorders, predominantly depression [ 10 ]. In the ongoing research into the association between psychological well-being and lipid metabolism, the NHHR is a recently developed composite indicator that assesses atherogenic lipid profiles and provides comprehensive insight into both atherogenic and anti-atherogenic lipid particles [ 11 ]. To determine the NHHR, non-HDL-C levels are divided by the corresponding HDL-C levels [ 12 ]. Previous research has shown that the NHHR exhibits superior diagnostic efficacy in comparison with standard lipid parameters in predicting the risk of cerebrovascular diseases, liver disease, insulin resistance, and metabolic syndrome [ 13 – 15 ]. Therefore, exploring the relationship between the NHHR and suicidal ideation may provide valuable insights into the intersection between lipid metabolism and mental health, prompting further investigation into preventive strategies and interventions. Despite the growing body of evidence linking lipid metabolism to suicidal ideation, the association between the NHHR and suicidal ideation has not been examined in previous studies. Thus, the primary goal of this research was to investigate whether the NHHR and suicidal ideation are associated. We hypothesized that a higher NHHR would be linked to an increased likelihood of suicidal ideation.
By shedding light on the links between lipid metabolism and mental health, this research will help to close the knowledge gap concerning the association between the NHHR and suicidal ideation. In essence, this study explores the potential predictive use of the NHHR for mental health outcomes and offers a new perspective on suicidal ideation and its association with lipid metabolism.
Methods Study population The National Health and Nutrition Examination Survey (NHANES), an investigation that collects demographic information on the health and nutrient intake of US citizens, is supervised and conducted by the National Center for Health Statistics (NCHS). Because NHANES uses a stratified, multistage probability sampling design, its samples are highly representative [ 16 ]. Participants undergo a health check in a mobile examination facility and a standardized in-home interview to assess their physical and medical conditions. Additional tests are conducted to gather pertinent laboratory data. The NCHS Research Ethics Review Board approved the NHANES protocols involving human subjects, and every participant provided informed consent. All NHANES data are publicly available at https://www.cdc.gov/nchs/nhanes/ . For this research, six NHANES cycles spanning 2005 to 2016 were selected to investigate the association between the NHHR and suicidal ideation, based on the availability of comprehensive data on both the NHHR and suicidal ideation within these cycles. Initially, 60,936 participants were enrolled, with subsequent exclusions for individuals under 18 years of age ( n = 24,649), those with missing NHHR data ( n = 3,511), participants lacking data on suicidal ideation ( n = 2,977), and pregnant individuals ( n = 511). As a result, the final analytical cohort comprised 29,288 participants, as illustrated in Fig. 1 . Assessment of NHHR The NHHR served as the exposure variable. It was determined using the method outlined in prior studies, namely as the non-HDL-C/HDL-C ratio [ 17 ]. Non-HDL-C was obtained by subtracting HDL-C from total cholesterol (TC), based on the lipid profiles of fasting individuals. TC and HDL-C levels were measured enzymatically on automated biochemistry analyzers; TC concentrations were determined using the Roche Cobas 6000 and Roche Modular P chemistry analyzers. Assessment of suicidal ideation Suicidal ideation was evaluated using the ninth item of the Patient Health Questionnaire-9 (PHQ-9). The PHQ-9 comprises nine items and is used to ascertain whether an individual has exhibited depressive symptoms in the preceding two weeks [ 18 ]. Each item is scored from 0 ("not at all") to 3 ("nearly every day"), yielding a total score between 0 and 27 [ 19 ]. A threshold of 10 is employed to identify the presence of depressive symptoms [ 20 ]. The ninth item asks respondents how often, over the last two weeks, they have been bothered by thoughts of self-harm or the belief that they would be better off dead; the available response options are "not at all," "several days," "more than half the days," and "nearly every day." For analytical purposes, responses were categorized as either absent (no) or present at any frequency (yes) [ 21 ]. Covariates Potential confounders of the relationship between the NHHR and the occurrence of suicidal ideation were accounted for through multivariate-adjusted models.
The covariates considered in this analysis included gender (male or female), age (years), race, education level, waist circumference, body mass index (BMI), income-to-poverty ratio (PIR), marital status (married or living with a partner/widowed, divorced, separated, and never married), physical activity (inactive/active), depressive symptoms (non-depressive/depressive), TC (mg/dl), HDL-C (mg/dl), smoking status (smoker/non-smoker), diabetes, and hypertension. BMI was classified as normal weight (<25 kg/m²), overweight (25–30 kg/m²), or obese (>30 kg/m²). Physical activity was operationally defined as engaging in moderate- or vigorous-intensity activities for at least 10 continuous minutes outside occupational or transportation contexts, whereas physical inactivity was defined as engaging in such activities for less than 10 min [ 22 ]. Total dietary cholesterol intake was determined from the average of the two 24-hour dietary recalls, using comprehensive nutrient consumption data. Detailed information on the measurement of the study variables is publicly available at www.cdc.gov/nchs/nhanes/ . Statistical analysis Statistical analyses were performed following the protocols outlined by the Centers for Disease Control and Prevention (CDC), which prescribe the incorporation of the relevant NHANES sample weights and consideration of the complexities inherent in multistage cluster surveys. Continuous data are presented as means and standard deviations, and categorical variables as percentages. Weighted Student's t-tests were used to assess differences between groups defined by the presence or absence of suicidal ideation, and weighted chi-square tests were used to evaluate associations between categorical variables. Multivariate logistic regression was used to examine the independent association between the NHHR and suicidal ideation in three distinct models: Model 1 had no covariate adjustments; Model 2 was adjusted for gender, age, and race; and Model 3 was additionally adjusted for marital status, level of education, BMI, PIR, smoking status, diabetes, hypertension, physical activity, and dietary cholesterol. Penalized spline smooth curve fitting and weighted generalized additive model (GAM) regression were used to examine the non-linear association between the NHHR and suicidal ideation. Subgroup analyses were conducted using stratified multivariate regression, with stratification based on sex, age, race, BMI, educational level, marital status, hypertension, diabetes, and smoking status. The log-likelihood ratio test was employed in the subgroup analysis, and statistical significance was determined at P < 0.05. The analytical procedures were implemented using R 3.4.3 ( http://www.R-project.org ) and Empower ( www.empowerstats.com ).
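A minimal sketch of the exposure and outcome derivation and of a weighted logistic model is given below. The NHANES-style column names (LBXTC, LBDHDD, DPQ090, WTMEC2YR) and the reduced covariate list are assumptions for illustration, and the weighting shown is a crude approximation that ignores the survey's strata and primary sampling units, which the published analysis handles through the CDC-recommended survey procedures in R and Empower.

```python
# Simplified sketch only: NHANES-style variable names and the reduced covariate list
# are assumptions, and the weighting below ignores strata/PSU design effects.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_weighted_logistic(df: pd.DataFrame):
    df = df.copy()
    # NHHR = non-HDL-C / HDL-C, where non-HDL-C = total cholesterol minus HDL-C
    df["NHHR"] = (df["LBXTC"] - df["LBDHDD"]) / df["LBDHDD"]
    # PHQ-9 item 9: any response above "not at all" (0) is coded as suicidal ideation
    df["suicidal_ideation"] = (df["DPQ090"] > 0).astype(int)

    covariates = ["NHHR", "age", "female", "bmi"]        # the full models add many more
    X = sm.add_constant(df[covariates])
    weights = df["WTMEC2YR"] / df["WTMEC2YR"].mean()     # normalized survey weights (approximation)
    model = sm.GLM(df["suicidal_ideation"], X,
                   family=sm.families.Binomial(), freq_weights=weights)
    fit = model.fit()
    return fit, np.exp(fit.params)                       # exponentiated coefficients = odds ratios
```

Because the weights are treated as frequency weights here, the standard errors will not match design-based survey estimates; the sketch is meant only to show how the NHHR exposure, the binary outcome, and the three nested adjustment models fit together.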
Results The study sample comprised 29,288 individuals, of whom 48.52% were men and 51.48% were women; the mean age was 48.00 ± 18.70 years. Among them, 28,168 (96.18%) reported no suicidal ideation, while 1,120 (3.82%) reported suicidal ideation. Significant differences were observed between the groups defined by the absence or presence of suicidal ideation regarding the following factors: education level, gender, race, marital status, dietary cholesterol, HDL-C, income-to-poverty ratio, smoking status, diabetes, hypertension, physical activity, depressive symptoms, and waist circumference ( P < 0.05). Participants who were more likely to report suicidal ideation tended to be male, non-Hispanic White, widowed, divorced, separated, or never married, to have some college education or an AA degree, and to smoke. They also exhibited higher BMI, waist circumference, TC, and NHHR levels and had higher rates of active physical activity and depression, but lower rates of diabetes and hypertension, as well as lower household income, dietary cholesterol, and HDL-C levels ( P < 0.05). The clinical and physiological characteristics of participants with and without suicidal ideation are presented in Table 1 . The association between NHHR and suicidal ideation The results revealed a positive association between an elevated NHHR and an increased likelihood of experiencing suicidal ideation. This association was present in the initial unadjusted model and remained statistically significant in subsequent models, after both minimal and thorough adjustment. In the fully adjusted Model 3 (OR = 1.06; 95% CI: 1.02–1.11; P = 0.0048), a 6% rise in the likelihood of suicidal ideation was observed for each unit increase in the NHHR. To further explore this relationship, the continuous NHHR variable was categorized into tertiles for a sensitivity analysis. In the partially adjusted model (Model 2), tertile 3 exhibited a 20% higher probability of suicidal ideation compared with the lowest NHHR tertile (tertile 1) (OR = 1.20; 95% CI: 1.03–1.39; P = 0.0179). However, after full adjustment, the observed association (OR = 1.15; 95% CI: 0.94–1.41; P = 0.1751) did not reach statistical significance. Furthermore, none of the three models showed a significant difference between tertile 1 and tertile 2 (Table 2 ). A nonlinear relationship between NHHR and suicidal ideation Smooth curve fitting and weighted generalized additive models were applied to examine the nonlinear relationship between NHHR levels and suicidal ideation in depth. The results revealed a non-linear relationship, as illustrated in Fig. 2 . Upon further examination after stratifying by smoking status, an inverted U-shaped curve with an inflection point at 7.80 was observed within the non-smoker subgroup (Fig. 3 ; Table 3 ). This pattern persisted even when accounting for the same covariates. Subgroup analysis The robustness of the association between NHHR and suicidal ideation was evaluated using subgroup analysis (Table 4 ).
The p-values for interaction (all P > 0.05) indicated no statistically significant interactions, implying that age, gender, race, BMI, education level, marital status, hypertension, diabetes, and smoking status did not modify the association. Notably, the findings consistently indicated a significant link between the NHHR and suicidal ideation even after controlling for these major demographic and clinical variables, suggesting the potential relevance of this association across diverse population settings.
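The inflection point reported for non-smokers can be illustrated with a two-piecewise (segmented) logistic model: scan candidate knots, fit a hinge term at each, and compare the best two-segment fit against the single-slope model with a likelihood-ratio test. This sketch is an assumed reconstruction of that general approach rather than the authors' exact procedure, and the resulting p-value is only approximate because the knot itself is chosen from the data.

```python
# Assumed reconstruction of a two-piecewise logistic (hinge) model for locating an
# inflection point in the NHHR-suicidal ideation relationship; not the authors' code.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def find_inflection_point(df: pd.DataFrame, exposure: str = "NHHR", outcome: str = "suicidal_ideation"):
    x, y = df[exposure], df[outcome]
    single_slope = sm.Logit(y, sm.add_constant(x)).fit(disp=False)

    best_knot, best_fit = None, None
    for knot in np.quantile(x, np.linspace(0.05, 0.95, 50)):   # candidate inflection points
        X = pd.DataFrame({exposure: x, "hinge": np.maximum(x - knot, 0.0)})
        fit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
        if best_fit is None or fit.llf > best_fit.llf:
            best_knot, best_fit = knot, fit

    lr_stat = 2 * (best_fit.llf - single_slope.llf)   # likelihood-ratio statistic
    p_value = stats.chi2.sf(lr_stat, df=1)            # approximate: the knot is data-driven
    return best_knot, lr_stat, p_value
```

Running such a scan separately for smokers and non-smokers (with the same covariates added to both models) mirrors the stratified analysis that produced the inflection point of 7.80 described above.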
Discussion In this comprehensive study involving 29,288 adults, the findings indicate that individuals with elevated NHHR scores are more likely to experience suicidal ideation. This association holds true across various subgroups defined by age, sex, race, BMI, educational level, marital status, hypertension, diabetes, and smoking status. These results are consistent across diverse population settings, as demonstrated by subgroup analysis and interaction tests. Notably, within the non-smoker group, an inverted U-shaped association was found between the NHHR and suicidal ideation, with an inflection point at 7.80. Based on these results, it can be speculated that the NHHR may serve as a predictor of suicidal ideation and that regulating lipid levels, as captured by the NHHR, could reduce suicidal ideation and associated behaviors. To the best of our knowledge, this research is the first investigation of the relationship between the NHHR and suicidal ideation. Growing evidence supports the notion that the NHHR is a superior indicator of lipid-related disorder risk [ 23 – 25 ]. While empirical research examining the relationship between NHHR levels and suicidal ideation is lacking, a wealth of literature exists exploring the links between suicidal ideation and various lipid-related factors. In a cross-sectional investigation involving 13,772 adults in Korea, Hana et al. identified a significant association, indicating that reduced levels of LDL-C were linked to an elevated likelihood of suicidal thoughts in male individuals above the age of 19 [ 5 ]. Bałażej et al. identified a potential association between increased levels of TC and LDL and the occurrence of suicidal thoughts in females experiencing a first episode of schizophrenia [ 26 ]. A retrospective cohort study involving 73 outpatients diagnosed with major depressive disorder found significantly lower TG levels in the group with suicidal ideation than in individuals without such ideation [ 27 ]. Suicidal ideation and completed suicides are associated with decreased blood lipid levels, particularly TC, as reported by Shunquan et al. in a meta-analysis involving 65 epidemiological studies [ 28 ]. While these findings do not provide direct evidence, they indirectly support a positive association between NHHR levels and suicidal ideation and contribute to the growing body of literature examining the association between lipid profiles and suicidal tendencies through the use of novel lipid indices. The NHHR, a recently developed composite metric reflecting atherogenic lipid composition [ 29 ], surpasses conventional lipid parameters in evaluating the extent of atherosclerosis [ 30 ]. Kwok RM reported that, compared with other lipid indicators, the NHHR has a more robust predictive ability for non-alcoholic fatty liver disease (NAFLD) [ 31 ]. According to Lin D's study, the NHHR is a reliable diagnostic instrument for assessing insulin resistance and, compared with standard lipid tests, demonstrated superior accuracy in predicting conditions associated with the development of diabetes [ 32 ]. In summary, the NHHR has demonstrated exceptional predictive efficacy in a variety of studies. Furthermore, the NHHR is widely accessible, noninvasive, and cost-effective, presenting promising prospects for clinical implementation.
Various perspectives offer explanations for the association between lipid metabolism and suicidal thoughts. Some theories propose that reduced cholesterol levels may influence the microviscosity of serotonin receptors, impacting serotonin activity and contributing to impulsive and suicidal behaviors [ 33 ]. Mechanisms concerning the balance of polyunsaturated fatty acids (PUFAs) and inflammation suggest that an elevated n-6/n-3 PUFA ratio is linked to the induction of a pro-inflammatory state [ 34 ]. Further evidence substantiates this association, as studies have linked suicidal ideation with pro-inflammatory cytokines, including IL-6 [ 35 ]. Collectively, these findings suggest that inflammatory processes may be a key component in the pathophysiological model of suicidal behavior. According to a theory proposed by Penttinen et al., increased cytokine production, specifically interleukin-2 (IL-2), leads to higher total blood cholesterol and lower serum HDL cholesterol, affecting melatonin release and increasing impulsivity and suicide risk [ 36 , 37 ]. Considering variations in dietary patterns within the target demographic, one plausible interpretation is that PUFA consumption correlates with reduced TG levels, potentially mitigating the risk of suicidal ideation [ 38 , 39 ]. Therefore, employing the NHHR as a means to assess the non-HDL-C proportion in patients could serve as a more effective tool for evaluating the impact that lipid metabolism has on the occurrence of suicidal ideation. Strengths and limitations This study exhibits several strengths. Firstly, NHANES data were utilized, representing a comprehensive and nationally representative sample obtained using a consistent procedure with a sufficient sample size [ 40 ]. Additionally, the research meticulously controlled for confounding covariates, selecting them primarily based on prior investigations that evaluated the association between suicidal ideation and various exposure variables. This approach was undertaken to enhance the reliability and validity of the results. However, it is essential to acknowledge the inherent limitations of this research. First, the assessment of suicidal ideation relied on personal interviews, introducing an inevitable recall bias. Second, although the PHQ-9's ninth item has been used in prior research to measure suicidal ideation, its broad wording, which also captures non-suicidal self-harm, may affect how well the item reflects suicidal ideation in this study. Third, comprehensive validation of the PHQ-9's utility in assessing suicidal ideation among the general public is lacking; nonetheless, the PHQ-9 has shown good specificity and sensitivity in general internal medicine and primary care settings. Fourth, the cholesterol data analyzed in this study were derived from fasting individuals, with non-fasting data remaining unexplored, and discrepancies in laboratory testing protocols may introduce potential biases. Fifth, because the study employed a cross-sectional design, reverse causality cannot be excluded and a causal relationship cannot be established. Hence, prospective investigations with larger sample sizes remain necessary to elucidate the causative relationship. Meanwhile, despite adjustment for a range of covariates, residual confounding from unmeasured factors cannot be fully excluded within the scope of the research.
Conclusion Based on the analysis conducted, a notable association was identified between suicidal ideation and higher NHHR scores, emphasizing the potential clinical relevance of lipid metabolism in mental health. Recognizing NHHR as a predictive indicator suggests a proactive two-step approach in routine lipid profile screenings: identifying potential mental health risks through abnormal NHHR levels and conducting comprehensive mental health assessments. This finding provides a valuable tool for early suicide risk detection, particularly in psychiatric care, allowing healthcare professionals to closely monitor mental health, support personalized interventions, and enhance overall psychiatric care effectiveness.
Background The ratio of non-high-density lipoprotein cholesterol (non-HDL-C) to high-density lipoprotein cholesterol (HDL-C) (NHHR) serves as a reliable lipid indicator associated with atherogenic characteristics. Studies have indicated a potential connection between suicidality and lipid metabolism. This research aims to investigate any possible association between the NHHR and the emergence of suicidal ideation within the confines of the study. Methods This study examined the association between NHHR levels and suicidal ideation using data from the National Health and Nutrition Examination Survey (NHANES), conducted in the United States and spanning 2005 to 2016. The NHHR was calculated as the ratio of non-HDL-C to HDL-C. The Patient Health Questionnaire-9’s ninth question was used to assess suicidal ideation. The analysis employed multivariate logistic regression, smooth curve fitting, and subgroup analyses. Results Encompassing a cohort of 29,288 participants, the analysis identified that 3.82% of individuals reported suicidal ideation. After multivariable logistic regression with thorough adjustments, the findings showed that elevated NHHR levels were significantly and positively associated with a heightened likelihood of suicidal ideation (odds ratio [OR] = 1.06; 95% confidence interval [CI]: 1.02–1.11; P = 0.0048). Despite extensive adjustment for various confounding factors, this relationship remained consistent. An inverted U-shaped curve illustrated the link between NHHR and suicidal ideation among nonsmokers, with an inflection point at 7.80. Subgroup analysis and interaction tests (all P for interaction > 0.05) demonstrated that there was no significant influence of the following variables on this positive relationship: age, sex, race, body mass index, education level, marital status, hypertension, diabetes, and smoking status. Conclusion Significantly higher NHHR levels were associated with an elevated likelihood of suicidal ideation. Based on these results, NHHR may serve as a predictive indicator of suicidal ideation, emphasizing its potential utility in risk assessment and preventive strategies. Keywords
Acknowledgements The authors express their gratitude to the NHANES database for providing valuable datasets. Author contributions G.Q.: conceptualization, methodology, data curation, software, writing – original draft. W.D.: data curation, visualization, software. Y.Z.: data curation, formal analysis, validation. L.Z.: data curation, software. Y.W.: writing – original draft. B.W.: conceptualization, funding acquisition, methodology, writing – review & editing, supervision. Funding This research received no external funding. Data availability In this study, publicly accessible datasets were examined. These data can be found here: ( https://wwwn.cdc.gov/nchs/nhanes/analyticguidelines.aspx , accessed on 1 November 2022). Declarations Ethics approval and consent to participate This study was reviewed and approved by the NCHS Ethics Review Board. The patients/participants provided written informed consent to participate in this study. Consent for publication Before participating in the study, all participants provided written informed consent. Institutional Review Board Statement There was no requirement for institutional review board permission since the NHANES database is open to the public. Competing interests The authors declare no competing interests. Abbreviations Non-high-density lipoprotein cholesterol High-density lipoprotein cholesterol Non-HDL-C and HDL-C ratio Poverty-to-income ratio Body mass index Cholesterol
CC BY
no
2024-01-15 23:43:48
Lipids Health Dis. 2024 Jan 13; 23:17
oa_package/ec/ba/PMC10788025.tar.gz
PMC10788026
38218807
Background Ovarian cancer (OC) is the most lethal cancer of the female reproductive system, owing to the lack of effective screening at the early stage and to resistance to chemotherapy as the tumor progresses [ 1 , 2 ]. The preferred treatment for OC is surgery assisted by the combination of paclitaxel and platinum, which prolongs the survival of OC patients [ 2 ]. Nevertheless, the survival rate of OC patients with advanced stage is still low, posing a serious threat to women’s lives [ 1 ]. Therefore, predicting individual prognosis for OC is important for both patients and gynecologic oncologists. Cells can selectively remove incomplete or damaged mitochondria through autophagy, a process called mitophagy [ 3 ]. The body can maintain the integrity of mitochondrial function through mitophagy, so as to achieve the purpose of delaying aging and treating diseases [ 3 , 4 ]. In recent years, mitophagy has been found to contribute to OC progression [ 5 , 6 ]. The specific regulatory mechanisms of mitophagy in OC progression may involve tumor-associated macrophages [ 7 ] and cell stemness [ 8 ]. Mitophagy is also involved in the anticancer activity of drugs in OC, such as platinum [ 4 , 9 – 14 ], EGFR tyrosine kinase inhibitors [ 15 ], Janus kinases 1/2 inhibitor [ 16 ], pardaxin [ 17 ], nanomedicine [ 5 ], and epoxycytochalasin H [ 18 ]. Despite studies investigating the role and mechanism of mitophagy in OC, the precise clinical application of mitophagy remains challenging due to the lack of a combination of targetable biomarkers. Long non-coding RNA (lncRNA) refers to a class of RNA transcripts longer than 200 nucleotides with no protein-coding potential [ 19 ], and the number of lncRNAs significantly exceeds that of protein-coding genes [ 19 ]. Although the functions of lncRNAs in tumorigenesis have been confirmed [ 19 ] and our earlier study demonstrated that lncRNA can regulate autophagy in OC [ 20 ], little is known about their regulation of mitochondrial function, and the mechanism by which lncRNAs regulate mitophagy remains largely unexplored. Because of the small size of the ovary and its hidden location in the female pelvic cavity, early diagnosis of OC is extremely challenging [ 1 ]. Currently, the most commonly used tumor markers for OC screening in clinical practice are Carbohydrate Antigen 125 (CA125) [ 21 ] and Human Epididymis Protein 4 (HE4) [ 22 ]. Given that other benign diseases can also cause elevated serum biomarkers, the diagnostic specificity and sensitivity of using serum CA125 or HE4 alone are not high [ 23 ]. Existing studies have attempted to establish prognostic models for patients with OC based on clinicopathologic characteristics. For instance, the Risk of Ovarian Malignancy Algorithm (ROMA) model incorporated both serum CA125 and HE4; nevertheless, the model did not fully address the challenge of detecting high-risk OC [ 23 ]. A growing number of studies show that gene expression profiles can be used to identify many important prognostic genes in various types of cancer and to map prognosis-related molecular models [ 24 , 25 ]. Based on high-throughput technologies and data sharing, cancer research has entered the era of big data due to the large-scale multi-omics data accumulated in The Cancer Genome Atlas (TCGA) [ 26 ] and Gene Expression Omnibus (GEO) databases [ 27 ]. Bioinformatics is an emerging interdisciplinary subject used for analyzing biological information [ 28 ], which relies on computational tools (mainly R packages) [ 29 ].
The application of big data from the TCGA and GEO databases, combined with bioinformatics, allows us to evaluate the predictive value of mitophagy-related lncRNA (MRL) combinations for OC patients. The packages in the R language can be used for data mining and statistical analysis [ 30 ]. Herein, we mainly utilized R packages to carry out comprehensive analyses of mitophagy-related genes (MRGs) and MRLs for patients with OC. Using weighted gene co-expression network analysis (WGCNA) and least absolute shrinkage and selection operator (LASSO) Cox regression analysis, we analyzed the landscape of MRGs and MRLs comprehensively. A reliable MRL-model for predicting overall survival (OS) and informing therapeutic strategies was constructed. Our data showed that the MRL-model was associated with immunity characteristics, tumor mutational burden (TMB), immunotherapy, and chemotherapeutic drug sensitivity.
Methods Data collection The processed data were extracted from UCSC-Xena ( https://xenabrowser.net/datapages/ ) [ 31 ]. Ensembl gene identifiers were converted into gene symbols based on the gene annotation information in GENCODE [ 32 ]. mRNAs and lncRNAs with low expression were filtered out. Collectively, 417 OC samples with expression profiles and prognostic information from TCGA were included. Besides, 88 normal ovarian tissues from GTEx were obtained for the identification of differentially expressed genes. We also retrieved four OC datasets with lncRNA expression profiles and prognostic information from the GEO database ( https://www.ncbi.nlm.nih.gov/geo/ ) [ 27 ], including 268 OC cases. We selected datasets from the GPL570 Affymetrix Human Genome U133 Plus 2.0 Array to annotate as many lncRNAs as possible. MRGs were screened from GeneCards ( https://www.genecards.org ) [ 33 ] based on their relevance score. Furthermore, the somatic mutations were generated in Mutation Annotation Format (MAF) using the “maftools” package (Version 2.16.0) [ 34 ]. Differentially expressed genes screening Linear models with empirical Bayes moderation [ 35 ], which shrink the estimated variances toward a common value, were applied using the “limma” package (Version 3.10.3) [ 36 ] to screen out the differentially expressed MRGs and lncRNAs. The Benjamini-Hochberg procedure was used for multiple-testing correction to obtain greater power based on the False Discovery Rate (FDR) [ 37 ]. The threshold for screening differentially expressed genes was set as adjusted P < 0.05 and |logFC| > 0.5. Prognostic genes screening The “survminer” package (Version 0.4.3) was used to determine the optimal cut-point based on the expression of genes, survival time and survival state. The prognostic genes were screened out based on Kaplan-Meier (K-M) curves and the log-rank test. MRLs screening based on WGCNA We used the “WGCNA” package [ 38 ] (Version 1.61) to analyze the expression matrix of lncRNAs, so as to identify highly synergistic lncRNA modules. Firstly, a series of soft-thresholding powers was tested; for each power, the squared correlation coefficient between connectivity k and p(k) and the average connectivity were calculated, and the first power at which the squared correlation coefficient exceeded 0.85 was selected. Secondly, based on dynamic pruning and clustering methods, we aggregated highly correlated lncRNAs into modules (correlation coefficient > 0.8). Finally, the correlation between modules and the prognostic MRGs was calculated, and the lncRNA modules associated with multiple MRGs were identified. We defined the modules with the most obvious positive or negative correlation with multiple MRGs as the key modules, and the lncRNAs in these modules were MRLs. Establishment of the MRL-model After obtaining prognostic MRLs, we applied LASSO Cox regression analysis from the “glmnet” R package (Version 2.0–18), a high-dimensional regression method whose L1 penalty shrinks regression coefficients and thereby addresses multicollinearity, to screen the combination of prognostic MRLs based on 20-fold cross-validation [ 39 ]. The regression coefficient and the expression level of each MRL were applied to calculate the risk score and construct the MRL-model as follows: Risk score = Σ (β lncRNA × Exp lncRNA), where β lncRNA is the LASSO regression coefficient of each MRL and Exp lncRNA is its expression value. Highly correlated MRLs were excluded to prevent the MRL-model from overfitting.
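As an illustration of this step, the sketch below shows a LASSO Cox fit with 20-fold cross-validation in glmnet and the resulting risk-score calculation. The matrix expr (samples × candidate MRLs) and the vectors time and status are assumed placeholders, and the use of lambda.min is one common convention; this is a sketch of the general technique, not the authors' exact pipeline.

```r
library(glmnet)
library(survival)

set.seed(1)
y <- Surv(time, status)                              # survival outcome
cvfit <- cv.glmnet(x = as.matrix(expr), y = y,
                   family = "cox", alpha = 1,        # alpha = 1 -> LASSO penalty
                   nfolds = 20)                      # 20-fold cross-validation

beta <- coef(cvfit, s = "lambda.min")                # penalized coefficients
kept <- rownames(beta)[as.numeric(beta) != 0]        # MRLs retained by LASSO

# Risk score = sum over retained MRLs of (coefficient x expression)
risk_score <- as.numeric(as.matrix(expr[, kept]) %*% beta[kept, , drop = FALSE])
```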
Validation of the MRL-model We included four external datasets with lncRNA expression profiles and prognostic information to validate the model: GSE19829 (28 OC samples), GSE26193 (107 OC samples), GSE30161 (58 OC samples), and GSE63885 (75 OC samples). The batch effects of the four external datasets were removed with the “sva” R package (Version 3.48.0) [ 40 ]. The β lncRNA coefficients were first estimated from the TCGA training dataset, and the risk score of the GEO validation datasets was calculated based on the formula described above. The TCGA training and GEO validation datasets were divided into a high-risk group (risk score higher than the threshold value) and a low-risk group (risk score lower than the threshold value) based on the threshold value (median of the risk score). K-M curves were used to evaluate the survival outcomes of the risk groups in the TCGA training and GEO validation datasets, thus validating the effectiveness of the model in predicting prognosis. Establishment of the nomogram based on MRL-model We conducted Univariate Cox regression analysis to assess the prognostic value of the MRL-model and clinicopathological parameters. Multivariate Cox regression analysis was further implemented to evaluate and validate their independent prognostic value in the TCGA training and GEO validation datasets. Subsequently, the “rms” package (Version 6.7.0) was applied to establish the Nomogram based on the MRL-model and clinicopathological parameters [ 41 ]. The Nomogram was validated by discrimination and calibration with B = 1000 bootstrap resamples (optimism-corrected) to describe the relationship between the actual and the predicted OS probability, thus evaluating the consistency of the MRL-model. The closer the calibration curve lies to the 45° line, the better the prediction ability. Quantitative real-time PCR A total of 30 OC and 10 normal tissues were collected after approval by the Ethics Committee of Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital. The samples obtained were pathologically confirmed as OC or normal ovarian tissues. Quantitative Real-time PCR analysis (SuperReal PreMix Plus from Tiangen Biotech, Beijing, China) was carried out after extracting total RNA (TRNzol Universal Reagent from Tiangen Biotech, Beijing, China) and reverse transcription (FastKing gDNA Dispelling RT SuperMix from Tiangen Biotech, Beijing, China). The sequences of the lncRNAs were obtained from LNCipedia ( https://lncipedia.org/ ) [ 42 ]. The primers of the lncRNAs were designed and provided by Sangon Biotech (Shanghai, China). Analysis of functional pathways The protein-protein interaction (PPI) network was established using STRING ( https://string-db.org/ ) [ 43 ] and Cytoscape (Version 3.4.0) [ 44 ]. Gene set enrichment analysis (GSEA) was performed in the high-risk group versus the low-risk group using “GSEA” (Version 4.3.2) [ 45 ]. The background gene set was the pathway set in the MSigDB molecular signature database [ 46 ]. Analysis of immunity features The carcinogenesis of OC is strongly correlated with the immune microenvironment [ 47 ]. Utilizing single-sample gene set enrichment analysis (ssGSEA), we calculated the enrichment scores of 28 immune cell types with the gene set variation analysis package (GSVA, Version 1.48.3) to indicate the relative abundance of each tumor microenvironment-infiltrated cell type [ 48 ].
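For illustration, the ssGSEA step just described might look like the sketch below, where expr_mat is a gene-by-sample expression matrix and immune_gene_sets is a named list of the 28 immune cell signatures; both names, the signature label shown, and the Wilcoxon comparison are placeholders, and the classic gsva() interface is shown rather than the authors' exact call.

```r
library(GSVA)

# ssGSEA enrichment scores for each immune cell signature in each sample;
# immune_gene_sets is a named list of gene vectors (one per cell type)
ssgsea_scores <- gsva(as.matrix(expr_mat), immune_gene_sets, method = "ssgsea")

# Compare one cell type's scores between risk groups (Wilcoxon rank-sum test)
wilcox.test(ssgsea_scores["Activated CD8 T cell", ] ~ risk_group)
```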
In addition, three algorithms, CIBERSORT (Cell-type Identification By Estimating Relative Subsets Of RNA Transcripts, Version 0.1.0) [ 47 ], xCELL (Version 1.1.0) [ 49 ], and MCPcounter (Microenvironment Cell Populations-counter, Version 1.2.0) [ 50 ], were used to characterize the cellular composition of complex tissues according to the corresponding literature. Further, we estimated immune and stromal scores using the ESTIMATE (Estimation of STromal and Immune cells in MAlignant Tumor tissues using Expression data) algorithm (Version 1.1.7) to indicate the presence of stromal and immune cells [ 51 ]. Analysis of therapy We predicted potential responses to immune checkpoint blockade (ICB) using the Tumor Immune Dysfunction and Exclusion (TIDE) tool ( http://tide.dfci.harvard.edu/ ) [ 52 ]. By contrasting the gene expression profiles of OC with an immunotherapy dataset, we compared the predicted immunotherapy response between the two risk groups using subclass mapping (submap), with Bonferroni-corrected P values [ 53 ]. The sensitivity data of chemotherapy drugs were extracted from the Genomics of Drug Sensitivity in Cancer (GDSC) database ( https://www.cancerrxgene.org/ ) [ 54 ], and we used the “pRRophetic” package (Version 0.5) [ 55 ] to analyze cell line expression profiles and OC gene expression profiles by constructing ridge regression models to estimate the IC50 levels of drugs. Construction of ceRNA network Pearson correlation coefficients (correlation coefficient > 0.2) between mRNAs and lncRNAs were calculated, and FDR values (FDR < 0.05) were obtained from the Benjamini-Hochberg correction. The local software miranda (Version 3.3a) [ 56 ] was used to screen the lncRNA-miRNA pairs (Score ≥ 140 and Energy ≤ − 20). We used miRWalk3.0 ( http://mirwalk.umm.uni-heidelberg.de/search_genes/ ) [ 57 ] to obtain the miRNA-mRNA pairs which had been verified by experiment. Further, lncRNAs and mRNAs regulated by the same miRNA and with a positive co-expression relationship were screened to establish the ceRNA (competing endogenous RNA) network. We used Cytoscape software (Version 3.4.0) for network graph construction [ 44 ]. The degree centrality of network nodes was analysed using the CytoNCA plug-in (Version 2.1.6) [ 58 ]. Statistical analysis The statistical analysis and graph visualization were performed using the R programming language [ 59 , 60 ] or GraphPad Prism. The software, packages and their versions used for statistical analysis are listed in Supplementary Table S 1 . The genes with prognostic value were identified based on the hazard ratio (HR) and 95% confidence interval (CI). K-M curves and the log-rank test were applied to contrast the survival outcome between two subgroups. Univariate and Multivariate Cox analyses were conducted to determine the independent prognostic value. The Wilcoxon test was used to compare immune characteristics or drug sensitivity between the two groups. A two-tailed P lower than 0.05 was considered statistically significant.
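As a concrete illustration of the co-expression filter described in the ceRNA section above (Pearson r > 0.2 with Benjamini-Hochberg FDR < 0.05), a minimal sketch is shown below; lnc_expr and mrna_expr are assumed sample-by-gene expression matrices, and the loop over all pairs is kept deliberately simple rather than optimized.

```r
# Enumerate all candidate lncRNA-mRNA pairs
pairs <- expand.grid(lnc = colnames(lnc_expr), mrna = colnames(mrna_expr),
                     stringsAsFactors = FALSE)

# Pearson correlation and raw P value for each pair
res <- apply(pairs, 1, function(p) {
  ct <- cor.test(lnc_expr[, p["lnc"]], mrna_expr[, p["mrna"]], method = "pearson")
  c(r = unname(ct$estimate), p = ct$p.value)
})
pairs$r   <- res["r", ]
pairs$p   <- res["p", ]
pairs$fdr <- p.adjust(pairs$p, method = "BH")   # Benjamini-Hochberg correction

# Keep positively co-expressed pairs passing both thresholds
kept_pairs <- subset(pairs, r > 0.2 & fdr < 0.05)
```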
Results The research flowchart was plotted to summarize the main design of our study (Fig. 1 ). Differentially expressed and prognostic genes screening Compared with normal tissues, there were 52 MRGs differentially expressed in OC (adjusted P < 0.05 and |logFC| > 0.5) (Fig. 2 A). Through prognostic analysis, we found that 22 of the 52 MRGs were significantly correlated with prognosis (Fig. 2 B). Among the 22 prognostic MRGs, four were correlated with favorable prognosis (HR < 1), including E2F1, MAPK8, MTX1, and UBE2L3. In contrast, the remaining 18 MRGs were associated with a poor prognosis (HR > 1), including BCL2L1, BECN1, CSNK2A1, CSNK2A2, FOXO3, GABARAPL1, MAP1LC3A, MFN2, NBR1, PINK1, RAB7A, SNCA, TBC1D15, TBK1, TFE3, TIGAR, USP30, and VPS13D. The box diagram visually demonstrated the expression differences of these 22 prognostic MRGs between OC and normal tissues (Fig. 2 C). To further observe the relationship between the 22 prognostic MRGs and clinicopathological parameters, box plots for each MRG were drawn between different clinical groups. We found that TBC1D15 ( P < 0.05), UBE2L3 ( P < 0.05), VPS13D ( P < 0.05), TFE3 ( P < 0.01), NBR1 ( P < 0.01), MFN2 ( P < 0.01), PINK1 ( P < 0.05), USP30 ( P < 0.05), and CSNK2A1 ( P < 0.01) were associated with the stage of OC (Supplementary Fig. S 1 A). Most of the MRGs were not significantly different among other clinical factors, except SNCA ( P < 0.05) and E2F1 ( P < 0.05) in Grade (Supplementary Fig. S 1 B), CSNK2A2 ( P = 0.02) in Age (Supplementary Fig. S 1 C), and TIGAR ( P < 0.05) and MTX1 ( P < 0.05) in Macroscopic disease (Supplementary Fig. S 1 D). In summary, the differentially expressed and prognostic MRGs can be used as diagnostic markers to distinguish cancer from non-cancer as well as different clinical stages. These MRGs are expected to be involved in OC progression and deserve further study. The interactions among MRGs Gene Ontology (GO) enrichment analysis revealed that the MRGs were enriched in mitophagy, mitochondrion disassembly, organelle disassembly, macroautophagy, cellular component disassembly, regulation of mitochondrion organization, and so on (Fig. 3 A), suggesting that these MRGs are indeed involved in mitophagy and highlighting their biological relevance for wet-lab experiments. Interestingly, the correlations among the expression of the 22 prognostic MRGs were mostly positive, and CSNK2A2 and NBR1 ( P < 0.05, Cor = 0.85) were the most positively correlated gene pair (Fig. 3 B), which further hinted at their similarity in biological functions. To further explore the interactions of these 22 MRGs, PPI analysis was performed (Fig. 3 C). By ranking the degree in the PPI network, we found that BECN1, GABARAPL1, PINK1, SNCA, MAP1LC3A, MFN2, NBR1, FOXO3, RAB7A, and BCL2L1 were the top-ranked hub genes (Supplementary Fig. S 2 ), indicating that these MRGs play a more prominent role in mitophagy in OC. Genetic mutations of the majority of MRGs were not detected in OC samples, except TP53, HUWE1, and VPS13C (Supplementary Fig. 3 ); hence, most MRGs are wild-type in OC. MRLs screening based on WGCNA We conducted WGCNA analysis on the lncRNAs obtained. Firstly, the soft threshold was set to 9 (Fig. 4 A); we set β = 3, the power at which the squared correlation coefficient between k and p(k) first reached 0.8. Based on dynamic pruning and clustering, highly correlated genes were aggregated into modules. Then we clustered these modules and merged modules with a correlation coefficient greater than 0.8.
That is, modules with a dissimilarity coefficient of less than 0.2 were merged (Fig. 4 B), finally yielding five modules (Fig. 4 C). The correlations between the 22 prognostic MRGs and the module eigengenes were further calculated. The blue module (containing 369 lncRNAs) revealed the strongest positive correlation with most MRGs, while the green module (containing 70 lncRNAs) showed the strongest negative correlation with most MRGs (Fig. 4 D). Therefore, the subsequent analysis was mainly based on the lncRNAs in these two modules. We defined these lncRNAs as MRLs. After the above comprehensive analysis, we reliably obtained MRLs closely related to mitophagy, which laid the foundation for the following studies. Screening of prognostic MRLs According to the MRLs in the blue and green modules mentioned above, we first performed Univariate Cox regression analysis. Our data showed that nine MRLs were significantly associated with survival prognosis. Then, an optimal combination of eight lncRNAs was screened by the LASSO Cox regression algorithm, combining the expression values of MRLs, survival time and survival state (Fig. 5 A-B). The forest map revealed the results of the LASSO regression coefficients and Cox regression analysis of the eight optimal MRLs (Fig. 5 C), including RP5-1120P11.1 ( P = 0.002; HR = 0.673, 95% CI:0.527–0.860; Coef = − 0.133), RP11-195F19.9 ( P = 0.002; HR = 1.475, 95% CI:1.152–1.888; Coef = 0.007), USP30-AS1 ( P = 0.002; HR = 0.683, 95% CI:0.533–0.873; Coef = − 0.049), AC004540.5 ( P = 0.003; HR = 0.685, 95% CI:0.536–0.875; Coef = − 0.093), ZFAS1 ( P = 0.003; HR = 1.455, 95% CI:1.138–1.860; Coef = − 0.085), RP11-10A14.5 ( P = 0.003; HR = 0.691, 95% CI:0.542–0.882; Coef = − 0.011), AC010761.10 ( P = 0.003; HR = 0.691, 95% CI:0.540–0.883; Coef = − 0.022), and AC003075.4 ( P = 0.010; HR = 0.725, 95% CI:0.568–0.926; Coef = − 0.111). K-M curves were drawn to evaluate the association between the expression levels of the eight optimal MRLs and OC survival prognosis, including RP5-1120P11.1 (log-rank test P = 0.0014), RP11-195F19.9 (log-rank test P = 0.0019), USP30-AS1 (log-rank test P = 0.0022), AC004540.5 (log-rank test P = 0.0024), ZFAS1 (log-rank test P = 0.0026), RP11-10A14.5 (log-rank test P = 0.0028), AC010761.10 (log-rank test P = 0.003), and AC003075.4 (log-rank test P = 0.0096) (Fig. 5 D-K). In summary, except for RP11-195F19.9 and ZFAS1, which were associated with poor prognosis, the remaining MRLs were associated with better prognosis of OC. Therefore, we have screened out the optimal combination of MRLs involved in OC progression and will build a prognostic MRL-model to calculate a risk score for OC based on these results. Identification and validation of the MRL-model The expression levels of the eight optimal MRLs varied in different samples with risk score and clinical information, as shown in Fig. 6 A. High expression of ZFAS1 and RP11-195F19.9 was associated with a high risk score, but the opposite was true for the remaining six MRLs. Using the same regression coefficients, the risk score of the TCGA training and GEO validation datasets was calculated based on the formula described in the Methods section. Patients with a risk score higher than the median were included in the high-risk group; otherwise, they were included in the low-risk group. Figure 6 B, C illustrated the distribution of risk score in the two risk groups, and the good prognosis of patients in the low-risk group was observed in both the TCGA training (log-rank test P < 0.0001) (Fig.
6 D) and GEO validation (log-rank test P = 0.012) (Fig. 6 E) datasets, thus supporting the validity of the model. Based on Univariate Cox regression analysis, the MRL-model proved to be a prognostic marker ( P < 0.001; HR = 1.960, 95% CI:1.520–2.528) in the TCGA training dataset (Fig. 7 A). Besides, even when we performed Multivariate Cox regression analysis combining the MRL-model and clinicopathological parameters, the MRL-model remained a significant independent predictor ( P < 0.001; HR = 1.795, 95% CI:1.371–2.350) in the TCGA training dataset (Fig. 7 B). We further carried out Univariate (Fig. 7 C) and Multivariate (Fig. 7 D) Cox regression analyses for the GEO validation datasets to validate the above results. Our data revealed that the MRL-model was also a prognostic marker ( P = 0.011; HR = 1.439, 95% CI:1.086–1.906) and a significant independent predictor ( P = 0.038; HR = 1.349, 95% CI:1.017–1.789) in the validation datasets. The Nomogram was plotted to make the prediction results more intuitive and readable for the TCGA training (Fig. 7 E) and GEO validation (Fig. 7 F) datasets. The Nomogram was further validated by discrimination and calibration to describe the relationship between the actual OS probability and the predicted OS probability. We observed that the predicted curve was adjacent to the 45° line in the TCGA training (Supplementary Fig. S 4 A) and GEO validation (Supplementary Fig. S 4 B) datasets, indicating favorable prediction ability. Further, we implemented stratification analyses based on clinicopathological parameters to further validate the effectiveness of the MRL-model in predicting OC prognosis. We observed that OC patients in the high-risk group still had unfavorable survival in consideration of Age < 60 (log-rank test P < 0.001) (Supplementary Fig. S 5 A), Age > =60 (log-rank test P < 0.001) (Supplementary Fig. S 5 B), Stage I-II (log-rank test P = 0.581) (Supplementary Fig. S 5 C), Stage III-IV (log-rank test P < 0.001) (Supplementary Fig. S 5 D), Grade I-II (log-rank test P = 0.348) (Supplementary Fig. S 5 E), Grade III-IV (log-rank test P < 0.001) (Supplementary Fig. S 5 F), Tumor Residual Disease 1-10 mm (log-rank test P = 0.002) (Supplementary Fig. S 5 G), Tumor Residual Disease > 10 mm (log-rank test P = 0.053) (Supplementary Fig. S 5 H), White (log-rank test P < 0.001) (Supplementary Fig. S 5 I) and Nonwhite (log-rank test P = 0.113) (Supplementary Fig. S 5 J). To sum up, the MRL-model is a reliable prognostic risk stratification tool for OC and is worthy of further large-sample validation for clinical implications. Quantitative real-time PCR The eight optimal MRLs in the MRL-model were examined to compare the differences between normal and cancer tissues via Quantitative Real-time PCR experiments. In the TCGA dataset (Supplementary Fig. S 6 A-H), AC003075.4 ( P < 0.0001), AC004540.5 ( P < 0.0001), AC010761.10 ( P < 0.0001), RP5-1120P11.1 ( P < 0.0001), RP11-10A14.5 ( P < 0.0001), and USP30-AS1 ( P < 0.0001) were highly expressed in OC; conversely, the expression levels of RP11-195F19.9 ( P < 0.0001) and ZFAS1 ( P < 0.0001) were low in OC tissues. This difference in expression was also observed in our cohort (Supplementary Fig. S 6 I-P): AC003075.4 ( P = 0.0834), AC004540.5 ( P < 0.01), AC010761.10 ( P < 0.05), RP5-1120P11.1 ( P < 0.05), RP11-10A14.5 ( P < 0.05), RP11-195F19.9 ( P < 0.05), USP30-AS1 ( P < 0.01) and ZFAS1 ( P < 0.01). Our results further confirm the potential of these MRLs as diagnostic markers for OC.
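The article does not state how the qPCR measurements just described were quantified; a common convention is the 2^-ddCt method, sketched below under that assumption. The Ct vectors, the use of a single reference gene (for example GAPDH), and the Wilcoxon group comparison are all hypothetical placeholders rather than the authors' protocol.

```r
# Relative expression of one lncRNA by the 2^-ddCt (delta-delta Ct) method
delta_ct_tumor  <- ct_target_tumor  - ct_reference_tumor    # delta Ct per tumor sample
delta_ct_normal <- ct_target_normal - ct_reference_normal   # delta Ct per normal sample

ddct        <- delta_ct_tumor - mean(delta_ct_normal)       # delta-delta Ct vs. normal mean
fold_change <- 2^(-ddct)                                    # relative expression

wilcox.test(delta_ct_tumor, delta_ct_normal)                # tumor vs. normal comparison
```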
However, a larger sample size and prognostic follow-up information are needed in future studies. Evaluation of functional pathways for the MRL-model GSEA enrichment analysis was performed on KEGG pathways in the high-risk group versus the low-risk group based on the GSEA software. There were 18 KEGG pathways significantly enriched in the low-risk group and 14 KEGG pathways significantly enriched in the high-risk group. Due to the large number of results, only pathways with P value < 0.01 are shown in Fig. 8 A, B, including 12 enrichment pathways in the high-risk group ( P < 0.01) (Fig. 8 A) and six enrichment pathways in the low-risk group ( P < 0.01) (Fig. 8 B). We found that the high-risk group was enriched in some classic tumor-related signaling pathways, for example, the Wnt signaling pathway, TGF-beta signaling pathway, and Hedgehog signaling pathway (Fig. 8 A). The low-risk group was mainly enriched in metabolic pathways, such as nicotinate and nicotinamide metabolism and pyrimidine metabolism (Fig. 8 B). These enriched pathways could provide new insights into the underlying biological implications for OC patients with different risk stratifications. Evaluation of mutation for the MRL-model Our study revealed that TMB was higher in the low-risk group than in the high-risk group ( P < 0.05), implying that patients with a lower risk score may benefit from immunotherapy (Supplementary Fig. S 7 A). The Spearman correlation coefficient between risk score and TMB was negative (r = − 0.1279, P = 0.0294), demonstrating that TMB was negatively associated with risk score (Supplementary Fig. S 7 B). The distribution variations of the somatic mutations between the two risk groups were also analyzed. The top 20 mutated genes in the two risk groups were TP53 (78, 88%), TTN (21, 26%), MUC16 (7, 6%), CSMD3 (12, 6%), NF1 (6, 6%), TOP2A (6, 5%), USH2A (5, 7%), HMCN1 (3, 4%), FAT3 (7, 4%), RYR2 (8, 4%), MUC17 (5, 4%), FLG (4, 4%), APOB (3, 4%), MACF1 (6, 4%), LRP1B (4, 4%), BRCA1 (3, 3%), DNAH3 (2, 3%), LRRK2 (5, 4%), LRP2 (3, 6%), and SYNE1 (6, 3%) (Fig. 8 C-D). OC patients with a higher risk score had markedly lower mutation frequencies of TP53 and TTN (Fig. 8 C); however, the mutation frequencies of CSMD3 and RYR2 showed the opposite pattern (Fig. 8 D). Overall, patients in the low-risk group had a greater mutation rate, and a lower risk score may be an indicator that immunotherapy is effective. Analysis of immunity features and immunotherapy for the MRL-model To further explore the relationship between immune features and the MRL-model, five algorithms, including CIBERSORT (Fig. 9 A), ssGSEA (Fig. 9 B), MCPcounter (Fig. 9 C), xCELL (Fig. 9 D), and ESTIMATE (Fig. 9 E), were used to analyze immune features for the MRL-model. The results suggested that OC patients in the two risk groups differed at the level of immune cells (Fig. 9 A-D). A higher risk score correlated strongly with higher stromal ( P < 0.001) and ESTIMATE ( P < 0.05) scores, while a lower risk score correlated with higher tumor purity ( P < 0.05) (Fig. 9 E). Risk score was significantly positively correlated with Stromal Score (R = 0.17, P = 0.00044) (Fig. 9 F). However, there was no significant correlation between risk score and ESTIMATE Score (R = 0.089, P = 0.071), Immune Score (R = -0.004, P = 0.93), or Tumor Purity (R = -0.089, P = 0.071) (Supplementary Fig. S 8 A-B). Furthermore, we investigated the association between the risk score and seven immune checkpoints.
Four immune checkpoints, including CD274 ( P < 0.05), CD47 ( P < 0.001), LAG3 ( P < 0.01), and VTCN1 ( P < 0.001), were under-expressed in the high-risk group (Supplementary Fig. S 9 ). Nevertheless, the expression values of the remaining immune checkpoints did not differ between the two risk groups. A higher TIDE score is associated not only with a poorer response to immune checkpoint inhibition therapy but also with worse survival under anti-CTLA4 and anti-PD1 therapy. From Fig. 9 G, we found that the TIDE score of OC patients in the low-risk group was lower than that in the high-risk group, suggesting that OC patients with a low risk score were more sensitive to immune checkpoint blockade therapy ( P < 0.05). In addition, through the results of subclass mapping, we found that OC patients in the low-risk group may be more likely to respond to anti-PD-L1 therapy (Bonferroni corrected P = 0.01) (Fig. 9 H). Therefore, we conclude that patients in the low-risk group identified by the MRL-model may be more sensitive to immunotherapy, which may provide a reference for clinical immunotherapy of OC. Analysis of drug sensitivity for the MRL-model Based on data from the GDSC database, the Spearman correlation coefficients between drug susceptibility and the expression levels of the eight MRLs in the risk model were calculated (Supplementary Fig. S 10 A). Our data showed that high expression of AC010761.10 was associated with resistance to most drugs (such as bleomycin), and the level of USP30-AS1 was negatively correlated with several drugs (such as paclitaxel). The results provide new insights into the molecular resistance mechanisms of these MRLs. We found that the IC50 levels of Paclitaxel ( P = 0.005) and ABT.888 (Veliparib, P = 0.002) in the high-risk group were observably higher than those in the low-risk group, suggesting a negative correlation between risk score and drug susceptibility (Fig. 10 A, B). Nevertheless, the exact opposite was observed for AG.014699 (Rucaparib, P = 0.005) (Fig. 10 C), Axitinib ( P = 3.344e-07) (Fig. 10 D), OSI.906 (Linsitinib, P = 9.015e-07) (Fig. 10 E), AZD.0530 (Saracatinib, P = 4.276e-05) (Fig. 10 F), AMG.706 (Motesanib, P = 0.022) (Fig. 10 G), AP.24534 (Ponatinib, P = 0.002) (Fig. 10 H), and Imatinib ( P = 4.047e-04) (Fig. 10 I). Besides, other drugs commonly used in OC chemotherapy, such as Cisplatin ( P = 0.248), Bleomycin ( P = 0.347), Gemcitabine ( P = 0.32), and Vinorelbine ( P = 0.848), showed no difference between the two subgroups (Supplementary Fig. S 10 B-E). These results suggest that chemotherapy drugs have different clinical implications for OC patients with different risk scores and that OC patients need personalized treatment. Construction of ceRNA network We initially predicted 35,007 miRNA-mRNA pairs and 878 lncRNA-miRNA pairs. The lncRNA-miRNA-mRNA relationship pairs regulated by the same miRNA were further screened, and mRNA-lncRNA co-expression was required to be positively correlated (correlation coefficient > 0.2), thus obtaining 3668 lncRNA-miRNA-mRNA pairs. The final set comprised 539 miRNAs, 73 mRNAs and 8 lncRNAs. Because of the large number of miRNAs, we further counted the number of miRNAs simultaneously regulating multiple lncRNA-mRNA relationship pairs. If a miRNA can simultaneously regulate multiple lncRNA-mRNA relationships, this miRNA may play an important role. Therefore, we focused on the top 50 miRNAs, extracted their corresponding lncRNA-miRNA-mRNA relationship pairs, and carried out the construction of the ceRNA network. As can be seen in Fig.
11 A, the network consisted of 7 lncRNAs, 50 miRNAs and 71 mRNAs. The network contained 122 lncRNA-mRNA co-expression pairs, 798 miRNA-mRNA pairs and 116 lncRNA-miRNA pairs. We analyzed the connectivity of each node of the network to obtain the connectivity of each mRNA, miRNA and lncRNA. By ranking the connectivity of each node, RNA molecules that may play important roles were identified (Fig. 11 B). The constructed ceRNA network preliminarily illustrates how MRLs may affect mRNA expression by sharing miRNAs, which provides a foundation for further exploration of the regulatory mechanism of OC based on mitophagy.
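The degree-based ranking of network nodes described here can be sketched as follows; 'edges' is an assumed two-column data frame of interaction pairs (lncRNA-miRNA, miRNA-mRNA, lncRNA-mRNA) assembled from the screening steps, and igraph is used simply as one convenient way to compute degree centrality, not as the tool the authors report (they used the CytoNCA plug-in).

```r
library(igraph)

# Build an undirected graph from the screened interaction pairs
g <- graph_from_data_frame(edges, directed = FALSE)

# Degree centrality per node, ranked to highlight hub candidates
deg <- sort(degree(g), decreasing = TRUE)
head(deg, 10)
```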
Discussion OC has hidden early symptoms and a poor 5-year survival rate. The accuracy of OC biomarker screening is still low. OC is also a multifactorial and complex disease, and the goal of treatment is to reduce the tumor burden, prolong survival time and improve the quality of life of patients. Patients with different pathologic types receiving similar treatment may have significantly different progression free survival (PFS) and OS [ 61 ]. Traditional prognostic factors based on clinicopathological parameters are not sufficient to predict the prognosis of patients [ 61 ]. There is still controversy surrounding the ability of existing predictive models to assess prognosis; hence, there is no marker that can accurately predict the clinical outcome of OC. Identifying OC patients with high-risk clinical outcomes and actively improving their prognosis is the focus of current research. The guidelines and consensus are gradually integrating genetic testing into standard treatment [ 62 ]. Profiling the expression of genes associated with OC is gradually improving this situation, and future prognostic models based on gene expression profiles need to be further explored. With the continuous development of technology (such as the high-throughput sequencing data deposited in TCGA), the methods for predicting the prognosis of OC are maturing and improving, and we can raise the standards for identifying prognostic factors closely related to clinical outcomes and treatment decisions. WGCNA analysis can help us to understand the interactions between MRGs and, ultimately, the gene networks or modules associated with mitophagy. The gene expression profiles of OC we extracted from the TCGA database provided sufficient data support for the application of WGCNA analysis in our study. Further, we aggregated highly correlated lncRNAs into modules (correlation coefficient > 0.8). The blue module with a strong positive correlation with MRGs and the green module with a strong negative correlation with MRGs were selected by repeated weighted calculation on the gene expression profiles, according to the correlation coefficient and P value, to screen the lncRNAs highly correlated with mitophagy, thereby reducing the loss of useful information. Traditional lncRNA-mRNA co-expression is calculated based on the Pearson correlation coefficient between genes, and a hard threshold is then set to determine whether a network edge exists [ 63 – 65 ]. However, setting the threshold only based on the Pearson correlation can lead to the loss of real information. Different from the traditional Pearson method, we used the soft thresholding (R 2 > 0.85) of WGCNA to determine whether MRLs and MRGs were associated and weighted the correlation coefficients between genes to obtain the gene co-expression matrix. The connections between genes should follow a scale-free network distribution. The expression patterns of genes in each constructed module are very similar, and the hub genes in each module help to understand the pathogenesis of disease at the molecular level. In a word, WGCNA analysis can filter out irrelevant noisy data and find key molecular mechanisms related to mitophagy in our study. In subsequent studies, experimental methods are needed to confirm the molecular biological correlation between the MRLs we identified and mitophagy, such as mitochondrial membrane potential measurement [ 66 ], observation of mitochondrial morphology, and detection of mitophagy markers.
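To make the soft-thresholding argument above concrete, a minimal WGCNA sketch is shown below; lnc_expr (samples × lncRNAs) is an assumed placeholder, the unsigned network type and module-size settings are illustrative defaults, and the exact parameters the authors used are not reproduced here.

```r
library(WGCNA)

# Scan candidate powers and keep the first one whose scale-free fit R^2 > 0.85
powers <- 1:20
sft <- pickSoftThreshold(lnc_expr, powerVector = powers, networkType = "unsigned")
chosen_power <- min(powers[sft$fitIndices$SFT.R.sq > 0.85])

# Detect co-expression modules; mergeCutHeight = 0.2 merges modules whose
# eigengene dissimilarity is below 0.2 (i.e. correlation above 0.8)
net <- blockwiseModules(lnc_expr, power = chosen_power,
                        networkType = "unsigned",
                        minModuleSize = 30, mergeCutHeight = 0.2,
                        numericLabels = TRUE)
table(net$colors)   # module sizes
```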
Data mining based on bioinformatics can be used to explore important biological phenotypes associated with high-dimensional datasets. TCGA and GEO are databases with large-scale genomic analysis capabilities for assessing molecular biological signatures associated with OC. Recent developments in next-generation sequencing technologies have greatly expanded our understanding of lncRNAs, which are more abundant in both quantity and function than mRNAs. There have been some successful cases of molecular marker screening by bioinformatics. Bioinformatics analysis based on a large sample (such as samples from TCGA or GEO) can better avoid chance findings and generalizes more strongly. However, a single bioinformatics algorithm is often used in previous studies, which may lead to excessive data perturbation and poor reliability of the results [ 67 – 69 ]. Therefore, after dividing genes into modules based on clustering, the target modules need to be selected for regression analysis of the degree of correlation between genes and clinical features, thus improving the accuracy of screening prognostic lncRNAs for OC. We carried out comprehensive analyses based on bioinformatics in this study. LASSO Cox Regression analysis was carried out after WGCNA, which can improve the precision of screening prognostic MRLs and provide a basis for improving the prognosis of OC. Non-coding RNA regulates various physiological and pathological processes in the human body. It has been confirmed that reproductive disorders are related to non-coding RNA to some extent. Fertility-sparing measures, including hormonal treatment, hysteroscopic resection [ 70 ], and gamete cryopreservation [ 71 ], should be appropriately reserved in the treatment of early-stage or low-risk endometrial cancer (EC) [ 70 , 72 , 73 ], cervical cancer (CC) [ 74 ] and OC [ 75 ], and non-coding RNA-based diagnostics and therapeutics are valuable options with implications for the fertility-sparing process [ 76 ]. Some lncRNAs have also been reported to be involved in the anti-EC effects of progesterone, which may provide new insights into the fertility-sparing process [ 77 ]. As for MRLs involved in mitophagy, knockdown of lncRNA MALAT1 can reduce mitophagy in hepatocellular carcinoma [ 78 ]. The lack of methionine down-regulates LINC00079, thus activating mitophagy to inhibit cell proliferation in gastric cancer [ 79 ]. Overexpression of the peptide encoded by LINC-PINT decreases mitophagy in hepatocellular carcinoma in vitro and in vivo [ 80 ]. In general, studies on lncRNA regulation of mitophagy are still very rare in human cancers and are absent in OC. A single gene is often unable to predict the prognosis and treatment outcome of tumor patients accurately and stably, but a comprehensive score that integrates the contributions of multiple genes can overcome this shortcoming. An optimal combination of eight MRLs with prognostic value (log-rank test P < 0.05) was screened by integrating WGCNA and LASSO analyses in our study, including AC003075.4 (HR < 1), AC004540.5 (HR < 1), AC010761.10 (HR < 1), RP5-1120P11.1 (HR < 1), RP11-10A14.5 (HR < 1), USP30-AS1 (HR < 1), RP11-195F19.9 (HR > 1), and ZFAS1 (HR > 1). We also confirmed the differences in expression of the eight MRLs between normal and cancer tissues via Quantitative Real-time PCR experiments and TCGA. Hence, the differentially expressed and prognostic MRLs can be used as diagnostic markers to distinguish cancer from non-cancer, are expected to be involved in OC progression, and deserve further study.
RP5-1120P11.1 was identified to participate in proliferation, cycle regulation, and invasion of OC cells [ 81 ]. ZFAS1 has been reported to be involved in biological functions of OC. To be specific, ZFAS1 could regulate OC cell malignancy through the ZFAS1/miR-150-5p/Sp1 axis [ 82 ]; ZFAS1 could also regulate metastasis and platinum resistance [ 83 ] of OC via the let-7a/BCL-XL/S axis [ 84 ]. For other tumors, it was reported that the repression of mitophagy mediated by lncRNA USP30-AS1 could lead to glioblastoma tumorigenesis [ 85 ]. USP30-AS1 was proven to regulate the mass and protein expression of mitochondria, thus mediating mitophagy in glioblastoma cells [ 85 ]. Mengyue Chen et al. determined the molecular mechanisms of the USP30-AS1/miR-299-3p/PTP4A1 axis in CC malignancy [ 86 ]. In acute myeloid leukemia, USP30-AS1 may be a regulator of cancer cell survival [ 87 ]. ZFAS1 has been widely proved to be related to the development and progression of human cancers [ 88 ], including colorectal cancer [ 89 , 90 ], nasopharyngeal carcinoma [ 91 ], oral squamous cell carcinoma [ 92 ], pancreatic cancer [ 93 ], and so on. Therefore, the study of the biological relevance of MRLs to OC is still in its infancy, and biological experiments are needed to prove how these MRLs play a role in mitophagy in OC. Besides, our constructed ceRNA network initially described the regulatory mechanism of mitophagy, which needs experimental validation, such as a Dual-Luciferase Reporter Assay. Recognition of prognostic factors and characterization of the molecular classification have great significance for physiology, pathology, treatments, and clinical trials for gynecologic malignant tumors [ 94 , 95 ]. Accurate prognosis assessment and stratified management of OC patients are key to improving patient survival. Using multiple influencing factors to establish a prognostic model for OC has been attempted in the past decade with unsatisfactory results. Previous research has established prognostic indexes for patients with OC, including FIGO stage, residual lesion size, histological grade, and ascites [ 96 ]. However, that study failed to be generalized due to its small sample size and short follow-up time. Although some consensus has been reached in clinical practice on the clinicopathological parameters that affect the prognosis of OC patients [ 97 ], no recognized diagnostic guidelines have clearly defined them. Therefore, there is still a long way to go to construct and standardize a prognostic model of OC and popularize it in the clinic. Herein, we established the prognostic model based on the optimal combination of eight MRLs. The prognostic and independent prognostic value of the MRL-model was verified and validated in the TCGA and GEO databases using K-M (log-rank test P < 0.0001; log-rank test P = 0.012), Univariate Cox regression ( P < 0.001, HR = 1.960, 95% CI:1.520–2.528; P = 0.011, HR = 1.439, 95% CI:1.086–1.906), and Multivariate Cox regression analyses ( P < 0.001, HR = 1.795, 95% CI:1.371–2.350; P = 0.038, HR = 1.349, 95% CI:1.017–1.789), indicating its generalisability. Although there have been articles published on prognostic models based on mRNAs [ 67 – 69 , 98 ] or prognostic models established using clinicopathological factors [ 99 , 100 ] for OC, none of these prognostic models has been externally validated. There is controversy surrounding the predictive ability of the existing models to assess prognosis, and there are no uniform prognostic models in clinical practice.
It is worth mentioning that the MRL-model can also stratify OC patients with different prognostic risks in consideration of clinicopathological parameters, including ethnic and demographic factors (White or Nonwhite). Our study proposes a new prognostic MRL-model whose clinical applicability deserves further exploration. We also detected the expression of several MRLs from the prognostic MRL-model in tissues, which can lay a foundation for subsequent studies on lncRNA regulation of mitophagy to a certain extent. The Nomogram can integrate various clinicopathological parameters to evaluate the possibility of occurrence of clinical events, assign scores to and sum different influencing factors, and show them graphically [ 101 ]. In our study, we found that the prognostic MRL-model we established was superior to other clinicopathological parameters in predicting OC survival. Further, the actual OS probability and the predicted OS probability of the Nomogram were compared by discrimination and calibration, which indicated that the Nomogram can be used to quantify risk and assess prognosis in patients with OC by combining multiple factors. In addition, we compared the discrepancies between the two risk groups based on the MRL-model in functional pathways ( P < 0.01), TMB, somatic mutation features, immunity features (Wilcoxon test, P < 0.05), and chemotherapeutic drug sensitivity (Wilcoxon test, P < 0.05). Our results suggested that OC patients in the two risk groups differed at the level of immune cells. The tumor immune microenvironment is heterogeneous between patients and tumor types, and these differences in composition may suggest different barriers to anti-tumor immunity that affect a patient’s response to specific immunotherapies [ 102 ]. It is necessary to look for heterogeneity among OC patients and to stratify the population that would benefit most from immunotherapy. We also found that the TMB level was higher in the low-risk group than in the high-risk group (Wilcoxon test, P < 0.05), implying that patients with a lower risk score may benefit from immunotherapy. That OC patients with a low risk score may be more sensitive to immune checkpoint blockade therapy was further confirmed by the TIDE score (Wilcoxon test, P < 0.05) and subclass mapping (Bonferroni corrected P = 0.01). Cytoreduction surgeries along with neoadjuvant chemotherapy are modern therapeutic regimens for advanced-stage OC; nevertheless, their safety and efficacy still need to be explored [ 103 ]. Poly (ADP-ribose) polymerase inhibitors (PARP inhibitors) showed particular benefit for OC patients [ 104 ]. Since several patients develop resistance to chemotherapy and PARP inhibitors, we need to further identify effective patient subgroups. Drug susceptibility analysis was implemented, and the results showed that OC patients in the high-risk group were resistant to Paclitaxel ( P = 0.005) and Veliparib ( P = 0.002), while patients with a low risk score were resistant to Rucaparib ( P = 0.005), Axitinib ( P = 3.344e-07), Linsitinib ( P = 9.015e-07), Saracatinib ( P = 4.27e-5), Motesanib ( P = 0.022), Ponatinib ( P = 0.002), and Imatinib ( P = 4.047e-4). For patients who are not sensitive to particular anti-cancer drugs, the treatment regimen should be changed in time to improve the prognosis of patients to a greater extent. However, there is a lack of indicators that can suggest drug reactivity for clinical decision making.
Therefore, the prognostic MRL-model we established has a certain suggestive value for the drug sensitivity of OC patients, but further validation using clinical samples is needed. The mechanism by which MRLs directly or indirectly influence drug sensitivity is also worthy of further study with experimental validation. Our study hopes to provide feasible ideas for prognosis screening and precise treatment of OC patients by constructing a prognostic risk model. However, there are still some limitations of this study worth mentioning. This is a retrospective study, which inevitably has inherent bias, such as the selection bias that may occur when samples with incomplete information are excluded. The factors that affect the prognosis of OC are complex and diverse, and more clinical factors were not included in the study due to lack of data (such as ethnic and demographic factors). Hence, we are considering including more than 300 Chinese patients to validate the MRL-model. We established the prognostic MRL-model based on data from public databases, and the prognostic and independent value of the MRL-model was identified and validated in TCGA and the external GEO datasets. However, the sample size still needs to be expanded, and analyses based on mRNA expression profiles (such as drug sensitivity) could not be carried out on the GEO datasets due to lack of data. Hence, more clinical tissues are needed to verify the reliability of the prediction after follow-up. Although the expression levels of the eight optimal MRLs in the prognostic MRL-model were examined in the clinical samples we collected, the insufficient sample size and lack of clinical data need to be further addressed to make the evidence more solid. The specific mechanisms of the mitophagy-related lncRNAs we identified have not yet been explored by wet-lab experiments, which needs to be addressed in future studies.
Conclusion The comprehensive analysis of MRGs and MRLs revealed their roles in expression, prognosis, chemotherapy, immunotherapy and molecular mechanism of OC. By analyzing prognosis, functional pathways, mutation, immunity features, immunotherapy, and drug susceptibility, our findings demonstrated the molecular and clinical significance of the MRL-model, thus stratifying patients with high risk and improving clinical outcomes for OC patients. The MRL-based model we constructed and validated deserves further study for future clinical application after addressing the limitations, such as insufficient sample size, missing demographic factors, lack of external validation and wet experiments.
Background Both mitophagy and long non-coding RNAs (lncRNAs) play crucial roles in ovarian cancer (OC). We sought to explore the characteristics of mitophagy-related genes (MRGs) and mitophagy-related lncRNAs (MRLs) to facilitate treatment and prognosis of OC. Methods The processed data were extracted from public databases (TCGA, GTEx, GEO and GeneCards). The highly synergistic lncRNA modules and MRLs were identified using weighted gene co-expression network analysis. Using LASSO Cox regression analysis, the MRL-model was first established based on TCGA and then validated with four external GEO datasets. The independent prognostic value of the MRL-model was evaluated by Multivariate Cox regression analysis. Characteristics of functional pathways, somatic mutations, immunity features, and anti-tumor therapy related to the MRL-model were evaluated using abundant algorithms, such as GSEA, ssGSEA, GSVA, maftools, CIBERSORT, xCELL, MCPcounter, ESTIMATE, TIDE, pRRophetic and so on. Results We found 52 differentially expressed MRGs and 22 prognostic MRGs in OC. Enrichment analysis revealed that the MRGs were involved in mitophagy. Nine prognostic MRLs were identified, and an optimal combination of eight MRLs was screened to establish the MRL-model. The MRL-model stratified patients into high- and low-risk groups and remained a prognostic factor ( P < 0.05) with independent value ( P < 0.05) in TCGA and GEO. We observed that OC patients in the high-risk group also had unfavorable survival in consideration of clinicopathological parameters. The Nomogram was plotted to make the prediction results more intuitive and readable. The two risk groups were enriched in discrepant functional pathways (such as the Wnt signaling pathway) and immunity features. Besides, patients in the low-risk group may be more sensitive to immunotherapy ( P = 0.01). Several chemotherapeutic drugs (Paclitaxel, Veliparib, Rucaparib, Axitinib, Linsitinib, Saracatinib, Motesanib, Ponatinib, Imatinib and so on) showed different sensitivity between the two risk groups. The established ceRNA network indicated the underlying mechanisms of MRLs. Conclusions Our study revealed the roles of MRLs and the MRL-model in the expression, prognosis, chemotherapy, immunotherapy, and molecular mechanisms of OC. Our findings were able to stratify OC patients with high risk, unfavorable prognosis and different treatment sensitivity, thus helping to improve clinical outcomes for OC patients. Supplementary Information The online version contains supplementary material available at 10.1186/s12905-023-02864-5. Keywords
Supplementary Information
Abbreviations Ovarian cancer Long non-coding RNA Carbohydrate Antigen 125 Human Epididymis Protein 4 Risk of Ovarian Malignancy Algorithm The Cancer Genome Atlas Gene Expression Omnibus Mitophagy-related lncRNA Mitophagy-related gene Weighted gene co-expression network analysis Least absolute shrinkage and selection operator Overall survival Tumor mutational burden Mutation Annotation Format False discovery rate Kaplan–Meier Protein-protein interaction Gene set enrichment analysis Single-sample gene set enrichment analysis Gene set variation analysis Cell-type Identification By Estimating Relative Subsets Of RNA Transcripts Microenvironment Cell Populations-counter Estimation of STromal and Immune cells in MAlignant Tumor tissues using Expression data Immune checkpoint blockade Tumor immune dysfunction and exclusion Genomics of Drug Sensitivity in Cancer Competing endogenous RNA Hazard ratio Confidence interval Gene Ontology Progression free survival Endometrial cancer Cervical cancer Acknowledgements We gratefully acknowledge the data obtained from The Cancer Genome Atlas (TCGA) and Gene Expression Omnibus (GEO). Authors’ contributions Yang Sun conceived, designed, and supervised the study. Jianfeng Zheng performed data analysis and drafted the manuscript. Shan Jiang and Xuefen Lin helped to perform data analysis and revise the manuscript. Huihui Wang, Li Liu, and Xintong Cai collected the data and arranged the figures. Funding This project was funded by grants from the National Natural Science Foundation of China (82374081), the Natural Science Foundation of Fujian Province of China (2020 J011115), the Joint Funds for the Innovation of Science and Technology of Fujian Province (2021Y9209), and the Medicine Innovation Foundation of Fujian Province of China (2020CXB007). Availability of data and materials The RNA sequencing profiles and clinical information of ovary and ovarian cancer can be obtained from the UCSC-Xena ( https://xenabrowser.net/datapages/?dataset=TcgaTargetGtex_rsem_isoform_tpm&host=https%3A%2F%2Ftoil.xenahubs.net&removeHub=http%3A%2F%2F127.0.0.1%3A7222 ) platform. The RNA sequencing profiles and clinical information of ovarian cancer from the Gene Expression Omnibus (GEO) database are available at the following links: GSE19829 ( https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE19829 ), GSE26193 ( https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE26193 ), GSE30161 ( https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE30161 ), and GSE63885 ( https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE63885 ). The gene annotation information is available at the GENCODE database ( https://www.gencodegenes.org/ ). The mitophagy-related genes can be obtained from GeneCards ( https://www.genecards.org ). The R packages can be acquired or installed from CRAN ( https://cran.r-project.org/mirrors.html ), Bioconductor ( https://www.bioconductor.org/ ), GitHub ( https://github.com/GitHub ), or the native software R Studio ( https://posit.co/downloads/ ). Further inquiries can be directed to the corresponding author. Declarations Ethics approval and consent to participate The study was conducted in accordance with ethical standards, according to the Declaration of Helsinki, and according to national and international guidelines. TCGA is a public database. The patients involved in the database have obtained ethical approval. Users can download the relevant data for free for research and publish related articles. 
The studies involving human participants were reviewed and approved by Ethics Committee of Fujian Cancer Hospital. The patients/participants provided their written informed consent to participate in this study. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:48
BMC Womens Health. 2024 Jan 13; 24:37
oa_package/10/95/PMC10788026.tar.gz
PMC10788027
38218840
Background Globally, adolescents aged 10 to 19 years make up about 16% of the population [ 1 ]. This age group makes up a higher proportion of the population in sub-Saharan Africa and Nigeria, accounting for 23% and 22.3% respectively [ 1 , 2 ]. Adolescence is a period of rapid human development which includes physical, neurodevelopmental, psychological, and social changes, with implications for adolescents' peculiar health needs. The majority of serious health challenges in adulthood have roots in the period of adolescence, and about 70% of premature deaths among adults are mostly related to behaviours initiated during adolescence [ 1 ]. Sexual and Reproductive Health (SRH) issues, including sexually transmitted infections and unintended pregnancies, account for a significant proportion of the disease burden among adolescents [ 3 ]. A 12-year review of Nigerian adolescents' sexual practices and behaviours found that they engage in risky sexual behaviours consisting of early sexual debut, unsafe sexual practices, and concurrent multiple sexual partners [ 4 ]. There is growing evidence to show that the SRH of adolescents can be improved through comprehensive sexuality education (CSE) [ 5 ]. The CSE curriculum may also be known as "life skills," "family life," "HIV education" or "holistic sexuality education", reflecting differences in the emphasis of the curricula [ 6 ]. The policy of the Nigerian government at the national level identifies the pressing SRH needs of adolescents, and the government has acted on its policy commitments by implementing a near-nationwide CSE. Family Life and HIV Education (FLHE) is the form of CSE being integrated by the government into school curricula at the basic and secondary school levels in Nigeria, in addition to teachers' training institutions [ 7 ]; its main aim is to prevent HIV/AIDS through awareness and education. Given the limitations associated with the delivery of FLHE in Nigeria, which is mainly via didactic physical lectures, and the consequently low nationwide implementation and uptake, there is a need for more innovative and effective strategies to reach these adolescents [ 7 , 8 ]. mHealth is one such innovation, with the potential for wider acceptability among the adolescent population. mHealth is the use of "emerging mobile communications and network technologies for healthcare", and it has gained prominence in recent years [ 9 ]. Globally, mobile phone subscriptions have been increasing exponentially, especially in developing countries, where mobile subscriptions increased from 1.2 billion in 2005 to over 5.5 billion in 2015 [ 10 ]. A study done among 726 females between the ages of 12 and 30 years in six states in Nigeria showed that about 98.6% of them had access to a mobile phone [ 11 ]. Another study conducted among 249 in-school teenagers in Enugu State in southeast Nigeria found that about 69% of them had access to the internet via their phones, laptops and tablets [ 12 ]. Adolescents use the internet for their health-related needs, and the proportion who use this service is projected to increase in the next few years [ 13 , 14 ]. Many adolescents cannot discuss SRH issues with their parents due to poor communication and cultural norms on sexuality issues, and they would rather rely on information from the internet or their peers, who may have incorrect or inadequate information [ 15 ]. 
Within the context of the current gaps in the delivery of FLHE in Nigeria and the revolution in information access brought about by mHealth in the country, we developed and implemented an mHealth-based CSE curriculum over 12 weeks and assessed its effect on the SRH knowledge, attitude, and sexual behaviour of in-school adolescents in Ilorin, Nigeria.
Methods Trial design A two-arm Cluster Randomized Controlled Trial (cRCT) of 8 schools (clusters) with equal allocation was conducted. This number meets the minimum number of clusters required for cRCTs [ 16 ]. Individual students served as participants and outcome measures were taken at the individual participant level. SRH knowledge, attitude and sexual behaviour were assessed at baseline (T0), immediately after the 12-week intervention (T1), and 3 months after the intervention (T2). Study setting The study was conducted between 10th February 2020 and 28th August 2020 in secondary schools located in Ilorin, Kwara State, Nigeria. Ilorin is the capital city of Kwara State and has a youth literacy rate of 76.9% and a total gross school enrolment ratio of 50.13% (52.57% for males and 47.64% for females) [ 17 ]. One of the focus areas of the National School Health Programme of the Federal Ministry of Education is the provision of skill-based education, and FLHE is part of the skill-based curriculum [ 18 ]. Eligibility criteria for schools To be eligible to participate, schools had to be registered with the Kwara State Ministry of Education. Secondary school commences after 5–6 years of primary (elementary) school, and the system is divided into junior secondary school (years 1–3) and senior secondary school (years 4–6). Eligibility criteria for students In-school adolescents (aged 10–19 years) in senior secondary school who had access to the internet at least once a week throughout the study duration were eligible to participate. The students either owned internet-enabled devices or had access to such devices through their parents/guardians. Students who had cognitive or visual impairments were excluded from the trial. Group assignment and masking Eligible schools (n = 161) were stratified into public (n = 80) and private (n = 81) schools. Eight schools (4 public and 4 private) were selected using simple random sampling with computer-generated random numbers (Fig. 1 ). Following consent from the school principals, schools were then assigned to the study groups as clusters to avoid contamination bias. Researchers were not blinded to the assignment, but students were not informed of their school’s allocation. To further reduce contamination bias, schools formed the cluster units for allocation into study groups and we ensured they were at least 40 m apart. Sample size and sampling strategy The target sample size for this study was 1280 participants (640 per group) from 8 schools. The sample size was calculated using the superiority trial formula for continuous variables, which is used to verify that a new intervention is more effective than the usual intervention from a statistical/clinical point of view [ 19 ]. The statistical power was set at 0.80, alpha at 0.05, and the attrition rate at 10% among participants. Mean scores in the control and intervention groups were set at 16.61 and 17.47 based on a previous study [ 20 ]. A design effect of 2 was calculated, assuming an intraclass correlation of 0.05 and 21 individuals per cluster, to allow for a possible clustering effect (a worked sketch of this calculation is given after the description of the intervention below). Recruitment and consent/assent In the first stage, based on the student enrolment profile of each school (Additional file 1 ), proportional allocation was used to distribute the sample size across the selected schools according to their populations. 
In the second stage, disproportionate stratified sampling for between-strata analysis, which uses equal allocation to maximize the sample size of each stratum for comparative analysis, was employed [ 23 , 24 ]. In the third stage, using the nominal roll, which contains the list of students in each class, participants in each class were selected using a systematic sampling technique. A letter of introduction was obtained from the Department of Epidemiology and Community Health, University of Ilorin Teaching Hospital and the Kwara State Ministry of Education to the principals of the selected schools. Prior to the commencement of the study, multiple advocacy visits were paid to the principals, head teachers and others in authority in the selected schools. The visits involved discussions about the study objectives and the link to the government-approved FLHE curriculum, the data collection methodology and timeframe, the parental/guardian consent forms with the study information leaflet, the study questionnaire, etc. Following adequate briefing and approval to conduct the study in their school, the principal/head teacher or designated officers introduced the study and the research team to the students. Class-to-class interactive sessions about the study were conducted. Simplified study information leaflets were also distributed to the students, and there were opportunities for them to ask questions during the sessions. The selected respondents were given forms, which included the study information leaflet, to obtain signed written consent from one of their parents or a guardian at least 2 weeks before the commencement of data collection. In Nigeria, a minor is defined as one who is below the age of 18 years. In this study, those who were 18 years and above gave written consent to participate in the study by themselves. For those less than 18 years, only those who submitted a consent form signed by a parent or guardian and gave verbal assent to participate in the study (obtained on the first day of data collection and witnessed by the research team) were recruited into the study. Intervention Schools allocated to the intervention group were given access to the mHealth-based CSE, which contained 12 modules accessible online (link: http://flhe.noubug.com ) over a 12-week period (24th February 2020 to 23rd May 2020). The 12-module CSE was an adaptation of the approved FLHE curriculum for secondary schools in Nigeria and covered six themes: human development, personal skills, sexual health, relationships, sexual behaviour, and society and culture [ 22 ]. Topics across these six themes were covered over the 12-week period (Additional file 2 ). Each participant was given a username (not linked to any personal identifier) and password (which each user could change) to access the CSE curriculum online. Participants could ask questions anonymously via the website, and responses were given within 24 h. During this period, a total of 51 questions were asked by 47 respondents. The majority of the questions, 38 (80.9%), were related to the course contents, while 9 (19.1%) were requests for technical support in navigating the site. Participants in both public and private schools were not provided with free data to browse the internet for this study but used the internet data sources they already had before the study. This was done to assess the sustainability of in-school adolescents utilising mHealth-based interventions without the availability of incentives such as the provision of free data for browsing. 
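As a rough check of the sample-size arithmetic described under "Sample size and sampling strategy" above, the sketch below applies the two-mean superiority formula, the design effect for clustering (1 + (m − 1) × ICC, which equals 2 for 21 pupils per cluster and an ICC of 0.05), and a 10% attrition inflation. The common standard deviation is an assumption here (it comes from the referenced previous study and is not reported in this excerpt), so the result is only approximate.

```python
# Rough re-computation of the sample-size arithmetic described above (not the authors' script).
# The common standard deviation is an assumption; in the study it came from reference [20].
import math
from scipy.stats import norm

alpha, power = 0.05, 0.80
z_alpha, z_beta = norm.ppf(1 - alpha / 2), norm.ppf(power)  # approx. 1.96 and 0.84

mean_control, mean_intervention = 16.61, 17.47
delta = abs(mean_intervention - mean_control)               # difference to detect
sd = 3.7                                                    # assumed common SD (not reported here)

# Superiority formula for two means: per-group n under individual randomisation
n_individual = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2

# Design effect for clustering: DEFF = 1 + (m - 1) * ICC = 1 + 20 * 0.05 = 2
deff = 1 + (21 - 1) * 0.05

# Inflate for clustering and for 10% attrition
n_per_group = math.ceil(n_individual * deff / (1 - 0.10))
print(f"DEFF = {deff:.1f}, per-group target ≈ {n_per_group}")
```

With an assumed SD of about 3.7, this lands close to the reported target of 640 participants per group.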
Control The control was a 12-week school-as-usual condition. Participants in the control group were not exposed to the mHealth-based intervention. Instead, they were to continue with the usual classroom-based CSE according to the existing school curriculum during the intervention period. However, due to the coronavirus disease 2019 (COVID-19) pandemic, all schools in the study area were shut during this period, and this disrupted the regular educational routine of these students. Study instrument We used a questionnaire adapted from the World Health Organisation’s questionnaire for collecting data on SRH behaviours [ 23 ]. The questions were modified based on the modules covered in the intervention. Section 1 addressed the respondents’ sociodemographic characteristics, while Sections 2 to 5 assessed the respondents’ SRH knowledge, attitude and sexual behaviour. The questionnaire was pre-tested among students of two senior secondary schools (one public, one private) other than the selected schools (n = 128). The schools chosen for the pre-test were at least 10 km from the study and control schools. All study tools were tested for accuracy and content validity through the consultation of relevant literature on SRH education for adolescents. They were also reviewed for content and structural validity by academic experts, including eight Consultant Public Health Physicians with expertise in SRH. The internal reliability coefficient of the tool was 0.757, which is fairly high [ 24 ]. Pre-testing helped to determine the level of difficulty, complexity and logical sequence of the tool, to spot inconsistencies, and to standardise the language and style of questionnaire administration. Data collection Data were collected by six Research Assistants (RAs) who were trained on the content and administration of the research instrument. Three RAs were Medical Officers in Ilorin, and the other three were adolescents between the ages of 18 and 19 years. The baseline assessment (T0) was done using paper-based, pre-tested, interviewer-led, self-administered questionnaires in classrooms under the guidance of the Lead Researcher (OWA) and the RAs (10th–22nd February 2020). Attitudinal and behavioural change among adolescents takes substantial time [ 25 ]. Thus, to give ample time to measure the effect of the intervention among the respondents, all students in both groups were followed up to assess SRH knowledge, attitude towards SRH and sexual behaviour immediately after the intervention (T1) and 3 months after the end of the intervention (T2). Post-assessment data at T1 (24th May 2020–5th June 2020) and T2 (17th August 2020–28th August 2020) were collected using the pre-tested interviewer-led, self-administered questionnaire administered at baseline. Due to the COVID-19 pandemic, schools in the study area were temporarily shut down on 23rd March 2020, 4 weeks into the study. Following the announcement and before the schools were closed, a visit was made to all the schools to inform the respondents about the use of an online Google form for the collection of the post-intervention data. Thus, T1 and T2 data were collected online from both the control and intervention groups using a Google form. To maximize the response rate, text messages which included the link to the questionnaire were sent to all respondents. 
Furthermore, class representatives who were selected in each class of all the schools were urged to remind and encourage their peers to fill in the online questionnaire using their existing WhatsApp platforms. Following the final assessment at T2, all respondents (including those in the control group) were given access to the CSE via the website for 4 months. Outcome measures The primary outcomes were the participants’ mean scores in SRH knowledge, SRH attitude and RSB, measured at baseline, T1 and T2. Computation of composite scores Section 2 of the study questionnaire contained 65 multiple choice questions that covered the knowledge assessment of SRH. These questions covered knowledge on puberty and pubertal changes, reproductive health, sexually transmitted infections (STIs) including human immunodeficiency virus (HIV) and acquired immunodeficiency syndrome (AIDS), and modern contraceptives. Based on the core questionnaire measurement and with reference to similar research that adapted the same instrument, a score of 1 was assigned to every correct answer, while a score of 0 was assigned to every incorrect answer [ 20 ]. Thus, the maximum score for knowledge was 65 points, and the minimum score was 0 points. The scores were summed up and converted to 100%. Mean scores were calculated for both groups. Also, the individual scores were categorised into three groups: good (> 66%), fair (34.0–65.9%) and poor (< 34%), as categorised in a previous study that assessed the SRH knowledge of adolescents in Ibadan, Nigeria [ 26 ]. Section 3 of the questionnaire focused on the attitudinal assessment of SRH. It consisted of a list of 13 statements describing attitudinal disposition (such as their perception towards premarital sex, contraceptive use, and sex education) which were answered on a 5-point Likert scale (1—agree a lot, 2—agree, 3—indifferent, 4—disagree, 5—disagree a lot). Each item was rated 1 to 5, with total scores ranging from 13 to 65. Questions 44, 45 and 46 were reverse scored. The items were summed up and converted to 100%. Mean scores were obtained in both groups. Individual scores were also categorised into two groups: positive (≥ 50%) and negative (< 50%), as categorised in a previous study that assessed the attitude of adolescents towards SRH [ 27 ]. Section 4 of the questionnaire consisted of 15 questions regarding sexual behaviour. The first item asked respondents if they were sexually active. Risky sexual behaviour was defined as reporting one or more of the following: multiple sexual partners, exchange of material gifts or money for sex, inconsistent, incorrect or non-use of condoms at least once during sexual intercourse, getting infected by an STI, and sexual debut before the age of 18 years [ 28 ]. An affirmative answer to any of the questions was scored one; thus, the total score for risky sexual behaviour ranged from 0 to 5. Those who did not report any of the listed behaviours were categorised as practising protective sexual behaviour, while those who affirmed practising any of the listed behaviours were categorised as practising risky sexual behaviour. The prevalence of risky sexual behaviour was calculated, and mean scores were also calculated in both groups (a worked sketch of this scoring appears at the end of the Methods). Among respondents in the intervention group, uptake of CSE was scored using the number of modules completed at T1 and T2. Completion of each module was given a score of 1. The number of completed modules was summed up and converted to 100%. 
Mean scores were also calculated among respondents in private and public schools. In addition, we identified factors influencing the primary outcomes (SRH knowledge, attitude and sexual behaviour) using multivariate binary logistic regression. Statistical analysis Statistical analyses were performed using Stata Statistical Software Release 16 (StataCorp LLC, College Station, TX, 2019). Data visualisations were created using R-Studio Version 1.3.1073. All continuous data were first tested for normality using the Shapiro–Wilk and Shapiro–Francia tests. All continuous variables, including the scores for the dependent variables, were normally distributed, and thus the mean and standard deviation were used as summary statistics. Respondents’ baseline socio-demographic characteristics measured as categorical variables were summarized using frequencies and percentages and presented in tabular form. The between-group differences in the distribution of continuous data were visually inspected using box plots and statistically compared using the independent samples t-test. Pearson’s Chi-square and Fisher’s exact tests were used to assess whether there were statistically significant relationships between categorical predictor variables and categorical outcome variables. Predictor variables which yielded a p-value less than 0.25 during bivariate analysis were entered into the multivariate binary logistic regression analysis for the identification of factors influencing SRH knowledge, attitude, and sexual behaviour. In the multivariate model, factors associated with the dependent variables were evaluated using adjusted Odds Ratios (AORs) and 95% Confidence Intervals (CIs). For the AOR estimator, the Hosmer–Lemeshow test was used to determine the model’s goodness of fit, with the likelihood ratio test as the primary measure of model fit. The relative importance of individual predictors in the model was assessed using the t-statistic for each model parameter. The main analysis was intention-to-treat, based on the randomisation of clusters. Repeated measures ANOVA was used in assessing the effectiveness of the study intervention. Throughout the analysis, a p-value < 0.05 was considered statistically significant. Patient and public involvement statement This trial did not involve patients but rather in-school adolescents. The intervention was developed from an already existing programme targeting in-school adolescents. The choice of an mHealth intervention is premised on the interest in and high level of uptake of mobile technology by adolescents. In developing the intervention, a pilot study was conducted which enabled the incorporation of the inputs of adolescents as users of the intervention. Furthermore, adolescents were included among the data collectors.
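To make the scoring rules under "Computation of composite scores" concrete, the sketch below implements them as plain functions (not the study's actual scripts). The positions of the reverse-scored attitude items and the toy responses are assumptions for illustration only.

```python
# Minimal sketch of the composite-score computation described above (not the study's scripts).
# The real questionnaire has 65 knowledge items, 13 attitude items (questions 44-46
# reverse-scored) and 5 risky-behaviour indicators; positions and toy answers below are assumed.

def knowledge_percent(correct_flags):          # correct_flags: 65 values of 0/1
    return 100 * sum(correct_flags) / 65

def knowledge_category(pct):
    return "good" if pct > 66 else ("fair" if pct >= 34 else "poor")

def attitude_percent(likert_items, reverse_positions=(10, 11, 12)):
    # likert_items: 13 ratings on a 1-5 scale; reverse_positions marks the
    # reverse-scored items (questions 44-46 of the questionnaire, positions assumed here)
    scored = [6 - v if i in reverse_positions else v for i, v in enumerate(likert_items)]
    return 100 * sum(scored) / 65              # maximum possible total = 13 * 5 = 65

def attitude_category(pct):
    return "positive" if pct >= 50 else "negative"

def sexual_behaviour(risk_flags):              # risk_flags: 0/1 for the five RSB criteria
    return "risky" if sum(risk_flags) >= 1 else "protective"

# Toy respondent
knowledge = [1] * 40 + [0] * 25
attitude = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4, 2, 1, 2]
flags = [0, 0, 1, 0, 0]
print(round(knowledge_percent(knowledge), 1), knowledge_category(knowledge_percent(knowledge)))
print(round(attitude_percent(attitude), 1), attitude_category(attitude_percent(attitude)))
print(sexual_behaviour(flags))
```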
Results Characteristics of the participants More than half of the respondents in both groups were in the age range 15–17 years (Table 1 ). The proportions of males and females in both groups were almost equal. Public school enrolment accounted for 480 (75.0%) and 459 (71.7%) respondents in the control and intervention groups respectively. More than two-thirds of the respondents had been exposed to sexuality education at home, accounting for 475 (74.2%) and 468 (73.1%) in the control and intervention groups respectively. For all the aforementioned variables, there were no statistically significant differences between the two study groups, thereby confirming the group equivalence achieved through randomization. Baseline SRH knowledge, attitude, and RSB Most respondents (63.9%) had a fair knowledge of SRH, accounting for 401 (62.7%) and 417 (65.2%) in the control and intervention groups respectively (Table 2 ). The mean knowledge score was 62.67 (SD = 9.90) in the control group and 61.97 (SD = 10.35) in the intervention group (p value = 0.218). Furthermore, most of the respondents had a positive attitude towards SRH, accounting for 475 (74.2%) and 483 (75.5%) in the control and intervention groups respectively (p value = 0.607). The mean attitude score was 64.54 (SD = 20.48) in the control group and 75.46 (SD = 18.32) in the intervention group (p value = 0.063). The prevalence of RSB was found to be 9.7% in the control group and 9.2% in the intervention group at baseline. Among those who were sexually active, almost all practised risky sexual behaviour, accounting for 86.1% and 93.5% in the control and intervention groups respectively. Regarding the mean RSB score, the scores at baseline in the control and intervention groups were 4.69 (SD = 15.56) and 4.66 (SD = 14.42) respectively. There was no statistically significant difference in SRH knowledge, attitude and risky sexual behaviour between the two groups. Intervention effect In the intervention group, uptake rates (completion of at least 75% of the mHealth-based curriculum and 100% completion of the questionnaire) at T1 and T2 were 94.9% and 97.5% respectively. Table 3 presents the results of the repeated measures ANOVA assessing the effect of the intervention on knowledge, attitude and sexual behaviour in the study groups. Figure 2 provides a graphic illustration of these results. The analysis shows that in the control group there were no statistically significant changes in the mean SRH knowledge score, the mean SRH attitude score, and the mean RSB score (p = 0.073, 0.142 and 0.572 respectively) from T0 to T2. However, in the intervention group, there was a statistically significant main effect of the mHealth-based intervention on the mean knowledge score [F(1.431, 875.761) = 2117.252, p < 0.001, ηp2 = 0.776]. Bonferroni post hoc tests showed that the respondents had a significantly lower mean knowledge score at T0 compared with T1 (59.45 ± 13.99 versus 83.09 ± 12.98, respectively; p < 0.001). At T2, the mean knowledge score increased to 88.19 (SD = 9.45), which was significantly higher than the means at T0 (p < 0.001) and T1 (p < 0.001). Similarly, the intervention had a statistically significant effect on the mean attitude score [F(1.485, 908.885) = 148.493, p < 0.001, ηp2 = 0.195], and the Bonferroni post hoc tests showed that the respondents had a significantly lower mean attitude score at T0 compared with T1 (75.46 ± 18.32 versus 82.07 ± 20.46, respectively; p < 0.001). 
At T2, the mean attitude score increased to 89.61 ± 10.19, which was significantly higher than the means at T0 (p < 0.001) and T1 (p < 0.001). Nevertheless, although the mean RSB score declined from T0 to T1 in the intervention group (4.89 ± 15.87 versus 4.76 ± 15.50, respectively) and again at T2 (4.73 ± 15.48), this decline was not statistically significant [F(2, 1224) = 0.558, p = 0.572, ηp2 = 0.001]. Predictive analysis As shown in Table 4 , gender (p = 0.012) and type of school (p = 0.001) were significantly associated with knowledge. Age range was also found to be significantly associated with attitude (p = 0.003). Age, gender, class, and father’s employment type were statistically associated with RSB (p < 0.001; p < 0.001; p = 0.004; and p < 0.001 respectively). The multivariate analysis showed that females had higher odds of having good SRH knowledge compared with males (AOR = 2.5, 95% CI 1.04, 6.13). Male respondents had lower odds of practising protective sexual behaviour (AOR = 0.3, 95% CI 0.15, 0.55). Based on class, respondents in SS2 (AOR = 5.2, 95% CI 1.75, 15.33) and SS3 (AOR = 6.2, 95% CI 1.93, 20.06) had higher odds of practising protective sexual behaviour compared with those in SS1. Respondents whose fathers were self-employed had higher odds (AOR = 3.0, 95% CI 1.12, 8.01) of practising protective sexual behaviour. Attrition At T1 and T2, the attrition rate in the control group was 3% and 5% respectively, whereas in the intervention group it was 2.5% and 4.2% respectively. The total number of respondents at T2 was 1221 (attrition rate of 4.6%).
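As a transparency check on the reported effect sizes, partial eta squared can be recovered from each F statistic and its (Greenhouse–Geisser-corrected) degrees of freedom as ηp2 = F·df1 / (F·df1 + df2); the short sketch below reproduces the reported values of 0.776, 0.195 and 0.001 from the figures given above.

```python
# Arithmetic check (not the study's code): partial eta squared can be recovered from an
# F statistic and its degrees of freedom as eta_p^2 = F * df1 / (F * df1 + df2).
def partial_eta_squared(f_value, df1, df2):
    return f_value * df1 / (f_value * df1 + df2)

reported = {
    "knowledge": (2117.252, 1.431, 875.761),  # reported eta_p^2 = 0.776
    "attitude": (148.493, 1.485, 908.885),    # reported eta_p^2 = 0.195
    "RSB": (0.558, 2.0, 1224.0),              # reported eta_p^2 = 0.001
}
for outcome, (f, df1, df2) in reported.items():
    print(f"{outcome}: eta_p^2 = {partial_eta_squared(f, df1, df2):.3f}")
```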
Discussion To the best of our knowledge, this is the first cRCT to assess the effect of an mHealth-based CSE on the SRH knowledge, attitude, and practice of RSB among in-school adolescents in Ilorin, Nigeria. The study was conducted as a proof of concept to promote the national uptake of the FLHE curriculum using mHealth. At baseline, the respondents in the two study groups had comparable sociodemographic characteristics and, on average, their baseline SRH knowledge, attitude and RSB profiles did not differ significantly. This suggests that the randomization achieved equivalence in both study groups. More than half of the respondents in both groups were in the middle adolescence stage (15 to 17 years). Similar studies have shown that most in-school adolescents in senior secondary schools in Nigeria were in the middle/late adolescence stage and were unmarried [ 29 , 30 ]. This stage of adolescence is typified by advanced development of secondary sexual characteristics [ 31 ]. During this period, adolescents crave identification to affirm their self-image, are preoccupied with fantasies and idealism, and, in terms of sexuality, are testing their ability to attract the opposite sex [ 31 ]. At baseline, more than three-fifths and one-third of respondents in the study groups had fair and good SRH knowledge respectively. The survey showed that some adolescents had misconceptions regarding the reproductive system and sexual maturity. Similar studies have also shown that, despite an overall good knowledge of reproductive health, misconceptions such as the belief that a girl needs to have sex multiple times before she can get pregnant persist [ 29 , 32 , 33 ]. These misconceptions could put adolescents at risk of unwanted pregnancies and STIs. Generally, knowledge of STIs, including HIV/AIDS, was good in the current study. However, less than half of the respondents were aware of hepatitis B, chlamydia, and genital herpes as STIs. Previous studies have also found knowledge of HIV to be consistently higher than that of other STIs among adolescents in sub-Saharan Africa [ 34 , 35 ]. HIV/AIDS receives relatively greater attention, which may be due to its perceived risk compared with other STIs in Nigeria. Numerous programmes focus on HIV/AIDS among adolescents, including a National HIV Strategy for Adolescents and Young People [ 36 ]. This may suggest that adolescents are less concerned about STIs other than HIV/AIDS, which can equally put their reproductive health at risk. Good knowledge can lead to a positive attitude, which can, in turn, lead to less practice of RSB; the health belief model hinges on this relationship [ 37 ]. About three-quarters of the respondents had a positive attitude towards SRH, and the majority of the students expressed conservative attitudes towards premarital sex. However, the notions of those who had a negative attitude should be addressed. Almost a third of the respondents in this study had the perception that having multiple sexual partners is a norm. Only about two-thirds of the respondents in this study thought contraceptives were important in preventing STIs, and another one-third of them did not see the need for them or their partners to use a condom. Studies in Nigeria, Ghana and Uganda have shown that a significant proportion of adolescents hold this perception [ 38 – 40 ]. These findings suggest that a significant number of adolescents have a negative SRH attitude, which may be detrimental to their reproductive health. This study found that about one-tenth of the respondents were sexually active. 
Of these, the prevalence of risky sexual behaviour (i.e. reported multiple sexual partners, exchange of material gift/money for sex, inconsistent/incorrect/non-use use of condoms at least once, infection by an STI, and sexual debut before the age of 18 years) was found to be more than four-fifths in both study groups. Findings from northern Nigeria and Cape Coast Metropolis Ghana showed that 10% and 13.8% respectively of in-school adolescents were sexually active [ 29 , 41 ]. However, many studies have reported a significantly higher proportion in other parts of the country and Africa, ranging from 24.7% to 73.8% [ 34 , 42 – 46 ]. Scientific evidence has shown a high and increasing rate of sexual activity among adolescents in Nigeria, and an early sexual debut is becoming a concern, particularly among females [ 42 , 47 , 48 ]. Early sexual debut among females has been associated with a high rate of STIs including HIV/AIDS and unintended pregnancies—the latter of which could in turn lead to unsafe abortions, high maternal mortality and infant mortality [ 4 ]. Intra-country and inter countries disparities are not unexpected, as these could be linked to rapid urbanization, sociocultural and socioeconomic factors [ 43 , 46 ]. The higher rates of sexual activity among adolescents in other settings may be linked to differences in data collection methods, relatively higher rates of rapid urbanisation and the cultural differences in these cities compared to Ilorin. Intervention effect The advancement in information technology can be leveraged to improve SRH knowledge. In the current study, the level of completion of the mHealth-based CSE curriculum was high. Within 12 weeks, more than two-thirds of the respondents had completed the course. Within 24 weeks, more than four-fifths had completed the course. The high level of uptake of the curriculum suggests the feasibility of using mHealth-based interventions for SRH interventions among adolescents. Post intervention (T 1 and T 2 ), there was no statistically significant difference in knowledge, attitude, and sexual behaviour of respondents in the control group. In the intervention group, however, there was a statistically significant increase in the proportion of respondents who had good knowledge of SRH and an increase in mean knowledge score from baseline to T 1 and T 2 among respondents in the intervention group. Also, there was a statistically significant increase in the proportion of respondents who had positive attitude at T 1 and T 2 and an increase in mean attitude score in the intervention group. However, there was no statistically significant difference in proportion of respondents who practised RSB among respondents in the intervention group at T 1 and T 2 , and in the mean risky sexual behaviour score compared to baseline in the intervention group. As put forward by many authors, this finding highlights the importance of CSE in improving adolescents' knowledge and attitude towards SRH [ 30 , 49 – 51 ]. In 2007, an internet-based and mobile helpline sexual health information platform was implemented in Nigeria [ 52 ]. In 2012, when the programme was evaluated, it was found to be 10–20% more effective as a teaching method than classroom-based teaching of CSE. These findings suggest that mHealth-based interventions are effective in improving the knowledge and attitude of adolescents. 
Given the current global reality, as seen during the COVID-19 pandemic, online learning plays and will continue to play a significant role in educational institutions. Educational and health institutions in Nigeria should therefore consider implementing mHealth-based strategies to reach adolescents. There was no statistically significant decrease in the prevalence of RSB in either the control or the intervention group. In contrast, a quasi-experimental study in which in-school adolescents in Ilorin were exposed to a sex education programme found that, post-intervention (immediately after the 8-week programme), those in the intervention group reported fewer at-risk sexual behaviours compared with the control group [ 49 ]. The disparity in findings might be due to the differences in study design and sample size: the current study was a cRCT with 1280 respondents, while the other study was a quasi-experimental study with 24 participants. Furthermore, the findings from this study regarding the effect of the intervention on RSB are not unexpected, given the short interval between implementing the intervention and evaluating behavioural change. Behavioural change among adolescents is not straightforward; it is a spiral process that usually requires ample time and motivation before healthy sexual behaviours are adopted [ 25 ]. However, based on the constructs of the health belief model, good knowledge and a positive attitude are steps in the right direction towards reducing the practice of risky sexual behaviour [ 53 ]. This study showed that being female was a positive predictor of good SRH knowledge. This is consistent with findings from Iran, where females were found to have better knowledge of SRH than their male counterparts [ 54 ]. However, this finding is in contrast to a report from Nicaragua, Central America, where adolescent males were more likely to have better knowledge of SRH because they are more exposed to the media and education [ 55 ]. A review of gender differences in academic performance in the global north and global south found that girls predominantly outperform boys across these settings [ 56 ]. In addition, there has been a significant increase in girls’ enrolment into schools in Nigeria [ 57 ]. These reasons may account for the reported differences. Being male was found to be a positive predictor of RSB, while being in a more senior class and having a self-employed father were negative predictors. Similar studies have shown that males are more likely to practise RSB compared with females [ 58 , 59 ]. This may be related to the notion that boys are more adventurous and more likely to take risks than girls [ 60 ]. Respondents in more senior classes are more likely to be aware of the consequences of RSB from lessons taught in class, which might explain the lower practice of RSB in this group compared with those in junior classes. A study conducted in Cameroon corroborates the finding that adolescents whose fathers are unemployed are more likely to practise RSB [ 61 ]. Transactional sex has been identified as a means of survival for adolescents from low socioeconomic backgrounds, particularly among females [ 40 , 62 ]. Adolescents whose fathers are unemployed are likely to have financial constraints and may practise RSB for financial gains.
Conclusion This study has contributed to the body of knowledge on the effect of mHealth-based CSE among in-school adolescents. A structured mHealth-based intervention delivered over a period of 12 weeks was found to have improved the SRH knowledge and increased positive attitude towards SRH among in-school adolescents who took the course. Such an intervention could help bridge the SRH knowledge and attitude gap among in-school adolescents. Our study findings also suggest that in large scale programmes, males should also be targeted in the implementation of SRH interventions for adolescents. They are less likely to have good SRH knowledge and more likely to practice RSB. Age-appropriate sexuality education curriculum should be implemented as early as possible so that younger adolescents in junior classes can benefit from SRH knowledge which will help them practice protective sexual behaviour. Also, the association between the practice of RSB and unemployment of their fathers, shows the effect of multi-causal factors including socioeconomic factors on the sexual behaviour of adolescents. This study suggests that an improved standard of living in the society especially among parents of adolescents could help reduce risky sexual behaviour among in-school adolescents.
Background The implementation of the country-wide comprehensive sexuality education (CSE) curriculum among in-school adolescents remains abysmally low and mHealth-based interventions are promising. We assessed the effect of a mHealth-based CSE on the sexual and reproductive health (SRH) knowledge, attitude and behaviour of in-school adolescents in Ilorin, northcentral Nigeria. Methods Using schools as clusters, 1280 in-school adolescents were randomised into intervention and control groups. Data was collected at baseline (T 0 ), immediately after the intervention (T 1 ) and 3 months afterwards (T 2 ) on SRH knowledge, attitude and practice of risky sexual behaviour (RSB). Data analysis included test of associations using Chi-square, independent t-test and repeated measures ANOVA. Predictors were identified using binary logistic regression. Results In the intervention group, there was a statistically significant main effect on mean knowledge score (F = 2117.252, p = < 0.001) and mean attitude score (F = 148.493, p = < 0.001) from T 0 to T 2 compared to the control group which showed no statistically significant main effects in knowledge (p = 0.073), attitude (p = 0.142) and RSB (p = 0.142). Though the mean RSB score declined from T 0 to T 2 , this effect was not statistically significant (F = 0.558, p = 0.572). Post-intervention, being female was a positive predictor of good SRH knowledge; being male was a positive predictor of RSB while being in a higher-class level was a negative predictor of RSB. Conclusion The mHealth-based CSE was effective in improving SRH knowledge and attitude among in-school adolescents. This strategy should be strengthened to bridge the SRH knowledge and attitude gap among in-school adolescents. Trial registration Retrospectively registered on the Pan African Clinical Trial Registry (pactr.samrc.ac.za) on 19 October 2023. Identification number: PACTR202310485136014 Supplementary Information The online version contains supplementary material available at 10.1186/s12978-023-01735-4. Plain Language Summary In Nigeria, the implementation of a nationwide sex education programme for adolescents going to schools is below expectation but using mobile health (mHealth) interventions could help. In this study, we looked at how a mHealth-based sex education programme affected the sexual and reproductive health (SRH) knowledge, attitude, and behaviour of in-school adolescents in Ilorin, Nigeria. We divided 1280 students into two groups, one received the mHealth-based intervention and the other did not receive it. We collected data before the intervention, right after it, and 3 months later to see any changes in SRH knowledge, attitudes, and risky sexual behaviours. We used various statistical tests to analyze the data and find patterns. The results showed that the group that received the mHealth intervention had significant improvements in their knowledge and attitudes about SRH from the start of the study to 3 months after the intervention. However, the control group, which didn't get the intervention, didn't show these improvements significantly. While the risky sexual behaviour score decreased slightly in the intervention group, this change was not significant. After the intervention, we found that being female was associated with better SRH knowledge, while being male was linked to more risky sexual behaviours. Also, being in a higher class level was associated with low risky behaviour. 
In conclusion, using mHealth for sex education helped improve the SRH knowledge and attitudes of students. This approach could be scaled to fill the gap in SRH knowledge and attitudes among adolescents in schools. Supplementary Information The online version contains supplementary material available at 10.1186/s12978-023-01735-4. Keywords
Implications for policy and practice Stakeholders in the Federal and State Ministries of Education are urged to implement an mHealth-based FLHE curriculum in the country. This mode of delivery has the potential to scale-up the country-wide coverage of the curriculum which is currently low due to the associated challenges with the current classroom-based mode of delivery. However, equity considerations should be made in the implementation of this approach. Provision should be made to students without the required technology to ensure equitable access to the curriculum. Programme managers in governmental and non-governmental organisations are advised to be intentional in targeting adolescent males during the planning and implementation of SRH programmes. Males were found to be more likely to have poor SRH knowledge and practise risky sexual behaviour compared to females. Targeted programmes could help improve the SRH knowledge of males, and also reduce their practice of risky sexual behaviour. Policymakers and implementers in the educational sector are advised to implement age-appropriate comprehensive sexuality education early in secondary schools. This could address the poor attitude towards SRH found among respondents in lower senior secondary school classes. These stakeholders are also urged to consider the socioeconomic factors of adolescents and their families. The determinants of sexual behaviour are multi-causal, and they include factors beyond the adolescents. This could help address the higher prevalence of risky sexual behaviour among adolescents with unemployed fathers. Study limitations The self-reported nature and sensitivity of the questions asked could have led to respondents under-reporting their sexual behaviours. This was minimised by continuously reassuring the respondents of the confidentiality of their responses and persuading them to be as sincere as possible. Furthermore, during the implementation of the study, students in the control group were expected to continue receiving comprehensive sexuality education as part of the existing curriculum. However, schools were shut down due to the COVID-19 pandemic and this disrupted the regular educational routine of students. This might have had an effect on their performance in the post-intervention evaluation. Post-intervention data from the control and intervention groups were analysed separately to reduce the effect of this limitation. The post intervention effect was measured immediately after the intervention and 3 months after the intervention. Usually, 3 months follow-up period is not long enough to confidently report a sustained behavioural impact of the intervention. Due to the nature of the study, only students who had access to the internet participated in the study. Therefore, findings may not be representative of students without internet access and out-of-school adolescents. Despite these limitations, however, the study provides useful information for policymakers and stakeholders involved in adolescent SRH in Nigeria. Future studies could consider (1) a study which exposes in-school adolescents to mHealth-based CSE over a longer period of time, to assess the long-term effects of this intervention e.g. 6 months or 12 months (2) a study that involves out-of-school adolescents. Supplementary Information
Abbreviations Acquired immunodeficiency syndrome Risky sexual behaviour Confidence interval Coronavirus disease 2019 Cluster Randomized Controlled Trial Comprehensive sexuality education Family Life and HIV Education Human immunodeficiency virus Sexual and reproductive health Sexually transmitted infections Assessment at baseline Assessment immediately after the 12-week intervention Assessment 3 months after the intervention Acknowledgements We would like to acknowledge trainers in the Department of Epidemiology and Community Health, University of Ilorin Teaching Hospital, Nigeria, who made valuable contributions to this work: Prof. G.K. Osagbemi, Prof. A.G. Salaudeen, Prof. I.S. Abdulraheem, Prof. S.A. Aderibigbe, Dr. H.A. Ameen, Prof. M.M.B. Uthman, Prof. M.J. Saka and Dr. S.T. Abdulsalam. We also thank Mrs Yemisi Oyetunde and other staff members of the Kwara State Ministry of Education for their support. We also appreciate the cooperation of the Principals and other staff members of the participating schools. We appreciate Akinwole Akinpelu and Emmanuel Agwasim for the Information and Communication Technology support provided during the implementation of this study. We also thank Mr Adegboye and Mrs Aworinde for their support during data collection and analysis. We are grateful to all the research participants who used their time and resources during the course of this study. Author contributions OWA contributed to the conceptualization, data curation, formal analysis, investigation, methodology, project administration, resources, visualization and writing (original draft); MM contributed to the data curation, formal analysis, visualization and writing (review and editing); EUI contributed to the methodology, validation and writing (review and editing); KE contributed to writing (review and editing) and visualisation; OAB contributed to the methodology, supervision, validation and writing (review and editing); OM contributed to the methodology, supervision, validation and writing (review and editing); and TMA contributed to the methodology, supervision, validation and writing (review and editing). All authors read and approved the final manuscript. Funding This study was self-funded. It is drawn from the dissertation submitted to the National Postgraduate Medical College of Nigeria as part of the requirements for the award of the Fellowship of the College in the Faculty of Public Health and Community Medicine. Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate Ethical approval (Ref: ERC PAN/2019/07/1928) was obtained from the Ethical Review Committee of the University of Ilorin Teaching Hospital prior to the start of the study. The trial was also registered in the Nigeria Clinical Trial Registry (14911136). A letter of introduction was obtained from the Department of Epidemiology and Community Health, University of Ilorin Teaching Hospital, and the Kwara State Ministry of Education to the Principals of the schools selected. The trial was also retrospectively registered on the Pan African Clinical Trial Registry (PACTR202310485136014). Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-15 23:43:48
Reprod Health. 2024 Jan 13; 21:6
oa_package/6e/6f/PMC10788027.tar.gz
PMC10788028
38218876
Introduction Renal cell cancer (RCC) accounts for approximately 2% of cancer diagnoses worldwide [ 1 ]. RCCs have recently been re-classified pathologically with molecular-driven criteria as well as cytoplasmic feature-based diagnoses [ 2 ]. Non-metastatic RCC has survival rates ranging from 40 to 91% depending on the subtype [ 3 ]. However, these rates decrease to less than 20% in the case of distant metastases [ 4 ]. Based on growing evidence on carcinogenesis, tumor-promoting inflammation, as well as genomic instability and mutability, have been suggested to be enabling characteristics of cancer [ 5 ]. Inflammatory cells have been shown to accelerate tumoral genetic evolution towards malignancy via actively mutagenic reactive oxygen species [ 6 ]. Inflammation has also been suggested to produce molecules, including growth factors, proangiogenic factors and extracellular matrix-modifying enzymes, within the tumoral microenvironment, thereby facilitating angiogenesis, invasion, and metastasis [ 7 , 8 ]. The Systemic Inflammatory Response Index (SIRI) and the systemic immune-inflammation index (SII) are markers of such an inflammatory, tumor-supportive microenvironment. SIRI is based on the counts of neutrophils, monocytes and lymphocytes and is calculated as [monocyte count × neutrophil count / lymphocyte count]. SII is based on the counts of lymphocytes, neutrophils and platelets and is calculated as [platelet count × neutrophil count / lymphocyte count] [ 9 ]. SII has been suggested to be an independent predictor of overall survival and cancer-specific survival in patients with non-metastatic RCC [ 10 ]. In addition, both SII and SIRI have been associated with advanced stages and larger tumors in localized renal cancers [ 11 ]. In this study, we aimed to evaluate the predictive value of SIRI and SII for metastasis in RCC.
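Since SIRI and SII are defined above as simple ratios of peripheral blood counts, they are straightforward to compute; the sketch below is a direct transcription of those formulas (illustrative only, with made-up example counts), not clinical software.

```python
# Direct transcription of the index definitions given above (illustrative, not clinical software).
# Counts are absolute peripheral-blood counts (e.g. 10^9 cells/L); the example values are made up.
def siri(neutrophils, monocytes, lymphocytes):
    """Systemic Inflammatory Response Index = monocyte count * neutrophil count / lymphocyte count."""
    return monocytes * neutrophils / lymphocytes

def sii(neutrophils, platelets, lymphocytes):
    """Systemic immune-inflammation index = platelet count * neutrophil count / lymphocyte count."""
    return platelets * neutrophils / lymphocytes

print(siri(neutrophils=4.5, monocytes=0.6, lymphocytes=1.8))  # 1.5
print(sii(neutrophils=4.5, platelets=280, lymphocytes=1.8))   # 700.0
```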
Materials and methods Seventy-two patients who were diagnosed with RCC and underwent surgery in the Urology Clinic and Medical Oncology Clinic of Istanbul Training and Research Hospital between July 2022 and January 2023, or who were included in treatment planning in the medical oncology unit, were included in the study. Male and female patients older than 18 years of age who had preoperative laboratory tests from which the inflammatory indices could be calculated were included. Information related to the patients was obtained from their medical records in the hospital system. Patients were diagnosed with renal cell carcinoma through surgery or biopsy. The diagnosis of metastasis was determined by lymph node dissection or FDG-PET imaging. Fifty-one of the patients were male and 21 were female. Twenty-two of the patients had metastatic RCC and 50 had non-metastatic RCC. Patients older than 18 years of age who underwent radical or partial nephrectomy for a kidney tumor and were diagnosed with RCC on pathology, or who did not undergo surgery but were diagnosed with RCC on biopsy, and whose hematological parameters had been measured within the last week, were included in this study. Patients with another malignancy, patients who were diagnosed with or suspected of infection within 1 week before admission, patients who received steroid therapy or immunosuppressive therapy at the time of admission, patients with known autoimmune disease, and patients who received a blood transfusion within the last month were excluded from the study. Laboratory results, histopathological findings, and tumor stages and grades of the patients included in the study were recorded. After the metastasis status of the patients whose histopathological findings were recorded was confirmed by imaging, the metastatic and non-metastatic groups were compared with each other. Using the laboratory results of these two groups, inflammatory indices such as SIRI and SII were calculated and their performance in identifying metastasis was compared. Statistical analyses were performed with SPSS statistics software (IBM Corp. Released 2011. IBM SPSS Statistics for Windows, Version 20.0. Armonk, NY: IBM Corp.). Comparisons of groups were done with the Chi-square test and Student’s t test where appropriate. The mean values were presented with their 95% Confidence intervals. Receiver operating characteristic (ROC) curve analysis was used to illustrate the sensitivity and specificity of the SIRI and SII values. Statistical significance was set at p < 0.05. All methods were carried out in accordance with relevant guidelines and regulations. All experimental protocols were approved by the university ethics committee. Informed consent was obtained from all subjects and/or their legal guardians.
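As an illustration of the group comparisons described in this section (Student's t test for continuous variables and the Chi-square test for categorical variables), the following sketch uses simulated placeholder values rather than the actual patient data; only the group sizes mirror the 50 non-metastatic and 22 metastatic patients, and the numbers themselves are invented.

```python
# Illustration of the group comparisons described above (not the study's SPSS syntax).
# All values are simulated placeholders; only the group sizes (50 vs 22) mirror the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sii_non_metastatic = rng.normal(870, 300, size=50)   # hypothetical SII values
sii_metastatic = rng.normal(1537, 300, size=22)

t_stat, p_value = stats.ttest_ind(sii_metastatic, sii_non_metastatic)
print(f"Student's t test: t = {t_stat:.2f}, p = {p_value:.4g}")

# Chi-square test on a 2x2 table, e.g. sex (rows) by metastasis status (columns);
# the split below is invented, consistent only with the overall totals.
table = np.array([[38, 14],   # male: non-metastatic, metastatic
                  [12, 8]])   # female: non-metastatic, metastatic
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"Chi-square test: chi2 = {chi2:.2f}, p = {p:.4g}")
```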
Results A total of 72 patients who met the inclusion and exclusion criteria during the study period were included in the study, as stated in the methods. Twenty (28%) of the patients were female and the remaining 52 (72%) were male. The mean age of the patients was 60.25 ± 11.72 years. The mean age for women was 62.35 ± 13.84 years, and the mean age for men was 59.44 ± 10.84 years ( p = 0.349). The mean age of metastatic patients was 60.60 ± 12.46 years, while the mean age of non-metastatic patients was 60.12 ± 11.55 years ( p > 0.05). Twenty-two (31%) of the patients had metastatic RCC and 50 (69%) had non-metastatic RCC. The mean body mass index (BMI) of the patients was 28.10 ± 6.18. The mean BMI was 30.38 ± 9.27 in women and 27.10 ± 3.93 in men ( p = 0.046). The mean BMI of the metastatic patients was 30.17 ± 10.10, compared with 27.49 ± 4.41 in non-metastatic patients ( p > 0.05). At least one comorbid disease was present in 66% of the patients. Regarding the frequency of comorbid diseases, 44% had hypertension, 20% had diabetes mellitus, 17% had cardiovascular disease, 9% had chronic obstructive pulmonary disease and 10% had other diseases. Tumors were located unilaterally in all patients included in the study, and right and left locations occurred at similar rates (57% right and 43% left; p = 0.239). The diagnosis was made by percutaneous renal biopsy in 8 patients (11%) and by surgical excision (radical or partial nephrectomy) in 64 cases (89%). As the surgical approach for surgical excision, laparoscopic surgery was used in 89% (57 patients) and open surgery in 11% (7 patients). Because there were signs of metastases on the imaging performed at the time of diagnosis, a biopsy was performed on these 8 patients for verification, and RCC was diagnosed on biopsy. In addition, only two patients were diagnosed with RCC by biopsy of their metastatic mass. Radical nephrectomy was performed in 38% of patients ( n = 24) and partial nephrectomy in the remaining 62% ( n = 40). Renal ischemia was applied in 75% of the patients who underwent partial nephrectomy and not in the remaining 25%. Simultaneous lymph node dissection was performed in 9% (6/64) of the patients who underwent surgical excision. A surgical complication (pleural injury) developed in 1.5% of the patients. The histological subtypes of the RCC specimens in our study consisted of 72% clear cell, 17% chromophobe cell, 7% papillary type and 4% other subtypes. The T stages of the patients in our study consisted of 29% pT1a, 33% pT1b, 6% pT2a and 32% pT3a. When the metastatic and non-metastatic groups were compared, statistically significant differences were observed between the two groups in terms of lymphocyte and platelet counts ( p < 0.01) (Table 1 ). Statistically significant differences were also observed between the two groups in terms of SIRI and SII values ( p < 0.05 for SIRI, p < 0.001 for SII) (Table 2 ). Median SIRI values for the non-metastatic and metastatic groups were 1.26 and 2.1, respectively (mean ± standard deviation 1.76 ± 1.9 and 3.12 ± 4.22, respectively; p < 0.05). Median SII values for the non-metastatic and metastatic groups were 566 and 1434, respectively (mean ± standard deviation 870 ± 1019 and 1537 ± 917, respectively; p < 0.001). 
The area under the curve for the detection of metastasis was 0.809 for SII and 0.737 for SIRI. The ROC curves are shown in Fig. 1 . The corresponding cut-off values, specificity, and sensitivity are shown in Table 3 .
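To illustrate how an AUC and the cut-off/sensitivity/specificity trade-off reported in Table 3 can be derived, the following sketch runs the same kind of ROC analysis on simulated index values (not the study data). Selecting the cut-off by maximizing the Youden index is one common convention; the exact criterion used by the authors is not stated in this excerpt.

```python
# Sketch of a ROC analysis on simulated SII values (not the study data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
sii_values = np.concatenate([rng.lognormal(6.3, 0.8, 50),   # non-metastatic (label 0)
                             rng.lognormal(7.2, 0.6, 22)])  # metastatic (label 1)
labels = np.concatenate([np.zeros(50), np.ones(22)])

auc = roc_auc_score(labels, sii_values)
fpr, tpr, thresholds = roc_curve(labels, sii_values)

# One common cut-off choice: maximize the Youden index (sensitivity + specificity - 1).
youden = tpr - fpr
best = np.argmax(youden)
print(f"AUC = {auc:.3f}, cut-off = {thresholds[best]:.0f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```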
Discussion About one-third of patients with RCC have metastatic disease at the time of presentation. RCC is one of the cancers in which the immune system is most activated [ 12 ]. To better evaluate patient outcomes, it is necessary to identify reliable predictive factors for prognosis and metastasis. In this study, among the blood parameters measured at the time of admission, platelet count, lymphocyte count, SIRI and SII were found to be independent predictive factors of metastasis. Increasing evidence suggests a complex interaction between leukocytes and various types of cancer, including RCC. SIRI, an indicator of inflammation based mainly on peripheral neutrophil, lymphocyte and monocyte counts, was first suggested to be a reliable prognostic factor in a 2016 study by Qi et al. that included 177 patients with pancreatic cancer [ 13 ]. Nebojsa et al. showed that SIRI is an independent prognostic factor for the presence of lymphovascular invasion (LVI) in a study of 491 patients who underwent cystectomy for BC [ 14 ]. This suggests that a high SIRI may contribute to metastasis through LVI, a mechanism that may also apply to our findings. Hu et al. conducted a study of 646 patients with non-metastatic RCC. Multivariate analysis in that study showed that SII is an independent predictor of overall survival (OS) and cancer-specific survival (CSS); in addition, SII was associated with lymphovascular invasion, positive lymph nodes and a more aggressive phenotype [ 10 ]. We did not compare SII across phenotypes in our study. Zhang et al. conducted a retrospective study of 209 BC patients who underwent radical cystectomy and found that SII is an independent predictor of overall survival; in addition, SII was a more accurate prognostic marker than the neutrophil/lymphocyte ratio (NLR), platelet/lymphocyte ratio (PLR) and C-reactive protein/albumin ratio [ 15 ]. In another study, Jan et al. showed that SII was superior to NLR, PLR and the monocyte-to-lymphocyte ratio (MLR) as a prognostic factor in patients with upper urinary tract cancer [ 16 ]. In a meta-analysis of patients with urological cancers, which included 14 studies with 3744 patients, a high SII value was shown to be associated with poor prognosis [ 9 ]. On the other hand, there is no study in the literature investigating the effectiveness of inflammation biomarkers in predicting metastasis in patients with RCC; in this sense, we hope that our study will contribute to the literature. Aktepe et al., in a retrospective review of 150 patients with metastatic RCC who received a tyrosine kinase inhibitor, showed that PLR was superior to NLR in assessing OS [ 17 ]. In our study, compared with the non-metastatic group, the metastatic group had notably higher platelet and lower lymphocyte levels, implying a higher PLR in the metastatic group. Therefore, a high platelet count and a low lymphocyte count can guide us regarding the risk of metastasis. In a study by Takuya et al., in which the records of 268 nephrectomized patients were examined, reactive thrombocytosis in renal cell carcinoma was shown to develop as a result of hypercytokinemia.
It has also been reported that IL-6 and high CRP levels trigger thrombocytosis in the liver, and that IL-6 induces differentiation of megakaryocytes into platelets and leads to an abnormal inflammatory response. It has additionally been stated that the tumor itself can trigger thrombocytosis, and that thrombocytosis may also be a marker of tumor progression [ 18 ]. When platelet counts were compared in our study, a statistically significant difference was observed between the metastatic and non-metastatic groups, suggesting that platelet count may contribute to the prediction of metastasis. Zheng et al. investigated the relationship of SIRI with lymph node metastasis in patients with upper tract urothelial carcinoma who underwent radical nephrectomy between 2003 and 2016; the SIRI value was found to be associated with lymphovascular invasion and lymph node metastasis [ 19 ]. Chen et al. also investigated the association of SIRI with 3-year and 5-year survival and prognosis in clear cell RCC and found that NLR, among the other inflammatory parameters, was statistically more significant than PLR at both 3-year and 5-year follow-up [ 20 ]. In our study, we did not compare SIRI, NLR and PLR values. In a meta-analysis of 30 retrospective studies published between 2016 and 2020, SIRI was found to be associated with TNM stage and lymphovascular invasion, but its relationship with metastasis was not evaluated; that meta-analysis included cohort studies of gastrointestinal cancers, lung cancer, cervical cancer, breast cancer, urological cancers and soft tissue cancers [ 21 ]. Our study shows that SIRI can be a parameter used to predict metastasis. A high SII value has been found to be associated with advanced TNM stage and poor prognosis [ 22 ]; in our study, the SII level was shown to be associated with the risk of metastasis. This study has several limitations. First, it involves a small number of patients; carrying out the study with a larger group and over a wider period would strengthen the results and could reveal possible mechanisms. Second, the data were obtained from a single center, so a multicenter study is needed. Third, some diagnoses of metastasis were established by imaging methods without biopsy confirmation. Metastases were confirmed by biopsy in 32% (7/22) of cases and radiologically in 68% (15/22). However, the radiological diagnoses were considered reliable given the properties of cross-sectional imaging, including MRI and CT, both of which have been shown to have high sensitivity and specificity in such cases [ 23 ]. Incorporating SIRI and SII into routine assessments could provide nuanced prognostic insights, aiding clinicians in identifying patients at an elevated risk of metastatic progression. The integration of these markers may guide personalized treatment strategies, allowing for interventions tailored to an individual's inflammatory profile. SIRI and SII could also serve as valuable tools for monitoring treatment responses dynamically, offering insights into the effectiveness of specific therapeutic approaches.
Combining these markers with emerging technologies, such as radiomics and genomics, may offer a more comprehensive understanding of RCC; the combination of radiomics features and genomics data has already achieved encouraging results [ 24 ]. Considering the intrinsic heterogeneity of renal lesions, the integration of radiogenomics and hematological markers could potentially provide more comprehensive risk stratification for RCC patients. This collaborative approach has the potential to refine predictive models for RCC metastases, improving the accuracy of prognostic assessments and guiding clinical decision-making. Our study indicates that inflammation parameters obtained from venous blood samples can be used to predict metastasis: a low lymphocyte count, a high platelet count, and increased SIRI and SII values indicate a higher probability of metastasis. We think it would be beneficial to conduct more comprehensive studies based on repeated measurements and larger patient numbers.
Conclusions According to the results of this study, the risk of metastasis may be higher in patients with RCC who, among the inflammatory parameters obtained from a venous blood sample at the time of diagnosis, have high SIRI and SII values, a low lymphocyte count and an increased platelet count. These tests are cheap and accessible. High SIRI, SII and neutrophil values and low lymphocyte counts at the time of diagnosis should alert clinicians to investigate for metastasis. Such laboratory tests may point the way to earlier recognition of metastasis in the future, and may eventually help indicate whether imaging is necessary for the diagnosis of RCC metastasis. A predictive model could be developed using these tests. Therefore, early recognition of metastasis may be useful in planning treatment and follow-up.
Objectives In this prospective cross-sectional clinical study, we aimed to determine the efficiency of the preoperative hematological markers SIRI (systemic inflammatory response index) and SII (systemic inflammatory index) in renal cell cancer for predicting the possibility of postoperative metastases. Methods Seventy-two patients diagnosed with renal cell cancer in the Clinic of Urology and Medical Oncology of Istanbul Education and Research Hospital between June 2022 and February 2023, and included in treatment planning by the surgical or medical oncology units, were enrolled in the study. All cases with a diagnosis of renal cell carcinoma were identified from hospital records. Patients with a secondary malignancy, hematological or rheumatological disorders, recent blood product transfusion, or a diagnosis of infection within one month of diagnosis were excluded from the data analyses. Complete blood count (CBC) data obtained just before renal biopsy or surgery were used for the SIRI and SII calculations. Twenty-two metastatic and 50 non-metastatic RCC patients were included. SIRI and SII values were compared between groups to assess how they change in the presence of metastasis, and a cut-off value was sought to indicate malignancy before pathological diagnosis. Results The mean age of non-metastatic RCC patients was 60.12 ± 11.55 years and that of metastatic RCC patients was 60.25 ± 11.72 years. Histological subtypes of the RCC specimens were clear cell (72%), chromophobe (17%), papillary (7%) and others (4%). Median SIRI values for the non-metastatic and metastatic groups were 1.26 and 2.1 (mean ± SD 1.76 ± 1.9 and 3.12 ± 4.22, respectively; p < 0.05). Median SII values for the non-metastatic and metastatic groups were 566 and 1434 (mean ± SD 870 ± 1019 and 1537 ± 917), respectively ( p < 0.001). The AUC for the detection of metastasis was 0.809 for SII and 0.737 for SIRI. Conclusions The SIRI and SII indexes appear to show moderate efficiency in indicating metastases in RCC. Keywords
Abbreviations AUC: Area Under the Curve; BMI: Body Mass Index; BC: Bladder Cancer; CBC: Complete Blood Count; CT: Computed Tomography; FGF-2: Fibroblast Growth Factor-2; G-CSF: Granulocyte Colony Stimulating Factor; GM-CSF: Granulocyte-Macrophage Colony Stimulating Factor; MRI: Magnetic Resonance Imaging; MSKCC: Memorial Sloan Kettering Cancer Center; NLR: Neutrophil/Lymphocyte Ratio; NF-κB: Nuclear Factor kappa B; OS: Overall Survival; PLR: Platelet/Lymphocyte Ratio; RCC: Renal Cell Carcinoma; SII: Systemic Inflammatory Index; SIRI: Systemic Inflammatory Response Index; STAT3: Signal Transducer and Activator of Transcription 3; TME: Tumor Microenvironment; TNF: Tumor Necrosis Factor; VEGF: Vascular Endothelial Growth Factor. Author contributions All authors contributed to the study conception and design. HK coordinated and managed all parts of the study. TE carried out the literature search. All authors conducted data collection and performed preliminary data preparations. HK conducted the data analyses and all authors contributed to the interpretation of the data. EA wrote the draft of the paper; all authors provided substantive feedback and contributed to the final manuscript. All authors read and approved the final manuscript. Funding Not applicable. Data availability All data are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate Approved by the Health Sciences University Istanbul Health Practice and Research Center, Clinical Research and Ethics Committee (22.07.2022/Decision Number: 235). All procedures performed in studies involving human participants were in accordance with the ethical standards of the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:48
BMC Urol. 2024 Jan 13; 24:14
oa_package/1e/cb/PMC10788028.tar.gz
PMC10788029
38218761
Introduction Although various measures had been taken globally to address highly transmissible SARS-CoV-2 variants, the Omicron wave swept across China in 2022 [ 1 ]. In particular, the SARS-CoV-2 outbreak increased the risk of death in patients with malignant hematological diseases; among them, the mortality rate of hospitalized patients was as high as 31.2% [ 2 ]. Furthermore, lymphoma patients are more susceptible to coronavirus disease 2019 (COVID-19). In addition to the pathogenesis of malignant cloning of immune cells, anti-tumor regimens such as monoclonal antibodies, pathway inhibitors, and autologous hematopoietic stem cell transplantation (ASCT) also exert harmful effects on the immune system [ 3 ]. The dynamics of specific antibody (Ab) levels following SARS-CoV-2 infection in healthy populations have been well documented and confirmed by a large body of research data [ 4 , 5 ]. Accordingly, Ab levels reach an initial peak at around one month after infection and then gradually decline to a plateau [ 6 – 8 ]. Maintaining Ab levels might be an effective means of preventing reinfection or reducing the incidence of severe cases. In contrast, lymphoma patients often exhibit an adaptive humoral immune deficiency, and their Ab response to SARS-CoV-2 is usually not ideal [ 9 , 10 ]. Meanwhile, studies on the immune response to SARS-CoV-2 in lymphoma patients have mainly focused on the response after vaccination (i.e., inactivated virus) [ 3 , 10 , 11 ]. For example, Chang et al. found that anti-CD20 treatment and the number of circulating B lymphocytes strongly predicted the vaccine response [ 11 ]. Nevertheless, data on the ability of lymphoma patients to produce specific Ab after SARS-CoV-2 infection and the factors that influence this ability, particularly against Omicron, remain limited. CD20 is a surface protein of B cells that is expressed from pre-B cells to mature B cells, making it an important target for B-cell lymphomas [ 12 ]. A growing body of evidence suggests that the application of CD20 monoclonal Ab (mAb) is one of the main causes of humoral immunodeficiency in lymphoma patients [ 13 – 16 ]. Concretely, long-term use of the drug depletes mature B lymphocytes, causes secondary hypogammaglobulinemia, and weakens the humoral immune response to new pathogens in lymphoma patients. This not only increases infectious complications, but also significantly reduces the ability to produce specific Ab and Ab titers following viral infection. All of these factors could increase the risk of reinfection with the virus and might ultimately affect the long-term prognosis of lymphoma patients who survive acute infection with SARS-CoV-2. Currently, some researchers have demonstrated the impact of anti-CD20 treatment on the production of anti-SARS-CoV-2 IgG Ab [ 9 , 16 ]. However, further research is needed on the relationship between anti-SARS-CoV-2 Ab levels and the clinical characteristics, as well as the details of treatment regimens, in lymphoma patients. To elucidate these points, we conducted a prospective study of 80 Chinese lymphoma patients and 51 healthy controls infected with COVID-19. Here, we report the anti-SARS-CoV-2 IgG Ab positivity rate (APR) and Ab levels approximately two months after infection in these two groups. More importantly, we analyzed the factors influencing the APR and Ab levels and followed up the clinical outcomes of the patients.
Patients and methods Patients and healthy controls This was a prospective observational study with longitudinal follow-up of lymphoma patients infected with COVID-19. Participants were recruited from December 2022 to January 2023, and the follow-up period extended to December 2023. The study was performed by the Hematology Department of Daping Hospital, affiliated to the Army Medical University. Inclusion criteria: patients who were diagnosed with lymphoma and received formal treatment before December 2022; lymphoma patients who survived the acute phase of COVID-19 infection. Exclusion criteria: patients who had no history of COVID-19 infection; patients with untreated lymphoma or lymphoma diagnosed after COVID-19 infection. COVID-19 infection was confirmed by nucleic acid or antigen testing. Anti-SARS-CoV-2 IgG Ab levels were tested about two months (50–70 days) after the positive record of the virus. Patients' demographic and clinical data were collected from medical records, including age, gender, vaccination history, diagnosis, disease stage, time of COVID-19 infection, severity of COVID-19, lymphocyte subsets, treatment regimen, and therapeutic efficacy. The severity of COVID-19 infection was classified (mild, moderate, severe, and critical) according to the Guideline for Coronavirus Disease [ 17 ]. Fifty-one individuals without hematological or other chronic underlying diseases were simultaneously recruited as healthy controls. The controls lived in the same city and were diagnosed with COVID-19 during the same period as the lymphoma group. Anti-SARS-CoV-2 IgG Ab detection Peripheral blood samples were collected from both the lymphoma patients and healthy controls at about two months after COVID-19 infection. Data from participants two months after infection were analyzed because this period corresponds approximately to the plateau of the initial humoral immune response against SARS-CoV-2, when Ab concentrations are largely maintained at a relatively stable high level [ 4 ]. Following the procedure of the 2019-nCoV IgG Ab test kit (Maccura, Cat.20203400496, Chengdu, China), the levels of IgG Ab against SARS-CoV-2 total proteins were measured using a magnetic particle-based chemiluminescence enzyme immunoassay (CLEIA) [ 18 ]. The sensitivity and specificity of the assay were 87.78% (95% CI: 83.95% ~ 90.80%) and 99.01% (95% CI: 97.71% ~ 99.58%), respectively. The cut-off value between anti-SARS-CoV-2 IgG Ab positivity and negativity was 0.999 S/CO. All IgG Ab detection and analysis were carried out on the same machine (Michael i3000 automatic chemiluminescence immune analyzer) in the hospital. Statistical analysis The clinical characteristics and outcomes of the patients were collected. The t-test or F-test was applied to compare the impact of different clinical characteristics and treatment regimens on the continuous anti-SARS-CoV-2 IgG Ab levels. The chi-square (χ 2 ) test was used to compare the impact of different clinical characteristics and treatment regimens on the categorical variable, i.e., the anti-SARS-CoV-2 IgG Ab positivity rate (APR). APR is defined as the percentage of the population with IgG Ab values greater than 0.999 S/CO [ 19 ]. A two-sided P value < 0.05 was considered statistically significant. Variables with a P value < 0.05 in the univariate analysis were entered into the final multiple regression as independent variables. Multiple linear regression and binary logistic regression were used to perform multifactorial analyses of SARS-CoV-2 IgG Ab levels and APR, respectively.
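The following sketch illustrates the analysis pipeline described above: dichotomizing IgG values at the 0.999 S/CO cut-off to obtain the APR, screening a candidate factor with a chi-square test, and then fitting a binary logistic regression. It is not the authors' code; the data are simulated and the column names are assumptions introduced for illustration only.

```python
# Illustrative sketch of the APR analysis pipeline (not the authors' code).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

CUTOFF = 0.999  # S/CO threshold for anti-SARS-CoV-2 IgG positivity

# Hypothetical per-patient data; column names are assumptions for illustration.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "igg_sco": rng.lognormal(1.2, 0.9, 80),
    "n_anti_cd20": rng.integers(0, 10, 80),
    "asct": rng.integers(0, 2, 80),
    "vaccinated": rng.integers(0, 2, 80),
})
df["igg_positive"] = (df["igg_sco"] > CUTOFF).astype(int)
apr = df["igg_positive"].mean()  # antibody positivity rate

# Univariate screening: chi-square test of a categorical factor vs. positivity.
table = pd.crosstab(df["vaccinated"], df["igg_positive"])
chi2, p, _, _ = stats.chi2_contingency(table)

# Binary logistic regression of positivity on candidate predictors.
X = sm.add_constant(df[["n_anti_cd20", "asct", "vaccinated"]])
model = sm.Logit(df["igg_positive"], X).fit(disp=0)
print(f"APR = {apr:.2f}; chi-square p = {p:.3f}")
print(np.exp(model.params))  # Exp(B), i.e., odds ratios
```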
Results Clinical characteristics of patients and healthy controls A total of 80 lymphoma patients (37 DLBCL, 8 MZL, 6 MCL, 5 FL, 7 HL, 13 TCL, and 4 other types of lymphoma) and 51 healthy controls with SARS-CoV-2 infection were enrolled. The clinical characteristics of all patients are shown in Table 1 , and the characteristics of each patient are provided in the supplementary materials ( https://www.jianguoyun.com/p/DcFlZ2MQsJ6gDBjxhq8FIAA ). The median age of the lymphoma cohort was 58 years (range: 18 ~ 85 years) and 41 of the patients were male. The healthy controls included 15 males, with a median age of 32 years (range: 15 ~ 46 years). All healthy controls were vaccinated against COVID-19, whereas only 56 (70.0%) of the lymphoma patients were vaccinated. Seventy patients (87.5%) received anti-lymphoma therapy within one year prior to COVID-19 infection, while the remaining 10 had ended treatment after achieving complete remission. Fifty-eight patients (72.5%) were treated with a CD20 mAb (rituximab or ortuzumab), among whom 47 received it combined with chemotherapy, 2 combined with a Bruton tyrosine kinase inhibitor (BTKi), and 9 combined with all three treatments. Twelve patients (15%) were treated with a BTKi, and 12 patients (15%) received ASCT prior to COVID-19 infection. Fifty-seven lymphoma patients (71.3%) and all 51 healthy controls with mild COVID-19 received only symptomatic treatment, such as antipyretics and antitussives. Twenty-three patients (28.8%) received oxygen therapy, among whom 19 (23.8%) were treated with antiviral drugs and 9 (11.3%) with dexamethasone. Three patients (3.8%) received mAb or convalescent plasma therapy, and 2 patients (2.5%) required mechanical ventilation in the ICU. Humoral response of lymphoma patients to SARS-CoV-2 The anti-SARS-CoV-2 IgG APR and Ab levels in lymphoma patients and healthy controls are shown in Fig. 1 . The χ 2 test and t-test revealed that the IgG APR and average Ab levels in lymphoma patients were significantly lower than those in healthy controls (70% vs. 100%, P < 0.001; 4.69 vs. 9.69 S/CO, P < 0.001; see Fig. 1 A and C). By lymphoma subtype, the IgG APR from low to high was FL (40%), MCL (50%), DLBCL (65%), MZL (75%), HL (86%), and TCL (92%) (see Fig. 1 B), and Ab levels were FL (2.10 S/CO), MCL (3.20 S/CO), DLBCL (4.28 S/CO), MZL (4.86 S/CO), TCL (5.96 S/CO), and HL (6.30 S/CO) (see Fig. 1 D). Compared with the healthy controls, the IgG APR (Ps < 0.05) and Ab levels (Ps < 0.004) in each subgroup were significantly decreased. Factors affecting humoral response in lymphoma patients Effects of clinical characteristics on anti-SARS-CoV-2 IgG APR and Ab levels We first analyzed the impact of age, gender, vaccination history, lymphoma staging, disease status before COVID-19 infection, severity of COVID-19, and the use of dexamethasone for COVID-19 treatment on the anti-SARS-CoV-2 IgG APR and Ab levels (see Table 2 ). Vaccinated lymphoma patients had a significantly higher IgG APR (76.8% vs. 54.2%, P = 0.04) and Ab levels (5.63 vs. 2.48 S/CO, P < 0.001) than unvaccinated patients. Regardless of vaccination status, the IgG APR (Ps < 0.001) and Ab levels (Ps < 0.001) of both patient groups were significantly lower than those of the healthy controls. Additionally, the use of dexamethasone for COVID-19 treatment had a negative impact on Ab levels (2.22 vs. 5.00 S/CO, P = 0.004).
Age, gender, lymphoma staging, disease status, and severity of COVID-19 had no significant effects on either APR or Ab levels (Ps > 0.06). Meanwhile, lymphocyte subset data were collected from 75 lymphoma patients (five patients were not tested) two months after COVID-19 infection. The patients were divided into an anti-SARS-CoV-2 IgG Ab positive group ( n = 52) and a negative group ( n = 23). The absolute B-lymphocyte count in the IgG positive group was significantly higher than that in the negative group (0.0715 vs. 0.0204 × 10 9 /L, P = 0.01) (see Fig. 2 ), while there were no significant differences in CD4 + T, CD8 + T and NK cells between the two groups (Ps > 0.38). Effect of treatment on anti-SARS-CoV-2 IgG APR and Ab levels Anti-CD20 treatment In this study, 58 patients underwent anti-CD20 treatment, including 77.6% with aggressive and 20.7% with indolent B-cell lymphoma. Two months after COVID-19 infection, the anti-SARS-CoV-2 IgG APR and Ab levels were significantly lower in patients who had previously received anti-CD20 treatment than in those who had not (62.1% vs. 90.9%, P = 0.01; 4.19 vs. 5.99 S/CO, P = 0.04) (see Fig. 3 A and B). We then compared Ab production between the subgroup whose last anti-CD20 treatment was within 3 months prior to infection and the subgroup whose last anti-CD20 treatment was more than 3 months prior to infection; there were no significant differences in APR (56.1% vs. 76.5%, P = 0.15) or IgG levels (4.22 vs. 4.12 S/CO, P = 0.92). Next, the impact of the number of CD20 mAb treatments on the IgG APR and Ab levels was analyzed. There were no significant differences in either measure when the boundary was set at 4 treatments (58.1% vs. 73.3%, P = 0.30; 3.66 vs. 5.71 S/CO, P = 0.07), and only a significant difference in Ab levels when the boundary was set at 5 treatments (52.8% vs. 77.3%, P = 0.06; 3.16 vs. 5.88 S/CO, P = 0.007). Furthermore, the IgG APR and Ab levels were significantly lower in patients who received ≥ 6 CD20 mAb treatments than in those treated 1 ~ 5 times (46.4% vs. 76.7%, P = 0.02; 2.76 vs. 5.52 S/CO, P = 0.004) (see Fig. 3 C and D). Additionally, among these 58 patients, 23, 9, 8, and 7 patients were treated with anti-CD20 Ab combined with CHOP-like, bendamustine, GemOx, and MTX regimens, respectively, within one year prior to infection; there were no significant differences in IgG levels among the four subgroups (5.94 vs. 4.44 vs. 4.13 vs. 2.56, Ps > 0.08). BTKi treatment A total of 12 patients (15.0%) received BTKi treatment, including 7 DLBCL, 4 MCL, and 1 VM, accounting for 20% of B-cell lymphoma (BCL) cases. A nonparametric Mann-Whitney test showed that the IgG Ab level two months after infection in BCL patients who had previously been treated with a BTKi was slightly lower than that in patients who had not (2.62 vs. 4.62 S/CO, P = 0.08); however, there was no significant difference in APR between the two groups (50% vs. 66.7%, P = 0.28) (see Fig. 4 A and B). In addition, oral BTKi had no significant effect on APR (71.4% vs. 63.3%, P = 0.69) or IgG levels (3.52 vs. 4.46 S/CO, P = 0.51) among DLBCL patients (see Fig. 4 C and D). ASCT treatment A total of 12 patients (15.0%) received ASCT therapy, including 5 DLBCL, 1 MCL, 1 FL, 2 HL, and 3 TCL.
Detailed analysis showed that the anti-SARS-CoV-2 IgG APR and Ab levels of patients treated with ASCT were significantly lower than those of patients treated without ASCT (33.3% vs. 76.5%, P = 0.003; 2.08 vs. 5.15 S/CO, P = 0.007) (see Fig. 5 A and B). In addition, the time interval between transplantation and infection did not significantly correlate with Ab levels in patients who received ASCT therapy ( r = 0.15, P = 0.64). Lymphoma patients were further divided into BCL and non-BCL subgroups. In the BCL subgroup, the IgG APR and Ab levels in the ASCT group were significantly lower than those in the non-ASCT group (14.3% vs. 69.8%, P = 0.004; 0.83 vs. 4.67 S/CO, P < 0.001), whereas in the non-BCL subgroup there was a significant difference between the ASCT and non-ASCT groups in APR (60.0% vs. 100.0%, P = 0.01) but not in Ab levels (3.82 vs. 6.83 S/CO, P = 0.18) (see Fig. 5 C and D). Multiple regression analysis of anti-SARS-CoV-2 IgG APR and Ab levels We further performed multiple regression analyses of the anti-SARS-CoV-2 IgG APR and Ab levels, taking the number of anti-CD20 treatments, ASCT, the absolute B-lymphocyte count, vaccination history, and treatment of COVID-19 with dexamethasone as independent variables. The regression analysis confirmed that the number of anti-CD20 treatments (Exp(B) = 0.795 [CI: 0.669 ~ 0.946], P = 0.009) and ASCT (Exp(B) = 0.057 [CI: 0.007 ~ 0.445], P = 0.006) were independent predictors of the anti-SARS-CoV-2 IgG APR. Furthermore, the number of anti-CD20 treatments was an independent predictor of anti-SARS-CoV-2 IgG Ab levels (B = -0.232 [CI: -0.414 ~ -0.051], P = 0.01) (see Table 3 ). Follow-up of clinical outcomes Finally, we followed up the clinical outcomes of the 80 lymphoma patients one year after infection. Thirty-three patients (41.3%) continued to receive anti-lymphoma treatment and remained progression-free, among whom 12 subsequently received ASCT. Twenty-one patients (26.3%) stopped treatment and remained progression-free. Seventeen patients (21.3%) experienced disease progression, among whom 9 died of disease progression. In addition, there were 2 further deaths: one patient died of severe pneumonia caused by COVID-19 reinfection, and one died of severe peripheral neuropathy. Seven patients (8.8%) were lost to follow-up. Further logistic regression analysis revealed that SARS-CoV-2 IgG levels did not significantly correlate with the clinical outcomes (Exp(B) = 0.96 [CI: 0.84 ~ 1.11], P = 0.61).
Discussion In this prospective study, we investigated the ability of 80 lymphoma patients to produce anti-SARS-CoV-2 IgG Ab about two months after COVID-19 infection and analyzed the factors influencing Ab levels. The results revealed that Ab levels were significantly lower in lymphoma patients than in healthy controls. During the initial response, B cells are activated and terminally differentiate into long-lived plasma cells (LLPCs), and the specific Abs secreted by LLPCs can be maintained for months or even years [ 20 , 21 ]. Thus, the core of protective humoral immunity is precisely the capacity to generate LLPCs [ 21 ]. However, this ability is defective in lymphoma patients, which reduces the production and maintenance of SARS-CoV-2-specific Abs [ 9 , 10 , 16 ]. Therefore, lymphoma patients may be among the hardest hit following COVID-19 outbreaks because of a severe deficiency in the B lymphocyte-mediated specific immune response [ 22 ]. Further subgroup analysis of lymphoma patients was performed according to the disease diagnosis. The immune response ability, from weak to strong, was FL, MCL, DLBCL, MZL, TCL, and HL, which is consistent with the treatment characteristics of the different types of lymphoma [ 23 – 25 ]. BCL, including FL, MCL, DLBCL, and MZL, requires long-term application of CD20 mAbs and/or B-cell pathway inhibitors, leading to a decrease in humoral immune response ability [ 10 ]. The clinical factors that might lead to a defective humoral immune response in lymphoma patients were analyzed first. The results confirmed that age, gender, lymphoma staging, disease status, and COVID-19 severity appear to have little impact on the immune response. However, vaccination history significantly affected the strength of the humoral immune response, i.e., the IgG APR and Ab level. This demonstrates that vaccination before infection could improve the humoral response to live SARS-CoV-2 in lymphoma patients, consistent with previous studies [ 16 , 26 ]. Thus, as a key aspect of clinical management, protecting vulnerable, immune-deficient groups such as lymphoma patients through vaccination can reduce the burden on the healthcare system [ 27 , 28 ]. In addition, our results showed that the use of dexamethasone in the treatment of COVID-19 affected the Ab level in lymphoma patients. This is likely because glucocorticoids interfere with humoral immunity by inhibiting the conversion of B cells to plasma cells, resulting in decreased Ab production [ 29 ]. From the viewpoint of therapy-related factors, the number of anti-CD20 treatments and ASCT before infection had adverse effects on the production of anti-SARS-CoV-2 IgG Ab. Previous investigations reported that the humoral immune response of patients with BCL to SARS-CoV-2 was related to bendamustine and to the timing of the last anti-CD20 treatment prior to infection [ 16 , 30 ]. The current study further revealed the impact of the number of anti-CD20 treatments on the humoral response of lymphoma patients to SARS-CoV-2 infection. In fact, our results showed that patients who had received anti-CD20 treatment ≥ 6 times before COVID-19 infection exhibited significantly reduced anti-SARS-CoV-2 IgG APR and Ab levels. This indicates that frequent exposure of lymphoma patients to CD20 mAbs, with the resulting continuous depletion of B cells, seriously affects their initial immune response to new pathogens.
Multifactorial analysis likewise confirmed that the number of anti-CD20 treatments was an independent predictor of APR and Ab levels. Treatment with B-cell-directed therapies leads to the depletion of B cells, which might be detrimental to the production of Abs against SARS-CoV-2 in lymphoma patients [ 10 , 31 , 32 ]. Therefore, patients who have been actively treated with CD20 mAbs for a prolonged period might fail to produce protective Abs even after multiple vaccinations and require stronger physical protection against SARS-CoV-2 reinfection [ 33 ]. Hematopoietic stem cell transplantation recipients are considered to be at high risk for adverse outcomes after COVID-19 infection because of their immunosuppressed status [ 34 ]. Not surprisingly, the anti-SARS-CoV-2 IgG APR and Ab levels in lymphoma patients who underwent ASCT prior to COVID-19 infection were significantly lower than those in non-transplant patients, and multifactorial analysis confirmed that pre-infection ASCT reduced the APR. This might be because humoral immune reconstitution after the high-dose chemotherapy used in ASCT takes a long time; the recovery of peripheral blood B lymphocytes takes from about three months to over one year [ 35 ]. Furthermore, the functional recovery of B cells takes even longer, because it requires the assistance of T cells, which remain functionally deficient for a long period after ASCT, thereby affecting the functional reconstitution of B cells [ 36 ]. Consistent with previous reports [ 37 ], the present study likewise confirmed a higher absolute B-lymphocyte count in the anti-SARS-CoV-2 IgG positive group than in the negative group among lymphoma patients, and multifactorial analysis found a marginally positive correlation between the B-lymphocyte count and the anti-SARS-CoV-2 IgG APR. These results suggest that CD19 + B-lymphocyte counts are critical for acquiring anti-SARS-CoV-2 IgG after COVID-19 infection in lymphoma patients. In addition, Bange et al. [ 38 ] found that patients with higher numbers of CD8 + T cells, including those treated with anti-CD20, had improved survival when humoral immunity was deficient; CD8 + T cells might therefore contribute to recovery from COVID-19. However, there were no significant differences in CD4 + T, CD8 + T, or NK cells between the IgG positive and negative groups in the present study. This might be because the lymphocyte subset data in this study were collected two months after infection, by which time the acute phase of viral infection had passed and the cellular immune response was essentially over. Finally, all lymphoma patients completed a one-year follow-up of their clinical outcomes after COVID-19 infection. Although SARS-CoV-2 IgG levels did not relate to clinical outcomes, up to 21.3% of patients in this study had disease progression and nine eventually died, partly because of the interruption of lymphoma treatment caused by COVID-19 infection. Thus, for these patients, a poor prognosis due to delayed lymphoma treatment should be avoided as much as possible. However, the present study has a few limitations to be addressed in future studies. First of all, except for DLBCL, relatively few cases were included in the other disease subgroups, so results for those subgroups need to be interpreted with caution and confirmed in a larger cohort.
Regarding the factors influencing Ab levels, more detailed hierarchical analyses are still needed, such as the time between the last COVID-19 vaccination and infection, and whether patients had concurrent infectious or autoimmune diseases. In summary, investigating the humoral immune response of lymphoma patients to SARS-CoV-2 is of clear clinical and epidemiological importance. The current findings provide strong evidence of a reduced ability to produce Abs against SARS-CoV-2 in lymphoma patients. More importantly, the results showed that multiple factors affect the anti-SARS-CoV-2 IgG APR and Ab levels in lymphoma patients, including vaccination history, the number of anti-CD20 treatments received prior to COVID-19 infection, ASCT therapy before infection, and B-lymphocyte counts. These results may provide a reference for vaccination strategies and clinical management in lymphoma patients.
Background The ability to generate effective humoral immune responses to SARS-CoV-2 infection has not been clarified in lymphoma patients. This study aimed to investigate antibody (Ab) production after SARS-CoV-2 infection and to clarify the factors affecting Ab generation in these patients. Patients & methods 80 lymphoma patients and 51 healthy controls were included in this prospective observational study. Clinical factors and treatment regimens affecting the Ab positivity rate (APR) and Ab levels were analyzed by univariate and multivariate methods. Results The anti-SARS-CoV-2 IgG APR and Ab levels in lymphoma patients were significantly lower than those in healthy controls. Lymphoma patients with COVID-19 vaccination had a significantly higher APR and Ab levels compared with those without vaccination. Additionally, the use of dexamethasone for COVID-19 treatment had a negative impact on Ab levels. Regarding the impact of treatment regimens on the APR and Ab levels, patients who received CD20 monoclonal Ab (mAb) treatment ≥ 6 times and patients treated with autologous hematopoietic stem cell transplantation (ASCT) prior to infection had a significantly lower APR and Ab levels compared with those treated 1–5 times with CD20 mAb and those treated without ASCT, respectively. Furthermore, multiple regression analysis indicated that the number of anti-CD20 treatments was an independent predictor of both APR and Ab levels. Conclusions The humoral immune response to SARS-CoV-2 infection was impaired in lymphoma patients, partly due to anti-CD20 and ASCT treatment. COVID-19 vaccination may be particularly needed for these patients. Supplementary Information The online version contains supplementary material available at 10.1186/s12865-024-00596-1. Keywords
Acknowledgements We thank all lymphoma patients and healthy controls who agreed to take part in the test. We also thank the investigators and the study teams who participated in the investigation. Author contributions Huan Xie: Conceptualization, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing-original draft, Writing-review & editing. Jing Zhang: Conceptualization, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing-original draft. Ran Luo: Data curation, Investigation, Writing-review & editing. Yan Qi: Data curation, Investigation, Writing-review & editing. Yizhang Lin: Data curation, Investigation, Writing-review & editing. Changhao Han: Data curation, Investigation, Writing-review & editing. Xi Li: Conceptualization, Formal analysis, Methodology, Resources, Supervision, Validation, Visualization, Writing-original draft, Writing-review & editing. Dongfeng Zeng: Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Visualization, Writing-original draft, Writing-review & editing. Funding This work was supported by the Chongqing Medical Scientific Research project (Joint project of Chongqing Health Commission and Science and Technology Bureau, grant number 2020FYYX153). Data availability No datasets were generated or analysed during the current study. Declarations Ethics approval and consent to participate This study was conducted in accordance with the principles of the Helsinki Declaration and was approved by the Institutional Review Board of Army Medical University, Chongqing, China (Protocol 2022366). Written informed consent was obtained from all participants before participation. Consent for publication Not applicable. Conflict of interest The authors declare that the research was conducted in the absence of commercial or financial relationships that could be construed as a potential conflict of interest.
CC BY
no
2024-01-15 23:43:48
BMC Immunol. 2024 Jan 13; 25:5
oa_package/bc/c3/PMC10788029.tar.gz
PMC10788030
38218811
Background Worldwide, aggressive B-cell lymphoma is the most common subtype of non-Hodgkin lymphoma (NHL) [ 1 , 2 ]. Standard first-line R-CHOP (rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone) immunochemotherapy achieves long-term remission in approximately two-thirds of adult patients, while the remainder suffer from primary refractory or relapsed (R/R) lymphoma after an initial response [ 1 , 3 ]. Although many efforts have been made to improve patient survival over the past two decades, including increased dose density/intensity of systemic therapy, maintenance therapy, and R-CHOP plus a novel drug (R-CHOP + X), the standard of care for unselected patients has not changed [ 4 , 5 ]. Hence, many new therapeutic approaches have been developed that focus on R/R disease [ 6 – 10 ]. The standard of care for patients with late relapse (> 12 months) is high-dose chemoimmunotherapy with autologous stem-cell transplantation (ASCT) if the disease is responsive to salvage regimens [ 1 , 5 , 7 , 11 , 12 ]. However, because of age, concurrent morbidities, and chemoresistance, only about 25% of patients are considered candidates for transplantation [ 7 , 13 – 16 ]. Autologous chimeric antigen receptor (CAR) T-cell therapy, a gene-modified cellular treatment, represents a major paradigm shift in the management of R/R B-cell lymphomas [ 6 , 17 , 18 ]. To avoid delays before CAR T-cell infusion, several retrospective trials have used radiotherapy (RT) as a bridging or salvage strategy for CAR T-cell therapy, with reported response rates of 80–88% [ 19 – 26 ]. The efficacy of RT in improving local control of aggressive B-cell lymphoma is well established [ 27 – 34 ]. In addition, several large database analyses have shown improved survival with the addition of RT in the rituximab era, after controlling for confounding factors through multivariate analysis [ 35 – 38 ]. Recently, in a comprehensive retrospective study (British Columbia Cancer Lymphoid Cancer Database), some patients who received RT to positron emission tomography (PET)-positive sites for nonprogressive disease showed results comparable to those with PET-negative findings [ 39 ]. Additionally, the predominant pattern of relapse following systemic therapy (including first-line chemotherapy, ASCT, and CAR T-cell therapy) often involves sites of initial disease [ 21 , 40 – 43 ]. These predictable patterns of relapse emphasize the utility of RT for improving local control at all sites of disease. However, an RT course lasting over 4 weeks, which can delay systemic salvage therapies for R/R patients, is a crucial concern for hematologists. Whether used for consolidation or salvage, conventional RT has been shown to be a safe and promising tool to help control the disease; however, the clinical value of hypofractionated RT is still poorly understood. The aim of this study was to investigate the outcomes and toxicity of hypofractionated RT in R/R patients at a single facility.
Methods Eligibility and study population Patients with R/R aggressive B-cell lymphoma treated between January 2020 and August 2022 at a single institution were retrospectively reviewed ( n = 59). The eligibility criteria included R/R patients who had received hypofractionated RT prior to or after salvage systemic treatment. Patients who had received conventionally fractionated RT ( n = 17), showed central nervous system (CNS) involvement, or had primary CNS lymphoma ( n = 12) were excluded. Eventually, 30 patients were eligible for the final analysis. Evaluation and definition Patients were initially staged according to the Ann Arbor staging system and scored using the International Prognostic Index. Tumor response was evaluated after completion of chemotherapy, RT, or a combination of chemotherapy and RT. Complete response (CR) was defined as the elimination of all signs of disease on clinical and imaging examinations. Refractory disease was defined as an incomplete response after primary chemotherapy. Relapsed disease was defined as new disease found on imaging or biopsy after CR. All patients were re-evaluated with a CT scan before RT, and 26 patients (86.7%) also underwent a PET scan. Adverse events were evaluated using CTCAE (Common Terminology Criteria for Adverse Events) version 5.0. In- and out-of-field relapses after RT were defined based on imaging or biopsy. If the failure occurred in the same lymph node area that had been irradiated, it was deemed an in-field relapse; if the failure occurred in a distant area outside the irradiated field, it was considered an out-of-field relapse. Out-of-field relapse after RT was categorized as pre-existing sites only, new sites only, or both. Relapse at pre-existing sites was defined as recurrent disease at the same sites involved before first-line chemotherapy. Relapse at new sites was defined as recurrent disease outside the sites involved prior to first-line treatment. Treatment Immunochemotherapy was considered the primary treatment of aggressive B-cell lymphoma. All patients were treated with immunochemotherapy; the regimens were R-CHOP ( n = 26) and dose-adjusted EPOCH-R (etoposide, prednisone, vincristine, cyclophosphamide, doxorubicin, rituximab; n = 4). The median number of chemotherapy cycles was 4 (range: 3–8). Radiotherapy was given with a 6-MV linear accelerator. As directed by the International Lymphoma Radiation Oncology Group (ILROG), involved-site radiation therapy (ISRT) was administered [ 44 , 45 ]. PET or magnetic resonance imaging (MRI) was obtained and co-registered with the planning CT to improve delineation of the treatment volume. Gross tumor volume (GTV) was defined as residual disease on PET/CT or CT. Adjacent nodal disease that had responded to chemotherapy could be included in the clinical target volume (CTV), as long as its inclusion was not associated with significant toxicity. A 3–7-mm margin was added to the GTV and CTV to generate the corresponding planning gross target volume (PGTV) and planning target volume (PTV), respectively. The median dose to the GTV was 36 Gy (range: 30–39 Gy), at a dose per fraction of 2.3–5 Gy. Since December 2021, a scheme of 24 Gy to the PTV with a simultaneous integrated boost of 36 Gy to the PGTV in 12 fractions has been widely applied at our institution ( n = 23, 76.7%). The number of treated sites was defined as the number of radiation fields required to treat all target volumes.
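To put the hypofractionated scheme described above into perspective against conventional 2-Gy fractionation, the equivalent dose in 2-Gy fractions (EQD2) under the linear-quadratic model is a standard yardstick. The sketch below is illustrative only: the α/β values (10 Gy for tumor, 3 Gy for late-responding normal tissue) are commonly used assumptions and are not specified by the authors.

```python
# Sketch: equivalent dose in 2-Gy fractions (EQD2) under the linear-quadratic model.
# The alpha/beta values are assumptions for illustration, not stated in the paper.
def eqd2(total_dose_gy: float, dose_per_fraction_gy: float, alpha_beta_gy: float = 10.0) -> float:
    """EQD2 = D * (d + alpha/beta) / (2 + alpha/beta)."""
    return total_dose_gy * (dose_per_fraction_gy + alpha_beta_gy) / (2.0 + alpha_beta_gy)

# The institutional scheme above: 36 Gy in 12 fractions (3 Gy per fraction) to the PGTV.
print(f"EQD2 (tumor, alpha/beta = 10 Gy): {eqd2(36, 3.0):.1f} Gy")
# Late-responding normal tissue is often modelled with a lower alpha/beta, e.g. 3 Gy.
print(f"EQD2 (late effects, alpha/beta = 3 Gy): {eqd2(36, 3.0, 3.0):.1f} Gy")
```

Under these assumptions, 36 Gy in 12 fractions corresponds to roughly 39 Gy of conventionally fractionated dose for the tumor, delivered in far fewer visits.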
Organs at risk (OAR) included the parotid glands, larynx, spinal cord, lungs, heart, kidney, liver, small intestine, bladder, rectum, and head of the femur. Statistical analysis Continuous variables are reported as medians and ranges, and categorical variables as frequencies and percentages. The primary endpoint was response to RT, defined as either CR or partial response (PR); secondary endpoints included progression-free survival (PFS) and overall survival (OS). PFS was defined as the period from the date of RT to the date of any relapse, progression, last follow-up, or death from any cause. OS was calculated from the date of RT to the date of death from any cause or until the last follow-up. PFS and OS were estimated using the Kaplan–Meier method and compared using log-rank tests stratified by prognostic factors. P < 0.05 was considered to indicate a statistically significant difference. All statistical analyses were performed using SPSS (version 26.0; IBM Corporation, Armonk, NY, USA) and R (version 3.5.3) software.
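The survival analysis described above (Kaplan–Meier estimation and a log-rank comparison between response groups) can be reproduced in outline as follows. The study itself used SPSS and R; this Python sketch with the lifelines package is illustrative only and the durations and event indicators are simulated, not the study data.

```python
# Sketch of Kaplan-Meier estimation and a log-rank test (illustrative data only).
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)
# Months from RT to progression/death or censoring, split by response to RT.
t_cr,     e_cr     = rng.exponential(24, 24), rng.integers(0, 2, 24)
t_non_cr, e_non_cr = rng.exponential(6, 6),   np.ones(6, dtype=int)

kmf = KaplanMeierFitter()
kmf.fit(t_cr, event_observed=e_cr, label="CR after RT")
print(kmf.survival_function_.tail(1))  # estimated PFS at the last observed time

result = logrank_test(t_cr, t_non_cr, event_observed_A=e_cr, event_observed_B=e_non_cr)
print(f"log-rank p = {result.p_value:.4f}")
```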
Results Clinical characteristics The final analyses were conducted on 30 patients, and their baseline clinical features and initial treatments are summarized in Table 1 . The median age was 55 years (range: 19–79 years) and 60% of patients were female. At initial diagnosis, extranodal involvement was present in 76.7% of patients, bulky disease (≥ 7.5 cm) in 46.7%, and the majority had advanced-stage disease (stage III/IV, 63.3%). The histological distribution was as follows: diffuse large B-cell lymphoma not otherwise specified (DLBCL-NOS, n = 20); primary mediastinal large B-cell lymphoma (PMBL, n = 6); transformed DLBCL ( n = 2); primary breast DLBCL ( n = 1); and high-grade B-cell lymphoma (MYC, BCL2, and BCL6 rearrangement, n = 1). Radiotherapy outcomes Baseline patient characteristics at the time of RT are listed in Table 2 . Prior to RT, most patients had achieved PR after initial therapy (86.7%), and the remaining 4 (13.3%) patients had progressive disease (PD) after chemotherapy. Second-line chemotherapy was used in 7 (23.3%) patients, and 1 (3.3%) patient received third-line treatment before RT. Three-quarters of the patients receiving RT exhibited localized disease (76.7%), with a total of 45 treated sites. The median maximum diameter of the residual lesions was 4.5 cm, and the median volumes of the GTV and CTV were 53 mL and 372 mL, respectively. All patients received either intensity-modulated radiation therapy (IMRT) or volumetric-modulated arc therapy (VMAT). Subsequently, 19 patients received salvage chemotherapy. Among the 30 evaluable patients, 27 (90%) achieved an objective response after the completion of RT: 24 (80%) CR and 3 (10%) PR. Of the 45 treated lesions, 39 (86.7%) achieved CR, 4 (8.9%) had PR, and 2 (4.4%) exhibited PD. Specifically, among the 8 patients who had multiple lesions at the time of RT, the CR rate was 87% (20/23) for a total of 23 treated sites. With a median follow-up of 10 months (range, 2–27), 10 of the 30 (33.3%) patients experienced disease progression, and three patients died. The 1-year OS and PFS rates for all patients were 81.8% and 66.3%, respectively (Fig. 1 ). The corresponding 1-year OS and PFS rates for patients who achieved CR after RT were 95.8% and 83.1%, respectively, versus 0% ( P = 0.001, Fig. 2 A) and 0% ( P = 0.001, Fig. 2 B) for patients who had not. The 1-year PFS rate was 82.4% for patients who had a single lesion at the time of RT compared with 14.3% for patients who had multiple lesions ( P < 0.001); there was no statistically significant difference in OS ( P = 0.132) (Fig. 3 ). Failure patterns and associated factors For the entire cohort, failure analysis showed that the majority of post-RT progressions involved out-of-field relapses (Table 3 ). After RT, 2 (6.7%) relapses were completely in-field, 3 (10%) were a combination of in- and out-of-field relapses, and 5 (16.6%) were completely out-of-field (Fig. 4 ). All patients with out-of-field relapse ( n = 8) had extranodal involvement; 7 had initial stage III/IV disease; and in the 5 patients with only out-of-field relapse, all relapses occurred at new sites after RT. In univariate analysis, four factors had a significant impact on the incidence of out-of-field relapse: refractory/relapsed status (refractory [18.5%] vs. relapsed [100%], P = 0.002); response to systemic therapy before RT (yes [19.2%] vs. no [75%], P = 0.019); number of residual sites (single lesion [8.7%] vs. multiple lesions [85.7%], P < 0.001); and response to RT (CR [16.7%] vs.
non-CR [66.7%], P = 0.013). RT toxicity and dose to normal tissues No serious non-hematological adverse effects (≥ grade 3) associated with RT were reported. Radiation-related adverse events included leukocytopenia in three patients (grade 2 in two patients, grade 4 in one patient), and oral mucositis (grade 2), radiation dermatitis (grade 1), asymptomatic pneumonia (grade 1), and nausea (grade 2) in one patient each. Owing to the heterogeneity of the RT schemes, we present the DVH statistics for the critical normal tissues of the 23 patients (36 irradiated sites) treated with 36 Gy in 12 fractions (Table 4 ). For the five RT sites in the head and neck, the median mean dose (Dmean) to the parotid gland and larynx was 13.2 Gy and 9.7 Gy, respectively, and the median maximal dose (Dmax) to the spinal cord was 14.2 Gy. For the 15 RT sites in the thorax (predominantly mediastinum and axilla), the median lung volume receiving 20 Gy or more (V20) was 4.7%, the median Dmean to the heart was 1.1 Gy, and the median Dmax to the spinal cord was 16.8 Gy. For the 10 RT sites in the abdomen, the median kidney V20 was 7.47%, and the median Dmax to the small intestine and spinal cord was 33.4 Gy and 15.6 Gy, respectively. For the six RT sites in the pelvis, the Dmean to the bladder and rectum was 5.52 Gy and 3.65 Gy, respectively, and the median Dmax to the head of the femur was 16.6 Gy.
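The DVH summary metrics reported in Table 4 (Dmean, Dmax, and Vx values such as V20) are all derived from the per-voxel dose distribution of each organ at risk. The sketch below shows how they are computed in principle; the dose array is simulated and does not come from the study's treatment plans.

```python
# Sketch: computing the DVH summary metrics reported above (Dmean, Dmax, Vx)
# from a per-voxel dose array for one organ at risk; the array here is simulated.
import numpy as np

rng = np.random.default_rng(4)
lung_dose_gy = np.clip(rng.gamma(shape=1.5, scale=4.0, size=100_000), 0, 40)

d_mean = lung_dose_gy.mean()
d_max = lung_dose_gy.max()
v20 = 100.0 * np.mean(lung_dose_gy >= 20.0)  # % of organ volume receiving >= 20 Gy

print(f"Dmean = {d_mean:.1f} Gy, Dmax = {d_max:.1f} Gy, V20 = {v20:.1f}%")
```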
Discussion Although the standard treatment for R/R aggressive B-cell lymphoma with late relapse (> 12 months) is dose-intensive chemotherapy followed by ASCT, most older patients are not considered ideal transplant candidates. The addition of consolidation or salvage RT unequivocally reduces the risk of local failure; a critical concern, however, has been how to deliver RT within a short period of time so that effective systemic therapy is not delayed. To our knowledge, this is the first study to provide valuable data on comprehensive hypofractionated RT for R/R aggressive B-cell lymphoma. Hypofractionated short-course RT exhibited excellent local control with mild toxicities. The treatment options for R/R aggressive B-cell lymphoma vary among physicians and geographic regions across countries and institutions, and include chemotherapy alone, CAR T-cell therapy, and sequential combinations of chemotherapy and RT with or without ASCT [ 46 – 51 ]. Owing to these heterogeneous treatments, only a small number of patients receive RT, with different doses and fractionation schedules [ 28 , 45 , 52 – 54 ]. Recent studies have demonstrated that short-course bridging RT prior to CAR T-cell therapy provides excellent local control and a sustainable response. Theoretically, patients who will never be suitable for CAR T-cell therapy because of medical insurance-related issues or physical performance may benefit from comprehensive hypofractionated RT [ 19 – 24 , 55 ]. In this study, we present a homogeneous cohort of 30 patients with R/R aggressive B-cell lymphoma. Comprehensive hypofractionated RT yielded an excellent response, with ORR and CR rates of 90% and 80%, respectively. Salvage RT as part of a potential treatment strategy is generally considered after second- or third-line systemic therapy. According to the ILROG guidelines for nodal NHL, patients with R/R disease unsuitable for transplantation may benefit from RT with doses up to 55 Gy [ 54 ]; consequently, subsequent systemic treatment may be delayed for up to 6 weeks. The 2020 ILROG emergency RT guideline recommends hypofractionated schemes (36–39 Gy in 12–13 fractions or 30 Gy in six fractions) for chemorefractory NHL [ 44 ]. Recently, a cross-sectional study conducted by Memorial Sloan Kettering Cancer Center identified that the increased usage of hypofractionated RT was unique to sites affiliated with the hospital [ 54 ]. In our institution, the majority of lymphoma patients receive IMRT or VMAT, and all R/R aggressive B-cell lymphoma patients have received hypofractionated schemes (36 Gy in 12 fractions) since 2021. The median number of RT fractions in this study was 12, fewer than in the recent large retrospective study from the British Columbia Cancer Agency (30–40 Gy in 15–20 fractions) [ 39 ]. As a non-cross-resistant therapy, RT could serve as a bridge to ASCT or CAR T-cell therapy to deepen remissions and improve cure rates. Metabolic tumor volume (MTV), as a representative of the total burden of disease, is the most important predictor of outcome in DLBCL and other lymphoma subtypes, regardless of the measurement method and study time points [ 56 – 59 ]. Here, we also showed that patients achieving CR after RT had higher survival rates than those without CR. However, this high ORR was not entirely translated into an OS benefit. Out-of-field relapses remain a challenge, particularly in patients with advanced-stage disease, non-response to initial chemotherapy, or multiple residual lesions at the time of RT.
Similarly, 80% of relapses occurred at new sites in our study. Therefore, new agents should be added to RT to enhance its effects without obvious toxicity. At present, there are a number of clinical trials establishing the effects of immune checkpoint inhibitors in Hodgkin’s lymphoma [ 60 – 62 ]. However, DLBCL patients have a low response rate to immune checkpoint inhibitors because chromosome 9p24.1 genetic alterations and PD-L1 or PD-L2 expression are rare in DLBCL. Hypofractionated RT can enhance the release of tumor antigens, increase tumor-reactive T cells, and work synergistically with immune checkpoint inhibitors in many solid tumors [ 63 ]. Presently, the combination of pembrolizumab and hypofractionated RT (20 Gy in five fractions) is being evaluated in a phase 2 trial in R/R NHL (NCT04827862). To validate the above assumptions, we have also initiated a multicenter, single-arm, phase 2 study (ChiCTR2200060059) to assess the potential impact of zimberelimab plus hypofractionated RT in patients with primary refractory DLBCL. The study is currently enrolling patients. The clinical benefit of hypofractionated RT combined with immune checkpoint inhibitors needs to be further investigated in these prospective studies. This study has some limitations, mainly related to its retrospective nature. While the data support important findings regarding a high response rate and mild toxicities with hypofractionated RT, the treatments were not randomly assigned. Additionally, none of the patients received CAR T-cell therapy. Although CAR T-cell therapy is recommended in the guidelines, it is not cost-effective and may not be feasible for most patients in China. In fact, our data could provide an option for CAR T-cell therapy-eligible patients. Furthermore, because of the short follow-up period, we were unable to adequately assess late toxicities. However, hypofractionated RT has been widely employed in several types of solid tumors with long-term follow-up. We believe that hypofractionated RT is efficacious and safe.
Conclusion We showed that hypofractionated RT achieved high response rates and was well tolerated in patients with R/R aggressive B-cell lymphoma. These findings provide additional evidence supporting hypofractionated RT as a treatment for reduction of tumor burden in aggressive B-cell lymphomas.
Background Radiotherapy (RT) is an effective and available local treatment for patients with refractory or relapsed (R/R) aggressive B-cell lymphomas. However, the value of hypofractionated RT in this setting has not been confirmed. Methods We retrospectively analyzed patients with R/R aggressive B-cell lymphoma who received hypofractionated RT between January 2020 and August 2022 at a single institution. The objective response rate (ORR), overall survival (OS), progression-free survival (PFS) and acute side effects were analyzed. Results A total of 30 patients were included. The median dose for residual disease was 36 Gy, at a dose per fraction of 2.3–5 Gy. After RT, the ORR and complete response (CR) rates were 90% and 80%, respectively. With a median follow-up of 10 months (range, 2–27 months), 10 patients (33.3%) experienced disease progression and three died. The 1-year OS and PFS rates for all patients were 81.8% and 66.3%, respectively. The majority (8/10) of post-RT progressions involved out-of-field relapses. Patients with relapsed diseases, no response to systemic therapy, multiple lesions at the time of RT, and no response to RT were associated with out-of-field relapses. PFS was associated with response to RT ( P = 0.001) and numbers of residual sites ( P < 0.001). No serious non-hematological adverse effects (≥ grade 3) associated with RT were reported. Conclusion These data suggest that hypofractionated RT was effective and tolerable for patients with R/R aggressive B-cell lymphoma, especially for those that exhibited localized residual disease. Keywords
Abbreviations NHL: Non-Hodgkin lymphoma; R-CHOP: Rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone; R/R: Refractory or relapsed; ASCT: Autologous stem-cell transplantation; CAR: Chimeric antigen receptor; RT: Radiotherapy; PET: Positron emission tomography; CNS: Central nervous system; CR: Complete response; EPOCH-R: Etoposide, prednisone, vincristine, cyclophosphamide, doxorubicin, rituximab; ILROG: International Lymphoma Radiation Oncology Group; ISRT: Involved-site radiation therapy; MRI: Magnetic resonance imaging; GTV: Gross tumor volume; CTV: Clinical target volume; PTV: Planning target volume; OAR: Organs at risk; PR: Partial response; PFS: Progression-free survival; OS: Overall survival; DLBCL: Diffuse large B-cell lymphoma; PMBCL: Primary mediastinal large B-cell lymphoma; PD: Progressive disease; IMRT: Intensity-modulated radiation therapy; VMAT: Volumetric-modulated arc therapy; ORR: Objective response rate; Dmean: Median mean dose; Dmax: Median maximal dose; V20: Volume irradiated by 20 Gy or more. Acknowledgements Not applicable. Authors’ contributions Conception and design: Y.Y and T.B.L. Financial support: Y.Y, T.B.L, H.Y.F, and B.H.X. Administrative support: Y.Y and T.B.L. Provision of study material or patients: All authors. Collection and assembly of data: C.H, T.L.T, G.Q.S, S.Q.L, J.H.C, H.Y.F, T.B.L and Y.Y. Data analysis and interpretation: C.H, J.H.C, T.B.L and Y.Y. Manuscript writing: All authors. Final approval of manuscript: All authors. Accountable for all aspects of the work: All authors. Funding This work was sponsored by the Major Scientific Research Program for Young and Middle-aged Health Professionals of Fujian Province, China [grant number 2022ZQNZD002], the National Natural Science Foundation of China [grant number 82274268], the Fujian Key Laboratory of Intelligent Imaging and Precision Radiotherapy for Tumors (Fujian Medical University) and the Clinical Research Center for Radiology and Radiotherapy of Fujian Province (Digestive, Hematological and Breast Malignancies). The funding sources had no influence on the design, performance, or reporting of this study. Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate All aspects of this study were reviewed and approved by the institutional review board of Fujian Medical University Union Hospital (2022WSJK019), which waived the requirement for signed informed consent because of the retrospective nature of the study. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:48
BMC Cancer. 2024 Jan 13; 24:72
oa_package/8d/1b/PMC10788030.tar.gz
PMC10788031
38218834
Introduction Bergenin ( 1 ) is a C-glucoside derived from 4-O-methylgallic acid (Fig. 1 ) that occurs in several plants and has various biological activities, such as anti-inflammatory [ 1 ], in vivo antinociceptive [ 2 , 3 ], cholinesterase inhibition [ 4 ], and serum urate reduction in hyperuricemia [ 5 ], among others. A recent review summarizes all the pharmacological and biological activities already described for this compound [ 6 ]. This compound is therefore remarkable for its in vitro and in vivo activities. Recently, in vivo pretreatment of rats with this compound dose-dependently relieved scopolamine-induced amnesia. It also significantly ameliorated streptozotocin (STZ)-induced behavioral deficits and inhibited acetyl- and butyrylcholinesterase activities, in parallel with a dose-dependent increase in reduced glutathione levels, indicating the preventive and ameliorative potential of bergenin in the management of Alzheimer’s disease [ 7 ]. However, further biological tests require obtaining it in good yields. Its synthesis is possible, but the yield is lower than that of isolation from natural sources [ 8 ]. Nevertheless, this compound is distributed in various species of different plant families and is usually isolated in small amounts, with few exceptions such as the barks of the Amazonian yellow uxi ( Endopleura uchi , Humiriaceae), Cenostigma macrophyllum (Fabaceae), and Saxifraga atrata (Saxifragaceae) [ 3 , 9 , 10 ]. In a previous screening study of crude extracts, the antimicrobial activities of different plant species from Argentina were reported, and the methanolic extract of Peltophorum dubium Spreng. Taub. (Fabaceae) showed inhibitory activity against Staphylococcus aureus [ 11 ]. The extracts of different parts of this plant were purified, and bergenin was isolated in high yields, especially from the roots and barks of P. dubium [ 12 , 13 ]. This plant is a large tree from the Fabaceae family, commonly known as “canafístula” or “angico-vermelho”, which grows in some South American countries, particularly in Brazil’s central and southern regions. This tree is easily adaptable to tropical habitats and has economic and ornamental value. Its wood is used in the civil construction, furniture, and naval industries [ 14 ]. Besides bergenin and its derivatives, species of this genus are known to biosynthesize flavonoids, phenoxychromenes, terrestribisamide, and lignans [ 13 , 15 , 16 ]. Microwave-Assisted Extraction (MAE) is widely used because it is simple, cost-effective, allows fast extractions, and reduces solvent consumption, making the process more environmentally friendly. MAE and ultrasound-assisted extraction (UAE) have attracted significant attention because of the cost of the instruments, especially at laboratory scale. As a consequence, in recent years the use of MAE for extracting phenolic compounds from plants and foods has increased [ 17 ]. Molecularly imprinted polymers (MIP) are another helpful tool that has been employed in various fields, such as membrane separations [ 18 ], dye removal in aqueous media [ 19 ], chromatographic separation of important natural compounds like camptothecin [ 20 ], and extraction of metabolites from biological fluids [ 21 ]. As part of our ongoing investigations on the bioactivities of natural product derivatives, we present new methods for extracting and isolating bergenin ( 1 ) from the roots and barks of P. dubium with high yields. 
We used MIP to separate bergenin from the extracts and applied a two-level experimental design to optimize the MAE of bergenin. We also performed a dendrochronological analysis of P. dubium heartwood by HPLC/DAD to investigate the correlation between the presence of bergenin and other phenolic compounds and the growth of this tree.
Results and discussion The MeOH extract of the roots, obtained at room temperature, yielded 2.32% (28.63 g) from 1.23 kg of dried material. The CHCl 3 -soluble fraction of this extract (8.23 g), after conventional chromatographic techniques and recrystallization, furnished 1.04 g of pure bergenin (3.62% of the MeOH extract and 8.39 × 10 −2 % of the dried roots). This compound was identified by UV and NMR analysis, including Heteronuclear Single Quantum Coherence (HSQC) and Heteronuclear Multiple Bond Correlation (HMBC) experiments, included in Additional file 1 (supplementary information). The data obtained were also compared with the literature [ 3 ]. The procedures for extracting and purifying this compound by chromatographic methods are time-consuming, involve several laboratory steps, and are expensive. Thus, an MAE method for extraction and the use of MIP to reduce the purification steps were proposed. These procedures were monitored by HPLC analysis, and the methodology was validated for bergenin and its precursor, gallic acid. The parameters (Table 1 ) show that the values are within the ranges recommended by the literature. The limits of detection and quantification (LoDs and LoQs) suggest that the method has good sensitivity for detecting these two phenolic compounds (Table 2 ). For the robustness studies, the HPLC flow rate and temperature were varied by 10% to observe whether the method resists minor, deliberate variations in the analytical parameters. Robustness was evaluated by checking the concentrations determined for the standards. No change in the peaks was observed, demonstrating that the method is robust for determining these compounds. Three different solvent systems (MeOH, EtOH, and EtOH:H 2 O) were evaluated in the optimization of bergenin extraction from the roots of P. dubium by MAE in order to accelerate the extraction. The other two variable factors employed in the experimental design were extraction temperature and time. The experimental domain, with the coded and absolute values of the two factors of the two-level design, is given in Tables 2 and 3; the response was measured as the HPLC peak area and, consequently, the concentration of bergenin. The Pareto charts (Fig. 2 ) were obtained from the analytical responses (peak areas and yield of bergenin relative to the extracts). They show that none of the factors significantly influenced the analytical response and, consequently, the bergenin extraction process. However, temperature and time had positive effects, implying that the response increased as both factors increased. Surprisingly, when pure EtOH was employed as the extraction solvent, the response was poorer than with MeOH or the hydroethanolic solution, and bergenin was not detected in the extracts. Since none of the studied factors proved to be significant, the central-point values (115 °C and 10 min) were used as the optimal conditions for the subsequent extractions. In terms of the amount of root, the yield of bergenin was higher for MAE than for the conventional chromatographic method: the latter yielded 8.39 × 10 −2 % of bergenin, while MAE in MeOH yielded 0.45% (using the central points and the averages of the root quantities, extracts, and bergenin yields determined by HPLC). Moreover, MAE required a shorter extraction time and a minimal amount of solvent. 
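The effect estimates that underlie such Pareto charts can be computed directly from the coded design matrix. The Python sketch below does this for a hypothetical two-factor, two-level design in temperature and time; the solvent factor is omitted for brevity, the peak areas shown are illustrative only (not the values reported in Tables 2 and 3), and the original analysis was carried out in Statistica 7.0.

import numpy as np

# Coded 2x2 factorial runs for two of the MAE factors (temperature, time);
# the responses are hypothetical HPLC peak areas.
design = np.array([
    [-1, -1],
    [+1, -1],
    [-1, +1],
    [+1, +1],
])
response = np.array([1.10e6, 1.35e6, 1.28e6, 1.60e6])

# Main effect of a factor = mean response at its high level minus mean at its low level
for name, col in zip(["temperature", "time"], design.T):
    effect = response[col == +1].mean() - response[col == -1].mean()
    print(f"main effect of {name}: {effect:.3g}")

# The two-factor interaction uses the element-wise product of the coded columns
inter = design[:, 0] * design[:, 1]
print(f"temperature x time interaction: {response[inter == +1].mean() - response[inter == -1].mean():.3g}")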
To develop a method for the isolation and purification of bergenin from the crude extract, a MIP based on methacrylic acid and ethylene glycol dimethacrylate, using bergenin as the template molecule, was prepared and the extraction evaluated. The prepared MIP and NIP were characterized by Attenuated Total Reflectance Fourier Transform Infrared spectroscopy (ATR-FTIR, Fig. 3 ) and display similar characteristics. The main IR bands that characterize the imprinted (MIP) and non-imprinted (NIP) polymers can be observed, indicating that the addition of the template molecule did not affect the main structure of the polymer. They show the stretching bands of the OH, C=O and COO groups of the carboxylic acids/esters (ν 3430, 1720, and 1253 cm −1 ), the sp3 C–H of methylene and methyl groups (ν 2980–2880 cm −1 ), and the C=C vinyl group (ν 1637 cm −1 ), all indicative of the polymerization. The polymer images obtained by Scanning Electron Microscopy (SEM) show the influence of bergenin as a template molecule on the morphology of the polymers. Figure 4 compares the surfaces of MIP and NIP, revealing that the NIP has a more compact and smoother appearance than the MIP [ 22 ]. SEM reveals that, unlike the NIP, the MIP has a rougher surface with a granular and porous morphology. This feature arises because imprinted polymers have larger surface areas than non-imprinted ones. This structural difference suggests that there are more binding sites, better distributed throughout the MIP surface [ 23 ]. However, the presence of irregular particles, although not considered a problem when the polymer is used in Solid Phase Extraction (SPE), makes its use as a solid support in chromatographic columns unviable, because the irregular particles do not pack well and create voids in the column [ 24 ]. Bergenin adsorption experiments comparing MIP and NIP, followed by molecularly imprinted solid-phase extraction (MISPE), were monitored by HPLC analysis against the pure standard. The variation of the analyte content in the different MeOH solutions prepared permitted verification of the adsorption of bergenin on both polymers. The amount of bergenin adsorbed by the NIP and MIP polymers was estimated (“ Experimental ” section) and is presented in Table 4 . The adsorption of bergenin on the imprinted and non-imprinted polymers differed, with the analyte demonstrating a clear preference for the MIP, as evidenced by the higher B values in all analyzed intervals. This characteristic can be observed in the adsorption isotherms obtained by HPLC (Fig. 5 ). The isotherms exhibited an increasing linear tendency, without reaching a saturation region, in which specific and non-specific binding sites would be fully occupied and the concentration of bergenin bound to the polymer would remain constant [ 25 ]. The separation of bergenin from the extract solution (1 mg mL −1 ) on the MIP and NIP cartridges permitted evaluation of the MIP’s selectivity. Compared with the standard chromatogram, HPLC quantification of the eluates of the two polymers in triplicate showed that the MIP had higher selectivity for bergenin than the NIP. Moreover, although the chromatogram indicated some impurities in the eluate from the MIP, it presented fewer interferences, thus facilitating the purification of bergenin (Fig. 6 ). Concerning the dendrochronological study, and based on the validated methodology, both gallic acid and bergenin were detected and quantified (Table 5 ) in five growth rings of the heartwood (TPD1–TPD5) of an approximately 31-year-old tree, as well as in the phelloderm (TPD6) and barks (TPD7). 
The results indicate that bergenin was present at higher concentrations in the heartwood of the 11th–14th growth years, and that its content diminished from the heartwood to the barks. Besides, unlike in the roots, gallic acid, the biosynthetic precursor of bergenin, was not present in detectable quantities in most of the growth periods. The observed variation in a specific metabolite could contribute to understanding how trees respond to environmental factors such as climate, air pollution, nutrient availability, and water stress. To date, there are few studies of chemical variation in dendrochronological growth-ring analysis, which can correlate growth-ring patterns with changes in the chemical composition of trees and investigate how these factors affect their development over time. For instance, the highest content of copaiba oil from Copaifera multijuga is found in trees older than 50 years and is related to the diameter at breast height (DBH), another common technique for measuring tree growth [ 26 ].
Conclusions The validated method proved reliable, accurate, and suitable for quantifying bergenin in MeOH extracts of P. dubium . It also confirmed that the adsorption of the target compound differed between imprinted and non-imprinted polymers, with the analyte showing a clear preference for MIP. However, further tests are needed to compare different monomers, adsorption amounts, and solvents. MAE using MeOH yielded higher amounts of bergenin, and temperature and time had a positive effect, meaning that the response increased with both factors. Lastly, this study suggested that bergenin was more concentrated in the heartwood of the 11–14th growth year, and its presence decreased from heartwood to barks.
This study describes methodologies for extracting and isolating bergenin, a C-glucoside of 4-O-methylgallic acid found in some plants that presents various in vitro and in vivo biological activities. Bergenin was previously obtained from Peltophorum dubium (Fabaceae) roots with a good yield. Conventional chromatographic procedures applied to the CHCl 3 -soluble fraction of the MeOH extract gave 3.62% of this glucoside. An HPLC/DAD method was also developed and validated for the quantification of bergenin and its precursor, gallic acid. Microwave extractions with different solvents were tested to optimize the extraction of bergenin, varying the temperature and time. MAE (Microwave-Assisted Extraction) was more efficient than conventional extraction procedures, giving a higher yield of bergenin per root mass (0.45% vs. 0.0839%). A molecularly imprinted polymer (MIP) and a non-imprinted polymer (NIP), based on bergenin as the template molecule, methacrylic acid, and ethylene glycol dimethacrylate, were synthesized and characterized by FTIR and SEM (Scanning Electron Microscopy). Bergenin adsorption experiments using MIP and NIP, followed by molecularly imprinted solid-phase extraction (MISPE), showed that the MIP had a higher selectivity for bergenin than the NIP. A dendrochronological study using the proposed method for the detection and quantification of gallic acid and bergenin in five growth rings of the heartwood of a 31-year-old P. dubium tree and in the phelloderm and barks indicated that bergenin was more abundant in the 11th–14th growth rings of the heartwood and decreased from the heartwood to the barks. Supplementary Information The online version contains supplementary material available at 10.1186/s13065-024-01112-7. Keywords
Experimental Instruments and software The NMR spectra were recorded on a Bruker Avance III 500 (11.5 T) instrument at LabRMN (Universidade Federal de Goiás). A Shimadzu SPD-M20A HPLC/DAD system was used for the chromatographic analysis. The FT-IR spectra were obtained on a Perkin Frontier instrument in ATR mode. The SEM images were acquired on a Hitachi S-3400 N instrument operating at 5.0 kV (Centro Interdisciplinar de Energia e Ambiente-CIENAM/UFBA). The MAE procedures were performed using a CEM Discover®-SP W/Activent instrument (SN: DC6562) at a frequency range of 50–60 Hz, using 10 mL Pyrex pressure vials for closed-vessel reactions, with the power adjusted automatically to reach and maintain the set temperature specified in each case, IR temperature control, medium stirring speed with cylindrical stir bars (10 × 3 mm), and a default ramp time of 10 min. Plant samples Peltophorum dubium roots and heartwood were collected at the Ondina Campus of Universidade Federal da Bahia in Salvador, Bahia (13° 0′ 22.584′′ S 38° 30′ 35.918′′ W). The identification of the species was provided by Prof. Maria L. S. Guedes, and a voucher is deposited in the Herbarium “Alexandre Leal Costa” of the Institute of Biology under the number #122228 (SISGEN register # AA133B8). Materials The analyses by thin layer chromatography (TLC) were carried out using silica gel (SiO2) plates supported on aluminum foil (silica gel 60 F254 sheet, 0.2 mm thick, 2.5 × 7.5 cm, Riedel-deHäen® or Whatman). The TLC plates were exposed to UV radiation in a Spectroline Model CM-10 cabinet (lamps of 254 and 365 nm). In column chromatography (CC), Acros® silica gel 60 (63–200 or 40–63 μm) was used as the stationary phase. Solvents were concentrated on IKA® RV10 Digital (40–50 °C, 100–120 rpm) and Buchi Rotavapor RII (50 °C, minimum pressure 25 mbar) rotary evaporators. The solvents (MeOH, CHCl 3 , CH 2 Cl 2 , hexane, EtOH, ACN, DMSO and EtOAc) and reagents (methacrylic acid, ethylene glycol dimethacrylate 98%, and AIBN) used in all procedures were of analytical or HPLC grade (Baker, Vetec, Synth or QHEMIS). Deuterated methanol (methanol-d 4 ) from Isotech was employed as the NMR solvent. The plant material was pulverized in a Wiley Model 4 cutting mill. Isolation and purification of bergenin from the roots The roots (1234.71 g) were dried in a forced-circulation oven (40 °C) for 72 h, powdered in a mill, and submitted to maceration in 4.0 L of MeOH (48 h) twice. After vacuum evaporation of the solvent, the MeOH extract (28.63 g) was partitioned between MeOH:H 2 O (8:2) and hexane for defatting, and then sequentially with CHCl 3 (8.23 g) and EtOAc (7.32 g). The CHCl 3 -soluble fraction was submitted to column chromatography (CC) on silica gel 60 and eluted with CH 2 Cl 2 :MeOH (8:2). The fifth fraction (50 mL) furnished pure bergenin (1.037 g, 3.62% yield), which was used as the standard. Multivariate optimization of microwave-assisted extraction (MAE) of bergenin All assays were performed with 0.020 g of plant material from P. dubium . The MAE experiments employed MeOH, EtOH:H 2 O (6:4) and pure H 2 O and were carried out under standardized conditions, with a constant equipment power of 200 W and a fixed solvent volume of 3 mL. For the multivariate optimization, extraction temperature and time were the variable parameters selected. Table 6 details the factors with their low (−), central (0) and high (+) levels. 
The response was the area of the bergenin peak in the chromatogram, obtained by injecting the samples into the HPLC, and the data were submitted to statistical analysis using the software Statistica 7.0. HPLC analysis The HPLC/DAD analysis of the eluates from the MIP and NIP experiments employed an XBridge BEH RP-C18 column (100 mm L × 3 mm I.D., 2.5 μm; Waters), MeOH:H 2 O (7:3) as the mobile phase, and a flow rate of 0.5 mL/min. The analyses were carried out in isocratic mode from 0 to 10 min and as a gradient to pure MeOH from 10 to 13 min, totaling 20 min. The oven temperature was set at 40 ± 1 °C, the injection volume was 5 μL, and the DAD detector was set at λ 254 nm. For the dendrochronological and MAE HPLC analyses, a Shimadzu SPD-M20A instrument and a Shim-pack VP-C8 column (150 mm L × 2 mm I.D., 5 μm; Shimadzu) were employed. A H 2 O:MeOH (85:15) mixture was used as the eluent at a flow rate of 0.25 mL/min (0–8 min), followed by a gradient to pure MeOH from 8 to 15 min, in a total run of 20 min at an oven temperature of 40 °C. Gallic acid and bergenin were identified in the extracts by comparing their retention times and UV spectra with those of the pure standards. Validation parameters The analytical method was validated for each standard in terms of selectivity, linearity, precision, accuracy, limit of detection and limit of quantification, according to previously published procedures [ 27 ]. Selectivity was determined by comparing the peaks of the standards and the analyzed samples, considering the retention times and the UV spectra observed by DAD at three or more different points of each peak (beginning, middle, and end). Linearity was assessed from calibration curves using the correlation coefficient (R 2 ). Calibration curves were obtained by triplicate injections ( n = 3) of solutions containing six different concentrations of the external standard (5, 10, 20, 30, 40 and 50 μg/mL). The average peak areas were correlated with each concentration, and the curve was fitted using the least-squares method. Precision was determined by triplicate injection of three solutions of the standards. This parameter was expressed as the relative standard deviation according to the equation RSD(%) = SD/AC ∗ 100, where SD is the standard deviation and AC is the average concentration determined. Accuracy was verified by the recovery factor: samples containing no analytes were spiked with standard solutions of low, medium, and high concentrations, subjected to the whole extraction process, and injected into the HPLC. Accuracy was determined by the equation Rec(%) = [obtained concentration]/[absolute concentration] ∗ 100. The detection limit (LoD) and quantification limit (LoQ) were estimated from the ratio of the standard deviation to the slope of the calibration curve, according to the equations LoD = SDa ∗ 3/S and LoQ = SDa ∗ 10/S, where SDa is the standard deviation obtained from the calibration curve and S is the curve’s slope. Finally, the robustness assessment involved deliberate changes only in the mobile-phase flow rate and temperature. It is noteworthy that this evaluation was simplified, without a more detailed statistical treatment: the chromatograms obtained were compared in terms of the corresponding Rt values and UV spectra. 
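To make the linearity and sensitivity calculations above concrete, the following Python sketch fits a least-squares calibration line and applies LoD = SDa ∗ 3/S and LoQ = SDa ∗ 10/S. The concentrations and peak areas are hypothetical, and the residual standard deviation of the fit is used here as one possible estimate of SDa.

import numpy as np

# Hypothetical calibration data (concentration in ug/mL vs. mean peak area);
# the real curves used six levels (5-50 ug/mL) injected in triplicate.
conc = np.array([5, 10, 20, 30, 40, 50], dtype=float)
area = np.array([1.20e5, 2.45e5, 4.90e5, 7.30e5, 9.70e5, 1.21e6])

# Least-squares line: area = S * conc + b
S, b = np.polyfit(conc, area, 1)
pred = S * conc + b

# Linearity (R^2) and residual standard deviation of the fit
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
sd_a = np.sqrt(ss_res / (len(conc) - 2))

# Sensitivity estimates following LoD = SDa*3/S and LoQ = SDa*10/S
lod = 3 * sd_a / S
loq = 10 * sd_a / S
print(f"slope = {S:.3g}, R2 = {r2:.4f}, LoD = {lod:.2f} ug/mL, LoQ = {loq:.2f} ug/mL")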
Synthesis of the MIP and NIP The MIPs were synthesized by bulk polymerization, following a procedure adapted from a previously reported method [ 28 ]. The MIP template molecule (bergenin; 0.8 mmol, 0.263 g) was dissolved in 10 mL of DMSO/acetonitrile (1:1). Methacrylic acid (4 mmol, 0.344 g), the cross-linking reagent ethylene glycol dimethacrylate (20 mmol, 3.96 g), and AIBN (0.131 g) as the radical initiator were then added to the solution. The reaction was kept at 60 °C, under an inert nitrogen atmosphere and stirring, for 24 h. Extraction of bergenin by MIP/SPE and analysis of the adsorption Empty polypropylene SPE cartridges (6 cm × 1 cm) were filled with 100 mg of MIP or NIP between two frits at the top and the bottom of the polymer layer. The cartridges were conditioned with 3 mL of MeOH followed by deionized H 2 O, using a manifold to elute the solvents. Then, a 0.5 mg/mL solution of the P. dubium root MeOH extract in aqueous MeOH was loaded onto the cartridge. The analyte was eluted with 2 mL of MeOH, and the bergenin content was analyzed by HPLC/DAD. To develop the adsorption isotherms, 10 mg samples of MIP or NIP were agitated for 1 h with a magnetic stirrer in 2 mL of bergenin solutions of different concentrations (10, 20, 30, 40, 50, 75 and 100 μg/mL). All solutions were prepared using methanol as the solvent. The amount of bergenin adsorbed by the MIP and NIP polymers was then estimated using the equation B = (I − F) × V/m polym (illustrated in the sketch at the end of this section), where B is the adsorbed bergenin (μg/g), I is the initial concentration of the solution (μg/mL), F is the concentration of bergenin in solution (μg/mL) after the adsorption procedure, V is the volume of the bergenin solution used (mL), and m polym is the mass of the MIP/NIP (g). Dendrochronological analysis of the tree and sampling of the heartwood A sample of the trunk of P. dubium was collected at a height of 20 cm from the base, with a diameter of 32.4 cm and a circumference of 100.5 cm. The trunk section was sanded to improve the visibility of the growth rings. The dendrochronological analysis was followed by sampling at seven points, with the first five points ranging from the nucleus (center, indicating the year of germination) to the phloem region at intervals of 4.5 ± 0.1 cm, and the last two points in the phelloderm and bark (Fig. 7 ). The samples were macerated with MeOH for three days and dried under reduced pressure. The bergenin content was quantified by HPLC according to the method previously described.
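As a minimal sketch of the batch-adsorption calculation described in the MIP/SPE subsection above, the following Python code applies B = (I − F) × V/m polym to hypothetical data; the assumed 30% uptake is illustrative only, and the real final concentrations would come from the HPLC quantification.

import numpy as np

def adsorbed_amount(initial_ug_ml, final_ug_ml, volume_ml, polymer_mass_g):
    """B = (I - F) * V / m_polym, giving ug of bergenin per g of polymer."""
    return (initial_ug_ml - final_ug_ml) * volume_ml / polymer_mass_g

# Hypothetical batch-adsorption data: 2 mL solutions, 10 mg of polymer;
# the final concentrations (F) would come from the HPLC quantification.
initial = np.array([10, 20, 30, 40, 50, 75, 100], dtype=float)  # I, in ug/mL
final = initial * 0.70                                          # assumed 30% uptake, illustrative only
B = adsorbed_amount(initial, final, volume_ml=2.0, polymer_mass_g=0.010)
for i_conc, b in zip(initial, B):
    print(f"I = {i_conc:5.1f} ug/mL -> B = {b:7.1f} ug/g")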
Acknowledgements The authors are grateful for the scholarships from CNPq—Conselho Nacional de Desenvolvimento Científico e Tecnológico (# 302848/2022-3), to Dr. Luciano Liao of Universidade Federal de Goiás for the NMR spectra, and to CIENAM-Centro de Energia e Ambiente, which kindly permitted the use of the SEM. Author contributions OCSN: conceptualization, validation, writing—review, extraction and isolation processes, and data analysis. CSAF: conceptualization, validation, experimental design, and statistical analysis. LDOA: extraction, synthesis, and analysis of polymers. MBDS: SEM registration, data analysis and writing—review. SC: conceptualization, microwave extractions, writing—review and editing, supervision. JMD: conceptualization, validation, writing—review and editing, data analysis, and supervision. All authors read and approved the final manuscript. Funding CNPq—Conselho Nacional de Desenvolvimento Científico e Tecnológico (# 302848/2022-3). Data availability All data generated, discussed, or analyzed during the development of the present study are included in this current article or in Additional files. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:48
BMC Chem. 2024 Jan 13; 18(1):13
oa_package/65/53/PMC10788031.tar.gz
PMC10788032
38218915
Correction: Harm Reduction Journal (2023) 20:116 10.1186/s12954-023-00844-4 Following publication of the original article [ 1 ], the reference 31 has been added to the reference list and the same has been shown below: 31. Treloar, C., Beadman, K., Beadman, M. et al. Evaluating a complex health promotion program to reduce hepatitis C among Aboriginal and Torres Strait Islander peoples in New South Wales, Australia: the Deadly Liver Mob. Harm Reduct J 20, 153 (2023). 10.1186/s12954-023-00885-9 The original article has been corrected.
CC BY
no
2024-01-15 23:43:48
Harm Reduct J. 2024 Jan 13; 21:8
oa_package/61/f0/PMC10788032.tar.gz
PMC10788033
38218886
Introduction Pain is a major concern for many people living with HIV (PLWH). Any major or persistent pain may be associated with emotional distress and functional impairment among PLWH [ 21 , 67 ]. Much of the existing work on major or persistent pain in PLWH centres on chronic pain (i.e. pain that lasts more than 3 months [ 72 ]), which has a prevalence of 54–83% among PLWH in North America [ 59 ] compared to 21% in the general Canadian population [ 69 ]. Common etiologies include HIV-related peripheral neuropathy; central sensitization syndromes, potentially mediated by HIV-associated inflammation of both nervous and peripheral tissues; antiretroviral side effects; and chronic musculoskeletal disorders (e.g. osteoarthritis) [ 29 , 40 , 42 ]. Chronic pain in PLWH is associated with adverse outcomes along the HIV care continuum, including sub-optimal antiretroviral therapy (ART) adherence [ 42 , 67 ], increased disability, and reduced quality of life [ 30 , 60 ]. Its impacts among PLWH may increase as HIV continues to evolve worldwide from a terminal condition into a chronic illness requiring long-term symptom management [ 2 ]. Pain is a multifactorial experience that benefits from a multidisciplinary, biopsychosocial treatment model [ 35 ]. This is particularly relevant in the context of pain experienced by PLWH. A 2021 systematic review remarked on the low efficacy of analgesic medications in randomized control trials on HIV-related pain [ 61 ], with two studies reporting 50–65% symptom relief on analgesic therapy [ 47 , 49 ]. A comprehensive understanding of the key psychosocial and sociostructural factors contributing to pain among PLWH may therefore facilitate the development of more effective interventions. Previously identified psychological correlates of pain in PLWH include anxiety, depression, post-traumatic stress, and substance use disorder [ 20 , 30 , 53 , 67 ]. Sociostructural correlates of pain in PLWH are less well characterized despite evidence that social interactions modulate the experience of pain [ 34 ] and structural inequities among PLWH limit access to care [ 19 ]. Women living with HIV (WLWH) are twice as likely to report severe pain compared to men with HIV [ 30 ]. This disparity has been hypothesized to arise from a combination of biological factors, such as sex differences in pain modulation and pharmacological response [ 4 ], as well as sociostructural factors, such as increased gender-based violence, intersectional discrimination, and other barriers to care [ 25 ], with WLWH twice as likely to have their pain undertreated compared to men [ 7 ]. The potential significance of sociostructural drivers in pain among WLWH in Canada, where WLWH represent more than one-quarter of PLWH [ 12 ], is corroborated by findings that Canadian WLWH experience poorer quality of care [ 13 ] and greater HIV-associated reductions in life expectancy [ 27 ] than their male counterparts. These factors highlight the importance of a gendered analysis to understanding women’s needs for pain treatment as well as the impacts of pain on women’s health and well-being. Despite the high prevalence and disease burden of pain among WLWH, few studies have examined the specific correlates or outcomes of pain within this population. A 2018 systematic review of psychosocial factors associated with persistent pain in HIV noted that only 5 of 46 studies recruited predominantly WLWH [ 67 ], of which 2 studies examined social correlates of pain and 4 examined functional outcomes. 
Furthermore, it is unclear whether these studies included transgender (trans) WLWH or non-binary persons, reflecting the frequent erasure of gender minority communities from health research despite the unique inequities affecting these populations [ 54 ]. We have also been unable to identify studies examining the relationship between interpersonal violence and pain in HIV although there is a documented association between violence and pain in the general population [ 78 ] and a high prevalence of violence among WLWH [ 11 , 19 ]. To better characterize pain in WLWH, our objectives were to examine: 1) the prevalence and correlates of self-reported major or persistent pain, herein referred to as “pain”, and 2) the association between pain and quality of life among WLWH in Metro Vancouver, Canada.
Methods Study design and sampling Data for this study were drawn over five years (September 2014–August 2019) from the Sexual Health and HIV/AIDS Women’s Longitudinal Needs Assessment (SHAWNA), an ongoing community-based, longitudinal open enrolment cohort study. SHAWNA was launched in 2014 to investigate the sociostructural factors mediating access to care for cisgender (cis) and trans (inclusive of transgender, transsexual, other transfeminine identity) WLWH. The study was developed through extensive community consultation with WLWH, HIV care providers, and policy experts. SHAWNA represents a partnership of community and HIV organizations and is informed by two advisory boards: a Community Stakeholder Advisory Board, and a Positive Women’s Advisory Board, comprised of WLWH who meet every two to three months. Eligibility criteria included: self-identifying as a cis or trans woman, being 14 years of age or older, having a HIV diagnosis as established by confirmatory testing, and living and/or accessing HIV/AIDS services in Metro Vancouver. Participants were recruited by self-referral; referrals from HIV care providers, peer navigators, and HIV/AIDS advocacy groups (e.g. Canadian Aboriginal AIDS Network); and clinical outreach by partner organizations such as Oak Tree Clinic, the primary referral centre for WLWH in British Columbia. Participants provided informed consent and completed a questionnaire at baseline and every six months on a range of sociostructural (e.g. trauma, violence, stigma, income, housing security), health (e.g. symptoms, treatments, access to care), and sociodemographic (e.g. age, race, sexual identity) variables. Questionnaires were administered by trained community interviewers and followed by a visit with a sexual health research nurse who offered HIV viral load/CD4 count monitoring, testing for sexually transmitted infections and hepatitis C, and referrals to health and social services. Participants received $50 CAD for each visit as compensation for their time and expertise. All tests and referrals were voluntary and did not affect research study participation or compensation. Ethics approval for this study was granted by the Providence Health/University of British Columbia Research Ethics Board and BC Women’s Hospital. Study measures Primary variable of interest Participants reported whether they experienced pain over the last 6 months at each study visit (time updated) by responding to the following question, modified from the Brief Pain Inventory Short Form (BPI-SF) [ 48 ], “Throughout our lives, most of us have had pain from time to time. In the last 6 months, have you had any major or persistent pain (other than minor headaches, sprains, etc.)?”. The BPI-SF has been widely used to characterize pain severity and interference in people with HIV [ 30 , 49 , 60 , 68 ]. Subsequently, they were asked, “Has this pain been diagnosed by a doctor?” and “In the last 6 months, have you taken medication for this pain? Was this prescribed medication, over the counter (OTC) or illicit drugs?”. Explanatory variables and potential confounders Potential explanatory variables (i.e. correlates) of pain were selected based on a literature review. 
Sociodemographic factors included a variable measuring sexual orientation drawn from the question, “In the last 6 months, which of the following describes your sexual orientation (check all that apply)” and defined as sexual minority at any study visit (lesbian, gay, bisexual, queer, asexual, and/or Two-Spirit) versus only heterosexual at all study visits, as well as a variable measuring gender identity drawn from the question, “In the last 6 months, which of the following best describe(s) your gender identity (check all that apply)” and defined as gender minority at any study visit (trans [transgender, transsexual, other transfeminine identity], non-binary [non-binary, genderqueer], and/or Two-Spirit) versus only cisgender at all visits. Two-Spirit is an identity among people Indigenous to Turtle Island who identify as having both a masculine and a feminine spirit, and may be used to describe any or all of sexual, gender, and/or spiritual identity depending on the individual and context [ 62 ]. Participants had the option to provide more than one response to questions on sexual orientation and gender identity. Based on evidence that minority stress processes affect all gender minority people relative to cis people [ 70 ] and all sexual minority people relative to heterosexual people [ 51 ], for the purposes of analyses, we combined all sexual minority identities into one variable and all gender minority identities into one variable. Additional sociodemographic variables included race (Indigenous [First Nations, Métis, or Inuit], other racialized persons [African/Caribbean/Black, Latin American, East/South/Southeast Asian, Middle Eastern, or other visible minority], White). The term Indigenous is used throughout while recognizing great diversity across and within languages, cultures, nations, and lands. While descriptive data were disaggregated, given the small sample size of Black participants, comparable to the BC population, Black women and otherwise racialized women were combined in modelling to understand experiences of racism for non-Indigenous racialized persons. Additional variables included age (measured continuously in years); high school graduation at baseline; residence in the Vancouver Downtown Eastside, a highly marginalized community where high rates of poverty, unstable housing, substance use, and survival sex work have contributed to an estimated HIV prevalence of 30% [ 38 ]; homelessness (having no place to sleep for at least 1 night) (last 6 months); food insecurity (responding often true or sometimes true to any item on a modified Cornell-Radimer Hunger Scale [ 31 ] as previously described [ 3 ]) (last 6 months); and housing insecurity (meeting the Canadian Observatory of Homelessness definition [ 22 ] of unsheltered or otherwise unstably housed as previously described [ 79 ]) (last 6 months). A composite food and/or housing insecurity variable (food and housing secure, food or housing insecure, or food and housing insecure) (last 6 months) was also assessed given previous evidence that separate versus concurrent food and housing insecurity may be associated with different sociostructural inequities among Canadian WLWH [ 39 ]. 
Mental health factors included feeling downhearted or blue (drawn from the Medical Outcomes Study SF-36 survey [ 77 ] and defined as a response of all the time, most of the time, or a good bit of the time versus some of the time, a little of the time, or none of the time) (last 4 weeks), depression (receiving diagnosis and/or treatment) (last 6 months), and suicidal ideation (contemplating and/or attempting suicide) (last 6 months). Substance use factors included non-injection opioid use (daily, less than daily [more than once a week, once a week, 1–3 times per month, less than once per month], none) (last 6 months), injection opioid use (daily, less than daily [more than once a week, once a week, 1–3 times per month, less than once per month], none) (last 6 months), cannabis use (daily, less than daily [more than once a week, once a week, 1–3 times per month, less than once per month], none) (last 6 months), and accidental overdose (last 6 months). Our analysis focused on opioids and cannabis versus other criminalized substances as both have analgesic effects and previous work has demonstrated that people who use criminalized drugs in British Columbia may turn to non-prescription opioids and cannabis for pain management [ 14 , 36 ]. Further, people who use injection opioid in particular may face increased stigma from healthcare providers limiting access to pain care [ 75 ]. General health factors included ability to access health services when needed (always or usually versus sometimes, occasionally, or never) (last 6 months) and detectable HIV-1 viral load (any test ≥ 50 copies/ml) (last 6 months). Interpersonal factor s included sexual violence by any perpetrator (last 6 months) and physical violence by any perpetrator (last 6 months). All variables were time updated at each semiannual study visit, except for race and high school graduation. Quality-of-life outcomes Time updated quality-of-life outcome variables were drawn from the Medical Outcomes Study SF-36 survey [ 77 ] and included good self-rated health over the last 6 months (assessed with the question, “In general, how would you rate your health?” and defined as a response of excellent, very good, or good versus fair or poor), interference of health with social activities over the last 4 weeks (assessed with the question, “How much of the time during the past 4 weeks has your physical or emotional health interfered with your social activities?” and defined as a response of all the time, most of the time, or a good bit of the time versus some of the time, a little of the time, or none of the time), and interference of health with general function over the last 4 weeks (defined as answering yes to either of the questions, “During the past 4 weeks, have you accomplished less than you would like as a result of your physical health?” or “During the past 4 weeks, have you accomplished less than you would like as a result of your emotional health?” versus no to both). The decision was made not to administer the entire SF-36 survey due to concerns raised in community consultation that the full validated scale had not been developed for marginalized people and that several items contained language likely to be perceived as discriminatory or exclusive by study participants. Statistical analysis Statistical analysis was performed using SAS software (version 9.4; SAS Institute Inc., Cary, NC). Descriptive statistics (i.e. 
frequency and per cent or median and interquartile range [IQR]) were calculated for all variables at baseline and stratified by pain in the last 6 months. Differences were assessed using Wilcoxon rank-sum tests for continuous variables and Pearson's Chi-square tests (or Fisher’s exact tests where cell counts were small) for categorical variables (Table 1 ). Bivariate and multivariable logistic regression with generalized estimating equations (GEE), which use an exchangeable correlation structure to account for repeated measurements among participants, were performed to identify associations between explanatory variables and pain as the outcome (Table 2 ). The GEE approach uses a complete case analysis to account for missing data, whereby observations with any missing data on a given variable are excluded from the multivariable analysis. An explanatory multivariable model was generated using a manual backward elimination process. Hypothesized explanatory variables with p < 0.10 in bivariate analysis were considered for inclusion in the full multivariable model and assessed for multicollinearity using the variance inflation factor (VIF). Due to concerns about multicollinearity, the individual food insecurity and housing insecurity variables were omitted from the multivariable analysis with only the composite food and housing insecurity variable retained as a potential covariate. The variable with the largest p value of Type-III analysis was removed and the quasi-likelihood under the independence model criterion (QIC) was noted as previously described [ 18 , 57 ]. The final model represented the one with the lowest QIC value, indicating the best model fit. Bivariate and multivariable logistic regression analyses with GEE were also performed to investigate the association between pain and the quality-of-life outcomes (Table 3 ). For each quality-of-life outcome, a confounder model approach was used in which all variables included in the full multivariable explanatory model for pain were considered confounders. As a first step in our confounder model fitting process, we assessed the relationship between all potential confounders described above and each outcome. Variables that were significantly associated with the outcome at a p < 0.10 level were included as potential confounders in the next step of model fitting. Next, for each outcome, the most parsimonious model was determined using the process described by Maldonado and Greenland [ 43 ], in which potential confounders were removed in a stepwise manner, and variables that altered all of the associations of interest by < 5% were systematically removed from the model. The final set of confounders included in the adjusted models are provided in footnotes in Table 3 . The adjusted models used a complete case approach to remove observations with any missing data to ensure the model selection process was performed with nested models using constant sample size. Data are presented as unadjusted odds ratios (ORs) or adjusted odds ratios (aORs) with 95% confidence intervals (CIs). All p values are two-sided.
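For readers who want a concrete sense of this modelling approach, the short Python sketch below fits a GEE logistic regression with an exchangeable working correlation and exponentiates the coefficients to obtain odds ratios. It is a minimal illustration only: the data frame, variable names, and values are hypothetical placeholders rather than SHAWNA data, and the actual analysis was performed in SAS 9.4 as described above.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated long-format data: one row per participant visit (hypothetical data).
# The 'id' column groups the repeated observations contributed by each participant.
rng = np.random.default_rng(42)
n_women, n_visits = 200, 5
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_women), n_visits),
    "age": np.repeat(rng.integers(20, 66, n_women), n_visits).astype(float),
    "depression": rng.integers(0, 2, n_women * n_visits),
    "pain": rng.integers(0, 2, n_women * n_visits),
})

# GEE logistic model with an exchangeable working correlation structure
model = smf.gee(
    "pain ~ age + depression",
    groups="id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()

# Adjusted odds ratios and 95% confidence intervals are the exponentiated coefficients
ci = result.conf_int()
summary = pd.DataFrame({
    "OR": np.exp(result.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
})
print(summary)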
Results Sample characteristics Overall, 335 WLWH in SHAWNA were included in our sample, contributing 1632 observations over 5 years from September 2014 to August 2019. The median number of follow-up visits in our study sample was five (interquartile range: 2, 7), with 2.4% of the sample having 10 visits. At baseline, 48.1% (161/335) of participants reported pain in the last 6 months, with 19.1% (64) reporting undiagnosed pain and 26.9% (90) reporting that they had managed pain with criminalized drugs. Of those who reported pain, 64.0% (103/161) reported good self-rated health, 38.2% (58) reported interference of health with social activities, and 82.2% (125) reported interference of health with general function. Across all study visits, 77.3% (259) of participants reported pain at least once in the last 6 months, with 46.3% (155) experiencing any undiagnosed pain and 53.1% (178) managing pain with criminalized drugs. Table 1 summarizes the characteristics of women in our sample at their baseline interview, stratified by major or persistent pain in the last 6 months. The median age of participants was 45 years (IQR 38–52 years). Capturing fluidity in sexual and gender identity over time, 40.6% (136) reported a sexual minority identity and 10.5% (35) reported a gender minority identity at any study visit, with 6.6% (22) identifying as trans women (including transgender women, transsexual women, and other trans feminine identities) and 2.7% (9) reporting a non-binary identity. Indigenous women comprised 55.5% (186) of the sample and were overrepresented compared to the population of British Columbia (5.9% in 2016, per Statistics Canada). Among Indigenous women, 14.5% (27/186) were Two-Spirit. Overall, 10.2% (34) were otherwise racialized women and 34.3% (115) were white women. Correlates of pain ORs and aORs for bivariate and multivariable logistic regression using GEEs to assess the relationships between explanatory variables (excluding discrimination and HIV stigma measures) and pain in the last 6 months are shown in Table 2. Multivariable logistic regression analysis using GEEs indicated that age (aOR 1.04 [1.03–1.06] per year increase), food and housing insecurity (aOR 1.54[1.08–2.19] versus food and housing secure), depression diagnosis (aOR 1.34[1.03–1.75]), suicidal ideation (aOR 1.71[1.21–2.42]), and non-daily, non-injection opioid use (aOR 1.53[1.07–2.17] versus no non-injection opioid use) were associated with higher odds of pain, while daily non-injection opioid use (aOR 0.46[0.22–0.96] versus no non-injection opioid use) and increased access to health services (aOR 0.63[0.44–0.91]) were associated with lower odds of pain. In bivariate analysis, there was no significant association between detectable viral load, cannabis use, injection opioid use, unintentional overdose, sexual violence, or physical violence and major or persistent pain at p < 0.05, although viral load (p < 0.10), physical violence (p < 0.10), and less than daily cannabis use (p < 0.20) trended towards higher odds of pain. Association between pain and quality-of-life outcomes Table 3 presents ORs and aORs for bivariate and multivariable logistic regression with GEE models for the association between pain and quality-of-life outcomes. 
Pain was associated with lower odds of excellent, very good, or good self-rated health versus fair or poor self-rated health (aOR 0.64[0.48–0.84]), and with increased odds of participants reporting that their health interfered with social activities (aOR 2.21[1.63–2.99]) or general function (aOR 3.24[2.54–4.13]).
Discussion Three-quarters of WLWH in our setting reported pain at ≥ 1 study visit, with half of WLWH reporting undiagnosed pain or pain self-managed with criminalized drugs. Correlates of pain included food and housing insecurity, depression, suicidal ideation, non-daily non-injection opioid use, and difficulty accessing health services. Pain was associated with reduced self-rated health, social participation, and general level of function. These outcomes are consistent with findings that chronic pain increases psychological distress and decreases self-efficacy, resulting in the avoidance of physical, occupational, and social activities [ 37 ]. They add to growing evidence that pain plays a crucial role in health-related quality of life among WLWH [ 58 , 68 ]. The high proportion of participants managing pain with criminalized drugs in our study is concerning, as there is a drug toxicity crisis in British Columbia characterized by contamination of the criminalized drug supply. Unintentional overdose now represents the major driver of mortality in PLWH in the province [ 66 ]. While we did not observe an association between pain and overdose, our data are limited to before 2019, after which the annual rate of drug toxicity deaths in British Columbia increased from 19.4 per 100,000 to 42.7 per 100,000 in 2022 [ 8 ]. Additional investigation is required to determine whether WLWH with pain are currently at risk for overdose in the context of an increasingly contaminated and criminalized drug supply. High-risk opioid use is both a facilitator of pain (e.g. through opioid-induced hyperalgesia or increased tolerance to prescription analgesics) and an outcome (e.g. when opioids are used for symptom management) [ 50 , 74 ]. Chronic pain and opioid use stigma also interact to restrict healthcare access (e.g. when individuals requesting pain treatment are dismissed as “drug-seeking”), and are compounded by colonial violence against Indigenous peoples, racism, and marginalization associated with im/migrant status, sexual orientation, and/or gender identity [ 76 ]. While the use of criminalized drugs for pain management in our cohort is consistent with an association between non-daily, non-injection opioid use and increased odds of pain, daily non-injection opioid use was unexpectedly associated with reduced odds of pain, and no association was observed between injection opioid use and pain. Further work is needed to clarify these relationships. Daily non-injection opioid use may be effective for pain management in this population, which would be consistent with weak evidence that long-term prescription opioid use can provide clinically significant relief for chronic non-cancer pain [ 56 ]. In addition, daily opioid access may require lower levels of disability, allowing for greater access to care. It is also possible that WLWH using non-prescription opioids for pain management prefer to use non-injection routes of administration due to the shorter half-life of intravenous opioids. While less than daily cannabis use trended towards higher odds of pain, a statistically significant association was not observed. Previous work demonstrates that many PLWH in Metro Vancouver may use cannabis for analgesia [ 14 ] and that cannabis is associated with reduced opioid use in people who use drugs (PWUD) with chronic pain [ 36 ]. However, these study cohorts consisted exclusively of PWUD who reported higher rates of cannabis use than our cohort and may have been more reliant on non-prescription drug use for pain management. 
The associations of depression and suicidal ideation with pain are consistent with evidence that pain severity in PLWH is correlated with depressive symptoms [ 73 ]. Like substance use, depression has a bidirectional relationship with pain: depression may result in dysfunctional cognitive appraisals of pain and activate a sensitized stress response that facilitates chronic pain development, while pain itself is a negative affective state that increases the risk for depression [ 41 ]. Indeed, a qualitative study of PLWH and pain suggests that emotional and physical distress may be experienced indistinguishably [ 50 ]. While we conceptualized depression as a correlate of pain, future research could explore the potential role of depression in the other associations explored in this study, for example, as a mediator or moderator between pain and quality of life. Structural conditions played a major role in shaping experiences of pain among WLWH in our study. Half of our cohort reported food and housing insecurity, which was associated with increased odds of pain compared to those who were food and housing secure. This is consistent with findings that half the patients at a Vancouver community-based chronic pain clinic lived below the poverty line [ 44 ]. Chronic pain can precipitate disability, limiting employment and socioeconomic status [ 44 ], while poverty can conversely increase the risk of developing chronic pain through allostatic overload [ 41 ] and may intersect with other facilitators of chronic pain. The associations between pain and poverty, substance use, and depression, as well as the documented interrelationships between these factors [ 16 ], bring into question whether they may be conceptualized as a syndemic among WLWH. A syndemic describes the intersection of social, structural, and health issues that reinforce each other synergistically to increase disease burden, such as the “SAVA syndemic” of Substance Abuse, Violence, and HIV/AIDS among urban-dwelling women in the USA [ 52 ]. To identify high-impact interventional strategies, further work is needed to determine the extent to which poverty, substance use, depression, and chronic pain in WLWH may be mutually or serially causal and/or have interactive effects on functional outcomes. Our results have important implications. The frequent use of criminalized drugs for pain management indicates that many WLWH may have difficulty accessing pain care. A previous examination of barriers to primary care in our study context concluded that equity-oriented approaches may improve access for WLWH [ 19 ]. The EQUIP framework, which operationalizes four dimensions of equity-oriented care (i.e. inequity-responsive care, trauma- and violence-informed care, culturally competent care, and contextually tailored care) [ 10 ], has been integrated into several HIV and primary care clinics in British Columbia [ 9 , 33 ], although more work is required to scale up these services. The use of criminalized drugs for analgesia also highlights the importance of harm reduction in mitigating the risks of opioid use for WLWH. Based on our findings, we echo calls for expanded “safe supply” services to provide pharmaceutical-grade alternatives to toxic street drugs, along with decriminalization to facilitate destigmatization of substance use and remove police-related barriers to healthcare access [ 24 , 28 , 45 , 65 ]. The association between depression and pain in WLWH highlights the importance of dually indicated interventions, including psychotherapy. 
Cognitive behavioural therapy is a first-line treatment for depression and has been associated with improved pain in PLWH [ 17 , 71 ]. As conventional psychotherapy is predicated on Western colonial models of mental health [ 5 ] and two-thirds of our cohort were Indigenous or otherwise racialized, the promotion of Indigenous healing practices (e.g. access to Elders, traditional teachings, and land-based activities [ 63 , 64 ]) and/or culturally adapted psychotherapeutic approaches may also be helpful for WLWH with pain. Unfortunately, low-barrier psychotherapy services are sparse in Metro Vancouver and more public investment is required to improve access. As mental distress is a common response to systemic inequities like poverty, racism, and colonial violence [ 55 ], these services must be situated within a wider framework of structural reform. The importance of structural interventions is emphasized by the relationship between food and housing insecurity and pain in WLWH. Previous work has established that the most persistent barrier to managing chronic illness occurs when individuals do not have their basic needs met [ 6 ]. Income assistance and basic income have both been found to improve food and housing security [ 1 , 23 , 32 ], which may empower WLWH to better manage chronic pain. Housing-specific interventions may take the form of rental assistance, tenant advocacy services, and supportive housing environments that are safe, stable, and affordable. To meet the needs of cis and trans WLWH, it is imperative that supportive housing be low-barrier, family-oriented, integrated with other health and social services, and rooted in principles of trauma-informed care, harm reduction, and gender-responsiveness [ 79 ]. Our study has several limitations. First, participants indicated whether they experienced “major or persistent pain” in the last 6 months, a metric that includes severe acute pain, likely overestimates the prevalence of chronic pain among WLWH, and does not indicate changes in pain over time. It is conversely possible that the 6-month recall period may underestimate the occurrence of chronic pain due to recall bias, although this is less likely as a previous meta-analysis found no significant difference in the prevalence of pain reported by PLWH over 3-month to 6-month recall periods [ 59 ]. Ultimately, the prevalence of pain in our cohort is within the range reported for chronic pain by previous ART-era studies of PLWH and WLWH [ 59 ]. Second, stigmatized conditions (e.g. suicidal ideation) may have been under-reported by participants. However, questionnaires were designed with community consultation and administered by trained peer interviewers to optimize participant safety, allowing us to observe a high prevalence of other stigmatized conditions (e.g. criminalized drug use). Third, our relatively small sample size may have prevented us from identifying all associations with pain, but using repeated measures among participants over time effectively increased our statistical power. Fourth, self-reported pain was assessed over the last 6 months, while quality-of-life outcome measures were assessed over the last 6 months (self-rated health) and over the last 4 weeks (health interference in social activities and general function); it is therefore possible that the explanatory variable and outcomes could have overlapping time periods or that pain could have occurred 5–6 months before negative quality of life was assessed. 
Moreover, causality in the direction that we posit cannot conclusively be established. However, we feel that major or persistent pain is likely to have had an impact on quality of life within the 6-month period, particularly as there is extensive qualitative and quantitative evidence suggesting a directional association between pain and quality of life [ 15 , 26 , 46 ]. Finally, our results may not be generalizable to all WLWH in or beyond Metro Vancouver. However, we feel that our community-based outreach strategy allowed us to engage diverse participants, including those not previously connected to HIV care and whom we subsequently referred for services.
Conclusion In conclusion, a high proportion of WLWH experienced pain, which was correlated with depression, suicidality, opioid use, food and housing insecurity, and poor access to health services. Pain had significant consequences for self-rated health and quality of life. The high proportion of WLWH in our study who reported the use of criminalized drugs for analgesia underscores the importance of harm reduction, including access to a safe regulated supply and decriminalization, in response to the opioid epidemic. Our study results also emphasize the need for structural change enabling WLWH with pain to meet their basic needs, including those related to food and housing security. While further work will elucidate the interrelationships between pain, substance use, and depression, our findings suggest that equity-informed pain services and anti-poverty interventions are urgently needed to improve quality-of-life outcomes in WLWH.
While women living with HIV (WLWH) are twice as likely to report severe or undertreated chronic pain compared to men, little is known about pain among WLWH. Our goal was to characterize the correlates of pain as well as its impact on quality-of-life outcomes among women enrolled in the Sexual Health and HIV/AIDS Women’s Longitudinal Needs Assessment (SHAWNA), an open longitudinal study of WLWH accessing care in Metro Vancouver, Canada. We conducted logistic regression analyses to identify associations of self-reported major or persistent pain with sociostructural and psychosocial correlates and with quality-of-life outcomes. Data are presented as adjusted odds ratios (aORs) with 95% confidence intervals. Among 335 participants, 77.3% reported pain at ≥ 1 study visit, with 46.3% experiencing any undiagnosed pain and 53.1% managing pain with criminalized drugs. In multivariable analysis, age (aOR 1.04[1.03–1.06] per year increase), food and housing insecurity (aOR 1.54[1.08–2.19]), depression diagnosis (aOR 1.34[1.03–1.75]), suicidality (aOR 1.71[1.21–2.42]), and non-daily, non-injection opioid use (aOR 1.53[1.07–2.17]) were associated with higher odds of pain. Daily non-injection opioid use (aOR 0.46[0.22–0.96]) and health services access (aOR 0.63[0.44–0.91]) were associated with lower odds of pain. In separate multivariable confounder models, pain was associated with reduced odds of good self-rated health (aOR 0.64[0.48–0.84]) and increased odds of health interference with social activities (aOR 2.21[1.63–2.99]) and general function (aOR 3.24[2.54–4.13]). In conclusion, most WLWH in our study reported major or persistent pain. Pain was commonly undiagnosed and associated with lower quality of life. We identified structural and psychosocial factors associated with pain in WLWH, emphasizing the need for low-barrier, trauma-informed, and harm reduction-based interventions. Keywords
Acknowledgements We thank all those who contributed their time and expertise to this project, particularly participants, the Positive Women’s Advisory Board, Community Advisory Board members and partner agencies, and the current SHAWNA research project staff, including: Elissa Aikema, Tara Axl-Rose, Emma Kuntz, Melanie Lee, Lois Luo, Desire King, Patience Magagula, Kat Mortimer, Candice Norris, Colleen Thompson, and Larissa Wakatsuki. We also thank Hanah Damot, Riley Tozier, Kate Milberry, Shivangi Sikri, Amber Stefanson, and Peter Vann for their operations, communications, research and administrative support and Mary Kestler, the study physician from Oak Tree Clinic. Author contributions SL conceptualized the work, interpreted the data, and was the main person who drafted the work. KS designed and supported the process for the acquisition, analysis, and interpretation of the data, and substantially reviewed and revised the work. AK substantively reviewed and revised the work and provided important conceptual guidance for the work. MB was responsible for the statistical analysis prior to initial submission of the manuscript. HZ was responsible for statistical analysis during the post-submission review process. KD made substantial contributions and supervised the conception and design of the work and interpretation of the data, and substantively reviewed and revised the work. All authors read and approved the final manuscript. Funding The SHAWNA Project is financially supported by the Canadian Institutes of Health Research (PJT169119) and US National Institutes of Health (R01MH123349). The SHAWNA Project is also a Canadian HIV Trials Network (CTN) Study (CTN-333). Availability of data and materials In accordance with data access policies, our ethical obligation to research that is of the highest ethical and confidentiality standards, and the highly criminalized and stigmatized nature of this population, anonymized data may be made available on request to researchers subject to the UBC/ Providence Health Ethical Review Board, and consistent with our funding body guidelines (NIH, CIHR). The UBC/ Providence Health Ethics Review Board may be contacted at 604-683-2344. Declarations Ethics approval and consent to participate The SHAWNA project has received consent and ethics approval from the Providence Health Care and University of British Columbia Research Ethics Boards (REB number H14-01073). Competing interests The authors have no potential conflicts of interest to declare.
CC BY
no
2024-01-15 23:43:48
Harm Reduct J. 2024 Jan 13; 21:10
oa_package/f2/46/PMC10788033.tar.gz
PMC10788034
38218788
Background In the United States (US), a recent study estimated that 400,000 patients admitted to the hospital may die from a medical error [ 1 ], and another study estimated one million excess injuries following medical intervention [ 2 ]. Evidence suggests that the prevalence of adverse events is higher in more complex domains of care, such as surgery and intensive care, which typically require well-coordinated teamwork [ 2 , 3 ]. There is also evidence that good teamwork in healthcare is related to better performance [ 4 ]. For example, more information exchange during surgical operations can protect against complications [ 5 , 6 ]. However, healthcare teams’ behaviors also contribute to generating errors, adverse events and waste of resources: Lingard and colleagues showed that communication failures during operations are common and may impact team processes [ 7 ]; higher noise levels [ 8 , 9 ] and lapses in discipline [ 10 ] were also predictive of patient outcomes; and numerous disruptions increase workload and stress [ 11 ] and are associated with fewer safety checks carried out during surgical operations [ 12 ] – to name just a few of the known detrimental effects. Interventions to improve teamwork, such as crew resource management (CRM), have been implemented in various acute care settings. In intensive care units (ICU), CRM has repeatedly been found to be beneficial for error management and job satisfaction [ 13 – 15 ]; further intervention studies showed promising results on patient-related outcomes in trauma, surgical, and ICU settings [ 16 – 20 ]. New technological developments can also influence teamwork; for example, the installation of a new communication system reduced noise disturbances in the operating room (OR) while optimizing communication [ 21 ]. In the last two decades, aspects of communication, coordination and teamwork have been identified as prominent topics studied in health care [ 22 ], with a rapid rise in scientific publications related to teams and teamwork [ 23 ]. For example, taxonomies describe key behavioral aspects at the team level [ 24 ], while empirical studies relate team processes to patient outcomes [ 4 ] and investigate the impact of team interventions [ 25 ]. Yet, many studies in this domain are descriptive in nature [ 23 ] and heterogeneous, producing varying results. Although teams in healthcare have become a prominent research topic, we currently have a limited understanding of the areas in which we most lack critical knowledge to develop successful interventions that enhance teamwork and/or team skills and, ultimately, increase patient safety [ 26 ]. A European community of researchers who meet annually at the Behavioral Sciences Applied to Acute Care Teams and Surgery (BSAS) conference share a keen interest in developing the knowledge base around surgical and acute care teams’ behaviors. The BSAS community formed over 15 years ago (2006) and represents a cross-European network of about 260 scientists and clinicians from different disciplines, committed to understanding the role of behavioral sciences in the context of acute care teams, such as surgery and interventional specialties. Most of the researchers come from northern, north-western and central-western European countries and work at universities or university hospitals. 
The annual conference has several goals: (a) to share research findings and experiences based on evidence-based methodologies, (b) to develop capacity (i.e., new researchers coming into the field), and (c) to ultimately contribute to improved safety, quality and outcomes through the application of behavioral interventions and training. The BSAS community identified the need to develop a prioritized research agenda in the field of acute medical care teams. Here we report the process of developing this agenda and its prioritized areas for future research. For the present research agenda, we specifically focus on acute care teams working predominantly in hospital settings, who are often under time pressure to provide short-term, potentially invasive care to patients. These include surgery, anesthesiology, intensive care medicine, trauma, obstetrical and emergency medicine teams, but exclude teams involved with longer-term care or less acute care. During this process, we asked experts to suggest research questions for the next three to five years, implying that these issues should be tackled with more urgency, though the resulting research effort is expected to take much more time.
Methods The process of establishing a research agenda was initiated in 2020 by a core group (authors: MdB, JJ and SK). We used an adapted version of Zwaan and colleagues’ [ 27 ] systematic prioritization method to establish research agendas. This method weights research questions by expert prioritization criteria. The method was calibrated to draw on the expertise of the experts contacted as part of, and participating in, the BSAS meetings. Using the communication channel established for the BSAS 2020 annual conference preparations, we recruited research experts for participation in establishing the list of research questions in September 2020. To establish the prioritization weight and to assess the research questions against the prioritization criteria, we collected data during the BSAS conference held virtually in October 2020; this included a half-day discussion session. Data collection was done using the Qualtrics survey software [ 28 ] and the focus groups worked with a Trello® interface [ 29 ]. In 2020 and 2021, the BSAS conference was organized virtually given the COVID pandemic and was free of charge. Identifying research topics To identify the research topics (see Fig. 1 ), experts were asked to generate a list of specific research questions they considered to be the most pressing for the next three to five years. Experts were recruited via the invitation to the 2020 BSAS conference, which was sent to 240 researchers on the organizers’ mailing list. A total of 29 experts (12%) from different disciplines (physicians, nurses, psychologists, other) working in different settings (academic university department, surgery, anesthesiology, emergency medicine, and other fields) agreed to generate research questions. Twenty-four of the participants were active researchers, 10 were active in medical practice, and 16 had teaching assignments (multiple categories possible). A list of 65 research questions was generated. Before categorization, the initial list of research questions submitted by the experts was consolidated by removing duplicates and entries that were too generic to be further analyzed (i.e. only a single keyword, such as ‘teamwork’), as well as by separating entries with multiple research questions into several questions; one question was removed because it did not refer to behavioral research. The resulting 59 unique research questions were categorized by two of the authors (JJ and SK) into six broader research topics. Disagreements between JJ and SK were resolved after discussion with MdB until consensus was reached. Prioritization criteria and prioritization of research topics Prioritization criteria To establish priorities for each research question, all participants of the BSAS 2020 online conference were invited to assess the importance of four general criteria for acute care team research. Nineteen of the 20–25 attendees agreed to participate. The criteria used were adapted from Zwaan and colleagues (2021). (i) The first criterion was the usefulness of the research question, i.e. to what extent it improves understanding and contributes to filling a gap in knowledge. (ii) The second criterion was answerability, i.e. to what extent it is realistic to reach the objective, given time, budget and ethical standards, and to what extent the endpoints are well defined. (iii) The third criterion was effectiveness, i.e. the potential to advance research and understanding of acute care teams; and (iv) the fourth criterion was translation into practice, i.e. 
the potential of the research for translation into practice, either directly or by supporting the development of tools to improve acute care teams. The fifth criterion used by Zwaan et al. (i.e. maximum potential for effect on diagnostic safety) was not relevant for our field and thus not considered [ 27 ]. To establish a prioritization weight for acute care team research, we also adapted the method of Zwaan et al. to the context in which the study was done and the timeline of the BSAS conference; rating only questions previously discussed as high priority by the experts, as performed by Zwaan et al., was not possible. The experts rated each of the four criteria on a sliding scale (i.e. a cursor to place on a line) from 0.5 (low importance) to 1.5 (high importance). We used the mean of the expert ratings for each criterion as the prioritization weight. Assessing each research question along the prioritization criteria The same expert group (N = 19) was asked to assess, for each research question, to what extent each of the four prioritization criteria (usefulness, answerability, effectiveness, potential for translation into practice) applied. The answering format was a Likert scale ranging from one star (low) to five stars (high). The research questions were presented within topic blocks, and topics were presented in a random order for each participant. Calculating the weighted priority for research topics In the next step, we calculated the weighted priority for each research question; we used a simplified version of the Zwaan et al. (2021) [ 27 ] method, since our study was conducted within a larger research field. The calculation was performed as follows: First, we calculated the mean of the sum of the product of the assessment and the weight for all four prioritization criteria for each research question (priority weighting of each research question). Second, we calculated the mean of the priority-weighted research questions within each research topic. In addition, we used paired t-tests to test for differences between the weights given to the criteria. Repeated measures analysis of variance was performed to identify overall differences in the priority ratings across the six topics, and two-tailed paired t-tests were calculated to identify specific differences across the topics. P-values below 0.05 were considered significant. Expert focus group meetings In addition to the main data collection, we organized three expert focus groups during the last half-day session of the BSAS 2020 conference with the same group of 19 experts who assessed the questions based on the prioritization criteria. Experts were randomly assigned to one of the focus groups, which were composed of 5 to 7 participants each. The focus groups were asked to prioritize the importance of the research questions as high, medium, or low for acute care team research and to resolve differences of opinion by discussion; our goal was to collect expert opinion on the questions beyond the ratings. The focus groups were blind to the quantitative assessment of the research questions against the prioritization criteria. Each focus group started with a different topic. Two groups provided an audio-recording of the discussion; the most important discussion points were summarized by two authors (JJ, MdB); in the third group, SK directly captured field notes summarizing the discussion. 
The audio-recordings and the field notes served as a basis for the discussion; the prioritization made by the focus groups was not analyzed quantitatively, but instead was used exclusively to establish recommendations.
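As an aside, the following is a minimal sketch of the weighting arithmetic under one plausible reading of the calculation described above: each question's priority is taken as the weighted star rating averaged over the four criteria and then over experts, and a topic's priority is the mean over its questions. The aggregation order is our assumption (the phrase “mean of the sum of the product” can be read in more than one way), the example ratings are invented, and only the criterion weights come from the mean importance ratings reported in the Results.

```python
# Sketch of the priority weighting under an assumed aggregation order; example
# ratings are invented, and the weights are the mean importance ratings reported
# in the Results (rated on the 0.5-1.5 sliding scale).
from statistics import mean

WEIGHTS = {"usefulness": 1.21, "answerability": 1.15,
           "translation_into_practice": 1.15, "effectiveness": 1.10}

def question_priority(ratings_by_expert):
    """ratings_by_expert: list of dicts mapping each criterion to a 1-5 star rating."""
    return mean(
        mean(ratings[criterion] * weight for criterion, weight in WEIGHTS.items())
        for ratings in ratings_by_expert
    )

def topic_priority(question_priorities):
    """Topic priority = mean of the weighted priorities of the questions in the topic."""
    return mean(question_priorities)

# Hypothetical example: one research question rated by two experts.
example_ratings = [
    {"usefulness": 5, "answerability": 4, "translation_into_practice": 4, "effectiveness": 3},
    {"usefulness": 4, "answerability": 4, "translation_into_practice": 5, "effectiveness": 4},
]
print(round(question_priority(example_ratings), 1))  # prints 4.8
```

Under this reading, a weighted priority can range from roughly 1.15 to 5.76, so the topic means above 4 reported below would correspond to average star ratings of about 3.5 or higher; if the weighted criteria were summed rather than averaged, the scale would simply be four times larger.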
Results We first present the quantitative results for the research topics. For each research topic, we then present key existing literature and list research gaps identified by the expert discussions. Research topics The six research topics identified based on the research questions generated were: (i) Team processes, which referred to research questions relating team processes to task execution (e.g. the impact of distractions on team outcomes, stress management in teams); (ii) Team interventions, which referred to studying interventions to enhance team performance (e.g. design of effective team interventions, how to involve patients); (iii) Training and health professions education, which referred to research related to teaching, training needs, and design (e.g. teaching skills, maintaining the effects of training); (iv) Use of technology, which referred to research related to either the use of technology to improve teamwork (e.g. the benefits and risks of new technologies for teamwork) or the use of technology as part of research methods (e.g. team assessment technologies); (v) Organizational aspects, including organization of work processes (e.g. care pathways), the design of work environment and schedule, and team composition (e.g. effects of changes in team composition); and (vi) Organizational and patient safety culture, which included research questions concerning several aspects, such as steep hierarchical structures and just culture. Prioritization criteria Mean expert ratings of the four prioritization criteria (on a scale from 0.5 to 1.5) were 1.21 (SD = 0.24) for usefulness, 1.15 (SD = 0.26) for answerability, 1.15 (SD = 0.26) for translation into practice, and 1.10 (SD = 0.24) for effectiveness. The means were used as weights. There was no significant difference across the means (see Supplementary Table 1 for the results of the statistical tests). Comparison of weighted research topics Figure 2 shows the comparison of the weighted research topics in descending order. All six topics were rated as a high priority, with means above 4 (Fig. 2). ANOVA yielded significant differences between the topics (F = 4.64, df = 5, p = 0.023). Post-hoc comparisons revealed that the topic encompassing interventions was assessed as significantly higher in priority than organizational aspects, training/education, and organizational and patient safety culture; technology was significantly higher than training/education and organizational and patient safety culture; and team processes was significantly higher than training/education (see Supplementary Table 2). Narrative results In the following, we describe each research topic in descending order of priority. After a general description of the topic, we present the sub-topics based on current literature, followed by the research gaps identified during the expert focus group meetings; for a summary, see Table 1. Interventions to improve team processes Interventions on team processes are defined as any (organizational) intervention aimed at improving care processes through enhanced team effectiveness. The focus group discussed several sub-themes, including briefings, interventions to enhance reflexivity and encourage speaking up, promoting civility, and improving patient involvement. The first sub-topic identified in the discussion of implementation of interventions centered on briefings. Briefings refer to specific time periods that teams dedicate to information exchange and discussion. 
Examples include structured patient handovers ([ 30 , 31 ]; Agency for Healthcare Research and Quality [ 32 ]), or specific briefings such as the World Health Organization (WHO) or SURgical Patient Safety System (SURPASS) checklists for use in the OR [ 33 – 36 ]. Previous work has demonstrated that structured handovers and surgical checklists improve patient outcomes [ 35 , 37 – 39 ]. Recently, in-action briefing interventions encouraging teams to share information or reflect during short task breaks have been investigated [ 19 , 40 , 41 ]. Teams that engage in reflexivity—reflecting on their goals and the team processes—have been found to be more productive [ 42 ]. Team reflexivity interventions often designate specific time slots after or between tasks for teams to review and reflect [ 43 – 47 ]. Incivility in medical teams remains a recurrent concern [ 48 ], as do conflicts [ 49 , 50 ]; both can be a threat to patient safety. The presence of steep hierarchies and status differences between the professions may also impede optimal interprofessional collaboration in medical teams [ 51 , 52 ], potentially hindering team members from speaking up and voicing their observations, concerns, and opinions [ 53 , 54 ]. Patient involvement [ 55 ] in this context points to patient-delivered checklists used before and after medical procedures [ 56 ], involvement of patients in checklist procedures [ 57 ], as well as patient-oriented applications designed to empower patients to contribute to their own safety whilst undergoing procedures [ 58 ]. The primary research gaps identified by the focus group for these topics were, on the one hand, to continue research on the relationship between team processes, interventions, and outcomes in emerging or less-explored domains such as patient involvement. For other topics, the expert group judged that research has well established the value (e.g. briefings; ease of speaking up) or the potentially negative impact (e.g. incivility and conflict). Experts pointed out that studies from fields outside of medicine have addressed these topics and should be acknowledged by scholars in the medical domain. For well-studied topics, the experts identified gaps related to implementation and strategies for effective execution. They suggested more research to compare and identify the most efficient interventional designs. Furthermore, implementation research should also explore the sustainability of the effects of interventions over time, considering that many interventional studies only include short-term effects. Technology: Dealing with and implementing new technologies New technologies are rapidly being introduced in healthcare teams; they can facilitate or impair teamwork. Examples are the use of interactive whiteboards on electronic devices for collaborative decision-making in emergency cases [ 59 ], the use of artificial intelligence for OR planning tools [ 60 ], and the substitution of pagers with mobile technology [ 61 , 62 ]. Notably, robotic-assisted surgery constitutes an important technological innovation, albeit presenting particular challenges for teamwork and communication [ 63 – 65 ], as the inclusion of a robot influences team dynamics and impacts team performance [ 66 ]. Technologies like patient portals and health monitoring wearables for patients are used to support self-management and patient engagement. 
Devices that gather information can facilitate shared decision making, may allow for more personalized coaching, and can expedite information sharing and exchange among team members and with patients [ 67 ]. Related to team processes, data gathered from devices that continuously track team functioning indicators can provide real-time information about team performance and rapid warning signals in case of teamwork breakdown [ 68 – 70 ]. Additionally, the increasing integration of artificial intelligence into medical care [ 71 – 73 ] may extend to teamwork aspects in the future. There is limited research on the conditions and implications of current clinical practices and new technologies [ 74 ], as well as on ethical aspects related to new technology adoption [ 75 ]. The experts identified fundamental research on the use and impact of emerging technologies as an important research gap related to these topics. They emphasized the importance of intensifying research to better understand the influence and impact of specific technologies on team processes and emergent states such as situational awareness, communication, coordination of care, team collaboration, leadership, individual and team learning processes, as well as on timely and accurate provision of performance feedback. The experts also highlighted the necessity for research related to the usability, design, and integration of new technologies within existing clinical practice: Inadequate system design and functionality can potentially lead to increased cognitive burden, impair clinical work, and reduce job satisfaction. Furthermore, research is needed to investigate whether technologies compromise patient privacy and psychological safety (e.g., if information is used for performance assessments or used by an insurance company). The experts stressed that the effectiveness of new technology may be moderated by the local contexts and the organizational and patient safety culture, emphasizing that such moderators also need to be studied. Team processes: Understanding, measuring and relating team processes to outcomes Team research for acute care teams has established solid evidence that team processes influence performance [ 4 ]. Literature reviews focused on healthcare teams (e.g. [ 76 ]) identified similar aspects to those in the general teamwork literature [ 77 ], notably emphasizing team processes such as situation awareness, communication, and coordination as core non-technical skills, for example in surgery [ 78 – 81 ], anesthesia [ 82 ], and ICU teams [ 83 ]. Research on social and relational aspects in medical teams has focused on the detrimental effects of disruptive or rude behaviors [ 84 ], on speaking up [ 85 , 86 ], and on teamwork quality [ 87 ], as well as on healthcare employees’ work satisfaction and health [ 88 ]. The quantitative results showed that out of the fifteen research questions relating to team processes proposed by the experts, ten mentioned measuring performance, patient outcomes or effectiveness, and six focused on how processes affect outcomes. This indicates important research gaps in relating team processes to specific team task performance, including the need to develop specific indicators for medical team performance and the methodological challenges associated with performance measurement for highly complex medical tasks. 
Other identified gaps were related to team composition and team diversity, specifically with regard to the optimal knowledge and skill mix of team members. Gaps were identified both for the question of which characteristics or behaviors define effective teams and for how diverse team processes impact performance. In addition, identified research gaps were related to contextual aspects of teamwork, including impacts of distractions and other stressful conditions at work. Organizational aspects impacting teamwork Numerous organizational aspects impact teamwork. Three topics were identified as particularly relevant by our expert group: (1) work processes, (2) work environment and work schedules, and (3) team composition. Work processes: Organizational interventions (e.g. the introduction of standardized care pathways) have been shown to have positive effects on teamwork and reduce risks of burnout [ 89 ]. Research also indicates that both classroom-based team training at the department level and applying principles of a high-reliability organization (HRO) may improve job satisfaction [ 13 ] and reduce the risk of burnout [ 90 ]. However, the influence of information technology in the workplace has mainly been studied in relation to individual professional performance [ 91 – 93 ], whereas it may also impact teamwork by modifying or inhibiting interpersonal communication [ 94 ]. Work environment and schedules: Health care teams often have to provide 24/7 care and work in a context with strong hierarchies and explicit status differences [ 95 , 96 ]. A strong organizational hierarchy [ 97 ] as well as inter-professional differences [ 98 , 99 ] are well-known barriers to open and safe communication. The need for continuous and emergency care can only be upheld with shiftwork, which directly affects individual and team performance. Occupational safety, job satisfaction, work-life balance, and burnout are important organizational influences on teamwork [ 100 , 101 ]. Team composition: With increasing complexity in healthcare, collaboration between multiple teams becomes increasingly important, and multiteam collaboration is a new and important research area [ 102 , 103 ]. Many health care teams have low temporal stability [ 104 ] (i.e. the team composition changes daily or even for specific tasks), posing specific challenges to continuity of care as well as to the development of shared mental models and situation awareness [ 104 ]. The experts acknowledged the plethora of research in this domain, but discussed the need for research that aims to better understand how specific work environments in medicine can be optimized for functional teams. As technological innovation in health care evolves rapidly, impacting work processes (including in acute care), care is increasingly delivered by geographically dispersed teams. However, organizational aspects have mainly been studied in teams working at one location. Important research gaps pertain to the development of new theories and empirical studies on optimizing teamwork in dispersed or virtual teams or multiteam systems. In addition, the expert group identified a gap in analyses of the impact of the work environment and shift schedules on teamwork and outcomes. Training and health professions education Training and education of health professionals traditionally rely on an apprenticeship model of experiential learning on the job, with accompanying didactic approaches, including studying in the classroom and reading [ 105 ]. 
Simulation-based educational methods have grown rapidly over the last decades, aiming to provide safe, effective and reproducible training [ 106 , 107 ]. Simulation-based training is the most frequently investigated type of team training in medicine, followed by principle-based training (i.e. CRM and Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS)) [ 108 ] as well as general team trainings that contain multiple educational forms (e.g. team building, coaching, and communication skills training) [ 26 , 109 , 110 ]. The expert group identified research gaps for training and education that include team training under adverse conditions (e.g. over-crowded complex wards, stressful conditions, resource constraints, and rapid changes in demands), including evaluations of training related to specific events, for example a pandemic [ 111 ]. An important research gap relates to training for quickly changing teams, especially during crisis situations when additional people join in patient management as the crisis unfolds [ 112 ]. Furthermore, the experts emphasized that research is needed to develop training with a focus on non-technical skills that are directly connected to technical skills training, so that ideally both aspects can be trained together. Another important gap is research on the sustainability of training results over time and in practice, as skills learned during training are not always implemented in practice right away or at all. A proposed strategy is to provide multiple training opportunities rather than one-time training events. Organizational and patient safety culture Current thinking about organizational and safety culture is dominated by the concept of “Just Culture” [ 113 , 114 ] in relation to HROs [ 90 ], incorporating increasing complexity due to unpredictable or invisible interactions between system components and human workers. A just culture recognizes the role of the organization and its system components in providing high quality of care, and thus its responsibility in the case of adverse events, and at the same time the accountability of individual employees [ 115 ]. These aspects are emphasized in the concept of a psychosocial safety climate [ 116 ]. In order to react to disruptions and unexpected situations in a resilient manner, risks need to be managed rather than regulated [ 117 ]. Resilience research suggests that individuals and teams play an important role in managing risks and disruptions through adaptation [ 118 , 119 ]. Organizations are learning systems, continually optimizing the interaction between the work system and the worker [ 120 ]. One well-known way to achieve this aim is the willingness of the organization and its employees to admit their own failures by reporting them rather than keeping them secret [ 121 , 122 ]. Therefore, a “just culture” is needed, entailing an atmosphere of trust in which providers and patients are encouraged to provide, and even rewarded for providing, essential safety-related information, but in which they are also clear about where the line must be drawn between acceptable and unacceptable behavior. The role of leaders in influencing collective perceptions of values and priorities is frequently emphasized as key to establishing a just culture. Psychological [ 123 ], social, and occupational safety [ 124 ] have been extensively studied as prerequisites. Leader inclusiveness, such as supporting others’ contributions, is recognized as an important determinant of team functioning and learning [ 125 ]. 
Theoretical understanding in this domain has grown considerably, but methods to operationalize and implement it are still in their infancy (but see Dollard and colleagues [ 126 ]). After years of regulation and focus on leadership, there is a need for a more holistic, systemic approach, involving all team members, over a longer time frame to improve organizational and patient safety culture. The investigation of the relation between organizational and patient safety culture and patient safety outcomes was found to be of utmost importance to convince health care leaders. In line with these topics, the questions that scored highest related to how patient safety culture can be improved in health care organizations, as well as how to achieve a better understanding of the barriers acute care teams face in embracing team skills and of strategies for including team skills in clinical curricula [ 127 , 128 ]. Regarding this topic, the expert group identified the need to study changes in patient safety culture over time as a research gap, as longitudinal studies of such temporal changes are scarce. They also suggested focusing more closely on the association between safety culture and patient outcomes. Another neglected topic is research on the conditions needed to improve organizational and patient safety culture. Future research should embrace a broader focus, shifting from concentrating on the role of the leader to the role of all team members. Finally, as described in the paragraphs on themes 1 to 5, considerable interdependency exists between organizational and patient safety culture and team processes, technology, organization and education. For instance, organizational culture and patient safety can be strongly affected by technological and organizational structure at the hospital level and the team level. Conversely, improvement of teamwork through tools or training can have a positive effect on organizational and patient safety culture. Furthermore, in the focus groups, local culture was discussed as a barrier to the implementation of teamwork interventions, with healthcare workers often not identifying themselves with those working outside the medical field (the “others”), and with teamwork requirements being perceived as obvious and thus undeserving of attention and resources. Research on such aspects is needed to better understand and manage these interdependencies. Proper implementation strategies, suited to the situation and context of the teams involved, should be identified for this purpose [ 129 , 130 ]. Strengths and limitations We applied a systematic methodology to generate and prioritize research questions from a multidisciplinary group of experts in the field. Even though we likely missed important research questions (e.g. due to the low response rate in generating research questions and the participation of a limited number of mainly European experts), we believe the identified topics currently represent areas of high relevance. The circumstances of the COVID pandemic in 2020 and the fact that the conference was held virtually may have contributed to the low response rate of the experts of the BSAS community, particularly of the front-line clinicians, who were essential personnel during the pandemic. Thus, we acknowledge that the representativeness is limited by the small sample of highly specialized experts and low participation of other relevant professional groups. The adaptation of the method used by Zwaan et al. 
allowed us to build the research agenda with a solid community of experts in our field; however, the limitation was that experts could be involved in both the generation of the questions and their ratings, which does not align with the methods of Zwaan et al. [ 27 ]. Furthermore, even though we prioritized these research areas, we should be aware that hospitals and the broader health care setting are complex systems with many interacting parts, necessitating a more holistic or integrated approach [ 131 ]. For example, culture impacts organizational aspects, which in turn influence team processes. Consequently, the categorization of the topics performed as part of the research agenda may not reflect a more complex reality. In addition, the current research agenda primarily represents the views of experts in this field but lacks input from other relevant stakeholders such as diverse administrators (e.g. OR administrators), frontline clinicians, technology developers, and patients.
Conclusion We developed a research agenda with experts from the BSAS community and identified research priorities in behavioral science applied to acute care teams and surgery for the years to come. Six high-priority topics based on inputs from an expert group include: interventions; technology; team processes; organizational aspects; training and health professions education; and culture. Notably, research questions in the areas of interventions, technology, and team processes were prioritized and identified as areas where more research is needed in the near future. Interestingly, this list aligns well with the recommendations of Salas and colleagues [ 77 ] who also emphasize technology for team assessment and application among the most important future topics for teams in general. We can glean additional lessons from the research priorities identified by our group of experts, namely the urgent need to translate knowledge about impactful implementation strategies [ 132 ] effectively and sustainably. Thus, the small and highly specialized group of experts from the BSAS network identified top research priorities in the near-term for behavioral science applied to acute care teams; these are useful for both researchers and funding agencies that operate within applied health research.
Background Multi-disciplinary behavioral research on acute care teams has focused on understanding how teams work and on identifying behaviors characteristic of efficient and effective team performance. We aimed to define important knowledge gaps and establish a research agenda of prioritized research questions for the years ahead in this field of applied health research. Methods In the first step, high-priority research questions were generated by a small, highly specialized group of 29 experts in the field, recruited from the multinational and multidisciplinary “Behavioral Sciences applied to Acute care teams and Surgery (BSAS)” research network – a cross-European, interdisciplinary network of researchers from social sciences as well as from the medical field, committed to understanding the role of behavioral sciences in the context of acute care teams. A consolidated list of 59 research questions was established. In the second step, 19 experts attending the 2020 BSAS annual conference quantitatively rated the importance of each research question based on four criteria – usefulness, answerability, effectiveness, and translation into practice. In the third step, during half a day of the BSAS conference, the same group of 19 experts discussed the prioritization of the research questions in three online focus group meetings and established recommendations. Results The research priorities identified were categorized into six topics: (1) interventions to improve team processes; (2) dealing with and implementing new technologies; (3) understanding and measuring team processes; (4) organizational aspects impacting teamwork; (5) training and health professions education; and (6) organizational and patient safety culture in the healthcare domain. Experts rated the first three topics as particularly relevant in terms of research priorities; the focus groups identified specific research needs within each topic. Conclusions Based on research priorities within the BSAS community and the broader field of applied health sciences identified through this work, we advocate for the prioritization of funding in these areas. Supplementary Information The online version contains supplementary material available at 10.1186/s12913-024-10555-6. Keywords
Supplementary Information
Acknowledgements We thank Laura Zwaan for sharing her experience with research agendas with our research group, Annalena Welp for her participation in the discussions at the very beginning of the project, Matthias Weigl for his comments on a previous version of the manuscript and all the experts who accepted to share their research questions with our group. Authors’ contributions MdB, JJ, and SK contributed to the conception and design of the work, the analysis and interpretation of the data and drafted the work. All authors (MdB, JJ, SK, FT, NS, RML, JC, LKM, WE, NKS, IvH, KPH) contributed to the acquisition of the data and substantively revised the work. All authors (MdB, JJ, SK, FT, NS, RML, JC, LKM, WE, NKS, IvH, KPH) have approved the submitted version (and any substantially modified version that involves the author's contribution to the study); All authors (MdB, JJ, SK, FT, NS, RML, JC, LKM, WE, NKS, IvH, KPH) have agreed both to be personally accountable for the author's own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature. Funding Not available. Availability of data and materials The datasets generated and analyzed during the current study are not publicly available due to the confidentiality of the research questions generated by the group of experts involved in the project, but a fully coded dataset is available from the corresponding author upon reasonable request. Declarations Ethics approval and consent to participate We declare that all methods were performed in accordance with relevant guidelines and regulations. The responsible ethical committee (Kantonale Ethikkommission für die Forschung (KEK), Bern, Switzerland) decided to waive ethical approval (decision #Req-2023–00201), reasoning that the Swiss human research act (Art. 2, Abs. 1) does not apply for this research. Thus, an extended written consent form was not provided. However, prior to logging in to the data collection page, all participating experts were informed that their information would be used for this study. In addition, at the beginning of the online focus groups, we asked all participating experts for permission to record the meeting and the use of the data for our research purpose. This corresponds to an opt-in consent. Consent for publication Not applicable. Competing interests RML receives per diem honoraria from Paedsim e.V. for interprofessional team training. NS is the director of the London Safety and Training Solutions Ltd, which offers training in patient safety, implementation solutions and human factors to healthcare organisations and the pharmaceutical industry. IvH is supported by a Senior Clinical Fellowship (802314N), Fund for Scientific Research – Flanders, Belgium. MdB, JJ, SK, FT, JC, LKM, WE, NKS, KPH declare no competing interests.
CC BY
no
2024-01-15 23:43:48
BMC Health Serv Res. 2024 Jan 13; 24:71
oa_package/75/a9/PMC10788034.tar.gz
PMC10788035
38218937
Introduction It is essential to view databases not only as repositories of experimental results but also as valuable resources for data exploration and exploitation, particularly when mining data from publicly accessible databases. Among these, the Protein Data Bank (PDB), Cambridge Structural Database (CSD), and ChEMBL all contain rich implicit information that can be leveraged for drug discovery. ChEMBL, which aggregates chemical, bioactivity, and genomic data, is a meticulously curated database of bioactive molecules with drug-like properties [ 1 ]. EMBL-EBI recently released ChEMBL 30, which includes approximately 2.2 million compounds, 1.5 million assays, and 43,000 indications, all deposited and well-archived. Both CSD and PDB consist of ASCII files containing three-dimensional (3D) atomic coordinates of molecules, although they differ in terms of molecule size. Established in 1965, CSD serves as the global repository for organic crystal structures of small molecules, managed by the Cambridge Crystallographic Data Centre and updated thrice annually. As part of this commercialized project, several tools, including the CSD System, DASH, Mercury Menu, GOLD, and SuperStar, have been developed to provide comprehensive knowledge derived from CSD, making it widely utilized by the research and industrial communities. Established in 1971 by the structural biology community as a central repository for macromolecular structure data, the PDB has consistently upheld a culture of open access and is now widely employed in fundamental biology, with millions of users leveraging its data to advance biomedical research [ 2 ]. Structural biology and structural bioinformatics have profoundly influenced our understanding of the mechanisms and functions of biological macromolecules. The PDB serves as a custodian for all this data, representing the repository for the vast majority of accomplishments and milestones in the structural biology community. It also offers numerous additional sequence and structural annotations, along with tools for pairwise and multiple structure comparisons, including those for the analysis of ligands and their interactions. Therefore, the PDB has the potential to be further utilized for specific applications. The cheminformatics and bioinformatics knowledge within the PDB can be extracted through in silico parsing of textual files. For instance, Borrel et al. characterized the frequency, type, and density of salt bridges during ligand–receptor recognition [ 3 ], which can greatly benefit drug design. However, the development of tools and applications based on PDB data has fallen short of expectations, not to mention commercialized products. A key challenge for medicinal chemists is to modulate the potency and selectivity of small therapeutics toward their biological targets, and some believe that bioisosteric replacement is an effective strategy to expedite the process of identifying analogues with improved potency, intending to bypass existing patents [ 4 ]. Bioisosterism, described as functional group exchanges to achieve similar biological outcomes, has garnered significant attention among practitioners. Bioisosteric replaceability relies on broader structural similarities to elicit the desired biological effects, rather than adhering strictly to physical or electronic mimicry. Typically, in medicinal chemistry, one modifies a promising pharmacophore by replacing specific functional groups with the aim of achieving the same biological response. 
Examples have demonstrated that bioisosterism is a powerful tool for guiding successful drug development projects [ 5 ]. The replacement of the amide moiety and benzene ring of the phase II clinical candidate GSK’772 led to the discovery of more potent compounds with EC 50 values of 2.8 nM toward the target [ 6 ]. The replacement of l -proline in melanostatin with 3-furoic acid afforded two potent analogues with 2- and 4.3-fold improved EC 50 values toward dopamine D 2 receptors, respectively [ 7 ]. Instead of improving the potency of a parent ligand through local structural replacement, a brand-new molecule can also be created. Starting with a kinase inhibitor, Grigorii et al. searched for commercially available replacements of the individual building blocks that constitute the parent ligand, then determined which fragments were suitable for merging into new compounds with a high binding affinity [ 8 ]. Drawing on the bioisosteric replacement strategy, Yang et al. developed the DrugSpaceX database, which dramatically diversified modifications of the molecular framework and thereby extended drug space [ 9 ]. Bioisosteric replacement has been reviewed both as a tool for anti-HIV drug design [ 10 ] and for specific chemical moieties, including amide [ 11 ] and phenyl [ 12 ] groups. From a molecular perspective, bioisosteric replacement conserves the interactions between a ligand and a target protein [ 13 ], and this mutual recognition can be depicted in silico . Nowadays, computational tools have become indispensable in the drug discovery process and have emerged to accelerate the acquisition of bioisosteric information from bio- and/or cheminformatic databases. Analysing data from the PDB, an investigation into tetrazole–carboxylic acid bioisosterism revealed that the protein binding site needs to be flexible enough to establish robust hydrogen bonds with tetrazolate ligands, especially when compared with carboxylate counterparts [ 14 ]. In a computational lead optimization process using bioisosterism, structural data of the target protein–ligand complex are leveraged [ 15 ] to modify the parent scaffold, following the principle of ensuring a suitable fit and interaction compatibility within the specific subpocket of the target protein [ 16 ]. Apart from the extraction of bioisosteric information through computational tools, the identification of appropriate bioisosteres relies heavily on the experience of individual practitioners, making it subjective and potentially influenced by personal biases. While these semiempirical methods have been praised for offering alternatives, they frequently fall short in elucidating the underlying interaction mechanisms, particularly in how the bioisostere in question consistently interacts with the receptor in comparison to the reference moiety. Furthermore, having an excessive number of bioisosteres to choose from without proper organization and categorization could lead to the pitfalls of trial-and-error screening, frustrating researchers who prefer a clear ranking of top candidates. As drug development costs rise, there is a growing need for a user-friendly, readily applicable system for bioisosteric information; however, such a system is currently lacking. 
Given the discrepancy between this vast but underused data repository and medicinal chemists' increasing demand for valuable bioisosteres, especially those with implicit characteristics that are difficult to imagine or have not been previously encountered, there is a pressing need for computational methods that can efficiently traverse the database for such information. SwissBioisostere, hosted by the Swiss Institute of Bioinformatics and accessible via a web interface [ 17 ], uses the ChEMBL database as a primary data source to identify matched molecular pairs by applying the Hussain and Rea algorithm after data curation. sc-PDB-Frag [ 18 ], unlike ligand-based scaffold hopping, searches for bioisosteric replacements based on the protein–ligand interaction pattern. In contrast, KRIPO [ 19 ] quantifies the similarities of binding-site subpockets not only within but also across protein families, broadening the application spectrum of bioisosterism. Seddon et al. fragmented the ligands for a given target using the BRICS scheme, then considered a pair of extracted moieties to be bioisosteric if they occupy a similar volume of the protein binding site [ 20 ]. A web tool to automate the identification of bioisosteric functional groups was developed by Novartis through the calculation of electronic, hydrophobic, steric, and hydrogen-bonding properties, as well as the drug-likeness index, of about 8.5 million unique organic substituents [ 21 ]. The web server MolOpt assists in drug design using bioisosteric transformations, with rules derived from data mining, deep generative machine learning, and similarity comparisons [ 22 ]. After the input of a protein and a ligand structure, and the user's selection of the specific substructures they intend to replace, the computational tool FragRep [ 23 ] tries to find suitable fragments that simultaneously match the geometric requirements of the remaining part of the ligand and complement the local protein environment. One crucial aspect of structure-based drug design is the use of GRID software to identify potential chemical modifications that can be made to known ligands. Recently, Cross et al. proposed the FragExplorer approach, aiming to show users which fragments would best match the GRID molecular interaction fields in a protein binding pocket [ 24 ]. Craig Plot 2.0 fragmented bioactive molecules from the ChEMBL database, determined Hammett σ and Hansch-Fujita π values for their substituents, and grouped them by root or atom type, aiding in the selection of bioisosteric analogues [ 25 ]. Successful application of bioisosteric transformation hinges upon a thorough understanding of the physicochemical attributes of frequently encountered substituents, which can be accurately represented. For example, Holliday et al. reported R-group descriptors encoding the distribution of atomic properties at increasing distances from a substituent’s point of attachment to a central ring scaffold for identifying structurally similar pairs of substituents [ 26 ]. The 3D descriptors Flexsim-R were calculated by docking small drug-like building blocks into a reference panel of protein binding sites to identify bioisosteric functional groups [ 27 ]. 
So far, the acquisition of bioisosteric information has depended on (1) the experience of medicinal chemists who have worked in the field for many years; (2) mining the medicinal chemistry literature and extracting information by querying an internal library of bioisosteric families [ 28 ]; (3) similarity in molecular physicochemical properties, including size, hydrophobicity, 3D substituents [ 29 ] or electron-donating profiles; and (4) deep neural networks trained on experimentally validated analogues extracted from the medicinal chemistry literature [ 30 ]. The identification of structural replacements for the phosphate [ 31 ] and ribose [ 32 ] groups was carried out using our previously developed computational workflow, yielding some intriguing results. This protocol has been streamlined, leading to the development of a user-friendly web server, BioisoIdentifier (BII), equipped with fragment sketching tools. The process involves drawing the fragment to be replaced, converting it into a Simplified Molecular Input Line Entry System (SMILES) code, and then processing it through the main program (Python and R). The program interfaces with third-party software, including Blastp, US-align, and RDKit, to organize individual PDB files. In this virtual system, spherical probes (2.5 Å radius) are created, using the atoms of the reference ligand's moiety to be replaced as centroids. The atoms captured by these probes serve as structural replacements for the reference fragment. To enhance output visualization, potential bioisosteric moieties are clustered based on structural similarity or unsupervised machine learning.
Method Workflow of BII BII identifies bioisosteres in six steps, as illustrated in Fig. 1 . Users sketch the target functional group using JSME in the Django frontend and obtain the SMILES code, which is transmitted to the backend. The backend searches the database for stored bioisosteres based on the provided SMILES code. If found, results are directly retrieved. If not, further processing occurs, with ligands containing the target functional group queried from the PDB using RDKit's substructure search. These reference ligands undergo a sequential search to obtain and save bioisosteres. The notable benefit of this approach arises from its ability to be explained through a molecular interaction perspective, leveraging information derived from PDB data to uncover details about local structural replacements. Figure 1 B illustrates the specific calculation process. PDB download: RCSB PDB provides a shell script, named “batch_download.sh” (in S1), which can download multiple PDB archive files by providing a file containing a comma-separated list of PDB IDs. An essential prerequisite for running this script is to have the ‘curl’ tool installed. However, during our attempts to acquire the PDB archive, we encountered slow download speeds. Therefore, we developed a Python-based web crawler to swiftly retrieve the data. Pretreatment of target protein: The small-molecule ligands with substructures intended to be bioisosterically replaced are selected from the PDB archive, with the macromolecular structures containing these ligands serving as reference proteins. We obtain the FASTA sequences of these proteins and input them into Blastp [ 33 ] to compare them with the sequences in the PDB, then output protein homologues with very close or identical structure. Protein structure superimposition: Protein homologues exhibiting remarkably similar or identical structures are meticulously superimposed onto the reference protein using TM-align [ 34 ]. Subsequently, these alignments are further refined through the application of US-align [ 35 ] to achieve a more precise protein structure alignment. Local structure extraction: Upon the successful alignment of these protein homologues, the atomic coordinates of the reference fragment earmarked for replacement within the reference protein are extracted. Each atom of the fragment functions as the centroid of a sphere with a radius of 2.5 Å. These spheres are employed to explore target ligand fragments, capturing atoms that come into contact, which are subsequently extracted and regarded as potential bioisosteric replacements for the reference substructure. Fitness evaluation of extracted fragment with reference substructure: To assess the extent of overlap between the extracted fragments and the reference moiety, we utilized ShaEP [ 36 ], a tool designed for evaluating the similarity of ligand-sized molecules in terms of both shape and electrostatic potential. As per its definition, the fitness of a molecule pair based on ShaEP falls within the range of [0,1], with 1 signifying a perfect match. In this context, we established a threshold of 0.2 based on empirical rules and experience. Output of extracted fragment with SMILES code: While computers are well-suited for processing textual strings, the human brain often finds graphical information more intuitive and comfortable to work with. 
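As a minimal illustration of the local structure extraction step, the sketch below (not the BII source code; the function name and toy coordinates are assumptions) captures the atoms of a superimposed candidate ligand that fall within the 2.5 Å probe spheres centred on the reference-fragment atoms.

```python
# Hypothetical sketch of the 2.5 Å sphere-probe capture; real coordinates
# would come from the superimposed PDB structures described above.
import numpy as np

PROBE_RADIUS = 2.5  # Angstrom, as used by BII

def capture_atoms(ref_fragment_xyz, candidate_xyz):
    """Return indices of candidate-ligand atoms lying inside any probe
    sphere centred on a reference-fragment atom."""
    ref = np.asarray(ref_fragment_xyz, dtype=float)    # shape (n_ref, 3)
    cand = np.asarray(candidate_xyz, dtype=float)      # shape (n_cand, 3)
    # All pairwise reference-to-candidate distances.
    dists = np.linalg.norm(ref[:, None, :] - cand[None, :, :], axis=-1)
    hit = (dists <= PROBE_RADIUS).any(axis=0)          # touched by at least one probe
    return np.where(hit)[0]

# Toy example: two probe centres, three candidate atoms.
ref_xyz = [(0.0, 0.0, 0.0), (1.4, 0.0, 0.0)]
cand_xyz = [(0.5, 0.5, 0.0), (4.2, 0.0, 0.0), (1.0, 1.0, 1.0)]
print(capture_atoms(ref_xyz, cand_xyz))  # -> [0 2]
```

In the actual workflow, the captured atoms would then be exported as a fragment and scored against the reference moiety with ShaEP, as described above.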
To address both of these requirements, Open Babel [ 37 ], which enables interconversion among more than 100 chemical structure formats, was employed to convert the SMILES string into an output fragment graph. To classify the structural isosteres of the 3-substituted catechol, a clustering post-processing step was employed, utilizing unsupervised machine learning. Several algorithms were tested, and their parameters were adjusted to optimize each one individually. The detailed process is illustrated in Fig. 1 C and is described as follows: Search result format conversion: To calculate molecular similarity for the subsequent calculations, the format of all search results was converted from SMILES to SDF format using custom-written code. Converting from SMILES to SDF format can result in potential loss of information. As a precaution, it is necessary to clean the data, which involves removing entries with missing content and eliminating duplicates. Molecular fingerprint and molecular similarity calculation: Morgan fingerprints were calculated first, and RDKit was then used to compute the molecular similarity matrix from Tanimoto distances, as depicted in the zoomed-in view in Fig. 1 D1. Data classification using unsupervised machine-learning clustering algorithms: We explored the application of various unsupervised clustering algorithms, as illustrated in Fig. 1 D2. These algorithms can be broadly categorized into two groups. The first category comprises algorithms like K-means and DBSCAN, which necessitate specifying the hyperparameter for the number of clusters. In contrast, the second category includes algorithms such as AgglomerativeClustering and AffinityPropagation, which do not require specifying the number of clusters. Optimization of algorithm parameters: For algorithms that require additional hyperparameters, including the number of clusters, we employed techniques such as the elbow method, the silhouette coefficient method, and random hyperparameter search to optimize the clustering results. Dimension reduction of clustering results for visualization: As previously mentioned, data points are stored as 2048-bit Morgan fingerprints, which makes it challenging to visualize clustering results effectively in such a high-dimensional space. Therefore, we employ principal component analysis (PCA) to reduce the data from 2048 dimensions to 2D or 3D, and we use matplotlib to display the clustering results graphically (a minimal illustrative sketch of these steps is given below). Web server Interface features and usage Figure 2 displays a screenshot of the BII homepage, featuring a concise introduction and a web server input interface. Users can draw the chemical structure of the target functional groups in the molecular editor JSME. The ‘R’ denotes the vertex at which the target functional group branches off, indicating that only the sketched core substructure requires replacement. The input fragment is always assumed to be complete. Once the structural construction is complete, users can obtain the SMILES code corresponding to the target functional group by clicking the “Get Smiles” button on the page. Subsequently, they can initiate the local structural replacement (LSR) search by clicking the “search” button. Implementation The Django web framework and Python code are employed to develop the interface functionality of the web server and execute MySQL database queries for ligand substructure replacement. 
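The fingerprint, similarity and dimension-reduction steps referred to above can be outlined as follows; this is an illustrative sketch only, and the toy SMILES strings stand in for actual search results.

```python
# Illustrative sketch (assumed toy inputs): 2048-bit Morgan fingerprints,
# a Tanimoto similarity matrix, and PCA projection to 2D for plotting.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.decomposition import PCA

smiles = ["Oc1ccccc1O", "Oc1cccc(F)c1O", "Nc1ccccc1O", "c1ccc2[nH]ccc2c1"]
mols = [Chem.MolFromSmiles(s) for s in smiles]

# 2048-bit Morgan fingerprints (radius 2), matching the bit length used above.
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]

# Pairwise Tanimoto similarity matrix computed with RDKit.
sim = np.array([DataStructs.BulkTanimotoSimilarity(fp, fps) for fp in fps])

# Convert the bit vectors to a dense matrix and project to 2D with PCA.
X = np.zeros((len(fps), 2048))
for i, fp in enumerate(fps):
    row = np.zeros((2048,))
    DataStructs.ConvertToNumpyArray(fp, row)
    X[i] = row
coords_2d = PCA(n_components=2).fit_transform(X)

print(np.round(sim, 2))
print(np.round(coords_2d, 2))
```

The resulting low-dimensional coordinates can then be handed to matplotlib for plotting, and the fingerprint matrix to the clustering algorithms discussed above.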
RDKit [ 38 ] is utilized to facilitate fragment database construction, calculate molecular descriptors, and depict 2D molecular structures. Case study Catechol, a benzene ring bearing two hydroxyl groups on adjacent carbons (1,2-dihydroxybenzene), is a widely observed group in neurotransmitters such as dopamine and noradrenaline. The nitrocatechol-based compounds tolcapone and entacapone are successfully used as adjuncts to treat Parkinson’s disease. Meanwhile, bisubstrate and non-nitro hydroxypyridone catechol O -methyltransferase (COMT) inhibitors have also been reported for the same disease. However, tolcapone and entacapone act mainly peripherally and penetrate the brain poorly, limiting their use as centrally acting drugs. In addition, phenolic compounds are prone to high metabolic clearance due to their acidity and polarity. Therefore, next-generation COMT inhibitors preferably replace catechol with a corresponding bioisostere [ 39 ]. This need has drawn our attention to explore catechol bioisosteres, which we present as a case study. Apart from the two ring carbons bearing the hydroxyl groups, four other positions are available for ligand extension, representing three types (Fig. 3 ) of possible catechol-containing ligands.
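Since BII relies on RDKit substructure search to collect reference ligands, the catechol query can be illustrated with a short, hypothetical sketch; the SMARTS patterns and example molecules below are chosen for illustration and are not taken from the BII database.

```python
# Hypothetical sketch: flag catechol-containing molecules and the
# 3-substituted variant with RDKit substructure matching.
from rdkit import Chem

catechol_any = Chem.MolFromSmarts("Oc1ccccc1O")      # two adjacent oxygens on a benzene ring
catechol_3sub = Chem.MolFromSmarts("Oc1cccc(*)c1O")  # plus a substituent next to one hydroxyl

examples = {
    "catechol":         "Oc1ccccc1O",
    "3-methylcatechol": "Cc1cccc(O)c1O",
    "dopamine":         "NCCc1ccc(O)c(O)c1",
    "phenol":           "Oc1ccccc1",
}
for name, smi in examples.items():
    mol = Chem.MolFromSmiles(smi)
    print(f"{name:16s} catechol={mol.HasSubstructMatch(catechol_any)} "
          f"3-substituted={mol.HasSubstructMatch(catechol_3sub)}")
```

Note that the plain "O" atom in these patterns also matches ether oxygens; a stricter query could use [OX2H] to restrict the match to free hydroxyl groups.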
Results and discussion The LSR of catechol When inputting a 3-substituted catechol encoded as Oc1cccc([R])c1O into the server, it suggests over 496 replacement ideas, all of which are displayed in a table, paginated for convenience. Figure 4 provides a snapshot of the first page, showcasing the clustering results represented in both two-dimensional and three-dimensional structures. The remaining replacements are documented in Additional file 1 : Figure S2. Each entry in the table includes valuable information such as SMILES codes, 2D and 3D representations, a similarity index, as well as the associated reference protein complex and its corresponding ligand PDB ID, along with details of the target protein complex and its related ligand PDB ID. The LSRs of 3-substituted catechol are first sorted according to their ShaEP index and subsequently recorded in a table. Based on their structural similarity, they are then hierarchically classified into 32 distinct groups. Users can easily visualize this classification by clicking on the “Classification” tab. For a more detailed view, specific LSRs included in the “C+O+N” group are exemplified in Fig. 5 , accessible by clicking the corresponding group name. Moreover, unsupervised learning algorithms have been employed to further refine and narrow down the number of subgroups. Figure 6 illustrates the categorization of LSRs for 3-substituted catechol recognized using BII. They are sorted into 24 categories based on the SMILES code. Among these, 240 bioisosteres, although belonging to cyclic structures, do not fall into any predefined category; therefore, they are grouped under [cycle other], making it the largest family. This is followed by 215 members categorized under [cycle C+N], and there is only one bioisostere in the [F] category. For further insights, bioisosteres of 4-substituted and 3,4-substituted catechol are also presented individually in Additional file 1 : Figure S3 and S4. Notably, the primary focus of this work is on the conservation of interactions between the parent ligand moiety and the protein, without explicitly discriminating between the replacement of the moiety and the generation of entirely new molecules. While BII may suggest local structural replacements for specific moieties in the catechol example, our goal is to identify bioisosteric replacements with greater stringency. Our approach involves superimposing proteins with identical groups but accommodating different ligands. We then concentrate on the space where the intended moiety is to be replaced. The docking of replacement moieties into the original catechol's position may induce a shape change in the binding pocket due to its flexibility. Importantly, our approach can be applied to scaffold hopping and the generation of combinatorial libraries to a certain extent. Unsupervised clustering methods are employed to categorize structural replacements of 3-substituted catechol into fewer categories, utilizing the SMILES encoding approach. This unsupervised clustering unveils latent similarities among these structural replacements, thereby simplifying data complexity and enhancing comprehensibility and visualization. This simplification streamlines the selection of representative samples from each cluster, facilitating in-depth research and, consequently, enhancing screening efficiency. Figure 7 shows the results obtained from the application of various algorithms and their respective optimization techniques. 
The algorithms are divided into two categories based on the necessity of pre-specifying the number of clusters, each category employing unique hyperparameter optimization strategies. For algorithms where pre-specifying the cluster number is unnecessary, as exemplified by the MeanShift algorithm, we construct an optimization curve that correlates the “bandwidth” hyperparameter with the silhouette coefficient to determine the optimal “bandwidth” value of 446. This corresponds to a cluster count of 47 with an average silhouette coefficient of 0.561. The Birch clustering algorithm employs a similar approach to ascertain the optimal “n_neighbors” hyperparameter value, achieving the highest silhouette coefficient of 0.519 when “n_neighbors” equals 3. In the case of algorithms requiring a predefined number of cluster groups, a more intricate method is employed to determine the optimal cluster count. Figure 8 illustrates the process of determining the optimal number of clusters for the K-Means algorithm. The optimal number of clusters was determined using the elbow rule and the silhouette coefficient method, applied individually, to rationally segregate the structural replacements in chemical space. Figure 8 A shows that the sum of squared errors (SSE) drops sharply when the number of classes is less than 15. It can also be observed that the silhouette coefficient is largest at k = 2. However, the elbow diagram of k versus SSE reveals that the SSE is still relatively large when k is taken as 2. Because the silhouette coefficient takes into account the degree of separation, k = 2 is not a reasonable number of clusters, and we therefore fall back to the value of k with the second-largest silhouette coefficient. Further analysis of the relationship between the silhouette coefficient and the number of clusters (Fig. 8 B) reveals that the best cluster number is then 5. To verify this conclusion, silhouette coefficient diagrams for each class were plotted separately for clustering with 5 and 6 classes, and the average silhouette coefficients of the clustering results are indicated by the red dashed line. As shown in Figs. 8 C and D, each class was more uniformly distributed when the cluster number was 5, supporting the empirical division of the LSRs of 3-substituted catechol into 5 groups accordingly. It should be noted that the presented computational results are illustrative of our computational process using 3-substituted catechol as an example, which is why some algorithms may have lower silhouette scores. To provide a detailed view of the clustering results of 3-substituted catechol LSRs, principal component analysis (PCA) was employed to reduce the dimensionality of the 2048-dimensional data to 2D or 3D, as demonstrated in Fig. 9 A for 2D visualization and Fig. 9 B for additional perspectives on the 2D and 3D visualization, which are summarized in Additional file 1 : Figure S5. In Fig. 9 , dots of the same color represent a category, and two categories are chosen as examples to present a list of classified molecules. 
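As a hedged sketch of the k-selection procedure described above (the fingerprint matrix is replaced here by random binary data, so the SSE and silhouette values, and the selected k, will not reproduce those reported for the catechol data):

```python
# Sketch of elbow (SSE) and silhouette screening for K-Means; X is a random
# stand-in for the 2048-bit fingerprint matrix.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(60, 2048)).astype(float)

sse, silhouettes = {}, {}
for k in range(2, 16):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    sse[k] = km.inertia_                       # elbow curve: SSE versus k
    silhouettes[k] = silhouette_score(X, km.labels_)

best_k = max(silhouettes, key=silhouettes.get)
print("SSE by k:", {k: round(v, 1) for k, v in sse.items()})
print("best k by silhouette:", best_k)
```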
The acid dissociation constants of catechol are p K a1 = 9.25 and p K a2 = 13.0 [ 40 ], suggesting that catechol is slightly acidic in the biological environment of pH 7.4; it is therefore thought that acidic groups are intrinsic bioisosteres of catechol that conserve molecular interactions where possible. However, we envision that basic groups might also be suggested by our BII tool. This is not surprising, since our previous investigation revealed that a basic –CH 2 NH 3 + group replaced an acidic phosphate group and a Mg 2+ concurrently [ 31 ]. Metal cations may hence play an important role during local structural replacement of catechol, since they can readily coordinate. Three optional LSRs of catechol are displayed in Fig. 10 , where it can be observed that these newly identified substructures exhibit similarities in shape to catechol. To elucidate the structure–activity relationships of catechol and the corresponding replacements, structural and biological data were compiled from the reference publications. In addition, we leveraged the structural diversification of the newly identified chemicals and their activity changes toward a selected target, and discussed how the deletion or protrusion of substituents impacts the biological activity of the resulting molecules. The therapeutic impact of catechol in lung cancer treatment was achieved by inhibiting the activity of extracellular signal-regulated kinase 2 (ERK2), and its direct binding to the active site of ERK2 (PDB code: 4ZXT) was confirmed through X-ray crystallography [ 41 ]. Catechol was anchored to the hinge loop of the ATP-binding site of ERK2, with its hydroxyl groups interacting with the main chain of Asp106, Met108, and the side chain of Gln105, all located on the hinge loop. The azaindole ligand (compound 3 in Ref. [ 42 ] PDB code: 42A) occupied the same binding site where catechol was positioned in ERK2. In detail, the pyrrole NH of 7-azaindole formed a strong hydrogen bond (d = 2.8 Å) with the backbone carbonyl oxygen of Asp104, and the pyridine nitrogen served as a hydrogen bond acceptor (d = 3.0 Å) for the Met106 backbone NH. The ligand (compound 46 in Ref. [ 43 ] PDB code: 9N8) binds in the ATP-binding site of ERK5. The pyrrole NH and amide carbonyl formed hydrogen bonds (d = 2.8 Å, d = 2.7 Å) with the backbone carbonyl of Asp138 and the amide of Met140 in the ERK5 hinge region, respectively. Notably, the pyrrole-2-carboxamide took the position of catechol. The chloro-substituted aminopyrimidine moiety of ER8 (compound 15 in Ref. [ 44 ]) occupied the space of catechol, such that a halogen bond (d = 2.7 Å) formed between the chloro atom and the amide oxygen of the gatekeeper residue Gln105. Hydrogen bonds (d = 3.1 Å, d = 2.9 Å) were observed between the ligand’s pyrimidine N and amino NH and the backbone NH and C=O of hinge residue Met108, respectively. The p38αMAPK inhibitor hit (compound 3 in Ref. [ 45 ] PDB code: MWL) occupied the active site space of p38αMAPK. The pyridine ring nitrogen allowed for hydrogen bonding (d = 2.8 Å) with the peptide backbone of Met109 from the hinge region. In this context, the pyridine moiety, effectively taking the place of catechol, can be considered one of its structural replacements. Ideal bioisosteres, by definition, entail both steric and electronic conservatism. However, achieving a perfect match for both criteria simultaneously can be challenging and may require some degree of compromise. 
It is conceivable that an imperfect match in electronic conservation could be compensated for by a precise steric fit, thereby maintaining overall binding affinity. It should be acknowledged that BII cannot distinguish between hydrogen bond donors and acceptors, as it primarily focuses on the conservation of the interaction itself. For instance, the hydroxyl group in catechol serves as a hydrogen bond donor in the reference, whereas the –C=O group of the carboxamide in ligand 9N8 can only function as a hydrogen bond acceptor due to its electron-rich nature. The same applies to the cationic –N(CH 3 )– group, which acts as a hydrogen bond acceptor. The human enzyme 17β-hydroxysteroid dehydrogenase 14 (17β-HSD14), using NAD + as cofactor, oxidizes estradiol and 5-androstenediol. The human HSD17B14 gene is widely expressed in major organs, such as brain, liver and kidney. It has also been identified in breast cancer tissue, but the physiological function of this enzyme remains poorly understood. Inhibitors can therefore be important tools to study the physiological role of 17β-HSD14 in vivo. The methanone compound 1 (compound 12 in Ref. [ 46 ] PDB code: 5Q6) inhibits the activity of 17β-HSD14 with a K i of 64 nM. The hydroxyl group of Tyr154 forms two bifurcated hydrogen bonds (d = 2.5 Å, d = 3.1 Å) with the hydroxyl groups of the catechol moiety. In addition, the 4-OH group also forms a hydrogen bond (d = 2.5 Å) toward the Ser141 hydroxyl group (Fig. 11 A). Four optional analogues of 5Q6 are shown in Fig. 11 B and suggest that the 4-fluoro-3-hydroxyphenyl group is a bioisostere of the 3-substituted catechol, offering a ligand (compound 9 in Ref. [ 46 ] PDB code: 6QO) with increased affinity (a K i of 13 nM). The 3-OH groups at the C-ring of 9 and compound 12 in Ref. [ 46 ] interact through remarkably short H-bond interactions with the side chain of Tyr154 (9, d = 2.3 Å; 12, d = 2.5 Å) and the side chain of Ser141 (9, d = 2.5 Å; 12, d = 2.5 Å) from the catalytic triad. The 4-F group at the C-ring of 9 is possibly involved in forming a halogen bond (d = 2.8 Å) with the Ser141 hydroxyl side chain. The 3-OH group at the C-ring of 12 also hydrogen bonds toward the side chain of Tyr154 (d = 3.1 Å). The replacement of the ketone linker of compound 9 with an ethenyl group resulted in an eightfold more potent inhibitor (compound 5 in ref. PDB code: 9JW) with a K i of 1.5 nM, while methylamine (compound 4 in ref. PDB code: 9JQ) and ether (compound 2 in ref. PDB code: 9MB) surrogates each deteriorated the binding affinity, to K i values of 42 and 58 nM, respectively. Keeping the B and C rings of 6QO unchanged, an equipotent quinoline-based inhibitor (compound 9 in Ref. [ 47 ], PDB code: 9ME) and a twofold more active naphthalene derivative (compound 10 in Ref. [ 47 ]) were obtained, but the quinoline analogue was found to be four times more soluble than the naphthalene compound. Herein, rather than concentrating on the structural replacement of catechol by the 4-fluoro-3-hydroxyphenyl moiety, we emphasize that the linker connecting the replacement to the rest of the molecule can vary. However, it is crucial to acknowledge that the choice of linker may impact the physicochemical properties of the ligand. Comparison with other tools The foundation of isosteric replacement lies in matching interactions with protein moieties, but this concept of replacement is sometimes not aligned with the intended objective of functional group/ring/core replacement within a ligand. 
Therefore, BII was compared with other bioisosteric search tools, namely the SwissBioisostere database and the MolOpt web server. The SwissBioisostere database is a comprehensive resource containing information about molecular substitutions and their performance in biochemical assays. This information is obtained by matching molecular pairs and mining biological activity data from the ChEMBL database. Notably, SwissBioisostere not only provides information about molecular substitutions but also offers interactive analysis capabilities. On the other hand, the MolOpt web server is constructed through a combination of data mining, chemoinformatics similarity comparison, and machine learning techniques. Users have the flexibility to query for bioisosteres of specific molecular substructures and even generate entirely new molecular alternatives. To perform a comparative analysis, three distinct substructures, namely the 3-substituted, 4-substituted, and 3,4-substituted catechols, were input into each of the three search tools, allowing the corresponding bioisosteric data to be retrieved for each chosen substructure. In Table 1 , we have summarized the number of bioisosteres identified by SwissBioisostere, MolOpt, and BII. Additionally, it is important to note that MolOpt offers four distinct bioisosteric replacement rules: MolOpt-1 is based on data mining principles, MolOpt-2 utilizes similarity comparison, MolOpt-3 incorporates data mining techniques, and MolOpt-4 is designed around a deep generative model. It becomes evident that, compared with the SwissBioisostere database and the MolOpt web server, BII provides a more extensive array of bioisosteric ideas, making it a valuable resource for medicinal chemistry research. The bioisosteres with the top-ten rankings from each tool are depicted in Fig. 12 , illustrating consistent results. Chemical accessibility is indeed an important concern for the novel structures generated by this tool. We emphasize, however, that BII focuses on local structural replacements and does not yet consider how to incorporate the suggested moieties into new ligands; this will certainly be taken into consideration as a filter for replacement moieties in an updated BII version. In addition, we recognize that retrospective validation alone is not sufficient to launch BII, since experimental validation is in any case the benchmark for a computational tool. In fact, we have conducted both wet-lab synthetic and bioassay experiments in-house. It has been demonstrated that a squaryldiamide or an amide group is a bioisosteric replacement for the phosphate moiety [ 48 ], and that the NH in urea serves as an isostere of a carboxylic acid [ 49 ]. Following our previous computational investigations of phosphate [ 31 ] and ribose [ 32 ] bioisosteric replacement, the bioisosterism of these moieties has been verified. Consequently, we consider it necessary to develop a generic tool that facilitates bioisostere identification for any chemical fragment, which underpins our current effort.
Conclusions To optimize the efficiency of BII, we integrated Python's multiprocessing library into the code. BII stands out as a user-friendly and robust tool for generating innovative ligand replacement ideas. The substructure replacement identification process for a single task typically takes about two to eleven hours on a machine with a 24-core CPU. Notably, the web server is designed to be accessible without the need for computational or programming skills, a feature particularly advantageous for medicinal chemists. These results affirm BII’s capability to identify suitable LSRs where the chemical structure differs, yet the interaction patterns with the protein pocket remain conserved. Moreover, our application of BII has led to the rediscovery of scaffold hopping ideas, underscoring the utility of our web server in providing valuable insights for ligand design. In essence, BII serves as a valuable tool to assist medicinal chemists during the hit/lead optimization process, aiding in the search for appropriate molecular fragments. As part of our commitment to ongoing improvement, the BII server will receive regular updates as new data and advancements become available. We are pleased to offer this service freely to the public at http://www.aifordrugs.cn/index/ .
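A minimal sketch of how such per-structure searches can be parallelized with Python's multiprocessing library is given below; the worker body and the PDB ID list are placeholders rather than the BII implementation.

```python
# Placeholder sketch of task-level parallelism with multiprocessing.Pool.
from multiprocessing import Pool

def search_replacements(pdb_id):
    """Stand-in for one BII task: fetch the entry, superimpose homologues,
    run the 2.5 Å probe capture and score the extracted fragments."""
    n_candidate_fragments = 0  # placeholder result
    return pdb_id, n_candidate_fragments

if __name__ == "__main__":
    reference_ids = ["4ZXT", "XXXX", "YYYY"]   # placeholder PDB IDs
    with Pool(processes=24) as pool:           # matches the 24-core machine noted above
        for pdb_id, n_hits in pool.map(search_replacements, reference_ids):
            print(pdb_id, n_hits)
```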
Within the realm of contemporary medicinal chemistry, bioisosteres are empirically used to enhance potency and selectivity and to improve the absorption, distribution, metabolism, excretion, and toxicity profiles of drug candidates. It is believed that bioisosteric know-how may help bypass granted patents or generate novel intellectual property for commercialization. Besides synthetic expertise, the drug discovery process also depends on efficient in silico tools. We hereby present BioisoIdentifier (BII), a web server aiming to uncover bioisosteric information for specific fragments. Using the Protein Data Bank as source, and the specific substructure that the user attempts to replace as input, BII tries to find suitable fragments that fit well within the local protein active site. BII is a powerful computational tool that offers ligand design ideas for bioisosteric replacement. For the validation of BII, catechol was chosen as the model fragment to be replaced, and many replacement ideas were successfully generated. These outputs are hierarchically grouped according to structural similarity and clustered using unsupervised machine learning algorithms. In summary, we constructed a user-friendly interface to enable the viewing of top-ranking molecules for further experimental exploration. This makes BII a highly valuable tool for drug discovery. The BII web server is freely available to researchers and can be accessed at http://www.aifordrugs.cn/index/ . Scientific contribution: We designed a more efficient computational process for mining bioisosteric replacements from the publicly accessible PDB database and deployed it on a web server that is freely accessible to researchers. Additionally, machine learning methods are applied to cluster the bioisosteric replacements retrieved by the platform, facilitating chemists' selection of appropriate bioisosteric replacements. The number of bioisosteric replacements obtained using BII is significantly larger than that from currently available platforms, which expands the search space for effective local structural replacements. Graphical Abstract Supplementary Information The online version contains supplementary material available at 10.1186/s13321-024-00801-8.
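The abstract above mentions that candidate replacements are grouped by structural similarity and clustered with unsupervised machine learning. The sketch below shows one way such a step could look, using RDKit Morgan fingerprints and agglomerative clustering on a Tanimoto distance matrix; the exact descriptors and algorithm used by BII may differ, and the SMILES are placeholders.

# Illustrative clustering of candidate fragments by structural similarity.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.cluster import AgglomerativeClustering

smiles = ["c1ccc(O)c(O)c1", "Oc1ccccc1O", "Nc1ccccc1N", "C1CCCCC1", "C1CCCC1"]
mols = [Chem.MolFromSmiles(s) for s in smiles]
fps = [AllChem.GetMorganFingerprintAsBitVect(m, radius=2, nBits=2048) for m in mols]

# Pairwise Tanimoto distance matrix
n = len(fps)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dist[i, j] = 1.0 - DataStructs.TanimotoSimilarity(fps[i], fps[j])

labels = AgglomerativeClustering(
    n_clusters=2, metric="precomputed", linkage="average"
).fit_predict(dist)
print(dict(zip(smiles, labels)))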
Supplementary Information
Acknowledgements This research was sponsored by the Joint Research Funds of the Department of Science & Technology of Shaanxi Province and Northwestern Polytechnical University (No. 2020GXLH-Z-017), the Ningbo Natural Science Foundation (No. 202003N4006), and the Key Research Program of Ningbo (No. 2023Z210). Author contributions The study was designed and conceptualized by YZZ and RZW. The workflow was developed by THZ and TL. The deployment and operation of cloud services were performed by SHS and BCG. The results were discussed and interpreted by all authors. The manuscript was written by YZZ and revised by all authors. Funding This study was supported by the Ningbo Natural Science Foundation (202003N4006), the Key Research Program of Ningbo (2023Z210), and the Joint Research Funds of the Department of Science & Technology of Shaanxi Province. Availability of data and materials The focus of our manuscript is the development of an online web server for computationally identifying local structural replacements/bioisosteres for drug design. ChemDraw 19.0 was used to sketch the structures of ligands. PyMOL 1.8.x, used in this work to visualize and illustrate the interactions between ligand and receptor, is free and open-source software. All code, data, and deployment environments for this work have been uploaded to Zenodo and can be accessed via the following link: https://doi.org/10.5281/zenodo.8215113. Declarations Competing interests There are no conflicts to declare.
CC BY
no
2024-01-15 23:43:48
J Cheminform. 2024 Jan 13; 16:7
oa_package/ee/0c/PMC10788035.tar.gz
PMC10788036
38218783
Background Acute Respiratory Distress Syndrome (ARDS) is a clinical syndrome characterized by severe, persistent hypoxemia, which can be caused by intrapulmonary and/or extrapulmonary insults. The most common cause is pneumonia, especially bacterial and viral pneumonia. Among extrapulmonary factors, sepsis from non-pulmonary sources is the most common cause of ARDS. ARDS is mainly characterized by diffuse alveolar injury, including excessive inflammation, increased epithelial and vascular permeability, alveolar edema, and hyaline membrane formation. Although the pathogenesis of ARDS has been studied extensively, few specific pharmacotherapies for this disease are available clinically [1]. Treatment of ARDS is generally supportive, relying on lung-protective mechanical ventilation. Thus, the mortality of ARDS remains unacceptably high. The latest data from the Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure report a hospital mortality of 40% for ARDS [2]. The worldwide outbreak of coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2, has correspondingly increased the mortality of ARDS, imposing a devastating economic and medical burden worldwide. Recently, multiple studies on the molecular mechanisms involved in the pathogenesis and pathophysiology of ARDS have been published and have made substantial progress. Some potential pharmacotherapeutic agents have proven efficacy in preclinical models of ARDS by targeting specific molecules or regulating related signaling pathways. Considering the complex pathophysiology of ARDS, characterized by inflammation-mediated disruption of alveolar-capillary permeability, reduced alveolar fluid clearance (AFC), and oxidative stress, a comprehensive understanding of the underlying signal transduction in the pathogenesis and pathophysiology of ARDS offers deep insight into the development and progression of the disease. This provides a theoretical foundation and motivation for the discovery of novel therapeutic strategies to treat ARDS. In this review, we outline the available literature on the mechanisms of pathophysiology and signal transduction in ARDS. Both novel and canonical signal transduction pathways are summarized and functionally classified according to their pathophysiological roles in ARDS, including inflammation, increased alveolar-capillary permeability, reduced AFC, and oxidative stress. We highlight these pathophysiological mechanisms by presenting the location and effects of the underlying signaling pathways in tissues, cells, and organelles. Furthermore, we introduce recent findings on potential therapeutic agents that target specific signaling pathways to modulate the above four aspects of the pathophysiology of ARDS.
Conclusions ARDS is a syndrome characterized by high morbidity and mortality. Although substantial progress has been made over the past five decades in understanding the pathogenesis and pathophysiology of ARDS, few pharmacological interventions have shown a clear mortality benefit in its therapy. Clinical treatment mainly relies on supportive care with mechanical ventilation. Therefore, there is an urgent need for novel therapeutic strategies in ARDS treatment. As recent studies have shown, the therapeutic strategies under development take into consideration the regulation of signaling pathways involved in the pathophysiological mechanisms of ARDS. In this review, we have collated the existing evidence on molecular mechanisms in ARDS pathophysiology, involving inflammation, increased alveolar-capillary permeability, impaired AFC, and oxidative stress. Moreover, we reviewed recent promising therapeutic strategies for managing ARDS, highlighting the pathophysiological basis for their use and their influence on cell signaling molecule expression. Of note, pharmacologic therapies that achieved promising effects in preclinical studies have often failed to show efficacy in clinical trials involving unselected ARDS populations. These outcomes can be attributed to the clinical and biological heterogeneity of ARDS patients. Secondary analysis of data from randomized controlled trials has revealed distinct responses to simvastatin treatment, fluid strategy, and positive end-expiratory pressure strategy between the hypo-inflammatory and hyper-inflammatory subphenotypes [251, 340, 342]. By employing CT imaging data and physiological characteristics, latent class analysis revealed the presence of two subphenotypes exhibiting differing responses to lung recruitment [348]. In view of these findings, it is essential to identify homogeneous biological and clinical phenotypes of ARDS and to investigate further the underlying variations in molecular mechanisms among different subphenotypes. These efforts are critical for advancing more effective targeted pharmacologic therapies and achieving precision medicine.
Acute respiratory distress syndrome (ARDS) is a common condition in critically ill patients, characterized by bilateral radiographic chest opacities with refractory hypoxemia due to noncardiogenic pulmonary edema. Despite significant advances, the mortality of ARDS remains unacceptably high, and there are still no effective targeted pharmacotherapeutic agents. With the worldwide outbreak of coronavirus disease 2019, the mortality of ARDS has increased correspondingly. Comprehending the pathophysiology and the underlying molecular mechanisms of ARDS may thus be essential to developing effective therapeutic strategies and reducing mortality. To facilitate further understanding of its pathogenesis and the exploration of novel therapeutics, this review provides comprehensive information on ARDS from pathophysiology to molecular mechanisms and presents targeted therapeutics. We first describe the pathogenesis and pathophysiology of ARDS, which involve dysregulated inflammation, alveolar-capillary barrier dysfunction, impaired alveolar fluid clearance, and oxidative stress. Next, we summarize the molecular mechanisms and signaling pathways related to the above four aspects of ARDS pathophysiology, along with the latest research progress. Finally, we discuss the emerging therapeutic strategies that show exciting promise in ARDS, including several pharmacologic therapies, microRNA-based therapies, and mesenchymal stromal cell therapies, highlighting the pathophysiological basis for their use and their influences on signal transduction pathways.
Pathophysiology and pathogenesis of ARDS The normal lung functions to facilitate oxygen transfer and carbon dioxide excretion, a process established by the alveolar–capillary unit. The pulmonary endothelium consists of a monolayer of endothelial cells linked by adherens junctions and tight junctions. It contributes significantly to the precise regulation of fluid and solutes to prevent lung flooding [3]. The alveolar epithelium is lined by alveolar type I cells, which form a tight barrier allowing gas exchange, and alveolar type II cells, responsible for producing surfactant to reduce surface tension and keep the alveoli open. Both cell types can absorb edema fluid from the alveolar space, helping to resolve edema. The normal alveolus also contains alveolar macrophages (AMs), which provide host defense [4]. Regardless of the primary disease, the pathophysiologic manifestations of ARDS are very similar. Essentially, these syndromes reflect severe injury resulting in dysfunction of the alveolar-capillary barrier, impaired AFC, and oxidative injury due to an unregulated acute inflammatory response (Fig. 1). Excessive inflammation Acute lung injury (ALI) is initially caused by inflammation, which is mediated by an intricate interplay of inflammatory cytokines and chemokines released by various cell types in the lungs [5]. In response to direct insults such as bacteria, viruses, and gastric contents, the pattern recognition receptors (PRRs) expressed by alveolar innate immune cells, such as AMs, alveolar epithelial cells (AECs), and dendritic cells (DCs), are initially activated [6]. These cells release inflammatory cytokines that amplify the immune response by acting locally on other cells and recruiting circulating immune cells into the airspace. This effect further amplifies inflammation and aggravates lung injury [7]. Neutrophils have been widely implicated as playing a critical role in the pathogenesis of ARDS. Activation of accumulated neutrophils in the alveolar space and lung microvasculature produces numerous cytotoxic substances, including granular enzymes, pro-inflammatory cytokines, and neutrophil extracellular traps (NETs), resulting in sustained inflammation and alveolar-capillary barrier injury [8]. Additionally, the influx of adaptive immune cells also plays an essential role in promoting inflammatory injury and thrombosis by producing various cytotoxic molecules such as cytokines, perforin, granzyme B, and autoantibodies [9–11]. Unlike intrapulmonary ARDS, in which alveolar inflammation occurs initially, inflammatory injury caused by indirect factors is driven from the systemic compartment and spreads towards the alveolar compartment [12]. Lung endothelium activation, triggered by circulating stimuli released from extrapulmonary lesions into the blood, can also produce proinflammatory molecules that facilitate the adherence and infiltration of immune cells, further leading to vascular inflammation and alveolar damage [13]. Unlike other organs, the lung is continually exposed to various environmental challenges, including microbial pathogens, pollution, dust, and more [14]. Similarly, pulmonary endothelial cells are exposed to circulating inflammatory components, hormones, exotoxins, and endotoxins, which interact with both local and systemic inflammatory responses [15].
The alveolar epithelium, lung endothelium, and the cross-talk within the immune system collectively constitute the physical barrier and immune homeostasis of the lung [16]. Traditionally, a systemic inflammatory cascade has been used to describe immune dysregulation during ARDS, but this perspective has been challenged by the recognition of a compartmentalized immune response [17]. It was previously observed that intratracheal administration of lipopolysaccharide (LPS) leads to a significant increase in tumor necrosis factor-α (TNF-α) levels in bronchoalveolar lavage fluid (BALF) but not in plasma, whereas intravenous LPS administration results in an increase in TNF-α levels in blood but not in BALF [14]. Recently, compartmentalization of inflammation specific to the lung has also been observed in COVID-19-related ARDS [18]. Hence, when exploring biomarkers for diagnosis and subphenotyping, as well as investigating the pathophysiology and signaling pathways of ARDS, it is important to consider this organ-specific immune compartmentalization. Endothelial and epithelial permeability Another core pathophysiologic derangement is the increased permeability of two separate barriers, the lung endothelium and the alveolar epithelium. As a result of the dysregulated immune response, impairment of the endothelial barrier can occur owing to disruption of intercellular junctions, endothelial cell death, and glycocalyx shedding. In normal lungs, maintenance of the endothelial barrier is mediated by vascular endothelial cadherin (VE-cadherin), which tightly connects adjacent endothelial cells and prevents leucocyte migration and vascular leak [19, 20]. During lung injury, inflammatory factors mediate the phosphorylation of VE-cadherin, resulting in its endocytosis. Endocytosis of VE-cadherin induces gaps between endothelial cells, leading to increased permeability [21]. Moreover, disruption of endothelial tight junctions, such as a reduction in the protein levels of occludins and zonula occludens (ZOs), can also promote intercellular permeability [22]. In addition, endothelial cell death can increase permeability to proteins and solutes [23]. Similar to endothelial injury, disruption of epithelial barrier function involves the dissociation of intercellular junctions, primarily E-cadherin junctions, and alveolar epithelial cell death. In ARDS, various damaging factors can injure the alveolar epithelium directly or by inducing inflammation. The inflammatory injury caused by the immune response inevitably aggravates the direct damage to AECs, including cell death and intercellular junction disruption, leading to increased alveolar epithelial permeability [3]. Alveolar fluid clearance The failure to absorb alveolar edema fluid contributes significantly to increased mortality in ARDS. Basal AFC is determined by ion and fluid transport across the alveolar epithelium. In the normal epithelium, sodium is transported across the apical surface via the epithelial Na + channel (ENaC) and then pumped from the basolateral surface into the lung interstitium by the sodium–potassium adenosine triphosphatase (Na,K-ATPase), while chloride is transported through cystic fibrosis transmembrane conductance regulator (CFTR) channels [24]. This directional ion transport establishes an osmotic gradient that passively drives the removal of water from the alveoli to the interstitium through aquaporins or intracellular routes [25].
Subsequently, fluid can be eliminated via lymphatic drainage and the lung microcirculation [4]. However, these transport systems and functions are impaired in ARDS patients owing to epithelial injury caused by elevated levels of proinflammatory cytokines, leading to the loss of ion channels and pumps [26]. The increased flux of liquid and protein into the alveolar space greatly exceeds the capacity of AFC. The alveolar space filled with edematous fluid decreases the diffusion of carbon dioxide and oxygen, leading to hypoxia and hypercapnia, which further impair AFC by inhibiting Na,K-ATPase activity or inducing Na,K-ATPase endocytosis [27–29]. Patients with severe hypoxia frequently require mechanical ventilation to facilitate breathing. High tidal volumes and elevated airway pressures can induce biomechanical inflammatory injury and reduce Na,K-ATPase activity [30]. All of these events significantly inhibit AFC, leading to persistent alveolar edema, refractory hypoxemia and/or carbon dioxide retention. Oxidative stress and lung injury Oxidative stress, resulting from the production of reactive oxygen species (ROS), plays an important role in ARDS progression and lung injury. In response to inflammatory stimuli, various cell types in the lung can generate ROS. Most of the damaging ROS are produced by innate immune cells such as AMs and recruited leukocytes, and cause cell injury by inducing oxidation and cross-linking of proteins, lipids, DNA, and carbohydrates [31]. Significantly, ROS produced by neutrophils disrupt the endothelial barrier, facilitating the migration of recruited inflammatory cells across it and thereby aggravating inflammation [32]. Similarly, activated AECs and pulmonary endothelial cells can produce ROS, directly contributing to signaling that increases alveolar-capillary permeability and impairs sodium ion transport, thereby reducing the reabsorption of fluid from the alveolar compartment [33, 34]. In fact, oxidative stress and the inflammatory response reinforce each other throughout the progression of ARDS. Although there are many checks and balances in this system in the form of antioxidant defenses in ALI/ARDS, excessive production of ROS overwhelms endogenous antioxidants, leading to oxidative cell injury and exacerbation of inflammatory responses [35]. Molecular mechanisms and signaling pathways In this part, we discuss the specific functions of signaling pathways in regulating the pathophysiological processes of ARDS, including lung inflammation, alveolar-capillary permeability, and AFC, which may contribute to the discovery of potential novel therapeutic strategies. Signaling pathways related to inflammation Pattern recognition receptors Innate immune activation in both direct and indirect lung injury is considered a potent driver of lung inflammation. It is triggered by endogenous damage-associated molecular patterns (DAMPs) released by cells under conditions of stress, injury, or cellular death, as well as by microbial-derived pathogen-associated molecular patterns (PAMPs). DAMPs and PAMPs can be recognized by PRRs expressed in host cells, initiating PRR-induced signaling pathways that lead to the expression of inflammatory factors.
The following section mainly introduces the signaling pathways induced by PRRs, including toll-like receptors (TLRs), nucleotide-binding leucine-rich repeat receptors (NLRs), retinoic acid-inducible gene I (RIG-I)-like receptors (RLRs), cytoplasmic DNA sensors (CDSs), and receptors for advanced glycation end products (RAGEs), in relation to ARDS. To date, 10 functional TLRs have been identified in humans. TLR1, 2, 4, 5, and 6 are surface-expressed, while TLR3, 7, 8, and 9 are located in lysosomal or endosomal membranes. In the lung, different TLRs are expressed in various cell types and recognize specific PAMPs and DAMPs to generate inflammatory signals (Table 1) [6, 36]. The ability of TLRs to activate the transcription factors interferon regulatory factors (IRFs) or nuclear factor-κB (NF-κB) requires the recruitment of adaptor proteins, including myeloid differentiation primary response gene 88 (MyD88) and Toll/interleukin-1 (IL-1) receptor-domain-containing adaptor-inducing interferon-β (TRIF). MyD88 is utilized by all TLRs except TLR3, and TRIF is specifically recruited by TLR3 [37]. The activation of IRFs and NF-κB triggered by TLRs is actively involved in the production of type I-interferons (IFNs) and pro-inflammatory cytokines, respectively (Fig. 2a). However, in the context of ARDS, TLR signals are accompanied by an overwhelming production of pro-inflammatory cytokines. Notably, TLRs elicit specific responses in polymorphonuclear neutrophil granulocytes that aggravate inflammation during ARDS. Previous studies have revealed that TLR9 and TLR4 contribute to the release of NETs, which contain DAMPs such as proteases, histones, and self-DNA that induce inflammation and thrombus development (Fig. 2d) [37–40]. A recent finding indicates that TLR9 activation induces neutrophil elastase- and proteinase 3-mediated shedding of the complement component 5a receptor, resulting in a decreased ability to clear bacteria and prolonged ALI in mice [41]. Interestingly, the activation of TLR4 has been shown to play a dual role in regulating lung inflammation. TLR4 on AMs activated by heat shock protein (HSP) 70 conditionally promotes the clearance of apoptotic neutrophils by preventing a disintegrin and metalloprotease 17-mediated cleavage of Mer receptor tyrosine kinase, thereby improving the outcome of ventilator-induced lung injury (VILI) [42]. Therefore, precise regulation of TLR signals to suppress inflammation and promote the resolution of ARDS may be an effective strategy. NLRs are also commonly studied innate immune receptors involved in ARDS. NLRs are cytoplasmic PRRs, which can be divided into different subfamilies according to their N-terminal domains, including nucleotide-binding oligomerization domain (NOD), nucleotide-binding domain leucine-rich repeat protein (NLRP), neuronal apoptosis inhibitory protein (NAIP), and nucleotide-binding oligomerization domain-like receptor subfamily C (NLRC) [6, 43]. NOD1 and NOD2 mainly detect bacterial components and recruit the downstream receptor-interacting serine/threonine-protein kinase 2, which leads to NF-κB or mitogen-activated protein kinase (MAPK) activation [36]. NLRP1, NLRP3, NLRC4, and NAIP have been characterized as assembling inflammasomes in the lung (Table 1) [6]. In general, activation of the inflammasome requires two independent signals. The priming signal is the upregulation of NLRs, pro-IL-1β, pro-IL-18, and pro-caspase-1 through NF-κB activation.
The second step is induced by NLRs responding to a variety of PAMPs and DAMPs inside the cell (Table 1) [44]. The activated NLRs then assemble inflammasomes to mediate caspase-1-dependent cleavage of pro-IL-1β and pro-IL-18. The secreted mature forms of IL-1β and IL-18 can further induce inflammation by engaging their respective cytokine receptors to trigger MyD88/NF-κB signaling [45]. In addition, inflammasome activation can induce cell pyroptosis through caspase-1-mediated proteolysis of gasdermin D, resulting in the formation of pores in the cell membrane and subsequent cell rupture [46]. Pyroptosis leads to a large release of DAMPs and inflammatory mediators (including IL-1β and IL-18) that further enhances inflammatory responses [45]. The synergistic interaction between NF-κB and NLRs may account for the supranormal release of cytokines. Thus, inhibiting NF-κB signaling and targeting NLRs appear to hold potential for mitigating inflammasome-induced ARDS and the subsequent cytokine storm. The surveillance of abnormal nucleic acids from invading pathogens or damaged cells is conducted by PRRs including RIG-I, melanoma differentiation-associated gene 5 (MDA5), cyclic GMP-AMP synthase (cGAS), and absent in melanoma 2 (AIM2). The cytosolic receptors RIG-I and MDA5, expressed in various host cells, both belong to the RLRs, which provide an important defense against viral infections (Table 1) [47, 48]. They recognize viral RNA containing a 5′-triphosphate end and subsequently activate the downstream adapter mitochondrial antiviral signaling protein (MAVS) to induce the activation of IRFs and NF-κB (Fig. 2c) [49]. These signals result in the expression of antiviral type I-IFNs and other inflammatory cytokines [50]. However, there is evidence that RLR signaling cascades induce excess inflammation in ARDS, clinically manifested by the upregulation of inflammatory cytokines in the BALF of patients with severe viral infections [51]. The abnormal presence of DNA in the cytoplasm, whether from infection or cellular damage, induces immune responses through the cytoplasmic DNA sensors cGAS and AIM2 (Table 1). cGAS binds to double-stranded DNA, driving the synthesis of the cyclic dinucleotide cyclic GMP-AMP (cGAMP), which activates the stimulator of interferon genes (STING), an endoplasmic reticulum (ER) membrane protein. Activated STING induces the production of inflammatory factors through the activation of downstream NF-κB and IRF3 (Fig. 2b) [52]. AIM2 detects double-stranded DNA to assemble an AIM2 inflammasome complex, which contains AIM2, apoptosis-associated speck-like protein containing a CARD (ASC), and caspase-1. This complex regulates the maturation of IL-1β and IL-18, as well as inducing cell pyroptosis [43, 53]. Similarly, sustained activation of these pathways is detrimental to the host. For instance, self-DNA released by cell death or cellular stress after severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection may activate the cGAS-STING pathway, leading to excessive production of inflammatory factors and exacerbating the severity of COVID-19 [52]. Of note, the cGAS-STING signaling pathway has been shown to phosphorylate the signal transducer and activator of transcription (STAT) 1, which results in the production of adhesion molecules and chemokines that promote immune cell adhesion and migration during ARDS in vivo [54].
Thus, targeting signaling pathways associated with nucleic acid sensors offers potential avenues for anti-inflammatory therapy in ARDS. RAGE is a PRR highly expressed in the lungs, particularly in AECs (Table 1) [55]. It exists in two forms: membrane-bound and soluble. Membrane-bound RAGE can recognize a variety of ligands (Table 1), triggering various intracellular cascades including NF-κB, MAPK, and phosphatidylinositol 3-kinase (PI3K)/protein kinase B (AKT), ultimately leading to the induction of inflammatory factors (Fig. 2a) [56]. Since the expression of RAGE is significantly upregulated during ARDS, persistent inflammation from RAGE activation may have harmful effects [57]. In contrast, soluble RAGE is thought to be protective, as it retains ligand-binding ability while lacking signaling function [58]. Together, targeting membrane-bound RAGE to inhibit inflammatory signaling pathways, or competitively binding RAGE ligands through the administration of soluble RAGE, could be therapeutic strategies for ARDS. In addition to PRRs, other receptors serve as positive regulators of inflammation by recognizing various DAMPs or PAMPs (Table 1). The purinergic ionotropic receptor P2X7 is a membrane ion channel involved in the activation of NLRP3 inflammasomes through recognition of extracellular ATP (Fig. 2a) [59, 60]. The transient receptor potential (TRP) channels on the cell surface allow Ca 2+ influx to initiate NF-κB-dependent inflammatory responses, which can be triggered by environmental irritants as well as inflammatory cytokines and pathogens [61–63]. The N-formyl peptide receptor (FPR) is a member of the G-protein-coupled receptors (GPCRs). It recognizes N-formylated peptides derived from bacteria or mitochondria, activating downstream MAPKs, AKT, and NF-κB pathways to induce inflammation [64]. Activation of neutrophils in response to FPR signaling leads to inflammatory responses such as elastase release, oxidative burst, and chemotactic migration (Fig. 2d) [65]. The activation of these signaling pathways contributes significantly to the development of robust inflammatory responses during ARDS. Conversely, adenosine receptors, which also belong to the GPCRs, have been reported to exert advantageous anti-inflammatory effects in ARDS. Notably, there is an essential link between hypoxia and inflammatory signaling, which serves as a vital physiological protective mechanism to alleviate acute lung inflammation [66]. Mechanistically, cytoplasmic hypoxia-inducible factors (HIFs) are stabilized in response to hypoxia and translocate to the nucleus to induce the transcription of adenosine receptors. Besides, the increased release of extracellular ATP/ADP during inflammation also raises adenosine levels, which promotes a feedback loop that attenuates inflammation [67, 68]. A subtype of adenosine receptor, the A2A receptor, has been identified as a target gene of HIF-1α in the alveolar epithelium and contributes to lung protection during ALI [69]. In addition, Ko et al. found that the A2A receptor exerts anti-inflammatory functions by inhibiting downstream MAPK and NF-κB [70]. However, this physiologically protective HIF/adenosine signaling is often compromised in ARDS patients owing to the hyperoxic conditions required in the lung, which may exacerbate acute inflammatory lung injury [71]. NF-κB signaling pathway NF-κB is a transcription factor named for its specific binding to a conserved sequence in the nuclei of activated B lymphocytes [72].
In the resting state, NF-κB exhibits no transcriptional activity, as it is bound to the NF-κB inhibitor (IκB) in the cytoplasm. Upon activation by upstream signals, IκB kinase induces the dissociation of the IκB protein from NF-κB and its subsequent degradation. Consequently, NF-κB translocates to specific DNA target sites in the nucleus, initiating the transcription and expression of inflammatory genes [73]. NF-κB signaling can be triggered by multiple stimuli, including cytokines (e.g., TNF-α and IL-1β), microbial infection (LPS), the activated PRRs described above, stress (e.g., ER stress and ROS), as well as elevated CO 2 during hypercapnia [74, 75]. Notably, aberrant regulation of NF-κB is implicated in inducing detrimental inflammation in ARDS [76]. The NF-κB pathway produces a variety of cytokines, chemokines, and adhesion molecules, contributing to processes encompassing inflammation, immune cell recruitment, cell adhesion, and cell differentiation. For instance, after exposure of the lungs to noxious agents, NF-κB activation can be initiated by PRRs in both epithelial and endothelial cells, as well as in resident immune cells, primarily AMs. Subsequently, these activated cells release cytokines (such as TNF-α and IL-1β) and chemokines (such as IL-8 and monocyte chemoattractant protein-1), amplifying inflammation by activating adjacent cells and recruiting additional immune cells from the peripheral tissues [74]. NF-κB activation also promotes AM polarization into classically activated (M1) macrophages, which overproduce cytokines to drive the cytokine storm [77]. In the endothelium, activated NF-κB triggers the expression of adhesion molecules, which promote the adherence of recruited immune cells and their crossing of the alveolar-capillary barrier into the alveolar space, where they propagate inflammation and injury through the sustained production of inflammatory cytokines [78]. Besides, NF-κB can downregulate anticoagulant proteins, causing intravascular coagulation and thrombin generation, which in turn aggravates inflammatory lung injury [79]. All of these events together contribute to the progression of ARDS. Therefore, modulating the activation of NF-κB and inhibiting the degradation of IκB hold potential for mitigating the cytokine storm and ameliorating the severity of ARDS. Notch signaling pathway The Notch signaling pathway is well studied for its regulation of cell proliferation and differentiation in the respiratory system [80]. Currently, four Notch receptors (Notch 1, 2, 3, and 4) and five ligands (Jagged-1, 2 and delta-like ligand 1, 3, and 4) have been identified in mammals. Upon activation, the Notch intracellular domain is released and translocates to the nucleus, where it activates the transcription of target genes such as Hairy/Enhancer of Split 1 (Hes1) [81]. Several studies have highlighted the significant role of the Notch pathway in sepsis and ARDS, particularly in promoting proinflammatory cell polarization. Notch activation in macrophages drives M1 polarization to induce inflammation, whereas inactivation of Notch signaling typically contributes to alternatively activated (M2) macrophage polarization, which alleviates inflammation [82, 83]. Li et al. [84] reported that Notch signaling is involved in T helper 17 (Th17) cell differentiation; these cells release IL-17 and IL-22, aggravating lung inflammation and neutrophil infiltration in an LPS-induced ALI model.
Conversely, some studies have also suggested a role for Notch signaling in anti-inflammatory responses. Lu et al. [85] found that mesenchymal stem cells activate Notch signaling, leading to the production of regulatory DCs, which inhibit inflammatory responses in LPS-induced ALI. Whether this signaling pathway facilitates or inhibits ARDS therefore remains unresolved. Janus kinase (JAK)/STAT signaling pathway The essential role of the JAK/STAT signaling pathway in apoptosis, differentiation, and inflammation is widely studied. The JAK family consists of four members (JAK1, 2, 3, and tyrosine kinase 2), while the STAT family comprises seven members (STAT1, 2, 3, 4, 5A, 5B, and STAT6) [86]. This pathway functions by transmitting extracellular signals from cytokines or growth factors to the nucleus, triggering the transcription of target genes [87]. Cytokine-mediated JAK/STAT signaling plays a critical role in amplifying inflammatory signals in both immune and non-immune cells. The activation of JAK/STAT3 by IL-6 promotes the differentiation of Th17 cells, CD8 + T cells, and B cells while inhibiting the development of regulatory T cells [88]. Type II-IFN-mediated JAK/STAT1 activation induces pro-inflammatory M1 phenotypes (Fig. 2c) [89]. STAT1, STAT3, and STAT5, when activated by granulocyte colony-stimulating factor (G-CSF), promote the accumulation and activation of neutrophils in the lung (Fig. 2d) [90]. Excessive activation of these immune cells leads to an unrestrained release of pro-inflammatory cytokines and chemokines, aggravating lung injury [91]. In non-immune cells, such as pulmonary endothelial cells and AECs, the IL-6/JAK/STAT3 axis induces the release of various inflammatory cytokines and chemokines, which is significantly associated with the severity of ARDS (Fig. 2b) [92]. Moreover, JAK/STAT is implicated in the differentiation of anti-inflammatory immune cells, suggesting its potential for relieving the inflammatory reaction in ARDS. IL-4 and IL-2 participate in Th2 differentiation, leading to the release of anti-inflammatory cytokines that counteract Th1 cells, through the activation of STAT6 and STAT5, respectively [91, 93]. Activation of JAK/STAT6 triggered by Th2-related cytokines such as IL-4 and IL-13, along with JAK1/STAT3 signaling triggered by IL-10, may promote M2 polarization. This polarization is pivotal in inflammatory resolution and the lung fibroproliferative response in the late phase of ALI/ARDS (Fig. 2c) [89, 94]. Here, the function of the JAK/STAT pathway is precisely controlled by diverse inflammatory cytokines. Modulating the effects of the downstream JAK/STAT pathway by targeting pro-inflammatory cytokines and/or their respective receptors may have therapeutic efficacy in ARDS. MAPK signaling pathway The MAPKs are a class of serine/threonine protein kinases that transmit signals through a three-tiered sequential phosphorylation cascade and induce various cellular responses [55]. MAPKs are subdivided into four distinct subfamilies, namely extracellular signal-regulated kinase (ERK) 1 and ERK2, c-Jun N-terminal kinase (JNK), p38MAPK, and ERK5 [95]. Activation of the MAPK pathway can be initiated by multiple stimuli, such as growth factors and cytokines, through their interaction with specific receptors. Additionally, environmental stress and infections can directly trigger MAPK activation [96].
Recent studies have underscored the pivotal role of the MAPK pathway in the inflammatory processes of ARDS, primarily through facilitating the release of inflammatory cytokines and chemokines [97–99]. Besides, accumulating evidence has demonstrated that MAPK activation aggravates lung inflammation in ARDS by upregulating the activity of the NLRP3 inflammasome and NF-κB in animal models [100–102]. In addition, previous studies have revealed the involvement of the MAPK pathway in eliciting tissue factor expression in endothelial cells under inflammatory stimuli such as TNF-α and C-reactive protein, which results in activation of the coagulation system and fibrin deposition (Fig. 2b) [103–105]. Thus, blocking MAPK signaling may reduce lung damage in ARDS by alleviating the inflammatory response and the clotting cascade. PI3K/AKT signaling pathway The PI3K/AKT pathway is ubiquitous in cells and participates in numerous pathophysiological processes of ARDS [106]. Cell surface receptor tyrosine kinases and GPCRs recognize their ligands and activate PI3K, which in turn converts phosphatidylinositol 4,5-bisphosphate into phosphatidylinositol 3,4,5-trisphosphate to activate AKT [107]. The role of the PI3K/AKT pathway in regulating inflammation during ARDS remains controversial. Zhong et al. [108] recently reported that the mammalian target of rapamycin (mTOR), a downstream target of PI3K/AKT, phosphorylates the downstream transcription factor HIF-1α to induce glucose metabolic reprogramming of macrophages, resulting in NLRP3 inflammasome activation and aggravated macrophage-mediated inflammation in an LPS-induced ALI model. Besides, several studies have shown that activation of the PI3K/AKT pathway increases inflammatory cytokines by activating the downstream NF-κB signal [109–111]. However, some recent studies have drawn a different conclusion, suggesting that activation of PI3K/AKT is associated with the alleviation of lung inflammation [112, 113]. Zhong et al. [113] reported that PI3K/AKT activation inhibited downstream NF-κB and the NLRP3 inflammasome to alleviate inflammation in an LPS-induced ALI model. Therefore, further investigation is necessary to explore the potential positive and/or negative effects of the PI3K/AKT pathway in regulating inflammation in ARDS. ER stress-mediated signaling pathway Various pathological conditions, such as sepsis, trauma, ischemia, and viral infections, can induce ER stress, defined as the accumulation of unfolded or misfolded proteins in the ER lumen [114]. Protein kinase RNA-like ER kinase (PERK), inositol-requiring kinase 1α (IRE1α), and activating transcription factor 6 (ATF6) are transmembrane proteins of the ER. They transduce ER stress signals induced by imbalances in cellular homeostasis, initiating the unfolded protein response, which protects the cell by degrading these unfolded or misfolded proteins [115]. However, severe or prolonged ER stress has been observed to promote inflammation in ARDS by activating a series of signals, such as MAPK and NF-κB [116–118]. Ye et al. [119] reported that IRE1α phosphorylated during mechanical ventilation activates NF-κB signaling to promote lung injury and inflammatory processes (Fig. 2b). Given its role in the inflammatory cascade, pharmacological interventions targeting ER stress might be a potential strategy for ARDS therapy.
Transforming growth factor-β (TGF-β)/Small mothers against decapentaplegic (Smad) signaling pathway The TGF-β signaling pathway is well accepted as an inducer of lung fibrosis resulting from various diseases [120]. The interaction between TGF-β and its membrane TGF-β receptor complex leads to the phosphorylation of the cytoplasmic effectors Smad2/3, which form a complex with Smad4 that translocates into the nucleus to regulate gene expression [121]. It was shown earlier that the TGF-β pathway contributes to the development of ARDS through the promotion of lung permeability, impaired epithelial ion transport, and fibrosis [34, 122, 123]. In addition, TGF-β exhibits potent proinflammatory properties. As early as 1994, Shenkar et al. [124] showed that mice administered anti-TGF-β antibodies exhibited reduced pro-inflammatory cytokine levels in comparison with untreated mice in a hemorrhage-induced ALI model. Similarly, a recent study demonstrated a reduction in inflammatory cytokine levels in an ALI model after inhibiting TGF-β/Smad signaling in vitro [125]. Another proinflammatory mechanism is that TGF-β activates MAPK and NF-κB in a Smad-independent manner, which can occur during M1 phenotype transformation to induce inflammation (Fig. 2c) [89, 126]. In summary, active TGF-β signaling plays a critical role in ARDS, making it a potential therapeutic target. TNF-α signaling pathway TNF-α is a key cytokine involved in initiating and perpetuating inflammation in ARDS, produced by various cells in response to inflammatory stimuli [127]. TNF exerts its cellular effects through two cell surface receptors, TNF receptor (TNFR) 1 and TNFR2. Binding of TNF-α to TNFR1 recruits adaptor proteins, including the TNFR-associated death domain (TRADD) protein and TNFR-associated factor (TRAF) 2, activating NF-κB, MAPK, and activator protein-1 (AP-1) (Fig. 2b). These activated signals subsequently increase the expression of TNF-α to amplify its inflammatory effects [128]. The recent documentation of TNF-α's proinflammatory role in animal models of ALI/ARDS induced by LPS and severe acute pancreatitis suggests that targeting TNF-α could be an attractive therapeutic approach for ARDS [127, 129, 130]. Increased endothelium and epithelium permeability Emerging evidence suggests that different modalities of cell death, such as necrosis, apoptosis, necroptosis, ferroptosis, and pyroptosis, coexist in the endothelium and epithelium of the lung during ARDS, leading to barrier dysfunction and pulmonary edema. Besides, disruption of intercellular junctions and cytoskeletal reorganization underlie the loss of alveolar-capillary barrier integrity. These events ultimately lead to the accumulation of leaked fluid and proteins in the alveolar spaces. Alveolar epithelial and pulmonary endothelial cell death Apoptosis has been widely demonstrated to cause injury of the endothelium and epithelium in the lung during ARDS. This programmed type of cell death can be triggered by extrinsic or intrinsic apoptosis pathways. Several well-studied signaling pathways led by death receptors have been implicated in mediating extrinsic apoptosis, including Fas/Fas ligand (FasL), TNF-α/TNFR1, and TNF-related apoptosis-inducing ligand (TRAIL)/TNF-related apoptosis-inducing ligand receptor (TRAILR) signaling [131]. Many studies have supported the significant role of Fas/FasL signaling in epithelial apoptosis in ARDS, with elevated concentrations of Fas and FasL detected in the BALF of ARDS patients [132].
In vitro experiments demonstrated that BALF from ARDS patients induced apoptosis in a lung epithelial cell line, an effect that could be reversed by blocking Fas/FasL signaling [133]. Besides, multiple animal studies have pointed to the role of Fas/FasL in inducing AEC apoptosis and lung edema during ALI/ARDS [134–136]. In addition, TNF-α/TNFR1-mediated apoptosis may contribute to endothelial injury in ARDS. Hamacher et al. [137] demonstrated that BALF from ARDS patients exhibited cytotoxicity towards human lung microvascular endothelial cells; this cytotoxic activity was effectively inhibited by neutralizing TNF-α antibodies. Contradictory findings regarding the role of TNF-α/TNFR1 in mediating alveolar epithelial apoptosis have complicated the understanding of ARDS. A previous in vivo study demonstrated that intratracheal TNF-α instillation did not dramatically affect early apoptotic cell death in the lung after LPS exposure [138], whereas Sun et al. [139] recently found that TNF-α significantly enhances IFN-β-mediated apoptosis of airway epithelial cells in vitro. The involvement of TRAIL/TRAILR signaling in apoptosis and lung barrier dysfunction during ARDS has also been described [140]. Previous reports have identified type I-IFNs as potent inducers of TRAIL in AMs. The substantial release of TRAIL from AMs upon type I-IFN stimulation may lead to apoptosis in the alveolar epithelium [141]. ARDS is also associated with intrinsic apoptosis, which occurs owing to increased permeability of the mitochondrial outer membrane and is known as mitochondrial-dependent apoptosis. This form of apoptosis can be induced by various stimuli, such as elastase, ROS, and LPS [33, 142]. Additionally, dynamin-related protein 1 (Drp1), a cytoplasmic GTPase, has recently been shown to trigger mitochondrial-dependent apoptosis by inducing mitochondrial fission in AECs [143, 144]. In addition to the extrinsic and intrinsic apoptosis pathways, several other signaling pathways play a role in regulating apoptosis in ARDS. Apoptosis signal-regulating kinase 1 (ASK1), a member of the MAPK kinase kinase family, is ubiquitously expressed in various cell types. When cells are exposed to inflammatory factors, ASK1 becomes activated, phosphorylates JNK, and further induces cell apoptosis [145]. ASK1/JNK-mediated apoptosis in the alveolar epithelium and endothelium has already been reported in various ALI models [145–147]. In contrast, the PI3K/AKT pathway exerts a protective role in resisting apoptosis by inactivating proapoptotic proteins, with its activation partially dependent on binding to vascular endothelial growth factor (VEGF) [142, 148]. However, this protective pathway is downregulated during ARDS, partially attributable to decreased VEGF expression in injured epithelial cells, thereby aggravating alveolar-capillary injury [149, 150]. Necroptosis is another cell death program that has been implicated in inducing endothelial/epithelial injury in ARDS. Necroptosis is initiated by various receptors (e.g., Fas, TNFR, TLRs), inflammatory cytokines, and mitochondrial dysfunction. Subsequently, receptor-interacting protein kinase (RIPK) 1 and/or RIPK3 are recruited, leading to the activation of mixed lineage kinase domain-like protein (MLKL), which damages cell membrane integrity and induces necroptosis [151, 152].
Various ALI/ARDS preclinical models have recently demonstrated evidence of necroptosis in epithelial and/or endothelial barrier dysfunction, as assessed by RIPK and MLKL measurements [153–156]. Besides, a large ICU cohort study has implicated RIPK3 in the development of VILI. Subsequent animal experiments indicated the importance of the necroptotic function of RIPK3, evident in the protective effect observed in RIPK3 knockout mice, whereas MLKL knockout mice remained unaffected by VILI [157]. Moreover, lung autopsies of COVID-19 ARDS patients have found that angiopoietin (Ang) 2 levels are correlated with necrotic lung endothelial cell death, as shown by a linear correlation between the levels of Ang2 and RIPK3 [158]. Based on these studies, the use of inhibitors targeting the necroptosis pathways involving RIPK and MLKL shows promise as a potential therapy for ARDS. Autophagy, a catabolic process that degrades cytoplasmic components to maintain cell homeostasis, can have either beneficial or injurious effects in response to different stimuli [159]. While autophagy is an adaptive process, excessive autophagy can lead to cell death. Nonetheless, autophagy plays a paradoxical role in mediating alveolar-capillary barrier function in ARDS. In previous research, H5N1 infection induced autophagy of AECs via inhibition of the PI3K/AKT/mTOR1 pathway [160]. Besides, exposure to LPS was reported to induce autophagic death in human alveolar epithelial cells via activation of the PERK pathway upon ER stress in vitro [161]. However, a recent study suggests that LPS-induced autophagy decreases cell death in mouse lung epithelium [162]. These discrepant findings may be attributed to variations in experimental conditions, highlighting the intricate role of autophagy in the pathogenesis of ARDS. Thus, the role of autophagy in this setting remains an area of high research value. Differing from other forms of cell death, pyroptosis is an inflammatory programmed cell death induced by various pathological stimuli or microbial infections and accompanied by the release of inflammatory cytokines [163]. The pyroptotic pathway comprises the canonical pathway, which is mediated by caspase-1 and relies on inflammasome activation, as well as the non-canonical pathway associated with caspase-4/5/11. The formation of canonical inflammasomes primarily involves cytoplasmic sensors, with NLRs and AIM2 being the most common [164]. Numerous studies have indicated that pyroptosis of epithelial and endothelial cells mediated by PAMPs and DAMPs could lead to increased barrier permeability and amplification of the inflammatory cascade [165–167]. Thus, we suggest that modulating specific elements within the pyroptotic pathways of epithelial and endothelial cells could potentially mitigate the development of ARDS, preserving alveolar-capillary integrity and attenuating the secretion of cytokines. Intercellular junction impairment of epithelium and endothelium The normal endothelium forms connections through intercellular tight junctions (TJs) and adherens junctions (AJs). TJs consist of transmembrane proteins, including claudins, occludins, and junctional adhesion molecules (JAMs), as well as cytoplasmic ZO proteins responsible for anchoring tight junctions to the actin cytoskeleton [168]. VE-cadherin serves as the primary component of AJs and establishes connections with p120-catenin, β-catenin, and α-catenin to link with the actin cytoskeleton.
The stabilization of VE-cadherin is achieved through the receptor tyrosine kinase Tie2 and vascular endothelial protein tyrosine phosphatase (VE-PTP), both of which prevent its internalization and thus protect against endothelial barrier disruption [4]. The intercellular junction structure of the alveolar epithelium is similar to that of the endothelium, but the main component of its AJs is E-cadherin. The coordinated expression of, and interplay among, AJs, TJs, and the actin cytoskeleton play a key role in maintaining alveolar-capillary barrier integrity. Multiple signaling pathways have been found to compromise the integrity of the epithelial and endothelial barriers during ARDS, some of which are associated with reduced expression and altered distribution of AJ proteins. Xiong et al. [169] found that the disruption of the endothelial barrier by IL-1β was attributable to the downregulation of the transcription factor cyclic adenosine monophosphate (cAMP) response element binding protein (CREB), along with its target VE-cadherin (Fig. 3). Besides, the internalization of VE-cadherin may also lead to endothelial barrier disruption through the separation of intercellular VE-cadherin bonds. Research has provided evidence that TLR4 activation triggers Src kinase phosphorylation, subsequently leading to the phosphorylation of p120-catenin and VE-cadherin. This results in VE-cadherin internalization and increased paracellular permeability in a sepsis-induced ALI model (Fig. 3) [170]. Likewise, decreased expression of TJ proteins may contribute to barrier disruption. It has been reported that Drp1-mediated mitochondrial fission could induce deregulation of ZO-1 and occludins in ALI models [143]. Significantly, the interaction of RAGE and high-mobility group box 1 (HMGB1) plays a crucial role in the dysregulation of both TJs and AJs during ARDS. Studies have indicated that the HMGB1/RAGE signaling pathway downregulates the expression of VE-cadherin and E-cadherin in the endothelium and epithelium, respectively, paralleled by decreased expression of TJ proteins such as occludins, claudins, and ZO-1 in preclinical ARDS models (Fig. 3) [67, 171]. In addition, barrier hyperpermeability is also related to cytoskeletal rearrangement in the epithelium and endothelium, which induces actin cytoskeleton shortening, cell contraction, and intercellular junction rupture [172]. Members of the intracellular Rho GTPase family, RhoA, Rac, and Cdc42, play pivotal roles as regulators of cytoskeletal rearrangement [173]. Rac1 and RhoA exhibit opposing effects: Rac1 facilitates the assembly and maintenance of AJs, whereas RhoA induces cytoskeletal contraction through the activation of Rho-associated protein kinase (ROCK) and subsequent myosin light chain (MLC) phosphorylation [174, 175]. Various inflammatory agents, such as IL-1, TGF-β, thrombin, endothelin-1, and angiotensin II, have been shown to activate RhoA/ROCK signaling in the pathogenesis of ARDS [174]. Besides, some molecular pathways also participate in regulating RhoA/ROCK signaling. Sphingosine-1 phosphate (S1P) released by activated platelets can engage its receptors S1P2 and S1P3 on the surface of endothelial cells, thereby inducing RhoA/ROCK-dependent barrier disruption (Fig. 3) [173]. HMGB1/RAGE signaling has previously been shown to induce cytoskeletal rearrangement through downstream activation of p38MAPK and phosphorylation of the actin-binding protein HSP27; recently, it has also been reported to activate downstream RhoA/ROCK to enhance alveolar-capillary permeability [171].
Ca 2+ influx triggered by the activation of transient receptor potential-vanilloid 1 channels has been reported to induce cytoskeletal rearrangement in the alveolar epithelium of a seawater inhalation-induced ALI model, but whether RhoA/ROCK is involved has not been elucidated (Fig. 3) [62]. Moreover, the epithelial–mesenchymal transition (EMT) is considered a pivotal phenomenon during the progression of ARDS. During this process, epithelial cells lose their epithelial morphology and acquire a mesenchymal morphology, as manifested by the downregulation of intercellular junction proteins along with the expression of profibrotic proteins such as α-smooth muscle actin. Studies have demonstrated activation of the Wnt/β-catenin pathway in ALI/ARDS. Wnt protein released by macrophages binds to its receptor Frizzled on the membrane of the alveolar epithelium, resulting in β-catenin translocation to the nucleus and subsequent regulation of various genes. This process ultimately promotes EMT and induces pulmonary fibrosis [176–178]. Interestingly, recent reports have indicated that Wnt signaling upregulated by mesenchymal cells under hypercapnic conditions impairs the proliferative capacity of alveolar epithelial cells by inhibiting downstream β-catenin signaling, leading to epithelial barrier dysfunction and exacerbating pulmonary edema [179]. Thus, modulating this pathway could serve as a therapeutic strategy to alleviate fibrosis and promote lung repair after injury. Several signaling pathways exert barrier-protective functions in lung tissue, although they are generally downregulated during ARDS. Many molecular pathways within the endothelium collaborate to enhance barrier function through the stabilization and increased expression of VE-cadherin. The Ang-Tie2 signaling axis has been extensively studied as one of the pathways implicated in endothelial barrier dysfunction during inflammatory diseases such as sepsis and ARDS [180]. Both Ang1 and Ang2 are ligands of Tie2 but exert opposite roles in this signaling pathway through competitive binding of Tie2. Activation of Tie2 by Ang1 leads to the inhibition of Src kinase, preventing the internalization of VE-cadherin [181]. Besides, Ang1/Tie2 signaling also activates downstream PI3K/AKT, which activates Rac1 kinase and thereby prevents cytoskeletal rearrangement (Fig. 3) [182]. Nevertheless, the elevated levels of Ang2 during ARDS hinder these vascular protective effects of Tie2 activation [183]. Additionally, upregulation of Tie2 expression can provide additional protection for vascular barrier integrity by preventing the disruption of VE-cadherin junctions. In a recent study, the protective role of endogenous bone morphogenetic protein 9 (BMP9) was demonstrated in a murine ALI model. Exogenously applied BMP9 binds to its receptor, activin receptor-like kinase 1 (ALK1), which is exclusively expressed in endothelial cells, leading to increased Tie2 expression and preventing further VE-cadherin internalization (Fig. 3). However, the protective effect of BMP9 on barrier integrity is diminished owing to the cleavage of BMP9 by neutrophil-derived proteases during ARDS [184]. Roundabout 4 (Robo4) is an endothelial-specific receptor that prevents VE-cadherin internalization by interacting with the endothelium-derived ligand Slit2, thereby suppressing vascular permeability (Fig. 3) [185].
Exogenously applied Slit2 N-terminal fragment has previously been demonstrated to protect mice against pulmonary vascular leakage under various injurious conditions [ 186 ]. However, a recent study by Morita et al. [ 187 ] revealed that BMP9/ALK1 signaling negatively regulates Robo4 expression. In their studies, inhibition of ALK1 in mouse COVID-19 models was found to upregulate Robo4 expression and suppress vascular permeability in the lung. Further studies are required to investigate the interaction between BMP9/ALK1 and Robo4 signaling and their exact roles in ARDS. Hypoxemia, a hallmark of ARDS in patients, has previously been implicated in the activation of HIF-2α, leading to increased VE-PTP gene expression and enhancement of the adhesive function of VE-cadherin (Fig. 3 ) [ 188 ]. Properly harnessing this endogenous protective mechanism may be of value in patients with ARDS. Moreover, epithelial regeneration following ARDS is recognized as crucial for improving respiratory function in the remaining lung. Recently, the Hippo/yes-associated protein (YAP) pathway has emerged as a contributor to lung repair and recovery after ALI. In the late phase of ARDS, the key effector molecule YAP translocates from the cytoplasm into the nucleus to govern the expression of target genes, promoting ATII proliferation and the reassembly of epithelial AJs (Fig. 3 ) [ 189 , 190 ].

Impaired alveolar fluid clearance
It is well recognized that active fluid transport is impaired in ARDS, reflecting both increased alveolar-capillary permeability and impaired AFC, which is controlled predominantly by ENaC, Na,K-ATPase, CFTR channels and aquaporins. Here we focus on the signaling pathways that mediate ion and water transport across the lung epithelium during ARDS. Previous studies have shown that elevated levels of proinflammatory factors during ARDS result in reduced expression of alveolar ion channels and impaired AFC. For example, IL-1β and LPS reportedly reduce the expression of ENaC via p38MAPK activation to decrease AFC [ 191 , 192 ]. TNF-α induces declines in ENaC activity and expression through binding to TNFR1 [ 193 ]. TGF-β/Smad signaling decreases ENaC and CFTR expression, resulting in AFC failure [ 34 , 194 ]. Elevated levels of angiotensin II after lung injury have been implicated in decreasing ENaC expression through the inhibition of cAMP [ 195 ]. In addition, it has been suggested that TRAIL/TRAILR signaling induces the degradation of Na,K-ATPase independently of the caspase-elicited cell death pathway, in a manner mediated by cytoplasmic AMP-activated protein kinase (AMPK) [ 196 ]. Moreover, the expression of aquaporins has been shown to be downregulated through RAGE signaling and p38MAPK activation [ 197 ]. These signaling pathways point to a strong cross-link between inflammatory amplification and AFC impairment during ARDS (Fig. 4 ). Furthermore, hypoxemia and hypercapnia resulting from ventilation-perfusion mismatch and alveolar edema in ARDS can also downregulate alveolar fluid transport. Hypoxia has been shown to directly induce protein kinase C-ζ (PKC-ζ) phosphorylation, leading to the endocytosis of Na,K-ATPase and a subsequent reduction in AFC [ 198 ]. Similarly, elevated CO 2 levels during hypercapnia have been demonstrated to increase intracellular Ca 2+ concentration, which activates Ca 2+ /calmodulin-dependent kinase kinase-β (CAMKK-β) and AMPK to phosphorylate PKC-ζ, resulting in Na,K-ATPase endocytosis (Fig. 4 ) [ 199 , 200 ].
Similarly, it has recently been reported that alveolar epithelium exposed to CO 2 activates ERK1/2 and subsequently AMPK, leading to the activation of the ubiquitin protein ligase neuronal precursor cell expressed developmentally down-regulated protein 4-2 (Nedd4-2), a key molecule driving the ubiquitination of ENaC. Activated Nedd4-2 thereby promotes alveolar edema through the induction of ENaC endocytosis [ 201 , 202 ]. Conversely, several signaling pathways are essential for enhancing AFC, suggesting their potential value as targets for reabsorption-directed treatment of ARDS. An increasing number of studies have confirmed the beneficial effects of PI3K/AKT activation on AFC. It has been reported that the PI3K/Akt signaling pathway stimulates serum- and glucocorticoid-inducible kinase-1, a critical regulator of ENaC [ 203 ]. Han et al. [ 204 ] suggested that cAMP enhances AFC by activating downstream PI3K/AKT signaling, which then phosphorylates Nedd4-2 to reduce ENaC degradation. Additionally, Magnani et al. [ 198 ] found that during prolonged hypoxia, HIF-1α signaling is activated and inhibits the endocytosis of Na,K-ATPase by causing degradation of PKC-ζ, which provides another potential therapeutic target for preserving AFC.

ROS-mediated signaling pathways
The excessive generation of ROS is well established to be causative in the pathogenesis and progression of ARDS. In brief, the biological origins of ROS are associated with NADPH oxidase (NOX), xanthine oxidoreductase (XOR), nitric oxide synthase (NOS), and dysfunctional mitochondria [ 172 ]. The NOX family is one of the best-known sources of cytoplasmic ROS; NOX1, NOX2, and NOX4 are members of this family that have been shown to produce ROS in lung tissue [ 205 ]. In addition, uncoupling of the dimeric endothelial NOS (eNOS) can also induce oxidative injury through a dysregulated NO response, which produces peroxynitrite and induces protein nitration [ 172 ]. Similarly, oxidative stress driven by mitochondria-derived ROS (mtROS) is important for the regulation of inflammatory progression under cellular stress conditions such as inflammation, hypoxia, mechanical stretch and Ca 2+ influx [ 32 ]. These ROS-producing pathways are activated concomitantly in the course of inflammation, amplifying tissue damage and pulmonary edema. During ARDS, inflammation enhances ROS production by increasing the expression and activity of ROS-producing enzymes, which in turn aggravates inflammation by initiating proinflammatory signals. For example, NF-κB has been demonstrated to be activated by NOX-derived ROS in an LPS-induced ALI model, resulting in the expression of inflammatory cytokines [ 206 ]. The production of ROS by NOX2 in neutrophils plays a role in TNF-α-induced, NF-κB-dependent lung inflammation in mice [ 207 ]. Similarly, mtROS promotes inflammation by initiating the activation of the NLRP3 inflammasome and TLR9 signaling [ 208 ]. Recently, Zeng et al. [ 209 ] found that TLR4 activation induces NOX2 assembly in the alveolar epithelium, leading to ROS-stimulated ER stress and subsequent inflammation. ROS has also been demonstrated to contribute to the disruption of the alveolar-capillary barrier, manifested by epithelial/endothelial cell death and loss of intercellular connections. ROS acts as an upstream signal of NLRP3 inflammasome activation, leading to cell pyroptosis in both the epithelium and endothelium and ultimately increasing permeability [ 210 , 211 ].
Ferroptosis is a newly recognized form of programmed cell death manifested by elevated ROS levels and lipid peroxidation [ 212 ]. Studies have shown that excessive ferroptosis can aggravate lung tissue damage in ALI models [ 213 , 214 ]. In addition to directly disrupting intercellular junction proteins, ROS can also trigger associated signaling pathways that negatively regulate the alveolar-capillary barrier [ 215 ]. Previous studies have demonstrated that eNOS uncoupling disrupts the pulmonary endothelial barrier [ 216 ]. Rafikov et al. [ 217 ] found that peroxynitrite produced by eNOS leads to RhoA nitration, which enhances cytoskeletal rearrangement and increases endothelial permeability. Recently, it has been reported that increased NOX4-derived ROS during ALI disrupts the endothelial barrier by activating cytosolic Ca 2+ /calmodulin-dependent protein kinase II to trigger MLC-mediated cytoskeletal contraction, aggravating sepsis-induced ALI [ 218 ]. Besides influencing cell–cell interactions, ROS also contributes to the reduction of AFC. mtROS released from mitochondria under hypoxia has been implicated in directly activating PKC-ζ, which further induces Na,K-ATPase endocytosis and impairs AFC [ 219 ]. NOX4-mediated ROS generation triggered by upstream TGF-β/Smad signaling has been shown to promote ENaC endocytosis, reducing alveolar fluid reabsorption [ 34 ]. Considering the involvement of ROS in ARDS development and progression, targeting the enzymes responsible for ROS generation could be a promising therapeutic approach for ARDS/ALI. The principal protective mechanism against oxidative stress currently recognized is the nuclear factor erythroid 2-related factor (Nrf2) pathway. Nrf2 is a transcription factor that remains sequestered in the cytosol when bound to Kelch-like ECH-associated protein 1 (Keap1) under resting conditions. Upon oxidative stress, free Nrf2 translocates into the nucleus to initiate the expression of antioxidative genes such as heme oxygenase (HO), superoxide dismutases, glutathione peroxidase 4 and catalase [ 220 ]. However, during ARDS, the antioxidative effects of the Nrf2 pathway are rapidly overwhelmed by excessive ROS production, or are dysregulated in damaged tissues, aggravating oxidative injury [ 221 , 222 ]. The importance of the Nrf2 pathway in mediating antioxidant effects has been well characterized in both in vivo and in vitro models of ALI/ARDS [ 102 , 223 ]. In addition to general Nrf2 activation, researchers have explored alternative signaling pathways as potential targets for therapy. Guo et al. [ 224 ] have shown that hyperoxia exposure induces S-glutathionylation of fatty acid binding protein (FABP) 5 in macrophages, which enhances the ability of FABP5 to activate PPARβ/δ and inhibit inflammation. In their study, macrophage-specific glutaredoxin 1 deficiency alleviated ALI inflammation by increasing the level of S-glutathionylated FABP5. In addition, Cai et al. [ 225 ] have recently revealed an important role of the Notch pathway in regulating oxidative stress: Notch1/Hes1 activation inhibits NOX4 expression, attenuating ROS-mediated endothelial cell apoptosis in a burn-induced ALI model.
Most recently, activation of the mitochondrial uncoupling protein UCP2, a key regulator of intracellular ROS homeostasis, has been reported to suppress ROS generation through downstream activation of Sirtuin 3 and the antioxidant regulator peroxisome proliferator-activated receptor gamma coactivator 1-alpha in severe acute pancreatitis-induced ALI [ 226 ]. Thus, targeting these molecular pathways may present a promising therapeutic strategy for alleviating oxidative injury in ARDS.

Emerging pharmacologic therapies for ARDS
Pharmacotherapeutic approaches for ARDS have been attempted and tested for more than 50 years. Nonetheless, effective and targeted treatments for the disease remain elusive. Here, we discuss prospective pharmaceutical interventions associated with the pathophysiological and molecular targets, along with their effects on the relevant signal transduction pathways involved in ARDS management.

Pharmacologic therapies for ARDS

Therapeutic agents potentially targeting anti-inflammation
Numerous pharmacological agents have been reported to appear promising for mitigating inflammatory responses in ALI/ARDS. As PRRs play an important role in provoking inflammatory injury, therapeutic strategies focused on targeting these receptors have emerged (Table 2 ). Recently, it has been found that Cirsilineol [ 227 ], Diacerein [ 115 ], and Taurine [ 228 ] ameliorate inflammatory injury after ALI via the TLR4/NF-κB signaling pathway. Glycyrrhizin is reported to reduce LPS-induced ALI by inhibiting TLR2 signaling [ 229 ]. Of note, a recent study showed that Omeprazole encapsulated in nanostructured lipid carriers effectively targets lung macrophages and inhibits multiple TLR pathways, including TLR3, TLR4, and TLR7/8, in a murine model of ALI, offering a potential new therapy for the clinical needs of ARDS [ 230 ]. For the NLR family, most research has focused on targeting the NLRP3 inflammasome to reduce the release of cytokines. Preclinical studies have shown that Glibenclamide [ 231 ], Tetracycline [ 232 ], and 4-hydroxynonenal (an endogenous product of lipid peroxidation) [ 233 ] inhibit NLRP3 inflammasome activation independently of NF-κB signaling. In addition, an observational study of Tetracycline to investigate inflammasome activation in clinical ARDS is recruiting (NCT04079426). Similarly, limiting RAGE-mediated inflammation may be beneficial in ARDS treatment. It is reported that Dexmedetomidine [ 234 ], Calycosin [ 235 ], and Tanreqing [ 236 ] alleviate inflammation mediated by the RAGE and TLR4 receptors by inhibiting HMGB1 signaling. Notably, a clinical trial of Dexmedetomidine for ARDS in critically ill COVID-19 patients is under investigation (NCT04358627). Targeted inhibition of the cGAS-STING pathway is also a promising approach, and several small-molecule inhibitors of this pathway, such as H-151 and RU.521, have been shown to alleviate lung injury in ALI models [ 237 , 238 ]. The central role of NF-κB in proinflammatory signaling pathways makes it an attractive target for pharmaceutical intervention. Considering that NF-κB integrates numerous upstream signals, many agents indirectly inhibit NF-κB-mediated inflammation by modulating its upstream signaling. It has been shown that Chrysosplenol D [ 239 ], Daphnetin [ 165 ] and Inula japonica [ 102 ] ameliorate acute lung inflammation by suppressing the MAPK-mediated NF-κB pathway.
The molecule BAP31, as well as Schisandrin B, has shown therapeutic value by targeting MyD88 to reduce NF-κB activation in ALI mice [ 240 , 241 ]. Moreover, a variety of agents inhibit NF-κB signaling and thereby negatively regulate the NLRP3 inflammasome, such as Loganin [ 242 ], Dapagliflozin [ 243 ], Hederasaponin C [ 244 ], Artesunate [ 245 ] and Syringaresinol [ 246 ], all of which have been studied in preclinical models of ALI/ARDS. Of note, the neutrophil elastase inhibitor Sivelestat and Simvastatin, two promising therapeutic drugs for treating ALI/ARDS, were also shown to inhibit NF-κB in LPS-induced ALI [ 247 , 248 ]. A retrospective study has indicated that the administration of Sivelestat to ARDS patients provides a 90-day mortality advantage [ 249 ]. A phase III trial is in progress to assess the impact of Sivelestat on ARDS patients with sepsis (NCT04973670). For Simvastatin, a phase IIb randomized trial of Simvastatin therapy in ARDS patients did not yield improvements in clinical outcomes (ISRCTN88244364) [ 250 ]. However, a secondary analysis of this Simvastatin trial suggested that patients with the hyperinflammatory subphenotype who received Simvastatin exhibited lower 28-day mortality [ 251 ]. Given the promising efficacy of JAK inhibitors in the treatment of COVID-19-induced ARDS, targeting the JAK/STAT signaling pathway may become a new strategy for treating other types of ARDS. Multiple clinical trials of JAK inhibitors, including Baricitinib, Nezulcitinib, Pacritinib, Ruxolitinib, and Tofacitinib, have been completed or are under investigation, and some published results have shown their safety and efficacy in COVID-19 patients [ 252 ]. Notably, Baricitinib has received approval for the treatment of hospitalized adults with COVID-19 who require supplemental oxygen, noninvasive or invasive mechanical ventilation, or extracorporeal membrane oxygenation [ 253 ]. In addition, numerous preclinical animal studies have demonstrated the anti-inflammatory effects of JAK2/STAT3 inhibition in ARDS induced by other etiologies (Table 2 ) [ 254 – 256 ]. Furthermore, targeting potent activators of JAK/STAT signaling, for example with the IL-6 inhibitor Tocilizumab, has also shown beneficial effects in severe COVID-19 patients [ 257 , 258 ]. Given the diverse etiology of ARDS, it is now imperative to conduct additional clinical trials of IL-6 inhibitors in non-COVID-19 ARDS patients. Inactivation of MAPK signaling has also been demonstrated to contribute to anti-inflammatory effects in ARDS treatment. Nicotinamide [ 101 ], Irigenin [ 259 ] and β-Caryophyllene [ 260 ] have recently been found to inhibit MAPK signaling and reduce the expression of inflammatory factors in ALI models. In particular, Dilmapimod, a specific p38MAPK inhibitor, has been reported to have a satisfactory safety profile in trauma patients at risk of developing ARDS and to reduce the concentrations of pro-inflammatory cytokines [ 261 ]. Recently, the aqueous extract of Descuraniae Semen (AEDS) and Vitamin D have been reported to possess anti-inflammatory effects in preclinical ALI models by targeting the ER stress markers IRE1α and ATF6, respectively, offering new insights for the treatment of ALI/ARDS [ 262 , 263 ]. However, the largest published randomised controlled trial showed no benefit of vitamin D on 90-day mortality in critically ill patients at high risk for ARDS [ 264 ].
With the development of more anti-inflammatory drugs, there is an increasing demand for additional high-quality clinical trials to confirm the therapeutic effects of these drugs in human ALI/ARDS patients.

Therapeutic agents potentially protecting the alveolar-capillary barrier
An alternative therapeutic strategy under consideration is to promote alveolar-capillary barrier function by enhancing intercellular junctions and diminishing pulmonary epithelial and endothelial cell injury. Oxypeucedanin [ 265 ], Forsythiae [ 266 ] and the andrographolide derivative AL-1 [ 267 ] have been found to contribute to the maintenance of alveolar-capillary integrity by increasing the expression of TJ proteins in ALI models. Pazopanib has been shown to increase pulmonary barrier function by specifically inhibiting MAP3K2 and MAP3K3 phosphorylation in neutrophils, which leads to moderate ROS production that activates Rac1-mediated protective effects on the alveolar-capillary barrier in animal models. It also showed benefits in reducing lung edema in a preliminary human study of five pairs of lung transplantation patients [ 268 ]. In recent years, some promising drugs have shown potential clinical benefits for treating ARDS through protection of the pulmonary endothelium. Research has shown that Ruscogenin upregulates the expression of p120-catenin and VE-cadherin by inactivating TLR4/Src signaling in mice with sepsis-induced ALI [ 170 ]. Verdiperstat, a myeloperoxidase inhibitor, enhances VE-cadherin stability by reducing the activation of the myeloperoxidase/μ-calpain/β-catenin signaling pathway in experimental ARDS in rats [ 269 ]. Blebbistatin, a myosin II inhibitor, counteracts pulmonary endothelial barrier dysfunction in mice; it downregulates the Wnt5a/β-catenin pathway and exerts a protective effect against lung injury [ 270 ]. In addition, certain compounds or drugs have been reported to mitigate alveolar epithelial and pulmonary endothelial cell death in preclinical models of ARDS, although their safety and efficacy remain to be examined in clinical studies [ 153 , 258 , 271 ]. Of note, multiple RIPK1 inhibitors that suppress necroptosis have progressed beyond Phase I safety trials in human clinical studies for other inflammatory conditions such as ulcerative colitis and rheumatoid arthritis [ 272 ]. In addition, Necrostatin-1 [ 273 ] and Aloperine [ 274 ] have shown promising results in experimental ARDS models by reducing necroptosis and inflammation. Considering the prominent contribution of necroptosis to alveolar epithelial cell death in ARDS, these RIPK inhibitors merit further investigation in clinical trials [ 275 ].

Therapeutic agents potentially enhancing AFC
It is accepted that enhancement of AFC is pivotal for patient survival, and thus potentially effective drugs that promote the clearance of excess fluid during ARDS merit investigation. β-adrenergic agonists are commonly studied agents for improving AFC in animal models, acting mechanistically by increasing intracellular cAMP levels and thereby the expression of ion transport channels [ 276 ]. Previous clinical trials of Salbutamol in ARDS patients indicated poor tolerance and the potential to worsen mortality [ 277 , 278 ]. However, a prospective study showed that inhalation of Formoterol and Budesonide reduced the incidence of ARDS [ 279 ].
A recent study reported similar results: inhaled salbutamol, as monotherapy or combined with corticosteroids, reduced the incidence of ARDS among hospitalized patients [ 280 ]. Thus, this evidence may suggest a potential protective benefit of prior administration of β-adrenergic agonists in preventing ARDS. A synthetic peptide agent (AP301, solnatide) was shown to markedly reduce pulmonary edema by activating sodium channels in animal models of ARDS [ 281 , 282 ]. A small phase 2 randomized blinded trial suggested that inhaled AP301 every 12 h for 7 days decreased pulmonary edema and reduced ventilation pressures in patients with ARDS [ 283 ]. Another trial testing AP301 in patients with moderate–severe ARDS is currently enrolling (NCT03567577). Moreover, the important role of macrophage-derived specialized pro-resolving mediators (SPMs) in promoting AFC during ARDS has been increasingly recognized [ 284 ]. For example, resolvin conjugates in tissue regeneration 1 [ 285 ] and maresin conjugates in tissue regeneration 1 [ 204 ] are SPMs that have recently been implicated in upregulating ENaC and Na,K-ATPase by activating cAMP/PI3K/AKT signaling, thereby alleviating pulmonary edema in preclinical ARDS models. However, there is a lack of substantial clinical studies of either exogenously administered SPMs or induction of endogenous SPMs in ARDS patients. In addition, Ursodeoxycholic acid [ 286 ] and Aldosterone [ 203 ] have been shown to exert therapeutic effects in mitigating LPS-induced pulmonary edema in animal models by modulating the cAMP/PI3K/AKT pathway. Further research into the efficacy and safety of these drugs in ARDS patients is required.

Therapeutic agents potentially attenuating oxidative injuries
Considering the crucial role of oxidative injury in ARDS pathogenesis, therapeutic targets for suppressing oxidative stress have attracted considerable attention. A variety of antioxidant therapies, including vitamin C supplementation and N-acetylcysteine administration, have been applied to ARDS patients. Regarding vitamin C, a phase 2 clinical trial involving 167 patients with ARDS and sepsis found that high-dose vitamin C infusion, when compared to a placebo, did not significantly reduce organ failure or improve inflammatory biomarkers, but improved the secondary outcomes of 28-day mortality, ICU-free days, and hospital-free days [ 287 ]. N-acetylcysteine is well known for its mucolytic effect and robust antioxidant activity; however, its usefulness in ARDS patients is controversial [ 288 , 289 ]. Recently, its potential to inhibit the progression of COVID-19 has rendered it a highly promising therapy for the disease [ 290 ]. Moreover, the Nrf2 pathway plays a crucial role in protection against oxidative lung injury during ARDS, and antioxidants activating the Nrf2 pathway may therefore be an effective intervention. It has been found that Panaxydol [ 291 ], Melatonin [ 292 ] and Sitagliptin [ 293 ] act on the Nrf2 pathway to increase the expression of antioxidants in lung tissue, alleviating oxidative injury in animal ARDS models, but further clinical trials are required to determine their precise efficacy in protecting against ARDS. In addition, targeting the enzymes responsible for ROS generation might offer a promising therapeutic approach for ARDS.
Pharmacological inhibitors of NOX2, such as Quercetin [ 294 ], Apocynin [ 295 ] and VAS2870 [ 215 ], as well as the NOX1/4 inhibitor GKT137831 [ 296 ], have been shown to protect against lung tissue damage induced by oxidative stress during ARDS. However, these agents have only undergone preclinical studies, and evidence in humans is still required.

MicroRNAs in ARDS
In recent years, there has been a growing focus on the involvement of microRNAs (miRNAs) in ARDS. MiRNAs are a category of small noncoding RNAs that modulate gene expression by either inhibiting the translation of target mRNAs or facilitating the early degradation of complementary mRNAs [ 106 ]. Multiple results from preclinical studies have indicated that miRNAs may play pivotal roles in the pathophysiology of ARDS by targeting specific genes to regulate signaling pathways; these regulatory effects extend to the cellular, receptor, signaling pathway and gene transcription levels [ 297 ]. For instance, Xu et al. [ 298 ] found that increased miR-199a-3p exacerbates LPS-induced ARDS by silencing PAK4 expression in alveolar macrophages, resulting in the release of pro-inflammatory autophagosomes and cytokines in mice, effects that can be reversed by miR-199a-3p inhibitors. Yang et al. [ 299 ] observed that miR-16 overexpression mitigated LPS-induced ALI in mice by inhibiting TLR4 expression and subsequently downregulating TLR4/NF-κB signaling. Furthermore, miRNA localization is a critical factor influencing miRNA function. Extracellular miR-146a-5p has been reported to trigger TLR7-dependent inflammation and endothelial barrier disruption while also exerting intracellular negative regulation of TLR signaling by targeting IRAK1 and TRAF6 expression [ 300 , 301 ]. Recent research on miRNAs and their roles in preclinical ALI/ARDS models is summarized in Table 3 [ 298 , 302 – 323 ]. Of note, interest in the influence of long non-coding RNAs (lncRNAs) and circular RNAs (circRNAs) on miRNA function has also grown rapidly; these RNAs regulate downstream pathways by sequestering and competitively suppressing miRNA activity. For example, lncRNA NLRP3 promotes NLRP3 inflammasome activation by sponging miR-138-5p [ 317 ]. Similarly, circRNA N4bp1, which is increased in ARDS patients, has been demonstrated to facilitate M1 polarization by targeting miR-138-5p in CLP-induced ALI in mice [ 322 ]. lncRNA MINCR negatively regulates miR-146b-5p to activate NF-κB-mediated inflammation [ 311 ]. Considering the regulatory roles of miRNAs in animal models of ARDS, the concept of employing miRNA mimics or antagomirs (synthetic miRNA inhibitors with sequences complementary to specific miRNAs) emerges as an appealing option for targeted therapy in ALI/ARDS. At present, there are few clinical trials involving miRNAs for diagnosing or treating ALI/ARDS. One ongoing clinical trial is currently recruiting participants with the expectation of validating several non-coding RNAs as new biomarkers for predicting the severity of ALI/ARDS in patients (NCT03766204). Another trial that aims to explore the expression of miR-27b and Nrf2 in the development and treatment of ARDS patients is also recruiting (NCT04937855). Hence, there is an urgent need for clinical trials investigating the potential therapeutic targeting of miRNAs in ALI/ARDS, and the clinical application of miRNAs in ALI/ARDS deserves significant attention.
Mesenchymal stromal cell therapy
Mesenchymal stromal cell (MSC) therapy has shown promising results in ARDS, owing to the multidirectional differentiation potential, migratory ability and immunomodulatory effects of MSCs [ 324 ]. MSCs spontaneously migrate to the injured region and influence the tissue microenvironment by secreting soluble bioactive molecules or through cell–cell contact, alleviating inflammation, enhancing epithelial and endothelial regeneration and improving AFC [ 325 ]. Preclinical studies have demonstrated that MSCs participate in a variety of signaling pathways associated with the pathophysiology of ARDS. Administration of human umbilical cord-derived MSCs alleviated inflammation in LPS-induced ALI in mice via downregulation of NF-κB signaling [ 326 ]. MSC-expressed Jagged-1 interacts with Notch2 on mature DCs, which differentiate into regulatory DCs and negatively regulate inflammation [ 85 ]. MSCs have also been demonstrated to promote barrier function and restoration of the alveolar epithelium by activating the Wnt/β-catenin signaling pathway in ALI mice [ 176 ]. Bone marrow-derived MSCs are reported to exert antioxidant effects by upregulating Nrf2/HO-1 signaling in rat models of LPS-induced ALI [ 327 ]. Inhibition of Hippo signaling increased MSC differentiation into ATII cells and alleviated LPS-induced ALI [ 328 ]. The safety and efficacy of transplanted MSCs for patients with ARDS have been substantiated by many clinical trials [ 329 – 332 ]. A phase 1/2 trial demonstrated the safety of MSC administration in ARDS patients, with the potential to reduce 28-day mortality and the requirement for ventilator support [ 333 ]. The transplantation of menstrual blood-derived MSCs has the potential to lower mortality in patients with H7N9 virus-induced ARDS, as observed during a five-year follow-up period [ 334 ]. In addition, clinical trials are currently investigating the therapeutic benefits of MSCs for COVID-19, providing a promising opportunity for patients with pulmonary damage [ 335 ]. However, a recent multicenter, randomized, double-blind, placebo-controlled trial (NCT03042143) found that patients with moderate to severe COVID-19-related ARDS do not benefit from ORBCEL-C (CD362-enriched umbilical cord-derived MSCs), although the application of these MSCs is considered safe [ 336 ]. For further information about the properties and functions of MSCs in ARDS, readers are referred to the reviews by Fernandez-Francos et al. [ 337 ] and Qin and Zhao [ 325 ].

Subphenotypes in ARDS and prospects for targeted therapies
While numerous preclinical studies on pharmacological treatments for ARDS have shown promise, none have yet demonstrated a significant impact on ARDS mortality in clinical trials. This may be partly due to the heterogeneity of ARDS [ 338 ]. More homogeneous subgroups of ARDS patients can be identified based on physiological, clinical, and biological characteristics [ 339 ]. By integrating clinical and biological characteristics, Calfee and colleagues identified a hyper-inflammatory subphenotype characterized by increased inflammation and higher mortality compared with the hypo-inflammatory subphenotype [ 340 ]. Subsequent studies have reported similar findings [ 341 – 343 ]. Such prognostic enrichment can identify patients with a higher likelihood of a poor outcome and may assist bedside healthcare decisions.
On the other hand, predictive enrichment aids in the selection of patients with a higher likelihood of a positive response to specific treatments, or in the identification of patients more likely to benefit from particular interventions based on underlying mechanisms and biological characteristics [ 339 ]. Physiological and clinical phenotyping for predictive enrichment has yielded intriguing findings. For instance, Calfee et al. discovered elevated levels of epithelial injury biomarkers in patients with direct ARDS [ 344 ]. Additionally, some studies have shown that recruitment maneuvers are less effective in primary ARDS rat models, whereas methylprednisolone is more effective in mitigating the inflammatory response [ 345 , 346 ]. While pre-randomization trials involving biologic phenotyping for predictive enrichment are infrequent in clinical practice owing to the limited availability of biomarker tests, retrospective studies have demonstrated varying responses of hypo- and hyper-inflammatory phenotypes to interventions such as positive end-expiratory pressure, fluid management strategies, and simvastatin [ 251 , 339 , 340 , 342 ]. Given that hypo- and hyper-inflammatory phenotypes provide only a general characterization of inflammation in ARDS, it is worthwhile to identify more specific subphenotypes based on the main signaling pathways. Further evaluation of targeted treatments has the potential to enhance therapeutic responses and improve the ability to identify effective interventions. For example, Bos et al. reported elevated expression of oxidative phosphorylation genes in the "reactive" subphenotype, as identified by plasma protein biomarkers, and suggested further investigation of interventions targeting this pathway in patients with the "reactive" subphenotype [ 347 ]. However, as summarized by Wilson et al., the use of metabolomics, transcriptomics, genomics, and signaling pathway characteristics for ARDS phenotyping and predictive enrichment is still in its early stages [ 339 ].
Abbreviations Alveolar epithelial cells Aqueous extract of Descuraniae Semen Alveolar fluid clearance Absent in melanoma 2 Adherens junctions Protein kinase B Acute lung injury Activin receptor-like kinase 1 AMP-activated protein kinase Alveolar macrophages Angiopoietin Activator protein-1 Acute respiratory distress syndrome Apoptosis-associated speck-like protein containing a CARD Apoptosis signal-regulating kinase 1 Activating transcription factor 6 Bronchoalveolar lavage fluid B-cell receptor associated protein 31 Bone morphogenetic protein 9 Calmodulin-dependent kinase kinase-β Cyclic adenosine monophosphate Cytoplasmic DNA sensors Cystic fibrosis transmembrane conductance regulator Cyclic GMP-AMP Cyclic GMP-AMP synthase Circular RNAs Coronavirus disease 2019 CAMP response element binding Damage-associated molecular patterns Dendritic cells Dynamin-related protein 1 Epithelial–mesenchymal transition Epithelial Na + channel Endothelial NOS Endoplasmic reticulum Extracellular signal-regulated kinase Fatty acid binding protein Fas ligand N-formyl peptide receptor Granulocyte colony-stimulating factor G-protein-coupled receptors Hairy/Enhancer of Split 1 Hypoxia-inducible factor-1α Hypoxia-inducible factor-2α High-mobility group box 1 Heme oxygenase Heat shock protein Type I-interferons Interleukin-1 Inositol requiring kinase 1α Interferon regulatory factors NF-κB inhibitor Janus kinase Junctional adhesion molecules C-Jun N-terminal kinase Kelch-like ECH-associated protein 1 Long non-coding RNAs Lipopolysaccharide Mitogen-activated protein kinase Mitochondrial antiviral signaling protein Maresin conjugates in tissue regeneration 1 Melanoma differentiation-associated gene 5 MicroRNAs Myosin light chain Mixed lineage kinase domain-like protein Mesenchymal stromal cell Mammalian target of the rapamycin Mitochondrial-derived ROS Myeloid differentiation primary response gene 88 Sodium–potassium adenosine triphosphatase Neuronal apoptosis inhibitory protein Neuronal precursor cell expressed developmentally down-regulated protein4-2 Neutrophil extracellular traps Nuclear factor-κB Nucleotide-binding oligomerization domain like receptor subfamily C Nucleotide-binding domain leucine-rich repeat protein Nucleotide-binding leucine-rich repeat receptors Nucleotide-binding oligomerization domain Nitric oxide synthase NADPH oxidase Nuclear factor erythroid 2-related factor Pathogen-associated molecular patterns Protein kinase RNA-like ER kinase Phosphatidylinositol 3-kinase Protein kinase C-ζ Pattern recognition receptors Receptors for advanced glycation end products Resolvin conjugates in tissue regeneration 1 Retinoic acid-inducible gene I Receptor-interacting protein kinase RIG-I-like receptors Roundabout 4 Rho-associated protein kinase Reactive oxygen species Sphingosine-1 phosphate Severe acute respiratory syndrome coronavirus 2 Small Mothers against Decapentaplegic Specialized pro-resolving mediators Signal transducer and activator of transcription Stimulator of the interferon gene Transforming growth factor-β T helper 17 cell Tight junctions Toll-like receptors TNF-receptor Tumor necrosis factor-α TNFR-associated death domain TNFR–associated factor TNF-related apoptosis-inducing ligand TNF-related apoptosis-inducing ligand receptor Toll/interleukin-1 receptor-domain-containing adaptor-inducing interferon-β Transient receptor potential Vascular endothelial cadherin Vascular endothelial growth factor Vascular endothelial protein tyrosine phosphatase Ventilator-induced lung injury Xanthine 
oxidoreductase Yes-associated protein Zonula occludens Acknowledgements Not applicable. Author contributions SL and YB designed the structure of the article; QH and YL wrote the manuscript; QH and YL created the figures and tables. SL and YB made revisions and proofread the manuscript. All authors have read and agreed to the published version of the manuscript. Funding This work was supported by grants of the China Primary Health Care Foundation (Grant No. YLGX-ZZ-2020001 to S.L.), and the Natural Science Foundation of Hubei Province (Grant No. 2021CFB376, to YB). Availability of data and materials Not applicable. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interest.
CC BY
no
2024-01-15 23:43:48
Respir Res. 2024 Jan 13; 25:30
oa_package/9e/9d/PMC10788036.tar.gz
PMC10788037
38218960
Introduction
Osteosarcoma (OS) is the most common primary malignant bone tumor in children and adolescents, and recurrence and metastasis (mostly lung metastasis) are the main reasons for unsatisfactory treatment outcomes in OS [ 1 , 2 ]. The imbalance between osteoblast and osteoclast activity in the OS microenvironment and the different types of malignant differentiation of tumor stem cells underlie the pathological classification of OS into osteoblastic OS, chondroblastic OS, fibroblastic OS, mixed OS, etc. [ 3 , 4 ]. pH is a key regulator of osteoblast and osteoclast activity [ 5 ]. GPR65 (TDAG8) is a receptor for the glycosphingolipid psychosine (d-galactosyl-β-1,1′-sphingosine) and has been shown to be a pH-sensitive G protein-coupled receptor [ 6 , 7 ]. Ovariectomized mice lacking GPR65 showed an increased number and activity of osteoclasts, leading to excessive bone resorption [ 8 ]. The activation of GPR65 inhibits calcium absorption in osteoclasts, thereby improving bone density [ 9 ]. Thus, GPR65 expression plays a vital role in the balance between osteogenesis and osteoclast activity. Importantly, recent studies have found that GPR65 may act as a key immune checkpoint in the human tumor microenvironment, inhibiting the release of inflammatory factors and inducing significant up-regulation of tissue repair genes [ 10 , 11 ]. GPR65 is a member of the proton-sensing G protein-coupled receptor family, which is closely related to the tumor microenvironment (TME) [ 12 ]. The TME is composed of extracellular matrix (ECM), stromal cells and immune cells (including T lymphocytes, B lymphocytes, tumor-associated macrophages, etc.) [ 13 ]. The composition of the TME has been found to influence responses to immune checkpoint blockade [ 14 ]. In the OS microenvironment, does GPR65 play an immunosuppressive role and promote the immune escape of OS? Or, because GPR65 is strongly expressed in lymphoid tissue (as a tumor suppressor factor), might its activation represent a potential anti-tumor biomarker? This is currently unclear. Therefore, understanding the regulation and molecular function of GPR65 may point to a new potential therapeutic target and prognostic predictor for OS. This study first analyzed GPR65 expression in 97 patients with OS from the TARGET database and explored in depth the potential molecular network of GPR65 in OS cells and its role in biological processes. Experiments further verified the role of GPR65 in OS. Our study found that GPR65 shows an expression pattern in OS patients that differs from that in other types of cancer (colon cancer, pancreatic cancer, etc.), and reveals that low GPR65 expression indicates poor prognosis in OS patients.
Materials and methods

Data collection
In this study, transcriptome data and clinical data (101 TARGET-OS gene expression datasets) of OS were downloaded from the TARGET database. After deleting missing data and data without follow-up records, data from 97 patients with OS were retained for the final analysis. Detailed information can be found in supplementary file 1 . The expression of GPR65 in various tumor tissues and normal tissues was analyzed using Gene Expression Profiling Interactive Analysis (GEPIA; http://gepia.cancer-pku.cn/ ). Because the GEPIA database only contains GPR65 expression data for sarcomas in general, gene chip technology was applied to detect the expression of GPR65 in OS and normal tissues.

Difference analysis
The gene expression data of the different subgroups of the OS patient transcriptome were plotted with the R package ggplot2, and groups were compared using two-sample t-tests. ROC curves were drawn using the R package pROC (a minimal R sketch of this workflow is given after the Cell transfection subsection below).

Functional enrichment analysis
To conduct GO enrichment analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis, GPR65-related genes, or the list of genes characteristic of a cell cluster, were downloaded from the Database for Annotation, Visualization, and Integrated Discovery ( https://david.ncifcrf.gov/ ). The official gene symbol was selected as the identifier, and Homo sapiens was selected as the species. In this study, the top six results (in ascending order of P value, P < 0.05) for biological process (BP), cellular component (CC), and molecular function (MF) were visualized using the R packages DOSE and clusterProfiler, together with enrichment plotting packages.

Relationship between GPR65 expression and survival prognosis
A Cox proportional hazards model was employed to examine the association between GPR65 mRNA expression and survival prognosis, using IBM SPSS software (version 21.1). Kaplan-Meier analysis of GPR65 mRNA expression and OS survival was conducted for 98 OS patients in the TARGET database using an online analysis platform ( https://www.aclbi.com/static/index.html#/target ).

Single-cell RNA sequencing (ScRNA-seq)
ScRNA-seq data from GSE162454, comprising 6 samples of human primary OS, were downloaded from the Gene Expression Omnibus (GEO) database ( http://www.ncbi.nlm.nih.gov/geo/ ). The scRNA-seq data were first converted to Seurat objects. Quality control of the scRNA-seq data, "NormalizeData", principal component analysis (PCA), the "FindAllMarkers" function and annotation of the cell subpopulations of the different clusters were performed with the R Seurat package, as described in detail in a previous study [ 15 ].

Cell lines and cell culture
The OS cell lines U2OS and HOS were obtained from Procell Life Science & Technology Co., Ltd (Wuhan, China). U2OS cells were maintained in McCoy's 5A (Procell, China) and HOS cells in DMEM (Procell, China), supplemented with 10% fetal bovine serum (FBS) (Yeasen, Shanghai, China), 100 U/mL penicillin, and 100 μg/mL streptomycin. Cells were cultured at 37 °C with 5% CO 2 .

Cell transfection
Cells were transfected with siRNA/plasmids using Lipofectamine™ 3000 Transfection Reagent (Thermo Fisher, Waltham, MA, USA) according to the manufacturer's instructions. The following siRNAs were used in this study: GPR65 siRNA#1: CAGUGGUCUACAUAUUUGUTT; GPR65 siRNA#2: GAAUCCGUCUUUAACUCCATT; the control siRNA was UUCUCCGAACGUGUCACGUTT.
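As referenced in the Difference analysis subsection above, the following is a minimal R sketch of the group comparison and ROC steps. It is not the authors' code: the data frame `target_os` and its columns `gpr65` and `vital_status` are hypothetical stand-ins for the TARGET-OS expression and clinical tables described under Data collection.

```r
# Minimal sketch, not the authors' code. Assumes a data frame `target_os`
# with one row per TARGET-OS patient and hypothetical columns:
#   gpr65        log2 GPR65 expression
#   vital_status factor with levels "Alive" / "Dead"
library(ggplot2)
library(pROC)

# Split patients by the mean GPR65 expression, as described in the Results
target_os$group <- ifelse(target_os$gpr65 > mean(target_os$gpr65),
                          "GPR65-high", "GPR65-low")

# Two-sample t-test comparing GPR65 expression between survival groups
t.test(gpr65 ~ vital_status, data = target_os)

# Boxplot of GPR65 expression by survival status (ggplot2, as in the Methods)
ggplot(target_os, aes(x = vital_status, y = gpr65, fill = vital_status)) +
  geom_boxplot() +
  labs(x = NULL, y = "GPR65 expression (log2)")

# ROC curve treating GPR65 expression as a classifier of survival status
roc_obj <- roc(response = target_os$vital_status, predictor = target_os$gpr65)
plot(roc_obj, print.auc = TRUE)
```

The same pattern extends to the age-based ROC reported in the Results, with the age group used as the response instead of vital status.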
Cell viability assay
U2OS and HOS cells were seeded at a density of 6000 cells/well in 96-well plates and then transfected with Flag-GPR65 plasmids or GPR65 siRNA. After a further 36 h of culture, 20 μl of 5 mg/ml MTT was added. After 2 h, the medium was discarded and 150 μl DMSO was added. The absorbance of each sample was measured at 490 nm using a spectrophotometer (Elx800, BioTek Instruments, USA).

Colony formation assay
U2OS and HOS cells (1500–2000/well) were seeded in 12-well plates and transfected. After 7 to 10 days of culture, cells were fixed with 4% paraformaldehyde for 15 min and stained with crystal violet, and cell colonies were photographed with a camera.

EdU assay
U2OS and HOS cells (5 × 10 4 cells/well) were plated in 24-well plates. The EdU incorporation assay was performed with the BeyoClick™ EdU Cell Proliferation Kit with Alexa Fluor 594 (Beyotime Biotechnology, Shanghai, China), as described in a previously published article [ 16 ].

F-actin filament assay
U2OS or HOS cells (5 × 10 4 cells/well) were seeded in 24-well plates and transfected with Flag-GPR65 or siRNA. After incubation for 36 h, the cells were fixed with 4% paraformaldehyde and permeabilized with 0.3% Triton X-100. Cells were then incubated with sufficient phalloidin (green fluorescence) staining solution for 30 min. Images were captured with a fluorescence microscope.

Wound healing assay
U2OS or HOS cells (1 × 10 5 cells/well) were seeded in 6-well plates and scratched the next day using pipette tips. Subsequently, cells were transfected with Flag-GPR65 or siRNA. Migrated cells at 0 and 36 h were monitored and imaged by microscopy.

Transwell migration assay
After transfection, U2OS or HOS cells were plated at a density of 2.5 × 10 4 cells/ml in 400 μl of 1% medium in the upper chamber, while the lower chamber contained 600 μl of normal medium. After 36 h of co-culture, cells on the upper membrane were fixed in 4% paraformaldehyde and then stained with 1% crystal violet. Images were captured by inverted optical microscopy.

RNA extraction and quantitative reverse transcription PCR (qRT-PCR)
Total RNA was extracted according to the manufacturer's instructions (YiFeiXue, Nanjing, China). qRT-PCR was performed using 2×SYBR Green qPCR Mix (Shandong Sparkjade Biotechnology Co., Ltd., Shandong, China). The primer sequences are listed in Table 1 .

RNA-sequencing and analysis
U2OS cells were transfected with Flag-GPR65 or control plasmid, and total RNA was extracted according to the manufacturer's instructions (YiFeiXue, Nanjing, China). Shanghai Majorbio Biopharm Biotechnology (Shanghai, China) performed the transcriptome sequencing and analyses, and the data were analyzed on the Majorbio Cloud Platform ( www.majorbio.com ). Differentially expressed genes (DEGs) with P < 0.05 were identified [ 16 ].

Western blot analysis
Total proteins were extracted from tissues and cells with RIPA lysis buffer (Beyotime Institute of Biotechnology), and protein concentrations were determined by BCA assay (Yeasen Biotech, Shanghai, China). Equal amounts of total protein were separated by SDS-PAGE and transferred to PVDF membranes. After blocking with 5% skim milk, the membranes were incubated with primary antibodies at 4 °C overnight and with the peroxidase-conjugated secondary antibody for 1 h the next day. All membranes were imaged with ECL super (Sparkjade, Shandong, China).
The following antibodies were used: GPR65 (Cat#ER1910-13, Huabio, Hangzhou, China); GAPDH (Cat#AP0063, Bioworld, Nanjing, China); E-cadherin (Cat#R22490, Zen Bioscience, Chengdu, China); N-cadherin (Cat#R23341, Zen Bioscience, Chengdu, China); Vimentin (Cat#R22775, Zen Bioscience, Chengdu, China).

Statistical analysis
Expression differences were calculated using R software (version 4.2.2) and IBM SPSS software (version 21.1). The correlation between GPR65 and immune processes was determined by Pearson correlation analysis of gene set variation analysis (GSVA) scores. Prognostic value was evaluated by Kaplan-Meier and Cox analyses. Gene Ontology (GO) analysis was performed on the DAVID portal website ( https://david.ncifcrf.gov/summary.jsp ). Correlations were tested by Pearson correlation analysis. Data are shown as the mean ± standard deviation of at least three independent experiments. Statistical significance was set at P ≤ 0.05.
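The Kaplan-Meier and Cox analyses described above were run in SPSS and on an online TARGET platform; the sketch below shows an equivalent workflow in R with the survival and survminer packages, purely as an illustration. The column names (`os_time`, `os_event`, `metastasis`) are assumptions, not the dataset's actual field names.

```r
# Minimal sketch, not the authors' SPSS/online workflow. Assumed columns in
# `target_os`: os_time (days), os_event (1 = death, 0 = censored), gpr65,
# metastasis (factor). survminer's ggsurvplot draws the KM curves.
library(survival)
library(survminer)

# Median split of GPR65 expression, as used for the KM comparison
target_os$group <- ifelse(target_os$gpr65 > median(target_os$gpr65),
                          "GPR65-high", "GPR65-low")

# Kaplan-Meier curves with a log-rank P value
km_fit <- survfit(Surv(os_time, os_event) ~ group, data = target_os)
ggsurvplot(km_fit, data = target_os, pval = TRUE, risk.table = TRUE)

# Univariate Cox model for continuous GPR65 expression
summary(coxph(Surv(os_time, os_event) ~ gpr65, data = target_os))

# Multivariate Cox model adjusting for metastasis status
summary(coxph(Surv(os_time, os_event) ~ gpr65 + metastasis, data = target_os))
```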
Results

The expression and distribution characteristics of GPR65 in OS patients
GEPIA analysis found that GPR65 expression was higher in GBM, LAML, and KIRC, and lower in LUAD, LUSC, and THYM tissues. Compared with normal tissue, higher expression of GPR65 was observed in sarcoma tissue (supplementary Fig. 1 ). Because GEPIA describes sarcoma tissue only in general terms, it was necessary to analyze the expression of GPR65 specifically in OS tissue. In the TARGET-OS database, OS patients were divided into a high expression group and a low expression group according to the mean GPR65 expression value (mean GPR65 expression = 6.884). There were 42 OS patients with GPR65 expression values below 6.884, including 25 males and 17 females, and 55 patients with GPR65 expression values above 6.884, including 32 males and 23 females. There were 74 patients under 18 years old and 23 patients over 18 years old. Because survival status information was missing for two OS patients, this analysis included ninety-five OS patients, of whom 58 were alive and 37 had died. There were 27 patients with metastasis and 70 without metastasis. The Sankey diagram shows that patients in the high and low GPR65 expression groups were asymmetrically distributed in terms of gender, age, survival status, and metastasis status (Fig. 1 A). Different levels of GPR65 expression indicated different clinical and pathological characteristics in OS patients. In the TARGET database, as the expression level of GPR65 increased, there was an asymmetric distribution of patient age, gender, race, HR, FRT, PSP, MS, FE and survival status (Fig. 1 B). Further analysis revealed that GPR65 expression tended to increase with patient survival time; the difference in GPR65 expression between OS patients with survival times of less than 3 years and those with survival times of more than 3 years was statistically significant (Fig. 1 C). GPR65 expression was lower in OS patients in the deceased group than in those in the surviving group ( P < 0.05) (Fig. 1 D). Consistent with this trend, GPR65 expression in non-metastatic OS patients was higher than in metastatic OS patients, and the difference between the two groups was statistically significant ( P < 0.05) (Fig. 1 E). Much of the pathological grade data for the OS patients was missing (54 cases in total), with only 43 patients recorded; this may explain why there was no statistically significant difference in GPR65 expression between the pathological grade Stage 1/2 (0–90% necrosis) and Stage 3/4 (91–100% necrosis) groups (Fig. 1 F). In terms of FE, there was a decreasing trend in GPR65 expression among patients whose FE was recurrence (Fig. 1 G), whereas there were no statistically significant differences with respect to gender, PSP, etc. (Fig. 1 H, I). As is well known, osteosarcoma is more common in children, adolescents and young adults [ 14 ]. Notably, in the TARGET database, GPR65 expression was higher in older OS patients (age > 20 y) than in younger OS patients (age ≤ 10 y) ( P < 0.05) (Fig. 1 J). ROC curve analysis of the diagnostic value of GPR65 expression with respect to age showed that the area under the curve (AUC) for GPR65 in older osteosarcoma patients (age > 20 years) was 83.30%, which was statistically significant ( P < 0.05) (Fig. 1 K).
Overall, these results indicated that GPR65 expression was lower in younger OS patients (age ≤ 10 y), in patients with metastatic OS, and in OS patients with shorter survival times, whereas GPR65 expression was higher in older OS patients (> 20 years of age), in patients with non-metastatic OS, and in OS patients with longer survival times. These findings suggest that low GPR65 expression is associated with poor prognosis in patients with OS.

OS-associated GPR65 is associated with the inflammatory response and osteoclast differentiation in OS
The biological processes associated with GPR65 (GO-BP) were mainly enriched in the inflammatory response, immune response, and innate immune response pathways (Fig. 2 A). The cellular components (GO-CC) were mainly enriched in the plasma membrane, integral component of membrane, cell surface, etc. (Fig. 2 B). The molecular functions of GPR65 (GO-MF) were mainly enriched in protein binding, transmembrane signaling receptor activity, inhibitory MHC class I receptor activity, signaling receptor activity/beta-amyloid binding, and other functions (Fig. 2 C). KEGG pathway enrichment analysis showed that GPR65 was mainly enriched in the osteoclast differentiation, B cell receptor signaling, and tuberculosis pathways (Fig. 2 D). These results suggested that the main biological functions of OS-associated GPR65 relate to the inflammatory immune response acting at the plasma membrane and to osteoclast differentiation, which might reflect the involvement of GPR65 in OS immunity and in bone repair following OS-induced bone destruction within the OS immune microenvironment.

OS-associated GPR65 is positively correlated with the tumor immune response but negatively correlated with immunological memory processes
For the anti-tumor immune response to kill tumor cells effectively, the immune system initiates a series of immune responses (such as the release of chemokines and cytokines) and iteratively expands them to eliminate the tumor [ 17 ]. However, in patients with malignant tumors, lymphocytes such as B cells, T cells and NK cells may recognize antigens as self-antigens rather than foreign antigens, thus producing regulatory T cell responses rather than effector immune responses, or factors in the tumor microenvironment may inhibit the function of effector lymphocytes, so that the tumor immune response cannot play its optimal role [ 10 ]. Therefore, we examined the role of OS-associated GPR65 activation in the immune response of OS and the corresponding cytokine characteristics. The enrichment scores of different immune processes associated with the OS GPR65 gene were analyzed using GSVA, and the results showed that OS-associated GPR65 was highly correlated with the immune response, especially with positive regulation of the immune response and T cell co-stimulation (Fig. 3 A). Collectively, these findings revealed that GPR65 expression is involved in the tumor immune response of OS but is negatively correlated with the immune memory of OS. Given this correlation between GPR65 and tumor immunity, Pearson analysis was further conducted to investigate the correlation between GPR65 and common cancer immune checkpoints, such as CD200R1, CD47, HAVCR2, TIGIT, CTLA4, LAG3, and PD1. GPR65 expression was found to correlate with the above immune checkpoints, especially HAVCR2 and CD200R1 ( P < 0.001, Fig. 3 B).
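As an illustration of the Pearson correlation between GPR65 and the immune checkpoint genes listed above, a minimal R sketch is given below. The log2 expression matrix `expr` (genes as rows, TARGET-OS samples as columns) is an assumption; gene symbols follow the text, and PD1 is usually stored under the symbol PDCD1 in expression matrices.

```r
# Minimal sketch: Pearson correlation of GPR65 with immune checkpoint genes
# across TARGET-OS samples. `expr` is an assumed genes x samples matrix of
# log2 expression values.
checkpoints <- c("CD200R1", "CD47", "HAVCR2", "TIGIT", "CTLA4", "LAG3", "PDCD1")

cor_res <- t(sapply(checkpoints, function(gene) {
  ct <- cor.test(expr["GPR65", ], expr[gene, ], method = "pearson")
  c(r = unname(ct$estimate), p.value = ct$p.value)
}))
cor_res  # Pearson r and P value of GPR65 versus each checkpoint gene
```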
In addition, we selected seven immune system-related metagene clusters as markers of immune status. We calculated the GSVA enrichment scores of the 97 patients in the TARGET database and then calculated the correlation between these seven immune system-related inflammatory factor gene clusters and OS-associated GPR65 expression. The results showed that OS-associated GPR65 was negatively correlated with immune inflammatory factor genes such as HCK, Interferon and STAT2, but positively correlated with IgG, MHC-II, STAT1, etc. (Fig. 3 C).

GPR65 is mainly expressed on OS-associated macrophages and CD4 + T cells
Further analysis of scRNA-seq data from 6 cases of human OS identified different cell subpopulations, which were clustered into 9 cellular metaclusters (Fig. 4 A). CD4 is a marker for CD4 + T cells, CD8 is a marker for CD8 + T cells, and CD68 is a reliable marker for macrophages; CD14 is mainly expressed on the cell membranes of monocytes and macrophages; FGFR1 and CDH11 are markers of osteoblasts or osteosarcoma cells. Single-cell analysis showed that CD68 was expressed in clusters 2 and 6 (Fig. 4 F), indicating that clusters 2 and 6 might be macrophages. In addition, CD14 was mainly expressed in cluster 2 cells (Fig. 4 H), indicating that cluster 2 cells were macrophages. Cluster 6 cells were identified as CD4 + T cells (Fig. 4 E), and cluster 0 represented CD8 + T cells (Fig. 4 D). FGFR1 (Fig. 4 C) and CDH11 (Fig. 4 G) were mainly expressed in cluster 8, indicating that cluster 8 comprised osteoblasts or osteosarcoma cells. PPIB was expressed in all cell clusters and was not a specific cell marker (Fig. 4 I). Clusters 1/3/4/5/7 lacked clear surface markers with which to determine the corresponding cell types. As shown in Fig. 4 B, GPR65 was mainly expressed in clusters 0, 2, and 6; in other words, GPR65 was mainly expressed on CD8 + T cells, CD4 + T cells, and tumor-associated macrophages, but not on cluster 8 cells (osteoblasts or OS cells). Taken together, these data indicate that GPR65 may affect the function of OS-associated macrophages, CD4 + T cells and CD8 + T cells in the OS microenvironment, and thereby further affect OS cell proliferation.

GPR65 is an independent prognostic factor for improved overall survival of OS patients
Kaplan-Meier survival curves for 98 OS patients in the TARGET database, stratified by high and low GPR65 expression (median cut-off), were analyzed through LinkedOmics ( http://linkedomics.org/login.php ). The results showed that, compared with patients with low GPR65 expression, the high GPR65 expression group had significantly higher 3-year and 5-year survival rates ( P = 0.0219, HR = 0.461, Fig. 5 A). The ability of GPR65 to predict prognosis was assessed by receiver operating characteristic (ROC) curve analysis; the results showed that the model had good discriminative ability, which was stable when tested in the test set (AUC = 0.645, Fig. 5 B). To explore the predictive value of GPR65 for the prognosis of OS patients, we conducted a Cox proportional hazards model analysis of the 97 OS patients. Univariate Cox analysis found significant correlations between the survival of OS patients and GPR65 expression, metastasis, FRT, histological response and EFS ( P < 0.05, HR < 1). Further multivariate Cox regression analysis of the above indicators revealed trends for GPR65 expression, metastasis, FRT, histological response and EFS consistent with the univariate Cox regression (Table 2 ).
These findings revealed that high GPR65 expression is a favorable prognostic factor for overall survival in patients with OS. Lower GPR65 expression is necessary for osteosarcoma cell growth To further clarify the role of GPR65 in the development of osteosarcoma, we first examined the expression of GPR65 in a tissue microarray of osteosarcoma and normal bone tissue (Fig. 6 A). Consistent with the database results, the expression level of GPR65 in osteosarcoma patients was significantly lower than that in normal bone tissue. We then conducted confirmatory experiments in the human osteosarcoma cell lines U2OS and HOS, verifying the efficiency of the GPR65 overexpression plasmid and siRNA (Fig. 6 B- 6 C). The MTT assay indicated a considerable decrease in cell viability after GPR65 overexpression compared with the empty plasmid group, whereas silencing GPR65 expression enhanced the proliferation of U2OS and HOS cells (Fig. 6 D). The same phenomenon was observed in colony formation experiments (Fig. 7 A and 7 D). The number of cells increased significantly after knockdown of GPR65 expression, and the cells became long and spindle-shaped; the opposite was observed when GPR65 was highly expressed (Fig. 7 B). Similarly, the EdU incorporation assay showed that U2OS and HOS cell proliferation was notably promoted after GPR65 silencing, whereas cell growth was inhibited when GPR65 was highly expressed (Fig. 7 C and 7 E). Taken together, GPR65 expression is decreased in osteosarcoma tissues, and knocking down GPR65 expression in osteosarcoma cells significantly promotes cell proliferation. Silencing GPR65 enhances osteosarcoma cell invasiveness Given the changes in cell morphology of U2OS and HOS cells with different levels of GPR65 expression, we hypothesized that GPR65 may be involved in the invasion and metastasis of osteosarcoma cells, and we conducted a series of functional experiments to test this idea. In the transwell assay, the number of invasive cells per field was markedly increased in the GPR65-knockdown group (Fig. 8 A and 8 E). Meanwhile, wound healing assays indicated that GPR65-silenced cells had markedly increased mobility compared with siNC cells, suggesting accelerated migration and invasion, whereas GPR65 overexpression decreased cell invasion and migration capacity (Fig. 8 B and 8 F). In addition, we observed a noticeable increase in F-actin formation in both U2OS and HOS cells after endogenous GPR65 expression was silenced, while forced GPR65 expression restrained F-actin formation (Fig. 8 C and 8 G). Similar results were obtained for EMT-related marker expression (Fig. 8 D, 8 H and 8 I). Collectively, these findings demonstrate the pivotal role of GPR65 in the cytoskeletal reorganization that facilitates tumor cell migration. Analysis of downstream genes and signaling pathways regulated by GPR65 in osteosarcoma cells To investigate the potential mechanism of GPR65 in osteosarcoma cells, unbiased transcriptome analysis was performed by RNA sequencing (RNA-seq) on U2OS cells with or without GPR65 overexpression, and all analyses were conducted on the Majorbio Cloud Platform ( www.majorbio.com ). Based on the quantitative expression results, inter-group differential expression analysis was performed to obtain differentially expressed genes (DEGs) between the two groups using DESeq2, with a screening threshold of |log2FC| ≥ 0.585 and P value ≤ 0.05.
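For readers unfamiliar with this step, a minimal DESeq2 sketch applying the stated thresholds is shown below; the count matrix count_matrix, the sample table sample_info and the group labels are assumed placeholder names, not the Majorbio pipeline itself.

library(DESeq2)

dds <- DESeqDataSetFromMatrix(countData = count_matrix,
                              colData   = sample_info,   # contains a `group` column
                              design    = ~ group)
dds <- DESeq(dds)
res <- as.data.frame(results(dds, contrast = c("group", "GPR65_OE", "control")))

degs <- subset(res, abs(log2FoldChange) >= 0.585 & pvalue <= 0.05)  # stated screening threshold
up   <- subset(degs, log2FoldChange > 0)
down <- subset(degs, log2FoldChange < 0)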
According to the cluster analysis and volcano plot results, a total of 6851 DEGs were identified, among which 2310 genes were upregulated and 1348 genes were downregulated (Fig. 9 A and 9 B). Meanwhile, disease ontology (DO) enrichment analysis showed that the DEGs were closely related to cancers, especially orthopedic cancers (Fig. 9 C). GO enrichment analysis and the enrichment chord diagram demonstrated that the DEGs following up-regulation of GPR65 were involved in muscle tissue development, regulation of epithelial cell differentiation, positive regulation of myeloid cell differentiation and other processes, suggesting an important role for GPR65 in the occurrence of bone marrow disease (Fig. 9 D and 9 E). KEGG enrichment analysis indicated that the DEGs were highly relevant to the MAPK signaling pathway, focal adhesion and the PI3K-Akt pathway (Fig. 9 F). In addition, Reactome annotation analysis revealed that the differential genes were closely related to signal transduction, the immune system and metabolism (Fig. 9 G). To further investigate the regulatory mechanisms of GPR65 in osteosarcoma, thousands of GPR65-mediated alternative splicing (AS) events were defined by AS analysis with rMATS. As shown in Fig. 9 H, skipped exons (SE) were the dominant type of AS event, accounting for 69.48%, followed by mutually exclusive exons (MXE) (10.25%) and alternative 3′ splice site (A3SS) (7.76%), indicating that GPR65 principally modulated exon skipping. Moreover, protein-protein interactions (PPI) between DEGs were predicted using the STRING database, involving a total of 285 nodes and 764 edges, and most of these proteins were involved in antiviral and immune processes (Fig. 9 I). GSEA further indicated that these DEGs were closely related to the inflammatory response, molecular function activator activity, RNA binding and enzyme regulator activity (Fig. 9 J). All these results confirmed that GPR65 plays an important role in multiple processes of osteosarcoma development, and that high expression of GPR65 predicts enhanced immune response and anti-inflammatory and anti-tumor capacity.
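The enrichment analyses reported above were run on the Majorbio Cloud Platform; as a rough, hedged equivalent in R, the sketch below uses clusterProfiler on an assumed character vector deg_symbols of DEG gene symbols.

library(clusterProfiler)
library(org.Hs.eg.db)

entrez <- bitr(deg_symbols, fromType = "SYMBOL", toType = "ENTREZID",
               OrgDb = org.Hs.eg.db)$ENTREZID

ego   <- enrichGO(gene = entrez, OrgDb = org.Hs.eg.db, ont = "BP",
                  pAdjustMethod = "BH", qvalueCutoff = 0.05, readable = TRUE)
ekegg <- enrichKEGG(gene = entrez, organism = "hsa", pvalueCutoff = 0.05)

head(as.data.frame(ego))    # top GO biological-process terms
head(as.data.frame(ekegg))  # top KEGG pathways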
Discussion This study analyzed 97 patients with OS from the TARGET database. The data records of these OS patients were relatively complete, and four patients with missing clinical data were excluded. Compared with other types of cancer, OS has a relatively low incidence rate, so in terms of case numbers a cohort of 97 OS patients is already relatively large. OS is more common in adolescents, and the younger the patient, the higher the grade of malignancy and the greater the harm. About 25% of patients with OS have detectable metastases, most commonly in the lungs [ 18 ]. Previous or potential distant metastases lead to a high recurrence rate. At present, common treatment options such as radiotherapy, chemotherapy and surgery have not achieved satisfactory clinical results [ 19 ]. Cellular immunotherapy, stem cell therapy, and targeted therapy have been used in patients with recurrent OS in recent years [ 20 , 21 ]. Tumor immunotherapy exerts an anti-tumor effect by stimulating and enhancing the immune response of the body. Compared with chemotherapy, radiotherapy and targeted therapy, it has become another important modality of anti-tumor therapy, with significant clinical efficacy and advantages [ 22 ]. CD8 + cytotoxic T lymphocytes (CTL), CD4 + T cells, NK cells and NKT cells all play critical roles in tumor immunity, while humoral immunity may not only inhibit tumor growth but also enhance it [ 23 ]. Based on studies of the tumor immune response, researchers have devised various strategies to boost the immune system in recent years. It has been found that OS cells can establish a local microenvironment conducive to tumor growth, drug resistance and metastasis by controlling the recruitment and differentiation of immune-infiltrating cells [ 24 ]. G-protein-coupled receptors (GPCRs) are the largest superfamily of transmembrane proteins encoded by the human genome, mediating most cellular responses to external stimuli, including light, odor, ions, hormones, and growth factors [ 25 ]. GPR65 is a pH-sensing G protein-coupled receptor that acts as a key innate immune checkpoint in the human tumor microenvironment, inhibiting the release of inflammatory factors and inducing significant upregulation of tissue repair genes [ 11 ]. Pathios has developed PTT-3213, a small-molecule inhibitor of GPR65 that significantly increases CD8 + T cells and natural killer T (NKT) cells in the tumor microenvironment and can synergize with PD1 antibody to produce better efficacy in mouse MC38 tumor models. GPR65 can be activated by protons when the pH value is lower than 7.2, leading to an increase in cAMP and the activation of RhoA, a member of the Ras homolog gene family [ 26 ]. Among our 97 patients with OS, grouping into high- and low-expression groups based on the mean GPR65 expression of 6.884 (close to 7.2) was considered scientific and reasonable. Our study found that GPR65 expression is low in young patients and increases with age. Moreover, this study found that GPR65 expression was lower in metastatic OS patients and higher in non-metastatic OS patients, and that GPR65 expression was high in OS patients with a high overall survival rate. This means that high expression of GPR65 indicates a good prognosis for patients with OS. Further analysis of the molecular role and mechanism of GPR65 in OS patients revealed that GPR65 is mainly associated with tumor immunity in these patients.
Surprisingly, our results differ completely from other studies, in which higher GPR65 expression was associated with greater malignancy and worse prognosis [ 27 , 28 ]. Therefore, our study suggests that the use of GPR65 as a new immune checkpoint for immune checkpoint inhibitor anti-tumor therapy (ICI therapy) is controversial, and at least not applicable to some malignant tumors, including OS. Further scRNA sequencing analysis showed that GPR65 was highly expressed in CD4 + cells and macrophages in the OS microenvironment, indicating that GPR65 is involved in the tumor immune regulatory response of OS. Further experiments verified the above bioinformatic results. The expression of GPR65 is significantly decreased in osteosarcoma tissues. Silencing GPR65 expression in the osteosarcoma cell lines U2OS and HOS promoted proliferation and invasion, whereas overexpression of GPR65 inhibited these processes. RNA-seq results further showed that high expression of GPR65 in U2OS cells can induce changes in immune system, metabolism, and signaling processes, and exert tumor inhibition through the MAPK and PI3K/AKT signaling pathways. Among the many signaling pathways, the MAPK signaling pathway plays a particularly important role in cell proliferation, differentiation, apoptosis, angiogenesis and tumor metastasis [ 29 ]. It has been reported that dioscin induces OS cell apoptosis by upregulating ROS-mediated p38 MAPK signaling [ 30 ], and Fan's research also showed that Siglec-15 promotes tumor progression in OS via the DUSP1/MAPK pathway [ 31 ]. Similarly, the role of the PI3K/AKT pathway in reversing drug resistance in OS has been confirmed by several reports [ 32 , 33 ]. However, how GPR65 functions through the MAPK and PI3K/AKT pathways in OS is not yet clear, and we will address this issue in future studies. From the above discussion, the conclusion can be reached that GPR65 cannot be used as an ICI target for OS immunotherapy, but rather is a favorable prognostic factor for overall survival in OS patients. Suppression of immune escape and inhibition of proliferation may be key pathways through which GPR65 participates in the progression of OS.
Background GPR65 is a pH-sensing G-protein-coupled receptor that acts as a key innate immune checkpoint in the human tumor microenvironment, inhibiting the release of inflammatory factors and inducing significant upregulation of tissue repair genes. However, the expression pattern and function of GPR65 in osteosarcoma (OS) remain unclear. The purpose of this study was to investigate and elucidate the role of GPR65 in the microenvironment, proliferation and migration of OS. Methods Retrospective RNA-seq data analysis was conducted in a cohort of 97 patients with OS in the TARGET database. In addition, single-cell sequencing data from six surgical specimens of human OS patients were used to analyze the molecular evolution process during OS genesis. Tissue chips and bioinformatics results were used to verify the GPR65 expression level in OS. MTT, colony formation, EdU, wound healing, transwell and F-actin assays were used to analyze the proliferation and invasion of OS cancer cells. RNA-seq was used to explore the potential mechanism of GPR65's role in OS. Results GPR65 expression was significantly low in OS, and subgroup analysis found that younger OS patients, patients with metastatic OS, and patients with shorter overall survival and progression-free survival had lower GPR65 expression. From the scRNA-seq data of GSE162454, we found that GPR65 expression is significantly positively correlated with CD4 + T cell, CD8 + T cell and OS-related macrophage infiltration. Verification experiments found that silencing GPR65 expression in the osteosarcoma cell lines U2OS and HOS promoted proliferation and invasion, and RNA-seq results showed that the role of GPR65 in OS cells was related to the immune system, metabolism and signal transduction. Conclusion The low expression of GPR65 in OS leads to the high metastasis rate and poor prognosis of OS patients. Suppression of immune escape and inhibition of proliferation may be key pathways through which GPR65 participates in the progression of OS. The current study strengthens the role of GPR65 in OS development and provides a potential biomarker for the prognosis of OS patients. Supplementary Information The online version contains supplementary material available at 10.1186/s12935-024-03216-5. Keywords
Electronic supplementary material Below is the link to the electronic supplementary material.
Acknowledgements We appreciate the osteosarcoma data provided by TARGET database and the Single-cell RNA sequencing data of osteosarcoma provided by Liu Y on the Gene Expression Omnibus (GEO) database. Author contributions J.Q. and S.H.L. prepared Figs. 1 , 2 , 3 , 4 and 5 and Z.R.Z. prepared Figs. 6 , 7 , 8 and 9 and J.Q. and Z.R.Z. wrote the manuscript text. Funding This work was supported by the Talent Introduction Science Foundation of Yijishan Hospital, Wannan Medical College (YR20220214, YR20220216), and the Science and Technology Project of Wuhu City (2023jc29). Data availability Osteosarcoma cell lines U2OS and HOS were obtained from Procell Life Science&Technology Co.,Ltd (Wuhan, China). Declarations Ethics approval and consent to materials Not applicable. Consent for publication All the authors have read and approved the final article. Competing interests The authors declare no competing interests. Abbreviations Osteosarcoma Time to First Relapse in Days Primary Site Progression Metastasis Status First Event Tumor microenvironment Extracellular matrix Biological process Cell component Molecular function Gene Expression Profiling Interaction Analysis Kyoto Encyclopedia of Genes and Genomes Gene Expression Omnibus Fetal bovine serum Gene set variation analysis Gene ontology Receiver operating characteristic curve Alternative 3 ‘splice site RNA sequencing Differentially expressed genes Disease ontology Alternative Splicing Skip exons Mutually exclusive exons Protein-protein interactions
CC BY
no
2024-01-15 23:43:48
Cancer Cell Int. 2024 Jan 13; 24:31
oa_package/17/6d/PMC10788037.tar.gz
PMC10788038
38218872
Introduction A total of 19.3 million new cancer patients were reported worldwide in 2020, and more than 50% of them died [ 1 ]. Currently, radical surgery is still considered the most effective treatment for solid tumors. Because the early-stage symptoms of many cancers are not typical, numerous patients are already at an advanced stage when diagnosed and miss the opportunity for timely surgery. Non-surgical treatments of cancer consist of chemotherapy, radiotherapy, targeted therapy, etc.; however, any single treatment method mentioned above often fails to achieve satisfactory therapeutic efficacy [ 2 ]. Even though the five-year survival rate of cancer patients has improved in recent years [ 3 ], recurrence and metastasis remain the number one killer of cancer patients. In recent decades, tumor immunotherapy has drawn widespread attention. Different from traditional treatments, tumor immunotherapy eliminates tumor cells indirectly by regulating the immune system rather than directly targeting the tumor. Therefore, it can not only eliminate the primary lesion, but also generate long-term immune memory, thereby inhibiting cancer metastasis and recurrence [ 4 ]. So far, more than 3000 immunotherapeutic drugs have been approved by the Food and Drug Administration (FDA) for the treatment of various cancers [ 5 ]. Among them, the most famous are immune checkpoint blockade (ICB) [ 6 ] and chimeric antigen receptor T cell (CAR-T) therapy [ 7 ]. Nonetheless, only a minority of cancer patients show satisfactory responses to immunotherapy in clinical treatment; immunotherapy has been observed to be less effective in the majority of the population and has even accelerated tumor progression [ 8 , 9 ]. The following factors have been inferred to be responsible for the suboptimal efficacy: (1) the poor immunogenicity of tumors; (2) the low expression level of the immunotherapy target; (3) various immunosuppressive factors in the tumor microenvironment; and (4) inhibition of immune killer cells (such as effector T cells) [ 4 ]. Consequently, more and more researchers are seeking novel combination treatment systems, exploring the possibility of combining immunotherapy with therapies such as radiotherapy, chemotherapy, and immunomodulators [ 10 , 11 ]. Although these combination therapies enhanced anti-tumor efficacy, increased incidences of severe side effects were also observed [ 12 ]. Notably, the combination of nanomaterials and immunotherapy has brought new light to the goal of completely eliminating tumors, with strong anti-tumor effects and negligible side effects. Nanomedicine refers to the application of nanotechnology in medicine. Conventional nanomedicine involves the intravenous injection of materials with a size of about 1–100 nm, which can passively or actively accumulate in pathological areas; the materials or the loaded drugs act at local lesions, achieving precise treatment with lower drug dosage and milder side effects. The tumor-targeting effects of nanomaterials are mainly achieved in two ways, namely passive targeting and active targeting. Passive targeting relies on the enhanced permeability and retention (EPR) effect [ 13 ], while active targeting relies on targeting ligands (such as targeting peptides and antibodies) on the nanomaterials [ 14 ]. Given the promising future and current limitations of immunotherapy, more and more researchers have made efforts to explore how to apply nanomedicine technology to tumor immunotherapy to create a novel combined therapy.
In 1998, researchers discovered [ 15 ] that delivery of tumor antigens to antigen-presenting cells (APC) through poly-lactide-co-glycolide (PLG) triggered a strong anti-tumor immune response, which protected mice from challenge with P815 tumor cells. Similarly, a study by Murthy et al. [ 16 ] synthesized an acid-sensitive microgel material, which could be degraded in the acidic phagosome of APCs, thereby releasing protein antigens. Although the design of these nanomaterials is relatively rudimentary from today's perspective, it undoubtedly shed new light on nano-immunotherapy for subsequent researchers. At present, nano-immunotherapy is generally achieved through the following three approaches [ 17 , 18 ]: (1) targeting and eliminating tumor cells, further causing immunogenic death; (2) targeting the tumor immune microenvironment (TIME), such as immune cells (macrophages, dendritic cells, T cells, etc.) or immune-related pathways (such as PD-1/PD-L1, CTLA-4, etc.); (3) targeting the peripheral immune system, for example by promoting the production of APCs and cytotoxic T cells in lymph nodes and spleen. Scientometrics uses mathematical and statistical methods to quantitatively analyze the body of relevant literature over a given period of time. Through scientometrics, we can intuitively obtain the development trend of a research field of interest, as well as the contributions of various authors, institutions, and countries to that field; more importantly, scientometrics can predict the future development direction of the field. In the past ten years, more and more nanomaterials have been applied in anti-tumor immunotherapy, activating the human immune system through a variety of pathways. Therefore, in this article scientometrics was adopted to count and analyze the key points of studies in nano-immunotherapy, so that researchers can more intuitively observe the hot spots and prospective development directions of anti-tumor nano-immunotherapy.
Materials and methods Data collection and retrieval strategy The Core Collection of Web of Science (WOSCC) was used to retrieve relevant literature on antitumor nano-immunotherapy published since the establishment of the WOSCC. All articles were retrieved on the same day to prevent inconsistencies caused by rapid updates of subsequent publications. The search string applied was: (Topic = [“Tumor” OR “Neoplasm” OR “Cancer” OR “Neoplasias” OR “Malignancy”]) AND Topic = “Nano” AND Topic = “Immune”. According to the above retrieval formula, 893 potentially relevant papers were obtained. The following exclusion criteria were adopted to arrive at the final number of records to be analyzed: (1) literature not related to the subject; (2) non-English literature; (3) documents without a complete research process, such as conference abstracts and comments. Thereafter, the authors looked through the titles and abstracts and screened out 364 irrelevant records, and after reading the full texts further screened out 86 articles. Finally, a total of 443 records, including 294 articles and 149 reviews, were considered for the final analysis. The bibliometric information collected for the 443 articles included: title, publication year, author, country/region, affiliation, journal, keywords, keywords plus, number of citations and reference records, abstract, impact factor (IF), etc. Statistical analysis The data were exported in plain-text format as well as in RIS format. The plain-text file was imported into Biblioshiny for bibliometrix (an online interface based on Bibliometrix 4.1.0), and a processed Excel file including title, publication year, author, country/region, affiliated institution, journal, keywords, keywords plus, citation counts and reference records, abstract and other information was exported for further analysis and interpretation. In addition, the RIS file was imported into NoteExpress software, and an Excel file (including the IF of the concerned records) was exported. The final dataset was obtained after merging the two Excel files. The statistical analysis of this study is based on the resulting comprehensive table (Additional file 6 : Table S1). The collated data were imported into the R language-based Bibliometrix 4.1.0 package, VOSviewer (version 1.6.18), CiteSpace (version 5.8.R2) and Excel (version 2019) to perform statistical analysis and visualization. Bibliometrix [ 19 , 20 ] is a bibliometric statistics and visualization tool based on the R language, which has been adopted by more than a thousand bibliometric papers. A strategic diagram is a two-dimensional diagram constructed with the density index as the ordinate and the centrality index as the abscissa. A larger density index indicates higher maturity of the topic; a larger centrality index indicates that the topic is closely related to other topics and lies at the core of all research topics [ 21 , 22 ]. VOSviewer [ 23 , 24 ] software was used to create a density visualization of keyword co-occurrence. Each point on the map is colored according to the density of the elements around it: the higher the density, the closer the color is to red, and the lower the density, the closer the color is to blue. Density is positively correlated with the number of elements in the surrounding area and the importance of those elements. CiteSpace [ 25 , 26 ] software was used for cluster analysis of keywords and visualization of the keyword timeline view.
In CiteSpace, Modularity Q > 0.3 and Weighted Mean Silhouette > 0.5 indicate that the clustering results are sufficiently convincing. All radar charts, histograms, line charts and scatter plots were generated using Excel 2019, and all violin plots were generated with GraphPad Prism 8. All heat maps, including correlation heat maps, were produced using the OmicStudio tools ( https://www.omicstudio.cn/tool ) [ 27 ]. Journal impact factors were retrieved from the 2020 Journal Citation Reports (JCR). P < 0.05 was considered statistically significant.
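As an illustration of the import step behind these analyses, the following hypothetical R sketch (the file name wos_export.txt is an assumption) loads a Web of Science plain-text export into bibliometrix and prints the basic descriptive indicators that Biblioshiny reports.

library(bibliometrix)

M <- convert2df(file = "wos_export.txt", dbsource = "wos", format = "plaintext")

res <- biblioAnalysis(M)
summary(res, k = 20)   # annual output, top authors, countries, sources and keywords
plot(res, k = 20)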
Results General overview A total of 34 countries published relevant literature on antitumor nano-immunotherapy (Fig. 1 A, Additional file 7 : Table S2). The top six countries with the largest numbers of publications were China (208), the United States (82), South Korea (21), Iran (17), India (17) and Japan (13) (Fig. 1 B). The country of each document was assigned based on the countries of the corresponding author and the first author. The number of studies on antitumor nano-immunotherapy has grown exponentially in recent years, and we speculate that the growth rate of related literature will continue to increase in the next few years (Fig. 1 C). Because every author, not only the first and corresponding authors, contributes to a paper, we also included all authors of each article and counted the total number of authors per country to evaluate each country's contribution (Additional file 1 : Fig. S1, Additional file 7 : Table S3). Additional file 2 : Fig. S2 shows the top six contributing countries in this field, and Additional file 3 : Fig. S3 shows the cooperation between different countries. Ten institutions published more than 20 relevant papers each: Sichuan University (Sichuan Province, China, 30), Taipei Medical University (Taiwan Province, China, 28), Wuhan University (Hubei Province, China, 27), Nanjing University (Jiangsu Province, China, 26), North Carolina State University (North Carolina, USA, 25), Mashhad University of Medical Sciences (Razavi Khorasan, Iran, 25), Soochow University (Jiangsu, China, 23), University of California, Los Angeles (California, USA, 22), Shanghai Jiaotong University (Shanghai, China, 21) and South China University of Technology (Guangdong, China, 21) (Fig. 1 D). We also selected the affiliated institution of the first corresponding author for the follow-up analysis (Additional file 4 : Table S4); because several different institutions may appear in one article, we considered the affiliation of the first corresponding author to be the most representative. The total numbers of citations of relevant research from China (4504) and the United States (4282) were far ahead of other countries. The average number of citations of the Spanish literature was 133.8, firmly in first place, while the average for the United States was 53.5 and that for China was only 21.2 (Fig. 1 E). Among all relevant records, the average number of authors of experimental articles (8.61) was significantly larger than that of review articles (5.05) (Fig. 2 A), whereas the average number of references in review articles (156.8) was much greater than that in experimental papers (56.77) (Fig. 2 B). Studies with more than 20 citations had more references (Fig. 2 C). Both experimental and review articles have increased rapidly in recent years, and the total number and growth rate of experimental articles exceed those of reviews (Fig. 2 D). Among the relevant articles published in China and the United States, the number of experimental articles was about twice that of reviews, whereas reviews accounted for the vast majority of articles published in other countries (Fig. 2 E). Among the six countries with the largest numbers of publications, about 38% of the articles published in the United States were completed by multiple countries, while the proportion for Iran was only 12% (Fig. 2 F).
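The country-level counts and citation averages quoted above can be tabulated with a few lines of R; the sketch below is only illustrative and assumes a data frame records with one row per included article and columns country and times_cited.

library(dplyr)

country_stats <- records %>%
  group_by(country) %>%
  summarise(n_papers        = n(),
            total_citations = sum(times_cited),
            mean_citations  = mean(times_cited)) %>%
  arrange(desc(n_papers))

head(country_stats, 6)  # six most productive countries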
Journal correlation analysis The journal co-citation network showed that papers in the field of anti-tumor nano-immunotherapy were mainly published in two types of journals (Fig. 3 A): the red cluster represents materials science journals, while the blue cluster mainly comprises medical journals. Co-citations between the two clusters were abundant, consistent with the theme of nanomaterials for tumor immunotherapy. In the past five years, two journals, Journal of Controlled Release and Biomaterials, have seen the fastest growth in the number of articles published in related fields (Fig. 3 B). Figure 3 C shows that the Journal of Controlled Release, published in the Netherlands, published the highest number of records on the subject (28 records in total, 2021 IF = 11.467), followed by Biomaterials, also published in the Netherlands (26 publications in total, 2021 IF = 15.304), and Acta Biomaterialia, published in England (16 publications in total, 2021 IF = 10.633). Three journals tied for the fourth largest number of publications, each publishing 11 articles: Theranostics (2021 IF = 11.6), Advanced Materials (2021 IF = 32.086), and ACS Nano (2021 IF = 18.027). It is worth mentioning that the top 20 journals by publication volume all belonged to the Q1 division (2021 JIF quartile), and the impact factor of each of the top-ranked journals was above 10. It can therefore be concluded that the idea of antitumor nano-immunotherapy is generally recognized by high-quality journals. On the other hand, the top five journals cited by the articles under study were Biomaterials, ACS Nano, Journal of Controlled Release, Advanced Materials, and Nature Communications (Fig. 3 D). The top 20 cited journals included many top journals in the field, such as Nature, Science, Cell, Advanced Materials, Nature Reviews Immunology and Nature Nanotechnology, reflecting the solid theoretical foundation of antitumor nano-immunotherapy. Author related analysis We calculated the H-index and the citations of different authors in articles relevant to this field, and all analyses were performed only within the 443 included articles. Many experts and scholars specialize in the field of antitumor nano-immunotherapy: a total of 2571 authors, averaging 7.41 authors per article, contributed to the records under study. Among them, the top three authors by output were LIU Y (15 papers), HUANG L (13 papers), and WANG Y (12 papers) (Fig. 4 A). The top three authors with the highest local H-index were HUANG L (11), WANG C (9), and LIU Y (6) (Fig. 4 B). The top three authors cited most by the local literature were HUANG L (48), LIU Y (30), and LIU XS (29) (Fig. 4 C). The heat map of the annual publication volume of the top 20 authors is shown in Fig. 6 D, and the author co-citation network is shown in Fig. 6 E. The key authors in the field of nanomaterials applied to tumor immunotherapy include Prof. Yang Liu (Nankai University, China), Prof. Leaf Huang (University of North Carolina at Chapel Hill, USA) and Prof. Chao Wang (Soochow University, China). Keywords correlation analysis The keyword cloud map (Fig. 5 A) shows that dendritic cells, delivery, cancer, T cells, immunotherapy, and photodynamic therapy were the key research directions for the application of nanomaterials in tumor immunotherapy. The keyword density map (Additional file 4 : Fig. S4) showed that, in addition to the above keywords, immune checkpoint blockade, chemotherapy, tumor microenvironment, immune response, etc.
were also hot research topics in related fields. The top five keywords with the highest frequency of occurrence were (Fig. 5 C): nanoparticles (87), dendritic cells (83), delivery (68), cancer (62), and therapy (54). The annual term-frequency line chart of keywords showed that the usage of the above keywords has grown rapidly in recent years (Fig. 5 D). From the correlation heat map (Additional file 5 : Fig. S5), we concluded that immunotherapy is most closely related to keywords such as immune checkpoint blockade (correlation coefficient = 0.71), photodynamic therapy (correlation coefficient = 0.65), photothermal therapy (correlation coefficient = 0.57), T cells (correlation coefficient = 0.58), tumor-associated macrophages (correlation coefficient = 0.52), tumor microenvironment (correlation coefficient = 0.47), immunogenic cell death (correlation coefficient = 0.43), etc. In addition, dendritic cells were closely related to vaccine (correlation coefficient = 0.46), and photodynamic therapy was closely related to checkpoint blockade (correlation coefficient = 0.33). The annual evolution chart of main keywords (Fig. 5 B) reveals that the main keyword in 2022 was autophagy; the main keywords in 2021 included immunogenic cell death, tumor-associated macrophages, targeted delivery, natural killer cells, hypoxia, antitumor-activity and antibody; those in 2020 included delivery, T cells, etc.; and those in 2019 included dendritic cells, drug delivery, regulatory T cells, etc. Earlier keywords included vaccine delivery, antigen cross-presentation, cd4(+) t-cells, etc. The keyword co-occurrence analysis shows (Fig. 6 A) that keywords were divided into 12 clusters, represented by different colors. Modularity Q = 0.5624 and Weighted Mean Silhouette = 0.7886 indicated that the clustering results were sufficiently convincing. The representative words of the first three clusters were antigen, cancer immunotherapy, and dendritic cells. Based on the above clustering, we further obtained an evolution timeline of keyword clustering (Fig. 6 B). As shown in Fig. 6 C, some topics, such as drug-delivery and dendritic cells, have endured in recent years. The strategic diagram of the sub-period (Fig. 6 D) showed that regulatory T cells, tumor microenvironment, immune checkpoint blockade, drug-delivery, photodynamic therapy, photothermal therapy, tumor-associated macrophages, etc. were located in the Motor Themes quadrant, indicating that these keywords were core themes with high maturity. In addition, dendritic cells, vaccine, and T cells were located in the Basic Themes quadrant, which indicates that these keywords are important but not yet sufficiently researched, so these topics may become research hotspots or future development trends. Country related analysis Cooperation among authors of different countries in this field is very common, with the proportion of international cooperation in the included literature reaching 28.67%. As a representative developed country, the United States has cooperated with a number of developed and developing countries, such as China, South Korea, India, Egypt, Saudi Arabia, Argentina, Iran, Israel, Spain and Portugal. The United States and China, as the leaders in this field, have maintained close cooperation with the rest of the world, which is particularly crucial to the common progress of global medicine.
In addition, developing countries such as India, Iran, Egypt, Saudi Arabia and Romania also play an increasingly important role in this field. With the joint efforts of both developed and developing countries, this field will move towards a better future. The country of each document was assigned based on the country of the corresponding author; if corresponding authors were affiliated with institutions in different countries, the country of the first author prevailed. Research in the field of antitumor nano-immunotherapy was mainly carried out in China and the United States; the remaining countries published fewer papers and were not analyzed further. The heat map of the number of papers published by each province in China (Fig. 7 A) showed that publications in related fields were mainly concentrated in the southeastern provinces, of which Jiangsu Province (31), Shanghai (22), and Guangdong Province (17) contributed the most. The heat map of the number of publications by state in the United States (Fig. 7 E) showed that publications in related fields were mainly distributed on the east coast, of which North Carolina (16), Michigan (9), and California (9) contributed the most. In the field of antitumor nano-immunotherapy, research hotspots shared by China and the United States included dendritic cells, delivery, T cells, and combination. China had some unique research hotspots: immune checkpoint blockade, photodynamic therapy, and immunogenic cell death; the US-specific research hotspot was expression (Fig. 7B, F). The number of related articles from China and the United States had grown rapidly by 2022; the number of publications from China rose exponentially between 2013 and 2021, while that from the United States showed linear growth (Fig. 7C, G). The top five publishing institutions in China were SICHUAN UNIV, NANJING UNIV, TAIPEI MED UNIV, WUHAN UNIV, and SOOCHOW UNIV, and the top five in the United States were UNIV N CAROLINA, UNIV CALIF LOS ANGELES, UNIV MICHIGAN, UNIV PITTSBURGH, and JOHNS HOPKINS UNIV (Fig. 7D, H). Using IF as an evaluation indicator, the quality of papers from China and the United States was similar: the average IF of American articles was 11.84 and that of Chinese articles was 10.74, with no significant statistical difference between the two (Fig. 7I). In addition, there was also no statistical difference in IF between review and experimental articles from China and the United States (Fig. 7 J). Correlation analysis of key papers The application of nanomaterials in tumor immunotherapy has received extensive attention and citations. Pérez-Herrero E [ 28 ] summarized the advantages and limitations of many nanocarriers loaded with different chemotherapeutic drugs in tumor treatment; the study was cited 752 times in total (Fig. 8 B) and 20 times in the local literature (Fig. 8 A), and its annual number of citations has continued to grow in recent years (Fig. 8 C). Yang G et al. [ 29 ] developed a hollow MnO 2 -based nano-platform, H-MnO 2 -PEG/C&D, combined with anti-PD-L1, which can activate tumor immunity in mice and significantly inhibit primary and metastatic tumors. The paper has been cited 698 times in total (Fig. 8 B), 224 times in the last year alone, and the degree of attention has increased year by year (Fig. 8 C). Lu J et al.
[ 30 ] designed a nano-platform, OX/IND-MSNP, in which phospholipid bilayer-wrapped mesoporous silica nanoparticles were simultaneously loaded with oxaliplatin and an immunostimulatory drug. This nanoparticle could effectively induce tumor immunogenic cell death (ICD) and trigger antigen presentation by dendritic cells, further inducing the activation of T cells and tumor immune memory. Lu J's paper was cited 26 times (Fig. 8 A). Li SY et al. [ 31 ] constructed nanoparticles to deliver CTLA-4 siRNA (NPsiCTLA-4) and showed the ability of this siRNA delivery system to enter T cells both in vitro and in vivo, relieving immunosuppression in the tumor microenvironment. Li SY's paper was cited 18 times (Fig. 8 A). In addition, there were rich citation relationships between these key papers (Fig. 8 D). Besides the articles mentioned above, the following papers also ranked in the top ten citations in this field. Jain RK [ 32 ] found that the tumor-associated blood and lymphatic vasculature, fibroblasts, immune cells, and extracellular matrix are abnormal, together creating a hostile tumor microenvironment; however, vascular normalization can convert the immunosuppressive tumor microenvironment into an immunoactivated one and improve the efficacy of immunotherapy by increasing blood flow and oxygenation. Corbo C et al. [ 33 ] observed that nanomaterials interact with biological components and become surrounded by a protein corona (PC) when injected into physiological environments such as blood, which can trigger an immune response and affect the toxicity and targeting ability of the nanoparticles. Moon JJ et al. [ 34 ] reviewed advances in nanoparticle development for immunotherapy and diagnosis; nanomaterials targeting the tumor microenvironment or systemic lymph nodes showed promising potential, and strategies to actively deliver cancer therapeutic agents to the tumor microenvironment using immune cells themselves as delivery vehicles were also very interesting. Hamdy S et al. [ 35 ] reviewed the applications of poly(D,L-lactic-co-glycolic acid) nanoparticles (PLGA-NPs) in cancer vaccine delivery systems; PLGA-NPs containing antigens or immunostimulatory molecules can not only actively target DCs, but also rescue impaired DCs from tumor-induced immune suppression. Singh A et al. [ 36 ] proposed an emerging immunomodulation approach based on hydrogels and scaffolds, which can be applied to a variety of tumors; in addition, hydrogels and scaffolds can also perform well in diseases other than tumors, such as chronic infections and autoimmune diseases. Zhu G et al. [ 37 ] reviewed vaccines for cancer immunotherapy based on synthetic or naturally derived nanoparticles; nanovaccines can effectively co-deliver immune-activating adjuvants and multi-epitope antigens into lymphoid organs and antigen-presenting cells, fine-tuning the intracellular release and cross-presentation of the antigen through nanovaccine engineering. von Roemeling C et al. [ 38 ] noted that, as immunotherapy becomes increasingly important in clinical oncology, strategies utilizing the interactions between nanomaterials and various components of the immune system provide possibilities for exploring novel immune adjuvants with enhanced antitumor effects. Correlation analysis of impact factor Correlation analysis showed that the impact factor of a study was positively correlated with the number of citations (Fig. 9 A) and the number of references (Fig. 9 B). The average number of authors of papers with an impact factor above 10 was significantly larger than that of papers with an impact factor below 10 (Fig. 9 C). The average impact factor of the experimental literature (IF = 10.36) was greater than that of the review literature (IF = 9.04); however, there was no statistically significant difference between the two (P = 0.055) (Fig. 9 D). The impact factor of articles published in recent years has improved significantly compared with that before 2015 (Fig. 9 E), which was also considered to be related to the overall increase in journal impact factors.
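The article does not state which correlation coefficient was used for the impact-factor analysis; as a hedged illustration only, the R sketch below computes Pearson correlations on an assumed data frame records with columns impact_factor, times_cited and n_references.

cor.test(records$impact_factor, records$times_cited,  method = "pearson")
cor.test(records$impact_factor, records$n_references, method = "pearson")

plot(records$impact_factor, records$times_cited,
     xlab = "Journal impact factor", ylab = "Times cited")
abline(lm(times_cited ~ impact_factor, data = records))  # simple linear trend line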
Discussion In the early twenty-first century, research applying nanomaterials to tumor immunotherapy emerged gradually. Initially, this novel idea failed to draw much attention, and the average annual number of relevant publications was no more than 10. Nevertheless, 2015 turned out to be a turning point: as researchers realized the promising potential of nanomaterials for improving the efficacy of tumor immunotherapy, this field soon became a hot topic and was explored further. The number of published papers can be regarded as the most important indicator of whether and when a field becomes a research hotspot. The number of publications in this research field in 2021 was 126, the highest annual number of publications on this subject, and the annual growth rate of relevant publications from 2004 to 2022 was 16.85%. Among the 443 publications under study, international cooperation accounted for 28.67%. The top six countries by number of publications were China, the United States, South Korea, Iran, Japan and India. China's publication volume of 213 articles far exceeded that of other countries, but its citation rate was not as strong. Notably, the United States followed China in the number of publications, and its research results were highly recognized by peers in the field. The top five institutions in terms of publication volume worldwide were Sichuan University, Taipei Medical University, Wuhan University, Nanjing University, and North Carolina State University. This suggests that the recognition of antitumor nano-immunotherapy has continued to increase in recent years. We speculate that the number of articles on antitumor nano-immunotherapy will continue to increase, and that countries will cooperate closely and make further progress in this field. The majority of the research related to antitumor nano-immunotherapy has been published in the Journal of Controlled Release (2021 IF = 11.467), Biomaterials (2021 IF = 15.304), Acta Biomaterialia (2021 IF = 10.633), Theranostics (2021 IF = 11.6), Advanced Materials (2021 IF = 32.086) and ACS Nano (2021 IF = 18.027), all of which had impact factors above 10. Furthermore, the top 20 journals by publication volume belonged to the Q1 division (2021 JIF quartile), demonstrating that related research is of high quality and generally recognized by top journals. Keyword analysis revealed that nanoparticles, dendritic cells, delivery, cancer, T cells, immunotherapy, photodynamic therapy, immune checkpoint blockade, chemotherapy, tumor microenvironment, and immune response were steering the research field. Significant correlations existed between the keyword immunotherapy and immune checkpoint blockade, photodynamic therapy, photothermal therapy, T cells, tumor-associated macrophages, tumor microenvironment and immunogenic cell death, with correlation coefficients > 0.4. Thus, it can be inferred that the key modalities of antitumor nano-immunotherapy mainly consist of immune checkpoint blockade, photodynamic therapy, photothermal therapy and vaccines, in which immune cells (including dendritic cells, T cells, macrophages, etc.) in the tumor microenvironment are modulated to exert stronger killing effects on tumors in situ or more effective immune clearance of metastatic lesions. In addition, the focus of the studies differed between years.
The main keywords in 2019 included dendritic cells, drug-delivery, regulatory T cells, etc.; those in 2020 consisted of delivery, T cells, etc.; and those in 2021 comprised immunogenic cell-death, tumor-associated macrophages, targeted delivery, natural killer cells, hypoxia, antitumor-activity, antibody, etc. Emerging evidence indicates that dendritic cells play an indispensable role in antitumor nano-immunotherapy. It is now established that tumor cell death in the primary site can release tumor-associated antigens (TAAs) [ 39 ]. Dendritic cells are capable of capturing these antigens and, after migrating to immune organs such as the spleen or lymph nodes, presenting them to the T cell receptor via the major histocompatibility complex (MHC); ultimately, T cell-mediated long-term tumor immunity is triggered [ 40 ]. According to relevant studies in this field, the key steps in this immune network can be summarized in three points [ 41 ]: (1) in killed tumor cells, calreticulin is transferred from the endoplasmic reticulum to the cell surface, which strongly attracts dendritic cells and induces phagocytosis; (2) release of high mobility group box 1 (HMGB1) activates dendritic cells via toll-like receptor 4 (TLR-4); (3) release of ATP stimulates P2X7 purinergic receptors on dendritic cells, triggering inflammasome activation, IL-1β secretion and CD8 + T cell priming. Although dendritic cells are regarded as a core theme with high maturity (Motor Themes quadrant), more research is still needed to uncover the underlying core mechanisms. Furthermore, the delivery of nanomaterials remains a key issue in this field that urgently needs to be solved [ 2 ]. Currently, nanomaterials are generally delivered into tumor tissues through the EPR effect, which is defined as passive drug delivery. The size, shape, and surface charge of nanoparticles are vital factors for the efficiency of drug delivery systems [ 42 , 43 ]. However, the efficacy and safety of the EPR effect have been controversial in recent years [ 44 , 45 ]: a recent statistical study revealed that merely 0.76% of intravenously administered nanomaterials reach solid tumors [ 46 ]. Notably, active targeting has improved intracellular uptake to a certain extent. Nonetheless, the limited permeability of nanomaterials in tumor tissues remains an unsolved problem in active targeting [ 47 ]; research has shown that active targeting performs better in hematological cancers, in which the barrier to systemic circulation is relatively small [ 48 ]. A study by Setyawati et al. [ 49 ] identified a novel form of endothelial leakage, termed nanomaterial-induced endothelial leakage (NanoEL). NanoEL depends on the disruption of vascular endothelial-cadherin (VE-cadherin), coupled with actin remodeling and cell contraction, to expand the intercellular space. Studies have indicated that TiO 2 , Au and SiO 2 nanoparticles have a significant effect in inducing leakage of breast cancer endothelial cells [ 50 ]. Compared with the EPR effect, which relies on abnormal angiogenesis in mature solid tumors, nanomaterials can induce the NanoEL effect by virtue of their own inherent capabilities.
It can therefore be inferred that well-designed nanomaterials are capable of actively inducing leakage of vascular endothelial cells to cross blood vessels and accumulate substantially in tumor tissues, independent of tumor type and stage. However, NanoEL-induced vascular endothelial leakage carries a series of side effects: facilitating tumor metastasis through the circulation, aggravating bacterial infection, promoting edema and thrombus formation, etc. In summary, the delivery of nanomaterials has always been a momentous part of the nanomedicine field and deserves deeper exploration. The evolution of keywords reflects that the application of nanomaterials in immunotherapy has undergone a transformation from simple to complex and from phenotype to mechanism. For instance, early research mainly concentrated on the tumor-killing effects of the material, whereas current work pays more attention to the targeted delivery of nanomaterials, the synergistic effects of multiple anti-tumor therapies, the regulation of the tumor microenvironment by nanomaterials, and the internal mechanisms of tumor immunity. We ultimately summarize three main routes for the application of nanomaterials to tumor immunity: (1) Targeting tumor cells [ 39 , 41 ]: nanomaterials induce ICD and the subsequent release of TAAs; as an important trigger and enhancer of anti-tumor immunity, nanomaterials facilitate antigen presentation by APCs. ICD can be induced by certain types of chemotherapeutic drugs (such as doxorubicin, oxaliplatin and cyclophosphamide), as well as by radiation therapy, photodynamic/photothermal therapy, and other methods. (2) Targeting the TIME [ 51 , 52 ]: immunosuppressive pathways and mediators are frequently upregulated in the TIME; for example, increased infiltration of immunosuppressive cells, including regulatory T cells (Tregs), myeloid-derived suppressor cells (MDSC) and M2 macrophages, has been detected, and soluble inhibitors such as indoleamine 2,3-dioxygenase (IDO) and transforming growth factor-beta (TGF-beta) are also increased. Nanomaterials reverse the immunosuppressive TIME and regulate the infiltration, proliferation, maturation, and activation of T cells to further improve immunotherapy efficacy. (3) Targeting the peripheral immune system [ 53 , 54 ]: nanomaterials promote anti-tumor immune responses by enhancing antigen presentation and the generation of cytotoxic T cells in secondary lymphoid organs (such as lymph nodes and spleen), as well as by modulating and augmenting peripheral effector immune cell populations.
Background Tumor immunotherapy can not only eliminate the primary lesion, but also produce long-term immune memory, effectively inhibiting tumor metastasis and recurrence. However, immunotherapy has also shown many limitations in clinical practice. In recent years, the combination of nanomaterials and immunotherapy has brought new light to the goal of completely eliminating tumors, with strong anti-tumor effects and negligible side effects. Methods The Core Collection of Web of Science (WOSCC) was used to retrieve relevant literature on antitumor nano-immunotherapy published since the establishment of the WOSCC. Bibliometrix, VOSviewer, CiteSpace, GraphPad Prism, and Excel were used to perform statistical analysis and visualization. The annual output, active institutions, core journals, main authors, keywords, major countries, key documents, and impact factors of the included journals were evaluated. Results A total of 443 related studies published from 2004 to 2022 were enrolled, and the annual growth rate of articles reached an astonishing 16.85%. The leading countries in terms of number of publications were China and the United States. Journal of Controlled Release, Biomaterials, Acta Biomaterialia, Theranostics, Advanced Materials, and ACS Nano were the core journals publishing high-quality literature on the latest advances in the field. Articles focused on dendritic cells and drug delivery accounted for a large percentage of this field. Keywords such as regulatory T cells, tumor microenvironment, immune checkpoint blockade, drug delivery, photodynamic therapy, photothermal therapy and tumor-associated macrophages were among the hottest themes with high maturity, while dendritic cells, vaccine, and T cells are likely to become popular and emerging research topics in the future. Conclusions The combined treatment of nanomaterials and antitumor immunotherapy, namely antitumor nano-immunotherapy, has received increasing attention. Antitumor nano-immunotherapy is undergoing a transition from simple to complex and from phenotype to mechanism. Graphical abstract Supplementary Information The online version contains supplementary material available at 10.1186/s12951-023-02278-3. Keywords
Supplementary Information
Acknowledgements Not applicable. Author contributions Conceptualization: WC, GDC, MMX, BC. Methodology: WC, MYJ, WGZ. Software: WC, MYJ, WGZ, KY. Formal analysis: WC, MYJ, YXC. Investigation: WC, WGZ, JJC. Data curation: WC. Project administration: WC, MYJ, WGZ. Writing–original draft preparation: WC, MYJ, YXC. Writing–review and editing: WC, MYJ, JJC. Visualization: WC, WGZ, KY. Funding acquisition: GDC, BC, MMX. Funding This work was supported by the mission book of promotion program of basic and clinical collaborative research of Anhui Medical University (2022xkjT028), the Anhui Provincial Natural Science Foundation (2208085MH240), the Scientific Research Project of Anhui Provincial Department of Education (2022AH051167), the Anhui Quality Engineering Project (2020jyxm0898, 2020jyxm0910, 2021jyxm0727), the Anhui Medical University Clinical Research Project (2020xkj176), the Anhui Health Soft Science Research Project (2020WR01003). Availability of data and materials The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors. Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Competing interests The authors declare that no competing or competing interests exist.
CC BY
no
2024-01-15 23:43:48
J Nanobiotechnology. 2024 Jan 13; 22:30
oa_package/8b/d0/PMC10788038.tar.gz
PMC10788039
38218802
Introduction Periodontitis is a common chronic infectious and inflammatory disease affecting people worldwide. Its etiology mainly includes direct bacterial damage to the periodontal tissues and bacteria-induced dysregulation of the host immune response [ 1 ]. Periodontitis is characterized by enduring inflammation of the tissues that support the teeth, destruction of the periodontal ligaments, and progressive loss of alveolar bone around the teeth [ 2 ]. Recently, it has been shown that periodontitis can contribute to several systemic illnesses, possibly because pro-inflammatory cytokines or oral bacteria enter the bloodstream, trigger the body's immune response, or act through other related mechanisms [ 3 ]. Multiple sclerosis (MS) is an autoimmune disease that causes inflammatory demyelinating lesions of the white matter in the central nervous system [ 4 ]. Even though the cause of MS is unclear, current findings suggest that environmental and genetic variables contribute to the disease's development [ 5 ]. Several environmental factors, such as infection, latitude, vitamin D deficiency, and smoking, contribute to the development of MS [ 6 ]. Research has shown that bacterial infections may be a crucial factor in the etiology of MS and have been identified as pathogenic environmental factors in its pathogenesis [ 13 ]. In addition, some pathogenic or symbiotic bacteria can mediate MS by activating Th17 cells to produce inflammatory factors. Studies have shown that Porphyromonas gingivalis (P. gingivalis) is significantly elevated in patients with MS, and P. gingivalis is also one of the main causative agents of periodontitis [ 7 ]. Moreover, people with periodontitis are more susceptible to MS, and periodontal infections may worsen MS symptoms [ 8 ]. These findings suggest that there could be links between periodontitis and MS; however, the molecular mechanisms and pathological interactions between the two remain unclear. As microarray and high-throughput sequencing technologies continue to advance rapidly, bioinformatics techniques are frequently used to investigate the crosstalk between diseases in order to reveal the connections between their cellular and molecular mechanisms. In this study, we explored potential crosstalk genes between periodontitis and MS through bioinformatics methods and analyzed the interactions between these genes and immune cells to gain a better understanding of the potential mechanisms linking periodontitis and MS. Additionally, three candidate biomarkers for periodontitis and MS were identified using bioinformatics tools and further validated by qPCR and immunohistochemical staining, suggesting that they may serve as biomarkers for predicting the occurrence of periodontitis and MS.
Materials and methods Data download Gene expression data for periodontitis and MS were downloaded from the Gene Expression Omnibus (GEO) database ( https://www.ncbi.nlm.nih.gov/geo/ ). In the periodontitis dataset, GSE16134 (based on the GPL570 platform) was used as a test cohort with 310 gingival papillae (241 “diseased” and 69 “healthy”), and GSE10334 was used as a validation cohort with 247 gingival papillae (183 “diseased” and 64 “healthy”). The MS dataset contains GSE108000 (based on the GPL13497 platform) and GSE135511 (based on the GPL6883 platform), and we combined GSE108000 and GSE135511 into a new dataset by using the “SVA” R package to remove batch effects. The combined dataset includes 20 healthy controls and 70 MS samples. In addition, to assess the effectiveness of the diagnostic process, we downloaded the GSE38010 dataset (based on the GPL570 platform), which contains 2 healthy controls and 5 MS samples. Identification of DEGs To normalize the datasets, R (4.2.3) software was used. Afterward, we identified differentially expressed genes (DEGs) from GSE16134 and the combined dataset of GSE108000 and GSE135511 by using the R package “limma” with adjusted P values < 0.05 and |log FC|≥0.8. WGCNA network construction and module identification The co-expression network of periodontitis (GSE16134) and MS (a merged dataset of GSE108000 and GSE135511) was constructed using the WGCNA package in R. A soft threshold was used to ensure a scale-free network, which is advantageous for subsequent network construction. Gene modules were identified using hierarchical clustering trees, and strongly connected gene modules were constructed using hierarchical clustering based on the topological overlap matrix (TOM). Pearson’s correlation coefficient was calculated to analyze relationships between the various modules and diseases. The module showing the highest correlation with the disease was selected, and the genes within this module were obtained. Identification of shared genes and pathway enrichment By drawing Venn diagrams, the shared genes identified by WGCNA and DEG analysis were obtained. Then, we explored functions and pathways associated with these genes through Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses using the “clusterProfiler” and “org.Hs.eg.db” packages [ 9 – 12 ]. Feature selection by the least absolute shrinkage and selection operator To discover hub genes with the best diagnostic efficacy among the shared genes identified above between periodontitis and MS, we utilized the “glmnet” package in R to conduct the least absolute shrinkage and selection operator (LASSO) regression. Candidate biomarker expression levels and diagnostic value We utilized the “ggplot2” package in R software to examine expression levels of the hub genes in periodontitis and MS samples. To assess the diagnostic efficacy of potential biomarkers on the periodontitis (GSE16134) and MS (a merged dataset of GSE108000 and GSE135511) datasets, we used receiver operating characteristic curves (ROCs) generated with the “pROC” package in R. Furthermore, we verified the diagnostic efficiency of the potential biomarkers using two external datasets, GSE10334 and GSE38010. ssGSEA We analyzed the infiltration of immune cells in diseased and healthy samples through ssGSEA using the “GSVA” R package. Then, we explored links between potential biomarkers and infiltrating immune cells through the Spearman method.
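The analysis pipeline described above can be illustrated with a minimal R sketch. This is not the authors' code: object names such as expr_pd (periodontitis expression matrix, genes × samples), group_pd (sample labels), expr_ms1/expr_ms2 (the two MS matrices), shared_genes, and immune_gene_sets are hypothetical placeholders, the WGCNA module-detection step is omitted for brevity, and function interfaces may differ slightly between package versions.

```r
library(sva)     # ComBat batch correction
library(limma)   # differential expression
library(glmnet)  # LASSO feature selection
library(pROC)    # ROC curves / AUC
library(GSVA)    # ssGSEA immune-cell scores

## 1) Merge the two MS cohorts and remove batch effects
expr_ms <- cbind(expr_ms1, expr_ms2)                       # genes x samples
batch   <- c(rep("GSE108000", ncol(expr_ms1)),
             rep("GSE135511", ncol(expr_ms2)))
expr_ms <- ComBat(dat = as.matrix(expr_ms), batch = batch)

## 2) DEGs with limma (adjusted P < 0.05 and |logFC| >= 0.8)
group  <- factor(group_pd, levels = c("healthy", "diseased"))
design <- model.matrix(~ group)
fit    <- eBayes(lmFit(expr_pd, design))
tab    <- topTable(fit, coef = 2, number = Inf, adjust.method = "BH")
degs   <- subset(tab, adj.P.Val < 0.05 & abs(logFC) >= 0.8)

## 3) LASSO regression on the shared genes to select diagnostic candidates
x     <- t(expr_pd[shared_genes, ])                        # samples x genes
y     <- as.numeric(group) - 1                             # 0 = healthy, 1 = diseased
cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 1)
coefs <- coef(cvfit, s = "lambda.min")
hub   <- setdiff(rownames(coefs)[coefs[, 1] != 0], "(Intercept)")

## 4) ROC curve and AUC for each candidate biomarker
for (g in c("FAM46C", "CFI", "DDIT4L")) {
  r <- roc(response = y, predictor = as.numeric(expr_pd[g, ]))
  cat(g, "AUC =", round(auc(r), 3), "\n")
}

## 5) ssGSEA scores for the immune-cell signatures and Spearman correlations
scores  <- gsva(as.matrix(expr_pd), immune_gene_sets, method = "ssgsea")
cor_cfi <- apply(scores, 1, function(s)
  cor(s, as.numeric(expr_pd["CFI", ]), method = "spearman"))
```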
Gingival biopsy and peripheral blood collection Ten human gingival tissue samples (5 from patients with periodontitis and 5 from healthy controls) were obtained. In addition, our study included 5 patients with MS and 10 healthy volunteers, from whom peripheral blood was obtained for the extraction of peripheral blood mononuclear cells (PBMCs). Inclusion criteria included patients diagnosed and treated for the first time, patients with complete medical records, and patients without systemic disorders. All studies were approved by the Ethics Committee of the Affiliated Stomatology Hospital of Anhui Medical University and the First Affiliated Hospital of Anhui Medical University. RNA collection and qRT-PCR A Ficoll (Histopaque; Sigma–Aldrich, Zwijndrecht, The Netherlands) density gradient was used to extract PBMCs through centrifugation. RNA from gingival tissue and PBMCs was extracted using TRIzol reagent (Invitrogen). cDNA was synthesized from 2 μg of total RNA according to the instructions of the cDNA Reverse Transcription Kit (Takara, Tokyo, Japan). Subsequently, qRT-PCR was performed using the Stratagene Mx3000P system (Agilent Technologies, USA) and SYBR Green Master Mix (11,701, Accurate Biology). GAPDH was used to normalize gene expression levels, and the comparative Ct method (2^−ΔCt) was used to compute expression values. All experiments were repeated more than three times. Supplementary Table S1 contains a list of primers. Immunohistochemical staining of gingival tissue The collected gingival tissues were preserved using 4% paraformaldehyde and then embedded in paraffin. The paraffin-embedded tissue was cut into serial 4-μm-thick sections and then deparaffinized for antigen retrieval. Subsequently, these slides were treated with goat serum and then incubated with antibodies. After that, 3,3’-diaminobenzidine tetrahydrochloride (DAB) and hematoxylin were used to stain the sections. Microscope images were captured and processed using image-processing software (ImageJ v 1.48). Statistical analysis We utilized GraphPad Prism 8.0 for both conducting statistical analysis and creating visual representations. All results are expressed as mean ± standard deviation. The method chosen for statistical analysis was the unpaired t-test ( P < 0.05).
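As a concrete illustration of the comparative Ct calculation and the unpaired t-test described above (the authors used GraphPad Prism; R is used here only for illustration), the following minimal sketch works through entirely hypothetical Ct values for one target gene normalized to GAPDH:

```r
## Hypothetical Ct values: 5 control and 5 periodontitis samples
ct <- data.frame(
  group     = rep(c("control", "periodontitis"), each = 5),
  ct_target = c(27.8, 28.1, 27.5, 28.3, 27.9, 25.9, 26.2, 25.5, 26.0, 26.4),
  ct_gapdh  = c(18.2, 18.0, 18.4, 18.1, 18.3, 18.1, 18.3, 18.0, 18.2, 18.4)
)

## Comparative Ct: normalize to GAPDH, then express as 2^-dCt
ct$dCt     <- ct$ct_target - ct$ct_gapdh
ct$rel_exp <- 2^(-ct$dCt)

## Mean ± SD per group and unpaired t-test (P < 0.05 considered significant)
aggregate(rel_exp ~ group, data = ct, function(x) c(mean = mean(x), sd = sd(x)))
t.test(rel_exp ~ group, data = ct)
```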
Results Identification of DEGs In GSE16134, a total of 315 DEGs, 217 upregulated and 98 downregulated, were found, while the combined dataset of GSE108000 and GSE135511 showed 227 DEGs, 150 of which were upregulated and 77 downregulated. The top 100 DEGs of these two diseases were shown in heatmaps (Fig. 1 a, b), and expression patterns of the DEGs in these diseases were displayed in volcano plots (Fig. 1 c, d). Ten genes ( FAM46C, COL4A1, SLC7A7, LY96, CFI, DDIT4L, CD14, C5AR1, IGJ, NEFL ) differentially expressed in both MS and periodontitis were revealed by combining the upregulated and downregulated genes (Fig. 1 e). WGCNA network construction and module identification After clustering the samples to check for outliers, no samples were removed from either GSE16134 or the combined dataset of GSE108000 and GSE135511 (Fig. 2 a, b). To ensure the creation of a scale-free network, a power of β = 12 was used for GSE16134, while the β value was 3 for the combined GSE108000 and GSE135511 datasets. The co-expression network generated by periodontitis samples consisted of 7 modules, whereas the network constructed using MS samples contained 9 modules (Fig. 2 c, d). The Pearson correlation coefficient was applied to calculate the associations of modules with disease. In GSE16134, the turquoise module had the largest positive correlation with periodontitis ( r = 0.67), while the blue module showed the most significant negative correlation ( r = -0.41). In the combined dataset of GSE108000 and GSE135511, the blue module had the largest positive correlation with MS ( r = 0.51), whereas the pink module had the most significant negative correlation ( r = -0.45). A total of 151 overlapping genes were obtained by intersecting the genes in the most strongly positively and negatively correlated modules (Fig. 2 e). Identification of shared genes and pathway enrichment Venn diagrams revealed that there were eight shared genes ( FAM46C , SLC7A7 , LY96 , CFI , DDIT4L , CD14 , C5AR1 , and IGJ ) that overlapped between periodontitis and MS, which were screened by WGCNA and DEG analysis (Fig. 3 a). The GO analysis indicated that these shared genes were most significantly associated with response to molecule of bacterial origin, positive regulation of response to external stimulus, and positive regulation of cytokine production (Fig. 3 b). According to the KEGG analysis, these genes were primarily enriched in alcoholic liver disease (ALD), pertussis, complement and coagulation cascades, Staphylococcus aureus infection, NF-κB signaling pathway, Toll-like receptor signaling pathway, lipid and atherosclerosis, and Salmonella infection (Fig. 3 c). Identification of potential shared diagnostic genes by least absolute shrinkage and selection operator A LASSO regression method was utilized to identify the diagnostic genes common to both disorders. Four core cross-genes were found in the periodontitis dataset GSE16134 (Fig. 4 a, b), and four core cross-genes were found in the MS dataset merged from GSE108000 and GSE135511 (Fig. 4 c, d). Three overlapping genes ( FAM46C , CFI , and DDIT4L ) were identified as the most effective diagnostic biomarkers for both periodontitis and MS by using a Venn diagram (Fig. 4 e). Candidate biomarker expression levels and diagnostic value Further analysis showed that the expression levels of the three candidate biomarkers ( FAM46C , CFI , and DDIT4L ) were all upregulated in both periodontitis and MS samples (Fig. 5 a, b). ROC curves were employed to evaluate the diagnostic efficacy of these potential biomarkers. In GSE16134 (Fig.
5 c), the diagnostic value of these three biomarkers was high: FAM46C (AUC = 0.896), CFI (AUC = 0.830), and DDIT4L (AUC = 0.795). In the dataset merged from GSE108000 and GSE135511 (Fig. 5 d), CFI (AUC = 0.775) and DDIT4L (AUC = 0.820) exhibited good diagnostic utility for MS, while FAM46C demonstrated an excellent diagnostic value (AUC = 0.946). Then, two external datasets (GSE10334 and GSE38010) were further used to verify the prediction accuracy of CFI , DDIT4L , and FAM46C . All three showed strong predictive performance (Supplementary Fig. S1 ). Immune infiltration analysis Furthermore, we explored the infiltration of immune cells in different samples. Both the heatmaps (Fig. 6 a, b) and violin plots (Fig. 6 c, d) showed significant changes in a variety of immune cells in the periodontitis dataset GSE16134 and the MS dataset merged from GSE108000 and GSE135511, especially T cells and B cells. Additionally, analysis of the correlation between immune cells and candidate biomarkers revealed a positive association of regulatory T cells, natural killer cells, mast cells, immature dendritic cells, and gamma delta T cells with CFI in both periodontitis samples and MS samples. In MS and periodontitis samples, there was a positive correlation between immature dendritic cells and DDIT4L . In samples with periodontitis and MS, type 1 T helper cells, T follicular helper cells, regulatory T cells, plasmacytoid dendritic cells, natural killer T cells, natural killer cells, MDSCs, mast cells, macrophages, immature B cells, gamma delta T cells, activated B cells, activated dendritic cells, activated CD4 T cells and activated CD8 T cells showed a positive correlation with FAM46C (Fig. 6 e, f). CFI , DDIT4L and FAM46C were upregulated in patients with periodontitis and MS compared with healthy controls To further validate the diagnostic values of the three candidate markers, qPCR and immunohistochemical staining were used to verify their expression in periodontitis and MS samples. qRT-PCR results indicated that mRNA levels of the pro-inflammatory cytokines (IL-1, IL-6, and IL-8) (Fig. 7 a) and also CFI , DDIT4L , and FAM46C (Fig. 7 b) were upregulated in patients with periodontitis compared with healthy controls. Similarly, qRT-PCR results (Fig. 7 c) indicated that the mRNA levels of CFI , DDIT4L , and FAM46C were upregulated in patients with MS compared with healthy controls. Results of immunohistochemical staining revealed that CFI , DDIT4L , and FAM46C were upregulated in periodontitis samples compared with healthy controls (Fig. 7 d).
Discussion Periodontitis, a chronic inflammatory disease, causes systemic inflammation and contributes to the development of several neurodegenerative diseases, such as MS [ 8 , 13 ]. However, the mechanisms remain to be revealed. Additionally, the lack of sufficient knowledge regarding the pathogenesis of MS has impeded the progress of treatment options. Through the use of large-scale data, bioinformatics techniques offer a thorough understanding of numerous illnesses at the molecular level [ 14 , 15 ]. Moreover, they are also particularly important for identifying potential biomarkers for the diagnosis and prognosis of human diseases [ 16 , 17 ]. Nevertheless, there have been few reports on their use for screening potential biomarkers in patients with periodontitis combined with MS. In this study, we used WGCNA to look into the common pathways by combining the transcriptomes of MS and periodontitis. Meanwhile, we uncovered possible intersecting genes, common pathways, and infiltration of immune cells between periodontitis and MS through multiple methods. Our study found that the most significant crosstalk genes between periodontitis and MS were FAM46C , SLC7A7 , LY96 , CFI , DDIT4L , CD14 , C5AR1 , and IGJ , which may be associated with the response to molecules of bacterial origin. Then, it was discovered that CFI , DDIT4L , and FAM46C are useful diagnostic markers for periodontitis and MS. T cells and B cells are essential in developing MS and periodontitis, according to the results of immune infiltration. The findings of this research imply that the primary genes involved in the cross-talk between MS and periodontitis are linked to a bacterial molecular response. As we all know, periodontitis is an inflammatory disease, and bacteria play an important role in its pathogenesis [ 18 ]. Studies have demonstrated that the pathogens of periodontitis include a variety of bacteria, such as Aggregatibacter actinomycetemcomitans , P. gingivalis , Tannerella forsythia , Treponema denticola , and Fusobacterium nucleatum . These bacteria can cause gingival cell death and periodontal tissue damage by secreting lipopolysaccharide (LPS) and a variety of toxic substances, inducing the production of a variety of inflammatory factors. These cytokines can also spread through the blood, causing a systemic inflammatory response that triggers MS [ 8 ]. In addition to being transmitted through the blood, some bacteria can directly stimulate nerve immune cells to activate an inflammatory response. For instance, glial cells, the main immune cells in the nervous system, have been discovered to be stimulated by P. gingivalis and its product lipopolysaccharide to produce pro-inflammatory mediators such as nitric oxide (NO) and prostaglandin E2 (PGE2), leading to demyelination and aggravating MS [ 19 ]. These results imply that bacterial factors are critical in developing MS and periodontitis and may account for part of the greater incidence of MS in patients with periodontitis. The KEGG enrichment analysis revealed that these crosstalk genes are involved in ALD, the complement and coagulation cascade, the NF-κB signaling pathway, and Toll-like receptor signaling pathways. Studies have indicated that P. gingivalis can worsen ALD by changing the composition of the intestinal microbiota and the immune response of the host [ 20 ]. Moreover, ALD is associated with an increased risk of MS development [ 21 ]. Meanwhile, the involvement of the complement and coagulation cascade in the mechanisms of periodontitis and MS has been demonstrated [ 22 – 24 ].
NF-κB is a signaling pathway that plays a crucial role in regulating immune and inflammatory responses. Activation of the NF-κB signaling pathway can enhance osteoclast differentiation and exacerbate periodontitis by increasing the expression of IL-1β and various inflammatory factors [ 25 , 26 ]. Furthermore, activation of the NF-κB signaling pathway can also impact MS by stimulating peripheral immunity and inflammatory responses in the central nervous system [ 27 ]. Additionally, Toll-like receptor signaling pathways have also been shown to mediate the development of periodontitis and MS by regulating immune responses [ 28 , 29 ]. This study preliminarily explored the potential immunological connection between MS and periodontitis. According to our findings, the immunological patterns of the MS and periodontitis groups were considerably different from those of the control group, with the increase in B cells and T cells being particularly noticeable. Periodontitis is caused by multiple pathogens invading the host and triggering an immune response. P. gingivalis , the main pathogenic bacterium responsible for periodontitis, has been identified to release a variety of virulence factors, which in turn trigger the production of pro-inflammatory molecules, leading to an increase in the number of local B cells and T cells. Peripherally activated T-cell and B-cell interactions additionally trigger MS. It is generally known that B cells play important roles in the development of MS. For instance, B cells in MS patients may secrete not only antibodies but also soluble toxic substances that harm oligodendrocytes and neurons [ 30 ]. Meanwhile, many B-cell subtypes have been observed in the cerebrospinal fluid (CSF) of MS patients, especially memory B cells and plasmablasts [ 31 ]. More importantly, the success of treating MS by depleting B cells using anti-CD20 antibodies strongly highlights the importance of B cells in MS [ 30 ]. Moreover, studies have shown that CD4 T lymphocytes, particularly T helper 1 (Th1) and T helper 17 (Th17) cells, can cross the blood–brain barrier in response to myelin antigens, infiltrate the central nervous system, and trigger inflammation. Among them, Th1 and Th17 cells can aggravate MS by secreting IFN-γ and IL-17 [ 32 ]. It is interesting to note that one study found that P. gingivalis infection can enhance T-lymphocyte responses to CNS autoantigens [ 19 ]. Therefore, periodontal disease may exacerbate MS by increasing the sensitivity of T and B cells to autoimmune antigens. To improve the accuracy of testing biomarkers, we chose datasets with sample sizes as large as possible. In our research, the periodontitis dataset GSE16134 contained 310 samples of gingival tissue, while the MS dataset, which was created by merging GSE108000 and GSE135511, contained 90 samples of brain tissue. The area under the receiver operating characteristic curve (AUC) was employed to evaluate the diagnostic efficacy of the biomarkers. The ROC curves showed that the AUC values of CFI , DDIT4L , and FAM46C in the diagnosis of periodontitis were 0.830, 0.795, and 0.896, while the AUC values in the diagnosis of MS were 0.775, 0.820, and 0.946. These results suggest that CFI , DDIT4L , and FAM46C have a high capacity to predict periodontitis and MS. Family with sequence similarity 46 member C ( FAM46C ), a non-canonical poly(A) polymerase, was found to be a significant crosstalk gene between periodontitis and MS.
Previous evidence has shown that FAM46C can inhibit tumor growth through a variety of pathways [ 33 ]. In addition, emerging evidence has shown that FAM46C can regulate immune responses. M1/M2 imbalance is one of the manifestations of periodontitis and MS [ 34 , 35 ]. Studies have found that FAM46C can promote M2 polarization and alleviate the immune response [ 36 ]. This may be one of the mechanisms by which FAM46C participates in periodontitis and MS. The results of the ssGSEA analysis showed that FAM46C was significantly positively associated with macrophages in periodontitis and MS samples, which further supports the involvement of FAM46C in the pathology of these two diseases through immune-mediated responses. DNA damage-inducible transcript 4-like ( DDIT4L ) is a gene that regulates autophagy, promoting it by inhibiting the mTOR signaling pathway [ 37 ]. As we know, autophagy plays a significant part in innate immunity and has been linked to many inflammatory diseases [ 38 ]. In the pathogenesis of periodontitis, autophagy has been found to activate and regulate inflammation by promoting or inhibiting cytokines and to lead to bone loss by disrupting the balance between osteogenesis and osteolysis [ 39 , 40 ]. In addition, studies have shown that autophagy has a dual function in MS. On the one hand, enhanced autophagy can promote myelin antigen presentation to CD4 T cells, thus aggravating MS. On the other hand, defective autophagy leads to abnormal clearance of inflammasomes and myelin debris in microglia and promotes pro-inflammatory phenotypes [ 31 ]. The above evidence indicates that DDIT4L may play a role in periodontitis-mediated MS by regulating autophagy. However, further experiments are needed to confirm this speculation. Complement factor I ( CFI ), a soluble serine protease, can regulate the complement system by inactivating C3b and C4b [ 41 ]. However, little research on CFI in periodontitis and MS has been reported; the evidence below suggests that CFI may participate in both diseases by regulating the complement system. Accumulated evidence has demonstrated that the complement system is implicated in multiple neurodegenerative diseases. It has been shown that the complement system is activated at the onset of MS, and the expression levels of C3 and C4 are increased [ 42 ]. In addition, the accumulation of C3b can cause damage to neurons through the activation of C5a [ 43 ]. The expression of C3, C3b, and C4b was also found to be elevated in the gingival tissue of individuals with periodontitis, and their expression was positively correlated with the severity of the condition. Meanwhile, using C3b/C4b inhibitors can alleviate alveolar bone loss in periodontitis [ 44 ]. These findings suggest that CFI may influence periodontitis-mediated MS by regulating the inactivation of C3b and C4b. In summary, our study revealed a correlation between periodontitis and MS using bioinformatic analyses, suggesting that improving oral hygiene and treating periodontitis may help reduce the risk of MS and providing guidance for the treatment of patients with periodontitis combined with MS. More importantly, FAM46C , SLC7A7 , LY96 , CFI , DDIT4L , CD14 , C5AR1 and IGJ were the most significant crosstalk genes between periodontitis and MS, and CFI , DDIT4L , and FAM46C can serve as potential biomarkers for the diagnosis of periodontitis and MS.
Immune responses driven by B cells and T cells are crucial in the pathogenesis of periodontitis and MS.
Background Although periodontitis has previously been reported to be linked with multiple sclerosis (MS), the molecular mechanisms and pathological interactions between the two remain unclear. This study aims to explore potential crosstalk genes and pathways between periodontitis and MS. Methods Periodontitis and MS data were obtained from the Gene Expression Omnibus (GEO) database. Shared genes were identified by differential expression analysis and weighted gene co-expression network analysis (WGCNA). Then, enrichment analysis for the shared genes was carried out by multiple methods. The least absolute shrinkage and selection operator (LASSO) regression was used to obtain potential shared diagnostic genes. Furthermore, the expression profile of 28 immune cells in periodontitis and MS was examined using single-sample GSEA (ssGSEA). Finally, quantitative real-time PCR (qRT-PCR) and immunohistochemical staining were employed to validate hub gene expression in periodontitis and MS samples. Results FAM46C , SLC7A7 , LY96 , CFI , DDIT4L , CD14 , C5AR1 , and IGJ were the shared genes between periodontitis and MS. GO analysis revealed that the shared genes exhibited the greatest enrichment in response to molecules of bacterial origin. LASSO analysis indicated that CFI , DDIT4L , and FAM46C were the most effective shared diagnostic biomarkers for periodontitis and MS, which were further validated by qPCR and immunohistochemical staining. ssGSEA analysis revealed that T and B cells significantly influence the development of MS and periodontitis. Conclusions FAM46C , SLC7A7 , LY96 , CFI , DDIT4L , CD14 , C5AR1 , and IGJ were the most important crosstalk genes between periodontitis and MS. Further studies found that CFI , DDIT4L , and FAM46C were potential biomarkers in periodontitis and MS. Supplementary Information The online version contains supplementary material available at 10.1186/s12903-023-03846-7. Keywords
Electronic supplementary material Below is the link to the electronic supplementary material.
Author contributions All authors have made substantial contributions to the conception and design of the study. E.W. and M.C. designed the project and wrote the manuscript. X.Z., T.W., S.S., M.S., and L.W. performed collection and/or assembly of data, data analysis, and interpretation. L.Z. and W.S. gave final approval of the manuscript and financial support. All authors read and approved the final manuscript. Funding This work was supported by the National Natural Science Foundation of China (82071770); Research Level Improvement Project of Anhui Medical University (2021xkjT001); Anhui Provincial Natural Science Foundation (2008085QH371); Scientific Research of BSKY in Anhui Medical University (XJ201601); Research and practical innovation projects of AHMU (YJS20230039); 2022 Disciplinary Construction Project in School of Dentistry, Anhui Medical University (2022xkfyhz02); and the Anhui Province Health Research Project (AHWJ2022b055). Data availability Publicly available datasets were analyzed in this study. These data can be found in the GEO data repository ( https://www.ncbi.nlm.nih.gov/geo/ ) under the accession numbers GSE16134, GSE108000, GSE135511, GSE10334 and GSE38010. Declarations Ethics approval and consent to participate This study was approved by the Ethics Committee of the Affiliated Stomatology Hospital of Anhui Medical University and the First Affiliated Hospital of Anhui Medical University. All methods were performed in accordance with relevant guidelines and regulations. Consent for publication Not applicable. Competing interests The authors declare no competing interests.
CC BY
no
2024-01-15 23:43:48
BMC Oral Health. 2024 Jan 13; 24:75
oa_package/df/66/PMC10788039.tar.gz
PMC10788040
38218966
Background Umbilical outpouchings (UO) in pigs are a clinical condition that poses a challenge for the pigs as well as the producers [ 1 ]. All UOs were previously considered to be umbilical hernias, but Andersen et al. [ 2 ] found that slaughter pigs recorded with an umbilical hernia had different aetiologies: the most frequent diagnoses were cysts with haemorrhagic or serous fluid followed by hernias with intestinal content. The study also showed that all sorts of combinations between hernias, cysts, fibrotic tissue, abscesses, and paddle-formed proliferations exist [ 2 ]. Since the various disorders were typically not distinguishable based on clinical findings, the term "umbilical outpouching" was introduced as a replacement for umbilical hernia [ 3 ]. UOs are suspected to have a multifactorial background; both genetic and infectious causes have been suggested [ 4 ], and the handling of pigs (e.g. how the piglets are lifted) might also be relevant. Pigs with UO need extra management; Danish legislation requires pigs with large UO to be stabled in sick pens with soft bedding, and the risk of UO pigs being unfit for transport is increased compared to pigs without UO [ 5 ]. Some of the UO pigs can be approved for transport if the herd veterinarian provides them with a transport fitness certificate and they are transported under special conditions, which adds costs for keeping UO pigs. Therefore, a high proportion of UO pigs are euthanized, contributing to increased mortality, a poorer economy, and reduced sustainability for pig production. The true prevalence of UO in intensive pig production is unknown. Earlier studies report varying prevalences, and comparisons between studies are difficult because the definitions of UO vary considerably. Searcy-Bernal and Gardner [ 4 ] examined 2958 pigs weekly and found a cumulative incidence of 1.5% with a definition including only hernias with a hernia ring of more than one cm. Mattson et al. [ 6 ] found a cumulative incidence of 8.3%, including hernias, abscesses, and other navel problems, in five Swedish herds reported to experience problems. Yun et al. [ 7 ] found occurrences between 0.7 and 2.3%, including both hernias and abscesses, in 6451 pigs in one Finnish herd. This study aimed to obtain knowledge about UOs in different Danish herds, build a foundation for benchmarking between herds, and add to an increasing understanding of the condition, which in the future can be used to generate new preventive interventions. A cross-sectional study was performed with three objectives: The primary objective was to estimate the within- and between-herd prevalence of UOs in Danish piglets and weaner pigs. The second objective was to describe the clinical characteristics of UOs such as size, texture, reducibility, and occurrence of ulcers. The third objective was to identify risk factors for the occurrence of ulcers on UOs.
Methods Study design A cross-sectional study was performed in 30 conventional herds visited once between September 2020 and May 2021. Piglets were examined during the last week before weaning, and weaners were examined between weeks three and eight after weaning. Sampling was performed at pen level by random selection of pens, and all pigs housed in sick pens were examined as a separate group (i.e. they were not part of the random sampling in the herd). The abdominal area was palpated on all selected pigs and all irregularities were recorded. Outpouchings measuring at least 2 × 2 cm were reported as UOs in this study. Sample size Herd was the primary study unit of interest. Based on project-budget and logistic considerations it was possible to collect data from thirty herds. Thirty herds were considered sufficient for obtaining a representative sample of the Danish conventional pig population and to obtain a valid estimate of the average within-herd prevalence of pigs with UOs. To estimate the within-herd prevalence, Eq. 1 [ 14 ] (calculation of the sample size needed to estimate a proportion) was used to calculate the sample size. Based on the literature, a presumed UO prevalence (P) was set to 2.5% and the maximum allowable error (L) was set to 1%. With a 95% confidence level, the resulting sample size was 937 pigs in each age group in each herd. The sample size was then adjusted for herd size using Eq. 2 [ 14 ] (calculation of the adjusted sample size). For piglets, n population was the number of weaned pigs per week in the specific herd, and for weaners, n population was the number of weaned pigs per week times six weeks (weeks 3–8 post-weaning). Thus, the sample sizes for a herd weaning 500 pigs a week were 327 piglets and 714 weaners. Selection of herds and pigs In July 2020 a list of pig herds was retrieved from the Danish Husbandry Register (CHR database). Inclusion criteria for herds were at least 200 sows and 800 weaned pigs registered on the same CHR number, and being within a three-hour drive from Copenhagen. Secondly, herds should use either Danbred or Danish Genetics and keep pigs for the entire nursery period. Pigs had to be Landrace/Yorkshire/Duroc crossbreds. The website https://www.randomizer.org/ was used to select 30 random herds, using the “math.random” method from the JavaScript programming language [ 15 ]. Herds appointed by the research randomizer were contacted by phone and asked to participate if they fulfilled the inclusion criteria. New random herds were drawn if herds did not fulfil the inclusion criteria, contact was not established, or herds declined to participate. The study population was piglets within one week before weaning and weaned pigs between three and eight weeks after weaning. Pigs were selected at pen level and all pigs in selected pens were subjected to clinical examination. Every n th pen was examined based on the required adjusted sample size, the number of pens with weaners at the required age, and the number of pigs in each pen. To ensure equal age distribution for the weaners, the number of included pens was divided equally between all weaner rooms with weaners at the right age. All pigs housed in sick pens were examined as a separate group, and not as part of the random sample. Clinical examination The piglets were lifted by technicians and palpated by one veterinarian. If there was any doubt or uncertainty about findings, they were confirmed visually. The weaners were screened by trained technicians who palpated the abdominal area of all pigs.
Every pig with an abnormality, bulge, or uncertainty was spray-marked by the technicians; as a result, only weaners with suspected outpouchings were examined by the vet and had sex recorded. Marked pigs were restrained with a herding board against a corner of the pen and examined standing. One veterinarian examined all the pigs. For pigs with outpouchings, the height and width in cm were registered, as well as reducibility (yes, partly, no), ulcers (yes, no), ulcer size (length x width cm), and texture (soft, mix, hard). The outpouchings and ulcers were categorised into three categories based on the sum of the height and width of the UO, and the sum of the length and width of the ulcer, as shown in Table 5 . Statistical analysis The herd is the experimental unit for all analyses except for the analysis of ulcers, where the experimental unit is the individual pig with UO. All data were analysed, and graphs were made, in RStudio [ 16 ] using functions from the Tidyverse packages [ 17 ]. Comparisons between herds with and without sick pens were made using a t-test, after a Shapiro–Wilk normality test and an F-test for comparing variances. Linear regression was used to look for correlations between piglet and weaner UO prevalence within individual herds. Risk factors for the occurrence of ulcers were first assessed by univariable analysis. Levels were reduced based on significant p-values and estimates before the multivariable model was built. For reducibility, “partly” and “no” were combined because they had similar estimates and no significant differences; the same applied to texture, where “mix” and “hard” were combined. A p-value lower than 0.05 was considered significant.
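Eq. 1 and Eq. 2 are not reproduced in the text above. Assuming they are the standard formula for the sample size needed to estimate a proportion, n = Z^2 P(1 - P)/L^2, and the usual finite-population adjustment, n_adj = n/(1 + n/N), the figures quoted in the Methods (937 pigs per age group; 327 piglets and 714 weaners for a herd weaning 500 pigs per week) are reproduced by the following R sketch:

```r
## Eq. 1 (assumed): sample size to estimate a proportion P with allowable error L
sample_size <- function(P = 0.025, L = 0.01, conf = 0.95) {
  Z <- qnorm(1 - (1 - conf) / 2)       # 1.96 for a 95% confidence level
  ceiling(Z^2 * P * (1 - P) / L^2)     # -> 937
}

## Eq. 2 (assumed): adjustment for a finite population of size N
adjust_for_population <- function(n, N) ceiling(n / (1 + n / N))

n <- sample_size()                      # 937 pigs per age group
adjust_for_population(n, 500)           # piglets, herd weaning 500/week -> 327
adjust_for_population(n, 500 * 6)       # weaners, weeks 3-8 post-weaning -> 714
```

Similarly, the univariable and multivariable ulcer risk-factor analysis corresponds to logistic regression models of roughly the form below; the data frame uo and its variable codings are hypothetical placeholders for the pig-level data described above.

```r
## One row per weaner with UO: ulcer (0/1), size, reducibility, texture
uo$size      <- relevel(factor(uo$size), ref = "small")       # small / medium / large
uo$reducible <- relevel(factor(uo$reducible), ref = "yes")    # "partly" and "no" combined
uo$texture   <- relevel(factor(uo$texture), ref = "soft")     # "mix" and "hard" combined

uni   <- glm(ulcer ~ size, family = binomial, data = uo)      # univariable model
multi <- glm(ulcer ~ size + reducible + texture,              # multivariable model
             family = binomial, data = uo)
exp(cbind(OR = coef(multi), confint.default(multi)))          # odds ratios with Wald CIs
```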
Results A total of 480 conventional herds fulfilled the inclusion criteria. From a randomised list of these, 62 herds were contacted, and 30 herd owners agreed to participate. The sample size within each herd ranged from 115 to 530 for piglets and from 448 to 853 for weaners. A total of 8052 piglets and 19,684 weaners were clinically examined. More than 90% (28/30) of the herds treated all piglets with antibiotics within 48 h postpartum in varying schemes. Of the two not using systematic antibiotics, one was in transition to becoming Danish Crown Pure Pork [ 8 ], whereas the other was a conventional herd. Figure 1 shows the prevalence for each herd including their confidence intervals for both age groups. There were no correlations between the levels of outpouchings in piglets and weaners in individual herds. The average within-herd prevalence of piglets with UO was 4.2% CI [3.3–5.1], ranging from 0.8 to 13.6% between herds, with a median of 4.1%. The average within-herd prevalence of weaners with UO was 2.9% CI [2.5–3.4], ranging from 1.0 to 5.3% between herds, with a median of 2.7%. Only seven herds had sick pens and therefore the possibility to move UO pigs to the sick pens (herds 17, 18, 19, 21, 22, 23, 27 & 29), thereby possibly introducing a falsely lower prevalence. Comparing the seven herds with sick pens to the 23 herds without sick pens revealed a significantly lower prevalence of total UO in herds with sick pens (2.1% vs 3.2%, p = 0.035), with the same distribution of small, medium, and large outpouchings as the herds without sick pens (Table 1 ). For all groups, small outpouchings were the most common and large outpouchings the least common. Approximately 60% of the pigs with outpouchings were females for both piglets and weaners; data are shown in Table 2 . Table 3 shows the prevalence of clinical characteristics of the outpouchings found in piglets and weaners. For all groups, the majority of the UOs were nonreducible and soft in texture. Less than one percent of the piglets with UOs had ulcers, whereas more than 10 percent of the weaners had ulcers on their outpouchings. For weaners with UO, size, reducibility, and texture were assessed as risk factors for the occurrence of ulcers. Table 4 shows the results from the univariable and multivariable analyses of the risk factors with the outcome ulcer. Based on those results, the odds of developing an ulcer on the UO were significantly higher when the UO was classified as medium (OR = 3.8, p < 0.001) or large (OR = 9.9, p < 0.001) compared to small UOs. In the multivariable analysis, the texture of the UO was not statistically associated with the development of ulcers (p = 0.087), whereas weaners with non-reducible or partly reducible UOs had significantly higher odds (OR = 2.4, p = 0.017) of developing an ulcer compared to weaners with a reducible UO.
Discussion This study provided good estimates for the prevalence of UO within Danish herds; it does not, however, tell the true prevalence of UO, since management procedures in the herds affect the observed prevalence. An example of this is the use of sick pens (which are mandatory by law in Denmark), which lowers the observed prevalence in our random sample. Many herds, including herds participating in this study, routinely euthanize pigs with UO. The study’s voluntary participation could favour herds more affected by outpouchings or make herds with problems more likely to decline to take part, which is another bias. The study confirmed our prior expectation of differences between herds and a general level of approximately three percent UOs in the weaners; it also showed a higher level of UOs in the piglets. Especially in the farrowing unit, the prevalence varied between the herds. We cannot, however, tell what caused the differences; a possible explanation is different weaning ages between the herds and, as such, more or less healed/inflamed umbilici and concurrent swellings. We know from other studies that UOs might disappear or appear as the pigs grow [ 6 , 9 , 10 ], thereby affecting the observed prevalence. The variation in the prevalence of UOs in the weaners is more easily explained and strongly relates to management procedures and market conditions. If the herds are dependent on selling all their weaners they will probably euthanize more UO pigs, because they will have less tolerance for UO pigs, compared to herds that can sell UO pigs as roaster pigs or keep UO pigs in sick pens or finisher stables until slaughter. The relationship between the listing price of pig meat and the cost of feeding the animals is also an important factor when farmers decide whether to keep UO pigs or not. The main reason behind fewer pigs showing outpouchings in this study compared to previous Danish studies [ 10 , 11 ] lies in the use of different definitions of umbilical outpouchings. Larsen et al. [ 10 ] examined pigs in two herds not using systematic antibiotics at birth and found an incidence of UOs of 9.5%, including every finding of a firm or rounded protrusion at the umbilicus. More than half of the UOs found at 5 weeks of age had disappeared when the pigs were 12 weeks old. Hovmand-Hansen et al. [ 9 , 11 ] found an incidence of 8% UO pigs in two commercial herds with a history of UO problems, and spontaneous regression was seen in 14% of the UO pigs. A UO was defined as a protrusion of more than 0.5 cm. This study focused on what we consider clinically relevant outpouchings, hence the introduction of a cut-off value for the size of UO. Petersen et al. [ 12 ] used a similar definition and found less than one percent of pigs with “a visible bulge at the umbilicus” when examining finisher pigs, not providing data from the sick pens, and knowing that many pigs with UO might have been euthanized before they reached the finisher unit. The apparent higher occurrence of UO among female pigs has also been found in other studies [ 10 , 11 ]. The reasons for this are unknown. Even though herds with sick pens did have a lower occurrence of pigs with UOs in their ordinary pens, they still had the same distribution of small, medium, and large UOs. One would expect that they would have had at least fewer large outpouchings. This probably reflects the fact that UOs are quite hard to spot and that, when they are found, it is often by chance.
The risk factor analysis for ulcers agrees with other studies [ 9 ]. Hovmand-Hansen et al. [ 9 , 11 ] also found that large outpouchings were associated with higher odds for the occurrence of ulcers and that reducible outpouchings had lower odds, even though the size definitions of outpouchings were not the same as the ones in this study. This study demonstrated that there were very few pigs with large ulcers in the sick pens, which likely reflects the fact that pigs in sick pens are more closely monitored and perhaps that pigs with large ulcers are deemed unlikely to heal and therefore euthanized when they are found, more than it reflects a healing effect of the sick pens. Euthanasia is often the most reasonable course of action since a large ulcer makes the pig unfit for transport.
Conclusions UOs are common in Denmark; with a prevalence of 2.9% in weaners and an estimated annual production of 32 million Danish pigs [ 13 ], almost a million pigs are affected yearly. Most of these pigs will have a small or medium UO. If the pigs have large outpouchings, the odds of ulcer occurrence increase significantly. Many of these pigs are wasted, challenging sustainability and economy. UOs' possible effects on the welfare of the pigs also need to be considered. More research is therefore needed, especially in the prevention of UOs. Another possibility is exploring the utilisation of mobile slaughter solutions. Processing the pigs directly at the farm would spare them the stress of transport, and minimize the number of wasted pigs, thereby making pig production more sustainable and humane.
Background Umbilical outpouchings (UO) in pigs present a welfare concern because of ulceration risk and complications. Danish legislation requires pigs with larger UOs to be housed in sick pens with soft bedding, and some UO pigs might not be suited for transport. Because of this, many UO pigs are euthanized, adding to the costs of pig production. The true prevalence of UO is unknown as no scientific reports with randomly sampled herds exist. This study aimed to estimate the prevalence of UO in Danish piglets and weaners and describe their clinical characteristics: size, texture, reducibility, and occurrence of ulcers. Lastly, risk factors for the occurrence of ulcers on UOs were investigated. Results A cross-sectional study was conducted in 30 Danish conventional herds, with at least 800 weaned pigs and 200 sows. The herds were selected randomly from the Danish Husbandry Register and visited once between September 2020 and May 2021. Piglets were examined during their last week in the farrowing unit, and weaners were examined between weeks three and eight after weaning. The abdominal area was palpated on all pigs, and all irregularities were recorded; the results presented are umbilical outpouchings measuring at least 2 × 2 cm. The within-herd prevalence of piglets with UO averaged 4.2%, ranging from 0.8 to 13.6% between herds. The within-herd prevalence of weaners with UO averaged 2.9%, ranging from 1.0 to 5.3% between herds. Approximately 80% of the UOs were classified as small or medium (< 7 cm in piglets / < 11 cm in weaners). Large outpouchings had significantly higher odds of ulcer occurrence (OR = 9.9, p < 0.001). Conclusion UOs are common in Denmark; with a prevalence of 2.9% in weaners and an estimated annual production of 32 million Danish pigs, almost a million pigs are affected yearly. Most of these pigs will have a small or medium UO. If the pigs have large UOs, the odds of ulcer occurrence increase significantly. Many of these pigs are wasted, challenging sustainability and economy. UOs might also affect the welfare of the pigs. More research is therefore needed, especially in the prevention of UOs. Keywords
Acknowledgements Thanks to participating herds and students assisting with data collection. Author contributions Conceptualization and funding acquisition KSP; Study design MLH, IL, CSK, TB, and KSP; Recruiting herds and data and sample collection in herds MLH; Statistical analysis MLH; Writing first manuscript draft MLH; Project administration KSP. All authors read and approved the final manuscript. Funding The research was funded by the Pig Levy Foundation. The funding body had no impact on study design, data collection/ analyses, interpretation, or manuscript writing. Pig Levy Foundation (Svineafgiftsfonden) Availability of data and materials The datasets used and analysed during the current study are available from the corresponding author upon reasonable request. Declarations Ethics approval and consent to participate This study was approved by the Animal Ethics Institutional Review Board, Department of Veterinary and Animal Sciences, Faculty of Health and Medical Sciences, University of Copenhagen. Assigned AEIRB Number: 2022-03-PNH-007A. All 30 farmers consented to participate in the study. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
CC BY
no
2024-01-15 23:43:48
Porcine Health Manag. 2024 Jan 13; 10:3
oa_package/e9/b5/PMC10788040.tar.gz
PMC10788042
38222175
Introduction Medication-associated tendinopathies and tendon ruptures have been described under treatment with several pharmaceutical agents, such as corticosteroids, statins, quinolones, and aromatase inhibitors [ 1 , 2 ]. Most frequently, the Achilles tendon is affected [ 3 - 5 ]. Several underlying mechanisms facilitating this clinical entity have been proposed, such as local hypoxia and impaired fibroblast activity, in combination with predisposing patient-related factors, including age and gender, as well as overuse caused by exercise and/or vigorous physical activity [ 6 , 7 ]. Novel B-Raf proto-oncogene, serine/threonine kinase (BRAF)/ mitogen-activated protein kinase kinase (MEK) targeting agents have been incorporated in the standard of care of BRAF mutated melanoma since 2011 and improved dramatically the therapeutic effects, inducing high objective response rates, and prolonging patient survival [ 8 ]. Most side effects of these agents are known and have been meticulously studied. However, despite the fact that these drugs have been in use for more than a decade, it is possible that some adverse events due to their use have not been observed yet. Hence, some long-term side effects are yet to be reported. The most frequent adverse events of BRAF/MEK targeting agents include pyrexia, skin rash, and hepatic enzyme elevation. Musculoskeletal complications, mainly muscle and joint aches, are reported at a low rate, affecting 1-2% of treated patients [ 8 ]. BRAF/MEK inhibitors have not yet been associated with tendinopathies. A case of spontaneous, non-traumatic, bilateral supraspinatus tendon rupture, occurring in a 65-year-old Caucasian male under prolonged treatment with dabrafenib plus trametinib for a stage IV, BRAF mutated melanoma, is presented.
Discussion A meticulous search of the literature indicates that this is the first case of tendon rupture associated with the dabrafenib/trametinib combination or any BRAF/MEK inhibitor combination. A case of multifocal tendon rupture in a 58-year-old male, under treatment with nivolumab plus ipilimumab for metastatic melanoma, has been recently described [ 9 ], but BRAF-directed treatment has not been suspected of inducing tendinopathies to date. Both nivolumab and ipilimumab are potent immune checkpoint inhibitors, successfully applied in metastatic melanoma treatment, acting in a totally different manner from targeted treatment with dabrafenib/trametinib. While dabrafenib/trametinib block proteins crucial to cellular proliferation, nivolumab and ipilimumab enhance T-lymphocyte cytotoxic activity by blocking immune suppressive receptors expressed on the T cell surface, known as programmed death receptor 1 (PD-1) and cytotoxic T-lymphocyte associated protein 4 (CTLA-4), respectively [ 8 , 9 ]. The combination of ipilimumab and nivolumab could lead to tendonitis and tendon rupture of autoimmune etiology, whereas there is no known mechanism for dabrafenib/trametinib-associated tendon damage. Medication-associated tendon rupture has been attributed to local hypoxia, frequently affecting critical tendon areas where blood flow is limited due to relevant anatomy [ 1 , 2 ]. Impaired metabolism and cell growth of tendon fibroblasts, together with increased matrix proteolytic activity and inhibition of tenocyte translocation to the site of tendon injury, are also among the proposed underlying mechanisms, as indicated by in vitro experiments [ 10 , 11 ]. Indeed, tendon degeneration has also been described in vivo in mouse models after quinolone treatment [ 12 , 13 ]. It has to be mentioned, though, that tendon rupture is not induced merely by the associated medications, as predisposing factors, such as female gender, older age, renal insufficiency, and hemodialysis, may be the basis of this damage, often in combination with vigorous physical activity [ 1 , 5 ]. Hence, although other factors may play an important role in tendon rupture, such as patients' age (the present patient was 65 years old), this report draws clinical attention to patients needing BRAF inhibitor treatment, especially those with coexisting tendon degeneration. At a microscopic level, collagen fiber disarrangement, hyaline or myxomatous degeneration, and increased metalloproteinase activity, as well as focal necrosis and degenerative vacuoles disrupting healthy tendon structure, have been reported in both humans and mouse models receiving corticosteroids and/or quinolones [ 14 - 19 ]. In the case presented here, trichoid vessel congestion and a synovial membrane reaction were described in the specimens from the affected tendons, with no signs of inflammation, and immunohistochemical inflammatory markers were not detected.
Conclusions Targeted treatment against BRAF-mutated melanoma has changed the prognosis for thousands of metastatic melanoma patients. In most cases, treatment is continued until disease relapse or progression or unacceptable toxicity, as there is no way to guarantee safe withdrawal without exposing the patient to increased relapse risk. Nevertheless, long-term adverse events associated with novel melanoma treatments may only now start to appear and be reported. Physicians should remain vigilant for early detection and offer treatment against adverse reactions of BRAF targeting agents that have not been systematically recorded yet but may affect patients in the long run. Additionally, a thorough investigation has to be conducted to understand further the pathophysiology and the prevention of this rare but significant side effect.
The B-Raf proto-oncogene, serine/threonine kinase (BRAF)/ mitogen-activated protein kinase kinase (MEK) targeting agents have become the treatment of choice for BRAF-mutated melanoma during the last decade. However, it is possible that some long-term adverse events of these drugs have not yet been reported. A case of bilateral spontaneous, non-traumatic, supraspinatus tendon rupture in a 65-year-old Caucasian male suffering from metastatic melanoma under prolonged and successful combination treatment with dabrafenib plus trametinib is presented. The damage could not be attributed to any other probable cause. The ruptured tendons were promptly repaired arthroscopically. Oncologists should remain vigilant for the early detection of potential side effects of BRAF/MEK targeting agents that have not been systematically recorded yet but may appear and affect patients in the long run.
Case presentation The 65-year-old patient was first examined 11 years ago due to a melanoma relapse on the right lateral chest wall. The primary lesion had been located on his anterior abdominal wall and had been surgically removed eight years earlier. It had been characterized as a stage IB, pT2aN0M0 melanoma, with a Breslow depth of 1.45 mm and Clark level IV. The tumor was BRAF V600E mutated. The initial sentinel lymph node biopsy was negative. No adjuvant treatment had been administered. Due to disease relapse, he underwent thorough clinical, laboratory, and imaging examinations for staging his disease. No other suspicious lesions had been recorded. Hence, systemic targeted treatment with dabrafenib (oral BRAF inhibitor, 150 mg twice daily) and trametinib (oral MEK inhibitor, 2 mg once daily) was initiated in the context of a clinical trial protocol. The patient enjoyed an impressive complete disease remission. Since the trial's termination, the dabrafenib/trametinib regimen has been consistently administered for more than 10 years under close medical surveillance. To date, there are no clinical or imaging findings suggesting disease recurrence. After completing 130 months under treatment uneventfully, he started complaining of pain and limited range of motion of his right shoulder, with milder similar symptoms in his left shoulder. Upon clinical examination, the patient experienced pain while lifting and lowering his arm, as well as at rest during the night. The Jobe test was positive on both sides (weakness and pain at requested shoulder abduction and internal rotation). The patient's remaining medical history was unremarkable, and he was not receiving any other medications except dabrafenib and trametinib at that moment. He was active with excellent performance status, but he did not report any heavy physical activity or overhead activities. Magnetic resonance imaging (MRI) revealed rupture of the supraspinatus tendon with approximately 2 cm retraction on both sides. Both tendons had degenerative signs, such as calcific tendinopathy, as well as signs of subacromial impingement (Figure 1 ). Biopsy of the affected tendons' areas revealed local trichoid vessel congestion and mild reactive lesions on the synovial membrane but was negative for immunohistochemical markers of inflammation. Both tendons were arthroscopically repaired with the use of two suture anchors. The patient had an uneventful recovery, and the dabrafenib/trametinib treatment was resumed after the mandatory one-month interruption for the surgical intervention. Twelve months postoperatively, the patient has active shoulder abduction up to 168 degrees on the right and 160 on the left side. He does not complain of any shoulder pain, and the Jobe test is negative. Since he had not been receiving any other drugs that could have caused the tendon ruptures for a long time before the incident, it is highly probable that the tendon ruptures were an adverse event of prolonged dabrafenib and trametinib treatment.
CC BY
no
2024-01-15 23:43:48
Cureus.; 15(12):e50567
oa_package/a1/6d/PMC10788042.tar.gz
PMC10788043
38222221
Introduction Continuous laryngoscopy during exercise (CLE) is a test that uses a flexible distal-chip laryngoscope secured on a head apparatus, and it is usually performed in conjunction with the cardiopulmonary exercise test (CPET) in a laboratory setting. In addition to the baseline abnormalities visualized by conventional laryngoscopy, CLE allows for the assessment of dynamic laryngeal responses during exercise. It provides real-time visualization of laryngeal movements during physical activity and has become the gold standard in diagnosing exercise-induced laryngeal obstruction (EILO) [ 1 ]. Exercise-induced laryngeal obstruction is a relatively prevalent entity in young people that usually presents with exertional stridor, coughing, and dyspnea caused by transient closure of the larynx [ 2 - 4 ]. The diagnosis of EILO is not straightforward because its features can overlap with exercise-induced asthma, which can result in inappropriate therapy. Several studies have suggested that EILO and asthma often coexist [ 5 - 7 ]. Exercise-induced laryngeal obstruction usually arises from supraglottic obstruction, although in some instances, it can result from inappropriate closure of the glottis, and a combination of both can occur [ 1 ]. Continuous laryngoscopy during exercise has been safely used across a wide range of ages, including the pediatric population, and it plays a pivotal role in guiding the management and follow-up of patients with EILO [ 8 ]. Speech therapy and various glottic and supraglottic surgical procedures have been incorporated into the management strategies of EILO. However, despite substantial technological advances, validated diagnostic and treatment algorithms have not yet been established [ 9 - 11 ]. Although the CLE procedure has been recognized as the diagnostic technique of choice for EILO, our clinical experience suggests that its utility reaches beyond this scope and can significantly influence therapeutic decisions.
Discussion This report describes our experience of managing two cases of children with inducible laryngeal obstruction due to various etiologies. Information provided by CLE influenced our therapeutic decisions for both patients. Several reports have described the diagnostic use of CPET with CLE in patients with EILO and asthma [ 5 , 12 ]. However, with regard to its utility beyond diagnostic capabilities and further therapeutic implications, reports are lacking. Since its introduction to clinical practice in 2006, CLE has become an essential tool in the diagnosis of patients with various degrees of functional laryngeal dysfunction. Traditionally, options for evaluation were limited to rigid and flexible laryngoscopy and bronchoscopy. In contrast to these more conventional methods, CPET with CLE offers the benefit of direct visualization of laryngeal structures, with a focus on dynamic changes during different phases of physical activity. Dyspnea is a common symptom in both healthy children and children with surgical airways. In patients who underwent a procedure involving the upper airway, the perceived exertional symptoms may be caused by additional dynamic laryngeal responses resulting from the surgical alterations to the airway. Testing at rest, with methods such as PFTs and office laryngoscopies, does not always provide a complete picture. It can result in a delayed or incorrect diagnosis, negatively impacting treatment recommendations. Cardiopulmonary exercise testing has tremendous potential for determining exercise capacity and identifying ventilation abnormalities associated with exercise symptoms. Continuous video laryngoscopy assists in determining the cause of abnormal ventilation [ 1 ]. Olin et al. demonstrated that laryngeal obstruction is more severe during exertion at peak work capacity, submaximal exercise, and recovery than the baseline obstruction present at rest [ 8 ]. In our report, CLE not only contributed to the identification of the underlying laryngeal pathology but also played a role in successful treatment alteration in both patients. In patient one, CPET with CLE was chosen due to his worsening exertional dyspnea, as conventional endoscopy of the larynx did not identify pathology that would be responsible for the degree of exertional impairment seen in this patient. Continuous laryngoscopy during exercise allowed for further assessment of the laryngeal structures under exertion, enabling the selection of a conservative, stepwise intervention. In patient two, the initial endoscopic inspection revealed only a mildly reduced glottic aperture; however, paradoxical adduction of the vocal cords with forced inspiration and stridor was observed during exercise using CLE. In this patient, the use of CLE allowed for a non-surgical intervention that improved the patient’s symptoms. Overall, CLE was well tolerated in both cases, implying a favorable safety profile and suggesting that it could be used in large-scale studies. The primary reason for exercise termination was dyspnea in both patients. Baseline and exercise ECG demonstrated no evidence of ischemic changes in our patients. Heart rate and blood pressure responses were normal in case one, with an appropriate increase throughout the exercise. In case two, the heart rate response was mildly exaggerated for the workload, suggesting deconditioning. No adverse events were reported, and both patients returned quickly to their baselines upon the termination of the study.
A systematic review conducted by Thomader et al. reported that 10 (2.2%) out of the 455 subjects who underwent CLE experienced adverse events, including laryngeal spasm, procedural anxiety, hyperventilation attacks, vasovagal collapse during local anesthesia of the nose, and an asthma-like attack [ 5 ]. Although this proportion is not negligible, our experience and that of other centers suggest that the benefits outweigh the drawbacks, as information obtained during CLE can significantly alter treatment decisions [ 13 ]. While electromyography of the laryngeal muscles can objectively demonstrate paradoxical movement of the vocal cords, it requires a rather specialized technique and equipment, and CLE may be superior. A study conducted by Hull et al. indicated that CLE offers a robust means of characterizing varying degrees of laryngeal dysfunction during exercise. This highlights the necessity for future work to determine whether targeted laryngeal intervention can improve dyspnea and exercise capacity in severe asthma [ 12 ]. Our report suggests that the scope of CLE extends beyond its original purpose and that findings from the study can have significant implications for the clinical decision-making process. As the focus of treatment for laryngeal anomalies shifts toward personalized medicine, CLE will become an even more prominent method for evaluating laryngeal dysfunction. Given this trend, the increase in diagnostic yield, and the minimal risk associated with the procedure, CPET with CLE will remain critical in the evaluation of laryngeal diseases and will provide safe and effective insight into guiding the management of individual patients. The follow-up study by Maat et al. concluded that relief of symptoms was experienced even in patients who were treated with information about the EILO diagnosis alone, although relief of symptoms and normalization of laryngeal function were significantly greater in the surgically treated group [ 10 ]. With further advances in technology, CLE will not only continue its current role in clinical practice but will also expand its scope as a minimally invasive advanced diagnostic tool. The involvement of a multidisciplinary approach makes the interpretation of findings and decision-making more reliable [ 11 ]. Select patients with a more complex medical history associated with combined aerodigestive pathologies may derive more benefit from a decision-making standpoint. Since CLE has not been formally studied in randomized trials, further studies are needed to compare different diagnostic options to better define the appropriate indications and timing of CLE in order to better guide management.
Conclusions Continuous laryngoscopy during exercise is a safe and well-tolerated tool that can help diagnose various degrees of dynamic laryngeal dysfunction, a condition that constitutes a serious problem among children and adolescents and severely impairs their quality of life. Appropriate evaluation and diagnosis can help refine the next steps in the diagnostic process, prevent unnecessary diagnostic testing, and aid in tailoring the management of individual patients.
Exertional dyspnea is a common and disabling symptom in otherwise healthy children and adolescents, as well as in children with baseline airway abnormalities. It impairs quality of life and may be associated with fatigue and underperformance in sports. Exertional dyspnea can have a wide variety of structural and psychogenic causes. Exercise-induced laryngeal obstruction (EILO) is a relatively prevalent entity in young people that usually presents with exertional stridor, coughing, and dyspnea caused by transient closure of the larynx. In more complex cases where conventional tests such as pulmonary function tests (PFTs), chest imaging, ECG, and echocardiography are unrevealing, continuous laryngoscopy during exercise (CLE) tests may provide diagnostic utility. In addition to the baseline abnormalities visualized by conventional laryngoscopy, CLE can assess dynamic laryngeal responses during exercise. This article describes the clinical characteristics of two pediatric patients with various degrees of laryngeal dysfunction at baseline and the utility of CLE testing in tailoring management strategies.
Case presentation Case one A 12-year-old boy with a complex medical history of prematurity, severe bronchopulmonary dysplasia, developmental delay, and prolonged mechanical ventilation presented with worsening exertional dyspnea. He had fixed obstruction of the airway at the supraglottic and glottic levels (arytenoid complex and immobile vocal cords) and had undergone previous tracheal reconstructions at an outside institution. He had a history of persistent wet coughs and nighttime continuous positive airway pressure (CPAP) dependence. He presented for the evaluation of exertional stridor as well as a breathier and weaker voice. Given the substantial risks of surgical interventions that could potentially worsen the voice/airway balance, and a concern for dynamic EILO, he underwent a multidisciplinary team evaluation. Spirometry testing for baseline evaluation of lung function was limited due to poor technique; however, there was evidence of fixed airway obstruction as seen on the flow volume (FV) loop (Figure 1 ). Flexible and rigid bronchoscopy under sedation revealed supraglottic obstruction, mostly from the left arytenoid complex and epiglottic petiole prolapse, posterior glottic stenosis, subglottic stenosis, and tracheomalacia. As the perceived exercise symptoms were disproportionate to the already-known fixed airway obstruction, a decision to perform CPET with CLE was made. Continuous laryngoscopy during exercise confirmed fixed abnormalities of the upper airway. At rest, the vocal folds appeared medialized with minimal abduction with inspiration (Figure 2A ). The laryngeal inlet appeared flat in the anterior-posterior dimension secondary to prolapse of the petiole of the epiglottis. At peak exercise, CLE revealed the arytenoid complexes tethered together with minimal movements of the vocal cords (Figure 2B , Video 1 ). The left arytenoid was prolapsing into the laryngeal inlet, obscuring one-third of the aperture with exertion. There was a mismatch between ventilation demands and delivery through the narrowed laryngeal aperture, which caused the stridor to worsen as the exercise progressed. Given these new findings on CLE, a thoughtful, step-wise, minimally invasive approach to surgical interventions was planned to optimize voice quality and improve exercise tolerance while minimizing the risk of aspiration. Case two An 11-year-old boy was evaluated for worsening exertional dyspnea and stridor. His medical history was significant for tracheomalacia, aortopexy, anxiety, mild asthma, and a type 2 laryngeal cleft repaired endoscopically using bilateral arytenoid flaps. There were no respiratory exacerbations or aspiration events, but he continued to experience stridor, dyspnea, and choking sensations in the airways. Symptoms slightly improved after adenotonsillectomy and arytenoid debulking. Pulmonary function tests (PFTs) showed mild obstruction without a response to bronchodilators. Laryngoscopy revealed symmetric vocal cord motion and a slight limitation of abduction bilaterally. Due to the exertional nature of his symptoms, CPET with CLE was recommended to evaluate for additional dynamic EILO as a cause of his worsening symptoms. The results of his CPET with CLE showed borderline normal exercise capacity and limited abduction of the vocal cords with tethering of the posterior glottis at rest (Figure 3A ).
A marked reduction in breathing reserve and respiratory responses during peak exercise, together with clear evidence of paradoxical adduction of the vocal cords on CLE, was suggestive of a combination of anatomical and functional abnormalities (Figure 3B , Video 2 ). The exaggerated heart rate response to the level of exercise was suggestive of an element of deconditioning. The paradoxical vocal fold movements were attributed to the panic attacks experienced with intense exercise activities, mostly resulting from the failure of compensatory ventilatory responses caused by the upper airway obstruction. He took part in a program of graded exercise rehabilitation and speech therapy. At the follow-up visit after one year, he had lost 25 pounds, and his exercise symptoms had resolved.
CC BY
no
2024-01-15 23:43:48
Cureus.; 15(12):e50572
oa_package/0d/f3/PMC10788043.tar.gz
PMC10788044
37191257
Background Empowering the FMs of TBI patients has received little attention in nursing science. Most previous studies have focused on the needs of FMs ( de Goumoëns et al., 2018 ; Kreutzer et al., 2018 ) and the relationships between life satisfaction ( Manskow et al., 2017 ), perceived burden ( Doser & Norup, 2016 ), and the functioning of the TBI patient and FMs after hospitalization. These studies reported that FMs’ unfulfilled needs in the acute phase of TBI patient care were related to insufficient emotional support, professional support, and involvement with care ( de Goumoëns et al., 2018 ). In addition, research has reported that FMs’ needs do not decrease over time but actually increase ( Anke et al., 2020 ; de Goumoëns et al., 2018 ). Furthermore, FMs’ feelings of burden ( Doser & Norup, 2016 ) and depression increased and were related to decreased life satisfaction ( Manskow et al., 2017 ), especially in the context of severe brain injury ( Rasmussen et al., 2020 ). Therefore, professionals should recognize and attend to the needs of FMs in the acute phases of TBI to better support and empower FMs in order to prevent these negative consequences for the individual and the family. Empowerment is a mutual process and a multidimensional concept that has been defined in several disciplines, including education, politics ( Mehta & Sharma, 2014 ), social sciences ( Rubin & Babbie, 2016 ), psychology ( Jones et al., 2011 ), feminist studies ( Rodwell, 1996 ), and nursing science ( Friend & Sieloff, 2018 ; Wåhlin, 2017 ). In nursing science, the concept of empowerment has been studied from the perspectives of patients ( Ania-Gonzalez et al., 2022 ), health care professionals ( Papathanasiou et al., 2014 ), and management ( Garcia-Sierra & Fernandez-Castro, 2018 ), but less from the viewpoint of TBI patients and their FMs. Empowerment has been described both as a process and an outcome ( Friend & Sieloff, 2018 ). Empowerment as a process means offering hope and confidence and encouraging people to promote their well-being, decision-making, and self-management ( Chen & Li, 2009 ). Empowerment as an outcome, in turn, means that the individual feels able to manage and control their situation ( Sakanashi & Fujita, 2017 ). From an empowerment perspective, FMs require support, knowledge, and guidance from health care professionals during the acute phase of TBI patient hospital care ( Sakanashi & Fujita, 2017 ) to manage the complex, life-changing situation and adapt to it ( Kreutzer et al., 2018 ). Empowerment of FMs requires that the information a health care professional provides is multifaceted and corresponds to the FMs’ expectations and needs in a manner that can also benefit decision-making ( Sigurdardottir et al., 2015 ). Qualities such as authenticity, communication, listening, and equality are needed for an empowered mutual relationship between families and health care professionals, with acceptance and support being the key factors, thereby creating an atmosphere where FMs can express their feelings and concerns ( Wåhlin, 2017 ). The key elements of providing empowering support to FMs relate to equal and trustful relationships between the professionals and the FMs ( Sakanashi & Fujita, 2017 ). FMs can develop a positive belief in themselves and the future in this process.
Professional competence in supporting FMs to achieve the skills needed to manage TBI survivors’ care independently after hospitalization and to overcome challenges through guidance and emotional support is also important in the empowerment process. Furthermore, health care professionals must meet FMs’ needs and expectations with knowledge in order to reach the potential for empowerment ( Funnell et al., 1991 ; Nygårdh et al., 2012 ; Wåhlin, 2017 ). Previous systematic reviews have examined the experiences, requests for support, and needs of FMs of TBI patients in the hospital ( Coco et al., 2011 ; Oyesanya, 2017 ; Wetzig & Mitchell, 2017 ). According to recent studies ( de Goumoëns et al., 2018 ; Doser & Norup, 2016 ; Manskow et al., 2017 ), FMs reported that they did not receive enough information, support, and guidance from health care professionals. As a result, FMs experienced a long-term feeling of burden and a reduced quality of life. There is a gap in the available knowledge from the perspective of providing empowering support for FMs in the acute phase of TBI patients’ hospital treatment. Moreover, there is a lack of nursing recommendations and structured care procedures prepared to support FMs in the acute phase of TBI patients’ hospital treatment. Research focusing on empowering support for FMs in the acute phase of TBI patient care is significant, both for increasing health care professionals’ awareness of FMs’ needs and for improving care procedures to support and empower FMs experiencing the TBI of a loved one. This systematic review aimed to identify, critically evaluate, and synthesize the available evidence on empowering support for FMs in the acute phase of TBI patient hospital treatment, including emergency care, intensive care unit (ICU) care, and inpatient care. Specifically, we wanted to (a) identify factors that contribute to FMs’ empowerment and (b) understand empowering support from the perspective of FMs of TBI patients. The research question that guided this study was: What is empowering support for the FMs of TBI patients in the acute phase of TBI patient hospital treatment, and what are the influencing factors?
Method Design This mixed-methods systematic review explored FMs’ perspectives on empowering support in the acute phase of TBI patients’ hospitalization. A convergent integrated design approach was chosen because it enables gathering information about the care procedures that families found helpful while also exploring the experiences of FMs to better understand this multifaceted phenomenon ( Grant & Booth, 2009 ; Lizarondo et al., 2020 ). The population, intervention, control, and outcomes format was used to frame the research question ( Aslam & Emmanuel, 2010 ). The literature review was conducted and reported using the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement ( Page et al., 2021 ) (see Online Supplementary File 1 ). Search Methods We performed the systematic data retrieval by dividing the research question into thematic entities to define key concepts and construct search terms. We conducted searches in the CINAHL, PubMed, Scopus, and Medic databases; the process also included testing and combining Medical Subject Headings terms. The search strategy with phrase variations is provided in Table 1 . An information specialist’s expertise was used to improve the data set coverage and reliability in the data retrieval process. Inclusion criteria were studies involving adults over 18 years old; the patients’ and FMs’ experiences of TBI; and the FMs’ needs for support during the acute phase of treatment. In addition, health care professionals’ supportive approaches, nursing practices, and nursing interventions from the perspective of FMs’ empowerment were also examined. Furthermore, factors related to empowering TBI patients’ FMs in the acute phases of hospital treatment were included. In order to obtain a more comprehensive synthesis, qualitative, quantitative, randomized controlled trial (RCT), and mixed methods studies were screened. Exclusion criteria included non-traumatic brain injuries, FMs’ experiences and needs related to children with TBI, literature reviews, medical interventions, rehabilitation, and outpatient care. Data retrieval was limited to peer-reviewed research articles in English. We did not include gray literature in the data retrieval process. The time period included in all database searches was 12 years (2010–2021). Table 2 presents the criteria for inclusion and exclusion of studies in the review. Study Selection and Data Extraction The literature selection process proceeded in two phases. The first author (JL) independently carried out report retrieval for the study. In the first phase, duplicates and records marked as ineligible by the automation tool were removed. According to the inclusion and exclusion criteria, two researchers (JL and KC) independently selected studies based on the title and the abstract. The Covidence program was used for data extraction ( Kellermeyer et al., 2018 ). The second phase included reading each study and re-checking whether the study answered the research question and fulfilled the inclusion criteria. Any possible disagreements were discussed with the other members of the research group (TV and HT) to reach a consensus and make the decisions.
Results Study Selection At the first stage of the data retrieval process, the number of hits within the search limits was 1500. After removing duplicates and records marked as ineligible, 907 articles remained. Of these, 873 articles were excluded based on the title and the abstract. This selection process resulted in 34 articles. After the full texts were read, 14 articles were excluded. The main reason for exclusion was that the interventions were not nursing interventions or, if they were, that they did not focus on the acute phase of hospital treatment or provide the perspective of the FMs. Finally, 20 original research articles were selected for review after completing the data retrieval process and the parallel analysis. Figure 1 illustrates the search selection process using a PRISMA flowchart. Description of Included Studies Table 3 presents the selected research articles and highlights the studies’ characteristics and quality. Overall, the majority of the selected studies used a qualitative design ( n = 10): Abrahamson et al., 2017 ; Adams & Dahdah, 2016 ; Degeneffe & Bursnall, 2015 ; Gan et al., 2010 ; Holloway et al., 2019 ; Keenan & Joseph, 2010 ; Kreitzer et al., 2019 ; Lefebvre & Levert, 2012a , 2012b ; and Schutz et al., 2017 . A cross-sectional design was used in eight research reports: Arango-Lasprilla et al., 2010 ; Calvete & Arroyabe, 2012 ; Choustikova et al., 2020 ; de Goumoëns et al., 2019 ; Dillahunt-Aspillaga et al., 2013 ; Doyle et al., 2013 ; W. Liu et al., 2015 ; and Norup et al., 2015 . The remaining studies used a mixed-methods design ( n = 2): Bellon et al., 2015 ; and Kanmani & Raju, 2019 . Most studies were conducted in the United States ( n = 7), Canada ( n = 4), and the United Kingdom ( n = 2). The remaining seven studies were from Australia ( n = 1), Spain ( n = 1), Finland ( n = 1), Switzerland ( n = 1), India ( n = 1), China ( n = 1), and Denmark ( n = 1). Most studies focused on FMs’ experiences of empowering support ( n = 15). However, five studies discussed the perspective of empowerment more broadly, such as from the perspective of TBI patients and health care professionals. Synthesis of Results Data synthesis with an integrated approach was used. Based on the convergent results of the systematic literature review, empowering support for FMs in the acute phase of TBI patient hospital care is based on four main themes of the empowerment process: (a) needs-based informational support, (b) participatory support, (c) competent and interprofessional support, and (d) community support (see Figure 2 ). Theme 1: Needs-Based Informational Support to Empower the FMs The FMs’ most pressing need was identified as a need for information during the acute phase of TBI patients’ hospital care, which lasted throughout the patient’s hospital treatment, from emergency care to discharge ( Keenan & Joseph, 2010 ; Kreitzer et al., 2019 ; Lefebvre & Levert, 2012a ). However, FMs’ needs and ability to acquire information changed over time ( Keenan & Joseph, 2010 ; Lefebvre & Levert, 2012a ). For example, during emergency care and intensive care, the members of the family needed information focused on the TBI patient’s health conditions, medical treatment, and recovery ( Doyle et al., 2013 ; Lefebvre & Levert, 2012a ; W. Liu et al., 2015 ). In the inpatient ward, the FMs’ need for information focused more on practical issues and future plans ( Keenan & Joseph, 2010 ; Lefebvre & Levert, 2012a ).
Although the FMs’ need for information changed over time, to empower the FMs, the information needed to be trustworthy, versatile, and consistent ( de Goumoëns et al., 2019 ; Gan et al., 2010 ; Lefebvre & Levert, 2012b ). Needs-based informational support to empower FMs contained three sub-themes: information about TBI patients’ health conditions in the acute phase, trustworthy and adequate information about the progress of the TBI patients’ care, and practical information about the uncertain future. Information about TBI patients’ health conditions in the acute phase described the importance for FMs of receiving an early diagnosis of the patient’s brain injury ( Choustikova et al., 2020 ; Gan et al., 2010 ). FMs wished to receive factual information about the accident ( Keenan & Joseph, 2010 ), the brain injury, and its effect on the future ( Bellon et al., 2015 ) at an early stage ( Kanmani & Raju, 2019 ). If FMs felt they were receiving too little information from health care professionals, they would seek more information online or from their friends and relatives ( Lefebvre & Levert, 2012a ). Receiving sufficient information about the symptoms of TBI, such as memory disorders ( Arango-Lasprilla et al., 2010 ), emotional problems ( Abrahamson et al., 2017 ), and changes in mood and personality ( Adams & Dahdah, 2016 ; Calvete & Arroyabe, 2012 ; Gan et al., 2010 ; Kreitzer et al., 2019 ), helped FMs to understand, for example, why their relative with TBI displayed changes in behavior ( Degeneffe & Bursnall, 2015 ). In addition, FMs wished to receive information on the TBI patient’s medical care ( Doyle et al., 2013 ; Lefebvre & Levert, 2012a ; W. Liu et al., 2015 ), and they needed reassurance that the patient received all necessary medical care ( Gan et al., 2010 ). FMs wanted trustworthy and adequate information about the progress of the TBI patient’s care, such as any changes in the TBI patient’s condition, and wanted all questions to be answered honestly ( Kreitzer et al., 2019 ; Lefebvre & Levert, 2012b ) and professionally ( Keenan & Joseph, 2010 ). FMs hoped that hospital staff would always be honest with them, even when the patient’s condition worsened. The research demonstrated that honesty was seen as a characteristic of professionalism that promoted the development of a trusting relationship between FMs and health care professionals ( Keenan & Joseph, 2010 ; Kreitzer et al., 2019 ; Lefebvre & Levert, 2012b ). FMs could better understand the purpose of their relative’s care if the information was conveyed in a peaceful environment with sufficient processing time ( de Goumoëns et al., 2019 ). The information also needed to be provided in oral form ( Gan et al., 2010 ; Lefebvre & Levert, 2012b ) and written form ( Choustikova et al., 2020 ). In addition, from the empowerment perspective, FMs wished to receive regular patient updates ( Keenan & Joseph, 2010 ; Lefebvre & Levert, 2012b ) that were specific to their relative and not based on general statistics and probabilities in order to utilize the information in their decision-making ( Keenan & Joseph, 2010 ). FMs needed practical information about the uncertain future after the TBI patient had left the ICU and the situation had stabilized ( Lefebvre & Levert, 2012a ). FMs’ needs for information shifted from the damage caused by the accident and the medical care to planning for the future ( Keenan & Joseph, 2010 ).
In the inpatient ward, FMs’ needs focused on receiving sufficient guidance ( Choustikova et al., 2020 ) and support for practical issues such as organizing extended hospital visits and managing financial matters ( Abrahamson et al., 2017 ). At this point, FMs started to realize they had to attend to other obligations such as family ( Adams & Dahdah, 2016 ; Arango-Lasprilla et al., 2010 ), work, and community life ( Keenan & Joseph, 2010 ). FMs frequently wondered how TBI would affect the patient’s life in the areas of work ( Calvete & Arroyabe, 2012 ), independence ( Degeneffe & Bursnall, 2015 ), family activities ( Dillahunt-Aspillaga et al., 2013 ; Gan et al., 2010 ; Holloway et al., 2019 ), and marriage ( Lefebvre & Levert, 2012a ). FMs also needed support and information about taking care of themselves, for example, by taking a break from the care, problems, and responsibilities ( Doyle et al., 2013 ). In the inpatient ward, FMs were interested in finding out about available services ( Kreitzer et al., 2019 ) and resources ( Adams & Dahdah, 2016 ) to ease their social adaptation as well as to promote the family’s independence and coping after hospital discharge ( Lefebvre & Levert, 2012a ). Theme 2: Participatory Support to Empower the FMs Uncertainty and concern about the patient’s survival increased the FMs’ feelings of powerlessness and, arguably, their need to participate in the patient’s care ( Bellon et al., 2015 ; de Goumoëns et al., 2019 ; Keenan & Joseph, 2010 ). To empower the FMs, the professionals must recognize them as an integral part of the TBI patient’s comprehensive nursing process ( Degeneffe & Bursnall, 2015 ). Being close to the patient was the primary way for FMs to participate in the patient’s care ( Keenan & Joseph, 2010 ). However, concretely participating in the patient’s care through the nursing procedures and the patient’s transfers and discharge plans was also important for empowering FMs ( Lefebvre & Levert, 2012a ; W. Liu et al., 2015 ; Norup et al., 2015 ). This theme included two sub-themes: participating in the TBI patient’s care and FMs’ involvement in the TBI patient’s transfers and discharge plans. By participating in the TBI patient’s care, FMs reported feeling part of the patient’s holistic care ( Calvete & Arroyabe, 2012 ) and nursing process ( Degeneffe & Bursnall, 2015 ). This, in turn, promoted the FMs’ understanding of the situation and the future ( Bellon et al., 2015 ; de Goumoëns et al., 2019 ) and helped them to identify their abilities, trust in themselves, and cope at home ( Calvete & Arroyabe, 2012 ; Lefebvre & Levert, 2012a ). In addition to practical duties (e.g., assisting in washing and eating), participation in planning and decision-making was considered an essential aspect of inclusion in the patient’s care ( Bellon et al., 2015 ; de Goumoëns et al., 2019 ). However, just staying at the patient’s side was enough to create a sense of participation ( Calvete & Arroyabe, 2012 ; Kanmani & Raju, 2019 ). Being at the patient’s side increased FMs’ sense of managing the situation and created an optimistic feeling that their relative’s recovery was progressing ( Keenan & Joseph, 2010 ). FMs’ involvement in the TBI patient’s transfers and discharge plans was also significant. They wanted to participate in planning the discharge together with the professionals ( Lefebvre & Levert, 2012a ; W.
Liu et al., 2015 ; Norup et al., 2015 ) because FMs usually knew best whether the patient could cope at home and whether the necessary preparations had been made there ( Abrahamson et al., 2017 ). Problems with hospital discharges were often related to poor communication, inadequate planning, and abrupt discharges without prior notice to the FMs ( Abrahamson et al., 2017 ). Delays and long waiting times for transport without timely provision of information exacerbated anxiety ( Abrahamson et al., 2017 ) and perceived burden ( Kreitzer et al., 2019 ) among FMs. Proactive discharge planning, identifying differences between units ( Keenan & Joseph, 2010 ), evaluating the FMs’ and the patient’s needs, and setting goals together with nursing staff ( Abrahamson et al., 2017 ) reduced the anxiety experienced by FMs ( Keenan & Joseph, 2010 ). It also enhanced their preparedness to cope at home ( Calvete & Arroyabe, 2012 ). Theme 3: Competent and Interprofessional Support to Empower the FMs The versatile support from health care professionals was one of the essential factors in empowering FMs during the acute phases of the TBI patient’s treatment. To empower the FMs, the health care professionals needed to be competent, listen, and maintain the FMs’ sense of hope throughout the patient’s treatment ( Abrahamson et al., 2017 ; de Goumoëns et al., 2019 ; Gan et al., 2010 ; Kreitzer et al., 2019 ; W. Liu et al., 2015 ). The nurses’ role was especially significant in empowering FMs because nurses were often considered to be part of the family ( Keenan & Joseph, 2010 ). In addition, participating in interprofessional collaboration to support FMs was also perceived as a significant factor in empowering families because their needs changed during the different phases of the patient’s hospital care ( Choustikova et al., 2020 ; Keenan & Joseph, 2010 ; Lefebvre & Levert, 2012b ). This theme included three sub-themes: confidence in the competence of health care professionals, maintenance of a sense of hope, and interprofessional collaboration to support FMs. The FMs’ confidence in the competence of health care professionals was necessary ( Gan et al., 2010 ) because it increased their feeling that the patient was receiving holistic care ( W. Liu et al., 2015 ). Professionals’ knowledge and skills in caring for the TBI patient demonstrated the staff’s competence. This, together with communication, was a key factor influencing the FMs’ experience of receiving empowering professional support ( Keenan & Joseph, 2010 ). It was important to ensure that FMs were able to talk to a doctor at least once a day; otherwise, the FMs experienced disappointment ( Calvete & Arroyabe, 2012 ). In this study, the health care professionals who expressed little interest in involving the family were perceived as leaving the FMs alone with difficult issues. Talking about difficult issues with professionals eased the FMs’ fear, anxiety, and shock ( Choustikova et al., 2020 ). In addition, having a good relationship with professionals allowed the FMs to feel that they were part of the team, the treatment, and the decision-making process ( Lefebvre & Levert, 2012b ). Furthermore, good communication and information sharing between FMs and staff promoted the coordination of care and the achievement of shared goals ( de Goumoëns et al., 2019 ).
Cohesive, consistent, and long-term communication between service providers and between service providers and families was essential for empowering FMs ( Abrahamson et al., 2017 ; de Goumoëns et al., 2019 ; Kreitzer et al., 2019 ). To empower the FMs, health care professionals need to have good listening skills ( Calvete & Arroyabe, 2012 ), know the family, and communicate with different health care providers ( Keenan & Joseph, 2010 ). FMs wished to be heard more on patient-related issues ( Kreitzer et al., 2019 ) because they felt they had valuable ( Lefebvre & Levert, 2012b ) and useful ( Holloway et al., 2019 ) knowledge to convey that could prevent the staff from drawing false conclusions ( Choustikova et al., 2020 ). Especially in situations where the patient had limited communication ability, involving the family was an important factor for the patient’s recovery ( Holloway et al., 2019 ) and the FMs’ adaptation ( de Goumoëns et al., 2019 ). Maintaining a sense of hope was needed because unexpected news of an accident caused a powerful emotional reaction ( Bellon et al., 2015 ; Keenan & Joseph, 2010 ) and a sense of powerlessness among FMs ( Lefebvre & Levert, 2012a ). The uncertain prognosis of the TBI increased the FMs’ need for hope ( W. Liu et al., 2015 ), and they wished for health care professionals to recognize this association ( Schutz et al., 2017 ). Although the FMs wanted truthful information, they also wanted health care professionals to give them hope for the future ( W. Liu et al., 2015 ). Even in cases of patient death, the FMs remained hopeful and focused on minimizing the perceived suffering of the TBI patient ( Schutz et al., 2017 ). This sense of hope gave FMs the strength to ensure their loved ones received the best care possible ( Calvete & Arroyabe, 2012 ). However, FMs needed professional encouragement ( W. Liu et al., 2015 ) to maintain a sense of hope ( Arango-Lasprilla et al., 2010 ). Physicians were perceived as being particularly pessimistic ( Keenan & Joseph, 2010 ), emphasizing the nurses’ role in maintaining hope and empowerment for the FMs ( Schutz et al., 2017 ). In the acute phase of hospital care, FMs had many questions ( Norup et al., 2015 ) and challenges ( Abrahamson et al., 2017 ); thus, interprofessional collaboration in supporting FMs was needed ( Keenan & Joseph, 2010 ; Lefebvre & Levert, 2012b ). For example, FMs wanted to see a hospital chaplain to discuss and share their feelings ( W. Liu et al., 2015 ) and to meet a social worker to handle financial matters ( Choustikova et al., 2020 ). Many FMs also hoped to meet with a physiotherapist and a psychiatric nurse during the acute phase of hospital care ( Choustikova et al., 2020 ). In addition, FMs needed interprofessional support in planning the future to strengthen their sense of control over the new situation at home with the TBI survivors, which usually arose from insecurities FMs experienced due to the potentially progressive nature of TBIs ( Gan et al., 2010 ). Theme 4: Community Support to Empower the FMs The findings highlight community support as a fundamental part of empowering FMs. Arguably, it is essential for FMs to receive support from health care professionals, other FMs, and friends ( Calvete & Arroyabe, 2012 ; Holloway et al., 2019 ; Keenan & Joseph, 2010 ). In addition, the results indicate that peer support services complement the support for FMs and reduce the FMs’ feelings of anxiety and fear ( Gan et al., 2010 ; Norup et al., 2015 ).
However, at the end of the patient’s hospital treatment, the FMs hoped that the patient’s care would continue after hospitalization. Once again, the nurses’ role was emphasized because the FMs hoped that the nurses would coordinate the follow-up care and organize the services. In summary, community support can empower FMs in the long term ( Abrahamson et al., 2017 ; Bellon et al., 2015 ; Doyle et al., 2013 ; Lefebvre & Levert, 2012a ; W. Liu et al., 2015 ; Norup et al., 2015 ). This theme had three sub-themes: good social support network, information about peer support services, and ensuring continuity of care after the TBI patient’s hospital discharge. A good social support network meant tangible help was available from friends or relatives, such as when transporting the family to the hospital ( Calvete & Arroyabe, 2012 ) or taking care of the children’s needs ( Keenan & Joseph, 2010 ). However, the mere presence of friends and other FMs ( Calvete & Arroyabe, 2012 ) made the FMs feel they were not alone with all the challenges and thus promoted their feeling of empowerment ( Holloway et al., 2019 ; Keenan & Joseph, 2010 ). Despite welcoming community support, FMs also wanted health care professionals to address the burden that the number of contacts from relatives caused ( Calvete & Arroyabe, 2012 ). FMs perceived the time spent with friends and relatives and answering their questions as cumbersome and stressful. They wanted to have professional guidance ( Lefebvre & Levert, 2012a ) and support ( Keenan & Joseph, 2010 ) to limit these contacts ( Calvete & Arroyabe, 2012 ). FMs were interested in obtaining information about different peer support services at the first stage of hospitalization ( Norup et al., 2015 ). Peer support services offered timely and helpful support in a crisis and relevant information on various resources for FMs ( Gan et al., 2010 ). To be empowered, FMs needed to share their feelings ( Norup et al., 2015 ) and experiences ( Bellon et al., 2015 ) with people who had been in the same situation and had faced the same problems. Such peers could offer suggestions and solutions for arising issues ( Gan et al., 2010 ) and help FMs prepare for the worst ( Arango-Lasprilla et al., 2010 ). Other people’s stories and experiences about the effects of TBI on family life gave FMs courage ( Gan et al., 2010 ) and made them feel hopeful about the TBI patient’s recovery ( Keenan & Joseph, 2010 ) and the family’s coping ( Adams & Dahdah, 2016 ). Ensuring continuity of care after the TBI patient’s hospital discharge was critical. At the end stage of the patient’s inpatient care, FMs hoped there was a person who would manage and coordinate the discharge and the organization of services ( Bellon et al., 2015 ; Lefebvre & Levert, 2012a ; W. Liu et al., 2015 ; Norup et al., 2015 ). FMs hoped for a nurse to assume the responsibility for coordinating duties, providing information, and organizing the necessary care meetings and services ( Abrahamson et al., 2017 ; W. Liu et al., 2015 ) because FMs frequently experienced inequalities in access to services ( Bellon et al., 2015 ; Holloway et al., 2019 ). Access to necessary support services was crucial for empowering FMs because studies have shown such services promote FMs’ adaptation to their new roles, ease intrafamilial relationships, satisfy families’ long-term needs ( Bellon et al., 2015 ; Gan et al., 2010 ), and reduce the sense of burden FMs experienced ( Doyle et al., 2013 ).
Discussion Summary of Findings The findings of this systematic review outline the factors contributing to the empowering support of FMs while also describing the empowering support from the FMs’ perspective during the acute phase of hospital care. We have defined the process of empowerment as a dialogical and supportive relationship between FMs and health care professionals, in which the FMs were seen as part of the TBI patients’ comprehensive treatment planning and implementation throughout the acute hospitalization period. Needs-based informational, participatory, professional, and community support were identified as factors of the empowerment process that promote FMs’ empowerment. FMs are empowered when they have sufficient, concrete, and needs-based information about brain injury, its treatment, and its effect on the future from health care professionals during the acute phase of care. This enables the FMs to utilize information in their decision-making and hence better process the consequences and effects of brain injury on family activities ( de Goumoëns et al., 2019 ; W. Liu et al., 2015 ). FMs’ needs for information change over time ( Lefebvre & Levert, 2012a ) and, according to Keenan and Joseph (2010) , decrease by 50% when the patient is transferred from the ICU to an inpatient ward. Later in the inpatient ward, FMs felt more capable of evaluating the progress of the patient’s recovery ( Keenan & Joseph, 2010 ). At this point, it was important for the FMs to become informed about practical factors, such as transport services and managing finances ( Abrahamson et al., 2017 ). From the empowerment perspective, the results support the findings of Wåhlin’s (2017) research, which identified knowledge as an empowerment-promoting tool and receiving information as an integral part of it. However, another point to consider is that the quality of the information and the environment in which the information is offered also affect the extent of FMs’ empowerment. The information should thus be tailored to fit the FMs’ needs. The closer the received information and support are to the FMs’ needs, the more potential there is for empowerment ( Funnell et al., 1991 ). In light of this new knowledge, future health care professional education should focus on how to offer guidance, especially from the perspective of the family’s needs ( Choustikova et al., 2020 ). The International Family Nursing Association (IFNA, 2017) has developed advanced practice competencies for family nursing that may be useful for health care professionals working with this population of families. Participating in the patient’s care and involvement in the patient’s transfers and discharge plans were also associated with empowering FMs ( Lefebvre & Levert, 2012a ; Norup et al., 2015 ), as this made the FMs feel they were useful and part of the patient’s holistic care ( Calvete & Arroyabe, 2012 ). This further corroborates previous results ( Kivunja et al., 2018 ; Manskow et al., 2017 ), although Oyesanya (2017) found that FMs frequently felt they were invading the health care staff’s territory by actively participating in the patient’s care. However, Wetzig and Mitchell (2017) discovered that health care professionals recognized the benefits and significance of FMs’ involvement from the perspective of the TBI patient’s recovery in acute care.
The previous results emphasize that participation in patient care has also been essential in empowering the FMs because they are viewed as equal and active partners in TBI patient treatment ( Degeneffe et al., 2011 ; Man et al., 2003 ; Rodwell, 1996 ). Thus, health care professionals should actively encourage and guide FMs on how to participate in patient care concretely. In addition, health care professionals should boldly involve FMs in all phases of the patient’s treatment plan and decision-making process to ensure that both the FMs and professionals have up-to-date information on future activities and plans and to allow the FMs to feel that they are a part of the patient’s nursing process ( Lefebvre & Levert, 2012b ). Our results also show that FMs need competent professional support as well as interprofessional collaboration to comprehend the trauma ( de Goumoëns et al., 2019 ). Thus, FMs can mourn the damage the brain injury caused; process ( Keenan & Joseph, 2010 ) and manage ( Abrahamson et al., 2017 ) their own emotions, such as fear, grief, anger, and guilt ( Calvete & Arroyabe, 2012 ); and adjust to the new life situation ( Lefebvre & Levert, 2012a ). Health care professionals, especially nurses ( de Goumoëns et al., 2019 ), played a significant role in supporting the FMs of TBI patients during the acute phase of hospital care ( Keenan & Joseph, 2010 ). Earlier studies have confirmed that equal and trustful communication between health care professionals and FMs can contribute to empowering the latter ( Sigurdardottir et al., 2015 ) and decrease the feelings of burden ( Sakanashi & Fujita, 2017 ) and abandonment ( Wåhlin, 2017 ). Empowered FMs feel emotionally and physically balanced, which increases their confidence to act as a caregiver and further supports adaptation to a new situation ( Sakanashi & Fujita, 2017 ). Even though empowerment cannot be handed over ( Sigurdardottir et al., 2015 ), the review shows that nurses should recognize the effect of their support and actions on family members’ long-term capacities and well-being, particularly their coping at home after the TBI patient’s discharge. In addition, the findings highlight how valuable it is for FMs to obtain support from outside the hospital in the acute phase of TBI patient care, particularly from other FMs, friends, and peers. Furthermore, emotional and financial support from the FMs’ work environment was also significant ( Gan et al., 2010 ). Before the patient was discharged from the hospital, FMs hoped to establish a single contact point between the family and the health care and social services ( Holloway et al., 2019 ) to provide long-term support based on the FMs’ needs, which change with time ( Abrahamson et al., 2017 ). Undoubtedly, the nurses’ role was significant again because the FMs hoped that the nurses would take responsibility for organizing follow-up and aftercare services ( W. Liu et al., 2015 ). It was essential to ensure continuity of care and access to support services during the acute phase of patient treatment to maintain the FMs’ well-being and ability to cope ( Abrahamson et al., 2017 ) because FMs frequently reported facing a fragmented ( Gan et al., 2010 ) and inconsistent health care system ( Kreitzer et al., 2019 ) after hospital care. Earlier studies have consistently demonstrated that FMs of TBI patients often experience anxiety, depression, social isolation, and economic disruption after hospital care ( Anke et al., 2020 ; Manskow et al., 2017 ; McIntyre et al., 2020 ).
Specifically, deficiencies in organizing and ensuring the provision of aftercare services for the family in the discharge phase may contribute to these outcomes. Regarding these findings, receiving incomplete or little information about support services during the patient’s hospitalization also delayed access to them and reduced FMs’ adaptation to their new role and living situation ( Bellon et al., 2015 ). Therefore, health care professionals must ensure that support services are available for TBI patients before they are discharged from the hospital ( Abrahamson et al., 2017 ). Although studies have demonstrated the importance of ensuring and securing the continuity of care for FMs ( Bellon et al., 2015 ; W. Liu et al., 2015 ), they also show that it has not been recognized as part of the empowerment concept. However, the review revealed that information about support services alone was insufficient to ensure continuity of care for TBI patients and empower FMs. In summary, FMs experience long and difficult times during a TBI patient’s hospitalization, especially in the ICU. Moreover, FMs have needs during the acute phases of the patients’ care that, if unmet, may have far-reaching consequences, such as feelings of burden, reduced life satisfaction, and depression. The goal of empowering FMs is to promote and maximize the FMs’ ability to manage independently with the TBI patient after hospitalization and to increase FMs’ coping and well-being. Receiving high-quality, sufficient information; participating in the patient’s care and decision-making; holistic support from health care professionals; and ensuring the continuity of the TBI patient’s care constitute essential elements in the FMs’ empowerment process. Considering these elements when facing FMs during the acute phase of a TBI patient’s care may ease the family’s transition from hospital to home and facilitate adjusting to the new life situation. However, it should be noted that the FMs’ experiences and perceived needs during the acute phases of care are insufficient sources of information to offer empowering support. Therefore, it is important to define and examine the concept of empowerment in more depth from the perspective of acute care. It would be interesting to determine whether the FMs’ primary information and support needs resulted from the sudden and uncertain nature of the brain injury, the hectic environment of acute care, the limited resources available to the health care professionals, or a combination of these factors. Strengths and Limitations This literature review was performed systematically and comprehensively, and the data analysis was conducted using the original data. The university library information specialist was consulted in the data retrieval process to improve the data’s coverage and reliability. In addition, two researchers performed the literature quality assessment in parallel and independently. This systematic literature review contributes beneficial knowledge on empowering support for FMs of TBI patients during the acute phase of hospital care from the empowerment perspective. However, this study may have limitations due to the lack of available literature on FMs’ empowerment. Moreover, empowerment is a multidimensional concept, and in this study, it was generally observed at the individual level. However, information about the organizational and community levels would also have provided a more comprehensive understanding of the empowerment process.
According to Wåhlin (2017) , it is possible that health care professionals need to feel empowered in their professional role in order to empower FMs, which was not addressed in this review. Nevertheless, it is important to note that it is the health care professionals who form the health care organization. We did not find any clinical trials in nursing that focused on the effectiveness or efficiency of empowering TBI patients’ FMs. Therefore, other aspects of empowerment may not have been identified and may warrant more thorough research in the future. For example, tested interventions can be used to ensure and strengthen the empowerment of FMs, even after hospitalization. Although these findings are based on the perspective of FMs, the results of this study can assist health care professionals in identifying factors that help FMs process and utilize the provided support and information to control new, possibly insecure, situations. Future studies should focus more on the perspectives of health care staff when empowering FMs in acute care to gain a deeper and more holistic understanding of empowerment in the context of TBI patient care.
Conclusion This study provides a systematic overview of the factors contributing to FMs’ empowerment and describes empowerment from the FMs’ perspectives. We can conclude that empowerment in the acute phase of TBI patient treatment consists of an interactive relationship between FMs and professionals, which includes professionals providing comprehensive information and support and ensuring that the patient’s care will continue after hospitalization. Consequently, the process of empowering FMs does not end when the TBI patient’s acute phase ends, but instead continues after hospitalization. Nevertheless, it is clear that, in the future, it will be essential to study the concept of empowerment more at the organizational and community levels in the context of acute care and from the perspective of health care professionals. Although this review provides information on the nature of empowering support for FMs of TBI patients during the acute phases of care, this information is derived mainly from qualitative and cross-sectional studies. In the future, clinical trials in TBI nursing aiming to find concrete and effective means to increase and support TBI patients’ and families’ empowerment are needed. It might prove beneficial for future studies to redirect the methodology and study design toward interventional studies to obtain more comprehensive information on aspects of FMs’ empowerment support in acute care.
This review aimed to identify and synthesize empowering support for the family members of traumatic brain injury patients in the acute phase of hospital treatment. CINAHL, PubMed, Scopus, and Medic databases were searched from 2010 to 2021. Twenty studies met the inclusion criteria. Each article was critically appraised using the Joanna Briggs Institute Critical Appraisal Tools. Following a thematic analysis, four main themes were identified regarding the process of empowering traumatic brain injury patients’ family members in the acute phases of hospital care: (a) needs-based informational, (b) participatory, (c) competent and interprofessional, and (d) community support. The findings of this review may be utilized in future studies focusing on designing, implementing, and evaluating an empowerment support model for traumatic brain injury patients’ family members during acute care hospitalization to strengthen the current knowledge and develop nursing practices.
Traumatic brain injury (TBI) is functional or structural damage to the brain caused by a sudden external force. TBI can be classified as a mild, moderate, or severe brain injury. Moderate and severe brain injuries in the acute phase often require hospital treatment ( Capizzi et al., 2020 ). Approximately 5.3 million people in the United States and 7.7 million people in the European Union ( G. Liu et al., 2021 ) suffer from various symptoms and problems caused by a TBI, including impaired attention, difficulty with memory, depression, impulsivity, poor decision-making, aggressive behavior, slowness, fatigue, and mental disorders ( Capizzi et al., 2020 ; Rasmussen et al., 2020 ). A considerable number of people with brain injuries are below 25 years old, although brain injuries have also increased among older people ( Nguyen et al., 2016 ). After hospital discharge, family members (FMs) are often the primary caregivers for a TBI survivor, offering daily support and executing demanding care procedures ( McIntyre et al., 2020 ). FMs must adapt to this new, unexpected role ( McIntyre et al., 2020 ) and, as a result, often experience difficulties managing the TBI survivor’s care process ( Kivunja et al., 2018 ) and need empowering support ( Sakanashi & Fujita, 2017 ). Based on the literature, TBIs are a global health problem ( Maas et al., 2017 ), and the number of brain injuries is constantly increasing ( Jochems et al., 2021 ). Therefore, it can be assumed that the number of FMs and caregivers will also increase in the future. Data Analysis A thematic analysis was used to analyze and synthesize the findings; the review included qualitative, quantitative, and mixed methods designs. The studies were read to gain familiarity with the data and then coded by forming a narrative interpretation of the quantitative results ( Lizarondo et al., 2020 ; Vaismoradi et al., 2013 ). Each publication was analyzed to find expressions describing the FMs’ experiences of receiving empowering support in the acute phase of the TBI patient’s hospital treatment. Some of these expressions offered by FMs were narratives (e.g., “ . . .need for continuity of care. . .so it has all been taken care of and then you can free your time to go to work” ), some were phrases (e.g., “ to receive concrete information on the brain injury and its effect on the future at an early stage” ), and some were single words (e.g., “ . . .empowerment processes. . . ”). These meaningful expressions formed a basis for data reduction, categorization, and abstraction. After this, similar reduced expressions were grouped into categories by comparing their similarities and differences. Categories with similar content were grouped as a subtheme with a name that described the content (e.g., information about TBI patients’ health conditions in the acute phase ). The subthemes were then grouped into higher-level categories and main themes (e.g., informational support to empower the FMs) ( Elo et al., 2014 ) (see Online Supplementary File 2 ). Two reviewers (JL and KC) independently appraised the methodological quality of the studies and performed the quality assessment using the Joanna Briggs Institute (JBI) Critical Appraisal Tools: (a) Checklist for Qualitative Research and (b) Checklist for Analytical Cross-Sectional Studies. The JBI critical appraisal checklists include 10 criteria for qualitative studies and 8 criteria for quantitative studies, addressing the risk of bias in study design, conduct, and analysis ( Moola et al., 2020 ).
For each study, two reviewers completed the appraisal step (each reviewer rated each study "Yes," "No," "Unclear," or "Not applicable"). The studies' strengths were related to a clear description of the research methodology, data collection methods, and data analysis. The inclusion and exclusion criteria of the sample were also clearly described, including the studies' subjects and settings. Weaknesses in the reviewed studies were related to the lack of description of potential confounding factors and strategies to control for them. In total, the selected studies ( N = 20) were generally of good quality, and none were excluded based on the quality assessment.
The authors wish to thank the Traumatic Brain Injury Association of Finland for their support, and Carers Finland and The Finnish Nursing Education Foundation for funding this research. Author Biographies Julia Lindlöf, RN, MSc, is a registered nurse who holds a Master of Nursing Science degree. She is currently a doctoral student in the Department of Nursing Science, University of Eastern Finland, Kuopio, Finland. Her clinical work and research focus on supporting traumatic brain injury patients' family members during the acute phases of hospitalization. She is currently serving as a board member of the Finnish Association of Neuroscience Nurses. Her recent publications include "Traumatic Brain Injury Patients' Family Members' Evaluations of the Social Support Provided by Healthcare Professionals in Acute Care Hospitals" in Journal of Clinical Nursing (2020, with H. Turunen, H. Tuominen-Salo & K. Coco). Hannele Turunen, RN, PhD, is a registered nurse and Professor, Department of Nursing Science, Faculty of Health Sciences, University of Eastern Finland, Kuopio, Finland. She leads an international multidisciplinary research group focused on patient safety that includes family members' perspectives. Her recent publications include "Examining Family and Community Nurses' Core Competencies in Continuing Education Programs Offered in Primary Health Care Settings: An Integrative Literature Review" in Nurse Education in Practice (2023, with M. Azimirad et al.), "Family Caregivers' Experiences Of Providing Care for Hospitalized Older People With a Tracheostomy: A Phenomenological Study" in Working with Older People (2022, with W. Tabootwong, K. Vehviläinen-Julkunen, P. Jullamate & E. Rosenber), and "Patient Participation in Patient Safety—An Exploration of Promoting Factors" in Journal of Nursing Management (2019, with M. Sahlström, P. Partanen, M. Azimirad, & T. Selander). Tarja Välimäki, RN, PhD, is a registered nurse and docent in clinical nursing science. Her scientific work focuses on neurological diseases and family caregiving. She has an extended track record of publications focused on family caregivers' psychosocial health and gerontological nursing. Using mixed methods and RCT research approaches, she works within interdisciplinary teams to explore family caregiving in longitudinal diseases. Her recent publications include "Different Trajectories of Depressive Symptoms in Alzheimer's disease Caregivers—5-Year Follow-Up" in Clinical Gerontologist (2022, with A. M. Koivisto, T. Selander, T. Saari & I. Hallikainen), and "Experiences of People With Progressive Memory Disorders Participating in Non-Pharmacological Interventions: A Qualitative Systematic Review" in JBI Evidence Synthesis (2020, with A.-M. Tuomikoski, H. Parisod & S. Lotvonen). Justiina Huhtakangas, MD, PhD, is a staff neurosurgeon in the Department of Neurosurgery, Helsinki University Hospital, Finland. Her earlier publications mainly focus on vascular neurosurgery, but in the intensive care unit and on the neurosurgical wards she also participates actively in the treatment of other patient groups, such as neurotrauma and tumor patients. Her recent publications include "Screening of Unruptured Intracranial Aneurysms in 50 to 60-Year-Old Female Smokers: A Pilot Study" in Scientific Reports (2021, with J. Numminen, J. Pekkola, M. Niemelä & M. Korja). Sofie Verhaeghe, RN, MSc, PhD, is a professor at Ghent University and Hasselt University and a research supervisor at VIVES University College KULeuven.
Her research focuses on the experiences of patients and family members (broadened to patient networks) with illness and care, and on nurse-patient interactions, including patient participation. She mainly works in the domains of mental health care, oncology, care for the elderly, and critical care. Her recent publications include "Experiences and Needs of Partners as Informal Caregivers of Patients With Major Low Anterior Resection Syndrome: A Qualitative Study" in European Journal of Oncology Nursing (2022, with E. Pape et al.), "Family Expectations of Inpatient Mental Health Services for Adults With Suicidal Ideation: A Qualitative Study" in International Journal of Mental Health Nursing (2021, with J. Vandewalle, B. Debyser & E. Deproost), and "Parents' Perceptions on Speech Therapy Delivery Models in Children With a Cleft Palate: A Mixed Methods Study" in International Journal of Pediatric Otorhinolaryngology (2021, with C. Alighieri et al.). Kirsi Coco, RN, PhD, is a registered nurse and senior advisor at Tehy—The Union of Health and Social Care Professionals in Finland. Kirsi has more than 20 years of neurosurgical nursing experience at the Department of Neurosurgery at Helsinki University Hospital in Finland. Her recent publications include "Traumatic Brain Injury Patients' Family Members' Evaluations of the Social Support Provided by Healthcare Professionals in Acute Care Hospitals" in Journal of Clinical Nursing (2020, with J. Choustikova, H. Turunen & H. Tuominen-Salo).
CC BY
no
2024-01-15 23:43:48
J Fam Nurs. 2024 Feb 16; 30(1):50-67
oa_package/56/bf/PMC10788044.tar.gz
PMC10788045
38222215
Introduction and background Incisional hernia (IH) is defined as an abdominal wall gap with or without a bulge at the site of a previous surgical scar detected by clinical examination or imaging [ 1 ]. Complex IH There is no clear-cut definition for the term complex IH. However, it is used to describe IH with one or more of the following characteristics: a large hernial defect, enterocutaneous fistula, several hernias situated anatomically apart from each other, IH close to bone or associated with local infection, loss of domain (LOD), and re-recurrence [ 2 ]. Loss of domain There is a lack of consensus on a precise definition of LOD in the existing literature. Clinically, it can be diagnosed when the herniated contents cannot be reduced below the fascial level in the supine position [ 3 ]. A more accurate modality for diagnosing LOD is cross-sectional imaging, which allows the ratio between the herniated and the intra-abdominal volumes to be assessed. Some authors set the diagnostic threshold at an extraperitoneal volume of 20% to 25% [ 4 , 5 ]. In comparison, others diagnose LOD if the extraperitoneal volume approaches 50% or more, i.e., when the ratio of hernia sac volume to the abdominal cavity volume is ≥ 0.5 [ 3 ].
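Because both imaging definitions of LOD quoted above reduce to simple volume ratios, they can be made concrete with a short calculation. The sketch below is illustrative only: the helper function lod_ratios and the example volumes are hypothetical, and the thresholds are simply those summarised in the preceding paragraph, not a validated diagnostic rule.

```python
# Illustrative only: loss-of-domain (LOD) ratios from CT volumetry.
# The volumes below are hypothetical; thresholds follow the two
# definitions summarised above (a 20-25% extraperitoneal-volume cut-off
# versus a hernia-sac-to-abdominal-cavity ratio of >= 0.5).

def lod_ratios(hernia_sac_volume_ml: float, abdominal_cavity_volume_ml: float) -> dict:
    """Return the two volume ratios commonly used to describe LOD."""
    sac_to_cavity = hernia_sac_volume_ml / abdominal_cavity_volume_ml
    sac_fraction_of_total = hernia_sac_volume_ml / (
        hernia_sac_volume_ml + abdominal_cavity_volume_ml
    )
    return {
        "sac_to_cavity_ratio": round(sac_to_cavity, 2),
        "sac_fraction_of_total_peritoneal_volume": round(sac_fraction_of_total, 2),
    }

# Hypothetical example: 1,800 ml herniated volume and 4,000 ml intra-abdominal volume.
print(lod_ratios(1800, 4000))
# {'sac_to_cavity_ratio': 0.45, 'sac_fraction_of_total_peritoneal_volume': 0.31}
```

In this hypothetical case the sac-to-cavity ratio (0.45) falls below the 0.5 cut-off, while the extraperitoneal fraction (31%) exceeds a 20-25% threshold, which illustrates how the two competing definitions can classify the same patient differently.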
Conclusions IH is a common complication after both open and minimal access surgery and has a multifactorial pathogenesis. The predisposing factors include both inherent and modifiable ones. Elective repair improves QOL and prevents the sinister outcomes of emergency IH repair. Accordingly, the watchful waiting strategy should be reviewed, and the options should be discussed thoroughly during patient counselling. Risk stratification tools for predicting IH would help in adopting prophylactic measures, such as suture line reinforcement or mesh application, in high-risk groups.
Incisional hernia (IH) is a frequent complication following abdominal surgery. The development of IH can be more complex than a simple anatomical failure of the abdominal wall. Reported IH incidence varies among studies. This review presents an overview of definitions, molecular basis, risk factors, incidence, clinical presentation, surgical techniques, postoperative care, cost, risk prediction tools, and proposed preventative measures. A literature search of PubMed was conducted to include high-quality studies on IH. The incidence of IH depends on the primary surgical pathology, incision site and extent, associated medical comorbidities, and exposure to risk factors. The review highlights inherent and modifiable risk factors. Disorganisation of the extracellular matrix, defective fibroblast function, and variations in the ratios of different collagen types are implicated in the molecular mechanisms. Elective repair of IH alleviates symptoms, prevents the sinister outcomes associated with emergency IH repair, and improves quality of life (QOL). Recent studies have introduced risk prediction tools to guide preventative measures, including suture line reinforcement or prophylactic mesh application in high-risk groups. The watchful waiting strategy should therefore be reviewed, and the options should be discussed thoroughly during patient counselling; risk stratification tools for predicting IH would help in adopting such prophylactic measures.
Review Incidence The reported incidence of IH in the current literature is quite variable, ranging from below 5% to as high as 70% in some series. This wide variation is attributed to differences in the primary surgical pathology, surgical incision site and extent, associated medical comorbidities, and previous exposure to risk factors [ 6 - 8 ]. A systematic review of renal transplant patients showed an IH rate of 1.1% - 7% after an open renal transplant, with a mean of 3.2% [ 9 ]. A recent study of the Swedish Renal Cell Cancer Database analysed 6417 patients to determine the comorbidities associated with the subsequent development of IH. Of these 6417 patients, 19% (1201 individuals) underwent minimally invasive surgery, whereas 81% (5216 individuals) had open surgery. After a five-year follow-up period, the IH development rate was 2.4% (1.0-3.4%) following minimally invasive surgery and 5.2% (4.0-6.4%) after open surgery (p<0.05). In the open surgery group only, IH was significantly associated with left-sided surgery and age (both p<0.05) [ 10 ]. In contrast, a recent study that included 157 patients with abdominal aortic aneurysm (AAA) showed that IH incidence after open repair of AAA was 46.5%, with a median time for IH development of 24.43 months. The risk factors identified were active or previous smoking, chronic kidney disease, and previous abdominal surgery [ 11 ]. IH can develop in young people and even in infant populations undergoing abdominal surgery. A recent study from the Netherlands analysed 2055 infants under three years old who had abdominal surgery between 1998 and 2018. One hundred and seven infants (5.2%) developed IH. However, the incidence was variable among the different primary surgical pathologies; necrotising enterocolitis (12%), gastroschisis (19%), and omphalocele (17%) had the highest incidences of IH. Wound infection, preterm birth, and history of stoma were all identified as significant risk factors for developing IH [ 12 ]. The high rate of open surgery and the occurrence of IH through smaller abdominal wall defects after minimal access surgery, including both laparoscopic and robotic procedures, contribute to the high prevalence of IH [ 13 ]. A meta-analysis of 24 trials including 3490 patients studied the rates of IH after laparoscopic versus open abdominal surgery. The results showed that the incidence of IH was significantly lower after totally laparoscopic procedures. However, laparoscopically assisted procedures did not significantly reduce IH compared to open surgery [ 14 ]. Nevertheless, port-site IH has been commonly reported [ 15 ]. Aetiology and molecular basis Hernias that occur in the early postoperative stage result from inadequate closure and faulty surgical technique. In the presence of wound infection, the local neutrophil inflammatory response and proteolytic enzymes disrupt the normal wound healing process by interrupting collagen synthesis [ 16 ]. However, the late-onset development of IH points to the disorganisation of the extracellular matrix (ECM) and the disequilibrium of collagen metabolism when collagen breakdown exceeds synthesis [ 17 - 19 ]. A comparative study used histology and immunofluorescence to examine hernial fascial ring tissue (HRT) and hernia sac tissue (HST) harvested from patients undergoing hernia surgery against normal fascia (FT) and peritoneum (PT). Compared to the control tissues, there were alterations in tissue architecture, fibroblast morphology, and ECM organisation in the IH tissues.
These findings support the existence of a heterogeneous fibroblast population at the laparotomy site that could contribute to the development of IH [ 20 ]. Collagen disorganisation and impaired fibroblast function compromise the abdominal wall's mechanical integrity, leading to IH [ 19 , 21 ]. These changes in mechanical properties initiate repair reactions within load-bearing tissues, like ligaments and tendons. Furthermore, the load-bearing cells in tendons and ligaments resist hypoxic and ischemic insults after injury. In contrast, abdominal muscles exposed to ischemia induce fibroblasts to generate atypical collagen, leading to an impaired ECM [ 18 , 22 ]. During wound healing, platelets and fibrin produce a provisional matrix, serving as a transient scaffold that attracts other critical components for effective wound restoration. Insufficient haemostasis with haematoma formation can disrupt this provisional matrix, resulting in IH [ 23 , 24 ]. The temporary matrix draws in inflammatory cells and signalling molecules, initiating the classical inflammatory pathways. When the inflammatory response is delayed or persists for a prolonged period, it culminates in the activation of pathogenic fibroblasts, ultimately causing disorganisation of the ECM [ 25 ]. The structural strength of connective tissues depends on the balance between collagen type I and type III. This is because the intermolecular bonds between collagen type I and type III contribute additional tissue strength [ 26 , 27 ]. Type I collagen is fibrous, strong, and thicker in diameter than type III collagen [ 28 ]. Tissues from IH patients showed a decreased collagen type I to type III ratio, resulting in a disorganised ECM [ 29 , 30 ]. Additionally, skin fibroblasts' increased secretion of type III collagen has been linked to the onset of IH, as collagen type III imparts weaker mechanical properties to the tissue [ 17 ]. On the other hand, fibroblasts play a pivotal role in ECM repair and wound healing. Another proposed mechanism for wound failure is the existence of abnormal fibroblasts secondary to reduced levels of growth factors or cell cycle arrest due to ischemia [ 17 , 31 ]. Xing et al. identified an atypical fibroblast population as the culprit behind the secretion of a modified collagen phenotype in the early failure of laparotomy wounds [ 32 ]. Moreover, these fibroblasts exhibit different chemotactic responses. Fibroblasts' phenotype selection is influenced by the reduction in the abdominal wall's mechanical strength [ 32 ]. Diaz et al. studied the changes in the fascia of IH patients [ 33 ]. A notable thinning of the ECM, reduced fibroblast density, minimal presence of immune cells, and dysmorphic fibroblasts exhibiting limited interaction with the surrounding matrix were observed. The fibroblasts of IH tissue exhibited a spindle-like bipolar shape with a decreased surface area and demonstrated a more pronounced vimentin network than actin expression. Examination under electron microscopy unveiled cytoplasmic vacuolation and swelling of the mitochondria. In response to fibronectin and collagen type I, these fibroblasts exhibited enhanced proliferation, reduced adhesion, and quicker migration. Additionally, the fibroblasts from IH tissue demonstrated heightened sensitivity to apoptosis and autophagy [ 33 ]. Proline hydroxylase and lysine hydroxylase catalyse collagen cross-linkage to enhance mechanical stability [ 34 ]. The structural tissues of IH patients show lower hydroxyproline content.
Furthermore, fibroblasts cannot transport hydroxyproline in these patients, reducing cross-linking and enhancing collagen solubility. This condition ultimately leads to mechanical failure [ 35 , 36 ]. Understanding the molecular basis of IH pathogenesis would enable early prediction and the adoption of preventative measures. The degradation products of collagen are released into the bloodstream during tissue remodelling after injury or surgical trauma. These fragments are called neo-epitopes and can be considered serum biomarkers for collagen turnover [ 37 - 39 ]. Henriksen et al. observed a higher turnover of collagen type IV when compared to collagen type V in IH patients preoperatively [ 39 ]. The serum concentration of N-terminal pro-peptide of type IV collagen 7S domain (P4NP-7S), which is a breakdown product of collagen type IV, was observed to be increased in IH patients and is considered to be linked to the development of IH [ 40 ]. These results imply that collagen degradation products have diagnostic significance. Moreover, alterations in the matrisome structure and the existence and growth of anomalous fibroblasts are causative factors in developing IH. Ischemia at the incision site induces the accumulation of a truncated ECM, resulting in prolonged wound healing. Additionally, the changes in the quantities and proportions of various collagen types are the primary underlying factor for the disorganisation of the ECM. Neo-epitope measurement is a promising diagnostic tool [ 41 ]. Risk factors IH is associated with a multitude of risk factors, encompassing male gender, smoking, and comorbidities (such as diabetes mellitus (DM), chronic obstructive pulmonary disease (COPD), and obesity). Furthermore, hypoalbuminemia, immunosuppression (e.g., via steroids and chemotherapy), exposure to radiotherapy, malignancy, connective tissue disorders, operative-related factors, and postoperative complications (e.g., intra-abdominal collections and abdominal sepsis) constitute additional risk factors [ 2 ]. A recent study examined the molecular mechanisms of IH and the association of these mechanisms with smoking, abdominal aortic aneurysms, obesity, diabetes mellitus, and diverticulitis [ 42 ]. The results showed that the levels of collagen I and III, matrix metalloproteinases, and tissue inhibitors of metalloproteinases are abnormal in the ECM of IH patients, and that this ECM disorganisation overlaps with these comorbid conditions. This could partly explain the association of IH with these comorbidities. Moreover, an elevated BMI is a known risk factor for local wound complications after surgery, which can eventually compromise the healing process and lead to IH [ 43 , 44 ]. A Swedish study included 1,621 patients who underwent vascular procedures or laparotomies for bowel procedures in 2010 [ 45 ]. They revealed that wound infection posed a risk factor for wound dehiscence and IH. Moreover, an elevated BMI (exceeding 30 kg/m2) was recognised as a risk factor for wound dehiscence [ 45 ]. The same set of risk factors has repeatedly been shown to be associated with IH, with the results reproduced in studies involving different surgical procedures for diverse surgical pathologies. These factors include obesity, midline incision site, previous abdominal surgery, re-operation through the same incision, wound infection, chronic kidney disease, smoking, prolonged cough, diabetes, jaundice, and urinary obstruction [ 9 , 11 , 46 , 47 ].
Clinical picture and presentation IH can manifest across a broad spectrum of disease presentations and progression, ranging from an asymptomatic state up to incarceration with strangulation and bowel perforation. Patients with IH may complain of nonspecific symptoms such as postprandial fullness, pain, and disfigurement due to large abdominal bulges, which in turn can lead to social exclusion [ 48 ]. Large IH can be associated with overlying skin changes, dyspnea, insomnia, and limited ability to work. Additionally, in the long term, it can negatively affect the statics of the musculoskeletal system and cause chronic spinal problems [ 48 , 49 ]. The most severe complication which may occur in the natural course of untreated IH is incarceration, which is estimated to affect 6 to 15% of cases of IH. Approximately 4% of patients need surgery to reduce pain, respiratory dysfunction, and discomfort and to prevent sinister complications [ 48 , 50 ]. A recent study of the Danish National Colorectal Cancer Group database included 2466 patients who had surgery for colonic cancer [ 51 ]. They assessed quality of life (QOL) in relation to the development of IH, with a median time from colonic cancer resection to QOL assessment of 9.9 years. They found that 215 (8.7%) patients developed IH; 156 (72.6%) underwent surgical repair. IH was significantly associated with reduced QOL in the domains of global health, physical functioning, role functioning, emotional functioning, and social functioning, as well as with increased symptoms on the pain, dyspnoea, and insomnia scales. Surgical repair was associated with improved QOL in the physical and role-functioning domains [ 51 ]. Strategy and options Given the diverse clinical presentation of IH, from an asymptomatic or minimally symptomatic condition up to incarceration, together with the associated comorbidities and an intricate surgical field, some would advocate a watchful waiting strategy for minimally symptomatic, uncomplicated IH [ 2 , 52 ]. However, the natural history is dynamic, and the hernial defect and sac expand with time [ 53 ]. This strategy could be the reason behind the increasing rate of complicated hernia with incarceration and bowel compromise [ 54 ]. Incarcerated large IH is among the top 10 causes of emergency laparotomies in the UK. In 2017, it represented 1.3% of all laparotomies according to the 4th NELA report; this proportion had doubled to 2.8% of all laparotomies performed in the UK in 2020, according to the 7th NELA report [ 55 , 56 ]. It has become evident that surgery produces a dramatic improvement in symptom control and overall QOL. A Swedish study showed that regardless of the surgical technique, all patients reported a quality of life comparable to that of the general population eight weeks after surgery. This improvement persisted at one year [ 57 ]. Moreover, the percentage of patients complaining of symptoms dropped from 81% preoperatively to 18% after surgery [ 57 ]. Additionally, surgery led to significant improvements in movement, fatigue, and visual analogue scale (VAS) pain score [ 57 ]. The same results have been reproduced in a more recent study, with improvements in pain, depression, and quality of life [ 58 ]. On the other hand, there has been evidence of some residual symptoms in many patients after surgical repair of IH [ 57 ]. In a recent study, 210 patients were included, and the median follow-up period was 3.2 years [ 59 ].
The patients attended the outpatient clinic for the collection of patient-reported outcomes (PROs). While 63% of the patients reported experiencing an improvement in the overall condition of their abdominal wall following surgery, an equal percentage reported postoperative symptoms, such as discomfort, pain, and bulging. Furthermore, 20% indicated that the overall status of their abdominal wall remained unchanged, and 17% reported a deterioration compared to their condition before surgical repair. As a result, in retrospect, 10% of the patients would choose not to undergo the operation. This study underscores the significance of effectively managing patient expectations and incorporating PROs in informed consent and decision-making [ 59 ]. Surgical techniques and postoperative care The open surgical technique with retromuscular (sublay) mesh placement has been the gold standard and the most popular technique [ 60 ]. A meta-analysis of 21 studies that included 5891 procedures showed that sublay placement of mesh was associated with the lowest risk for recurrence and surgical site infections (SSIs) [ 61 ]. However, with advances in minimally invasive techniques and training programs, minimal access IH repair techniques, including laparoscopic and robotic-assisted approaches, are gaining wide popularity. These minimally invasive approaches have the advantages of reduced postoperative morbidity, faster recovery, and fewer wound-related complications [ 62 ]. In a recent survey, general surgeons in Canada were asked to outline their typical surgical approach for a patient with a midline IH and a 10 x 6 cm fascial defect [ 63 ]. Among the 483 surgeons surveyed, 74% expressed their preference for conducting an open repair, while 18% favoured laparoscopic repair. Ninety-eight percent of the surgeons would opt for using mesh, 73% would undertake primary fascial closure, and 47% would consider a component separation as part of their surgical approach. The mesh was most frequently placed in the retrorectus/preperitoneal area (48%) and the intraperitoneal space (46%). They concluded that although nearly all surgeons conducting IH repairs would opt for permanent mesh, there was considerable diversity in their surgical approaches, choices of mesh placement, techniques for fascial closure, and the consideration of component separation. A meta-analysis that included nine RCTs showed that both open and laparoscopic techniques of IH repair have similar rates of reoperation and surgical complications and comparable recurrence rates [ 48 ]. Recent case series have demonstrated the feasibility of the robotic approach to IH repair, with results comparable to laparoscopic surgery [ 64 - 66 ]. However, the robotic approach's higher cost and longer operative time are not offset by a tangible clinical benefit [ 67 ]. Empirical postoperative care after IH repair typically includes a period of physical rest in addition to an abdominal binder (AB) or the application of a pressure dressing. The former is intended to avoid early recurrence, and the latter to help prevent seroma formation, reduce pain, and improve physical activity. Physical rest after hernia repair was first advised by Bassini following inguinal hernia repair [ 68 ]. However, with the evolution of surgical techniques [ 69 ], this practice has been challenged by large case series and RCTs [ 70 , 71 ].
Although the application of AB may reduce pain and improve physical function after major abdominal surgery [ 72 ], two dedicated studies did not prove any effect of AB on pain, movement, seroma formation, fatigue, general well-being, or quality of life after ventral and IH repair [ 73 , 74 ]. A recent survey conducted in Germany showed a significant variation in postoperative protocols after IH repair, including postoperative physical rest and the use of AB [ 49 ]. Additionally, the same study reviewed six relevant publications on open incisional herniorrhaphy. There was no correlation between the duration of physical rest, SSIs, and the recurrence rate [ 49 ]. Surgical outcomes and complications A broad spectrum of adverse outcomes could be expected in an elderly population with multiple comorbidities after such complex abdominal wall reconstruction procedures. However, SSIs and hernia recurrence are considered direct surgical complications and might need further interventions [ 75 , 76 ]. A study from the USA assessed the effect of these three modifiable comorbidities, obesity, diabetes, and smoking, on wound complications after IH repair [ 77 ]. In this study, 3908 patients were included, with 31% having no modifiable comorbidities, 49% having one modifiable comorbidity and 20% having two or more modifiable comorbidities. Compared to individuals without modifiable comorbidities, one modifiable comorbidity or two or more modifiable comorbidities significantly increased the likelihood of SSIs. Nevertheless, only patients with two or more modifiable comorbidities displayed significantly higher odds of surgical site complications necessitating interventions when contrasted with those without modifiable comorbidities and those with just one modifiable comorbidity. Patients who had all three comorbidities experienced a twofold increase in the odds of experiencing any wound-related complications, and obese patients with diabetes exhibited a comparable pattern [ 77 ]. Another USA study included 220,629 patients with elective incisional, inguinal, umbilical, or ventral hernia repair from 2011 to 2014. Out of these, 40446 (18.3%) were current smokers. Current smokers experienced an increased likelihood of reoperation, readmission, and death. Furthermore, smokers experienced an increased risk of postoperative complications (including pulmonary, infectious, and wound-related) [ 78 ]. Recurrent hernias are considered complex wall hernias, and 20% of all IH repair procedures involve a recurrent hernia [ 43 ]. Recurrence rates after IH repair range from 8.7 to 32%, depending on a host of factors, including obesity, use of mesh, setting of repairs, elective versus emergency, and hernial defect size [ 43 , 76 , 79 ]. The European Hernia Society and Americas Hernia Society guidelines clearly recommend smoking cessation for 4-6 weeks and weight loss to BMI below 35 kg/m2 before elective ventral hernia repair [ 80 ]. Cost and burden The significant complications and recurrence rates of IH management substantially burden healthcare provider facilities [ 81 ]. A French study examined the direct costs (related to hospital expenses) and indirect costs (of sick leave) associated with IH repair [ 82 ]. The study collected data from 51 public hospitals in France, involving 3239 IH repair procedures. The average overall cost for IH repair in France in 2011 was approximated to be 6451€. 
This cost varied, being 4731€ for unemployed patients and 10107€ for employed patients, whose indirect costs (5376€) were slightly higher than their direct costs. They estimated that a five percent reduction in the incidence of IH following abdominal surgery, achieved through measures like adopting the European Hernia Society Guidelines on abdominal wall incision closure or considering prophylactic mesh augmentation in high-risk patients, could lead to national cost savings of 4 million euros [ 82 ]. Another study from the USA projected that between 2012 and 2014, 89258 IH repair surgeries were performed annually, resulting in hospital costs of $6.3 billion [ 83 ]. They also revealed a strong association between nonelective IH repair and poorer outcomes, such as postoperative complications, prolonged hospital stay, and in-hospital mortality. Risk prediction and prophylactic measures From the above, it is evident that every effort should be made to help prevent the development of IH. A standardised fascial closure technique after abdominal surgery has reduced the incidence of IH [ 84 ]. Additionally, there has been a recent trend toward using prophylactic meshes or suture line reinforcement to prevent IH development after abdominal surgery [ 85 ]. In a recent open-label RCT [ 7 ], high-risk adult patients (aged over 18 years) who had undergone a midline laparotomy were followed up for three years. These patients were randomly assigned in a 1:1 ratio to receive either the reinforced tension line (RTL) technique or primary suture only (PSO). The study initially included 124 patients, with 51 from the RTL group and 53 from the PSO group completing the three-year follow-up. The incidence of IH was found to be higher in the PSO group (28.3%) compared to the RTL group (9.8%), and this difference was statistically significant (p = 0.016). Both groups exhibited similar rates of SSI, haematoma, seroma, and postoperative pain during the follow-up period. The STITCH trial was a double-blind, randomised controlled trial conducted in the surgical and gynaecological departments of ten hospitals in the Netherlands from October 2009 to March 2012. It included a total of 560 patients, who were randomly assigned to either the "large bites" group (284 patients) or the "small bites" group (276 patients) [ 86 ]. The groups were followed up until August 2013, with 545 (97%) patients completing the follow-up period. Patients in the "small bites" group underwent fascial closures with a greater number of suture stitches, a higher ratio of suture length to wound length, and a longer closure time compared to those with "large bites" closure. After one year of follow-up, it was observed that 57 out of the 277 patients (21%) in the "large bites" group and 35 out of 268 patients (13%) in the "small bites" group had developed IH (p = 0.0220). They concluded that the small bites technique should be the standard closure technique for midline incisions because it is more effective in preventing IH than the conventional large bites technique. In contrast, previous trials that examined the impact of techniques involving suture length or modifications in the size of sutures (large bites) did not yield significant results, indicating limited success in demonstrating their effectiveness.
A prospective, multicenter, double-blind, parallel-group, randomised controlled superiority trial investigated the influence of suture length on the development of IH during fascia closure [ 87 ]. They compared two suture techniques: one using short stitches (ranging from 5 to 8 mm, spaced every 5 mm) with a USP 2-0 single thread and an HR 26 mm needle, and the other using long stitches (10 mm apart) with a USP 1 double-loop suture and an HR 48 mm needle. Both techniques utilised a suture material based on poly-4-hydroxybutyrate (Monomax®). The trial involved 425 patients, who were randomised to either the "short stitch" group (n = 215 patients) or the "long stitch" group (n = 210 patients). After one year of follow-up, seven out of 210 patients (3.3%) in the "short stitch" group and 13 out of 204 patients (6.4%) in the "long stitch" group had developed IH; this difference was not statistically significant (p = 0.173). The initial findings at the one-year follow-up therefore indicated a relatively lower incidence of IH in the "short stitch" group, although the difference did not reach statistical significance. A more recent prospective, multicenter, single-blinded randomised controlled trial evaluated both the clinical and cost-effectiveness of the Hughes abdominal closure technique compared to the standard mass closure method following colorectal cancer procedures [ 88 ]. The study involved 802 adult patients who had undergone surgical resection for colorectal cancer at 28 different surgical sites in the UK. At the one-year follow-up, the incidence of IH, as determined through clinical examination, was 50 cases (14.8%) in the group that used the Hughes abdominal closure technique, compared to 57 cases (17.1%) in the standard mass closure group. However, this difference was not statistically significant (p = 0.4). In the second year, the incidence of IH was 78 cases (28.7%) in the Hughes abdominal closure group and 84 cases (31.8%) in the standard mass closure group, with no statistically significant difference (p = 0.43). Furthermore, the mean incremental cost for patients undergoing the Hughes abdominal closure was £616.45, which also did not reach statistical significance (p = 0.3580). Quality of life did not show a significant difference between the two groups. Several other trials have assessed the effectiveness of prophylactic mesh reinforcement after major abdominal procedures. An open-label RCT from Switzerland included 169 patients undergoing elective open abdominal surgery from 2011 to 2014, with follow-up at one and three years after surgery [ 89 ]. They included patients with two or more of the following risk factors: overweight or obesity, diagnosis of neoplastic disease, male sex, or history of a previous laparotomy. Patients were randomly assigned to prophylactic intraperitoneal mesh implantation or standard abdominal closure. Prophylactic intraperitoneal mesh implantation reduced the incidence of IH but increased early postoperative pain and reduced trunk extension. The same results have been reproduced in a more recent retrospective analysis of 309 patients who had open colorectal surgery.
Prophylactic mesh closure reduced the incidence of IH but was associated with a higher rate of SSIs [ 90 ]. A five-year follow-up [ 91 ] of the PRIMAAT trial [ 92 ] included 114 patients; thirty-three patients in the NO-MESH group (33/58, 56.9%) and 34 patients in the MESH group (34/56, 60.7%) were evaluated after five years. The cumulative incidence of IHs in the NO-MESH group was 32.9% after 24 months and 49.2% after 60 months. No IHs were diagnosed in the MESH group. In the NO-MESH group, 21.7% (5/23) underwent re-operation within five years due to an IH. Aiolfi et al. conducted a systematic review and meta-analysis of RCTs comparing prophylactic mesh reinforcement (PMR) to primary suture closure (PSC) in abdominal surgeries [ 93 ]. Their analysis included 14 RCTs involving a total of 2332 patients. Among these patients, 1280 (54.9%) underwent PMR, while 1052 (45.1%) had PSC, and the follow-up period ranged from 12 to 67 months. The results indicated that the incidence of IH was significantly lower in the PMR group compared to the PSC group, with rates of 13.4% and 27.5%, respectively. The estimated pooled relative risk (RR) for IH in the PMR group compared to the PSC group was 0.38 (p < 0.001). A subgroup analysis, categorised by mesh placement, revealed a risk reduction for all locations compared to PSC: preperitoneal (RR = 0.18; 95% CI 0.04-0.81), intraperitoneal (RR = 0.65; 95% CI 0.48-0.89), retro-muscular (RR = 0.47; 95% CI 0.24-0.92), and on-lay (RR = 0.24; 95% CI 0.12-0.51). Additionally, the risk of developing seromas was higher in the PMR group (RR = 2.05; p = 0.0008). They concluded that PMR was effective in reducing the risk of IH following elective midline laparotomy in comparison to primary suture closure but appeared to carry a higher postoperative risk of seroma formation. As these prophylactic measures are associated with an increased risk of pain, reduced mobility, and SSI, there is a need to develop risk stratification tools that identify patients at high risk of IH, in whom extra precautions like prophylactic meshes or suture line reinforcement are justified. A recent study assessed morphometric, linear, and volumetric measurements from preoperative abdominopelvic CT scans to predict IH development after colorectal surgery [ 94 ]. The study involved 212 patients, with 106 matched pairs. Out of the 117 features measured, 21 were able to distinguish between patients with IH and those without. Specifically, they identified three morphometric domains on routine preoperative CT imaging that were linked to the presence of IH: the widening of the rectus complex, an increase in visceral volume, and the atrophy of body wall skeletal muscles. Furthermore, a recent USA study included 29739 patients who had abdominal surgery from 2005 to 2016 [ 95 ]. They created eight surgery-specific predictive models for IH with excellent risk discrimination. These included models for colorectal and vascular surgery. The most prevalent risk factors that raised the probability of developing IH included a history of previous abdominal surgery and smoking. They also developed a risk calculator application for the preoperative estimation of IH risk after abdominal surgery.
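The published surgery-specific models and the risk calculator application are not specified in enough detail here to reproduce, but the general approach they describe, fitting a multivariable model to known predictors such as previous abdominal surgery, smoking, BMI, and incision type, can be sketched. The snippet below is a minimal, hypothetical illustration using logistic regression on synthetic data; the feature names, coefficients, and data are assumptions and do not reproduce any published calculator.

```python
# Minimal, hypothetical sketch of an incisional-hernia (IH) risk model.
# It does NOT reproduce any published calculator; the feature set, the
# synthetic data, and the model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic predictors loosely mirroring commonly reported risk factors.
smoking = rng.integers(0, 2, n)            # 1 = current smoker
prior_laparotomy = rng.integers(0, 2, n)   # 1 = previous abdominal surgery
bmi = rng.normal(28, 5, n)                 # body mass index
midline_incision = rng.integers(0, 2, n)   # 1 = midline incision

# Assumed latent risk, used only to generate synthetic outcomes.
logit = (-4.0 + 0.8 * smoking + 0.9 * prior_laparotomy
         + 0.05 * (bmi - 25) + 0.6 * midline_incision)
p = 1 / (1 + np.exp(-logit))
ih = rng.binomial(1, p)                    # synthetic IH outcome (0/1)

X = np.column_stack([smoking, prior_laparotomy, bmi, midline_incision])
model = LogisticRegression(max_iter=1000).fit(X, ih)

# Predicted IH probability for one hypothetical patient:
patient = np.array([[1, 1, 34.0, 1]])      # smoker, prior surgery, BMI 34, midline
print(model.predict_proba(patient)[0, 1])
```

In practice, such a model would be fitted to real registry data and assessed for discrimination and calibration before being offered as a preoperative risk calculator.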
CC BY
no
2024-01-15 23:43:48
Cureus.; 15(12):e50568
oa_package/54/89/PMC10788045.tar.gz
PMC10788046
38041390
Method A descriptive cross-sectional study was conducted in two phases. Phase 1 involved the cultural and linguistic adaptation of the ICE-EFFQ to European Portuguese, comprising a cycle of translation and back-translation, followed by linguistic screening and cultural adaptation performed through expert analysis and a cultural pre-test with 10 patients and family members from the target population. Phase 2 involved the psychometric testing of the Portuguese version of the instrument, comprising principal component analysis, confirmatory factor analysis, and reliability assessment. The Iceland-Expressive Family Functioning Questionnaire (ICE-EFFQ) The ICE-EFFQ ( Sveinbjarnardottir et al., 2012 ) is a self-report questionnaire developed and psychometrically assessed in three different studies by a group of Icelandic nurses who are experts in family nursing. The ICE-EFFQ measures the concept of expressive functioning in families that are dealing with the acute or chronic illness of their members and defines these families' expressive functioning as a multidimensional concept that covers the expression of emotions, collaboration and problem-solving, communication, and behavior ( Sveinbjarnardottir et al., 2012 ). It consists of 17 items and 4 factors, scored on a Likert-type scale ranging between 1 (almost never) and 5 (almost always). The ICE-EFFQ is based on the functional assessment category of the Calgary Family Assessment Model (CFAM) developed by Wright and Leahey (2013) , which reflects the response of families to acute or chronic illness of their members ( Sveinbjarnardottir et al., 2012 ; Wright & Leahey, 2013 ). It was found to be valid and reliable and to have good internal consistency, with adequate Cronbach's alpha values for the total scale (α = .922) and for all subscales: expressing emotions α = .737; collaboration and problem-solving α = .809; communication α = .829; and behavior α = .813 ( Sveinbjarnardottir et al., 2012 ). Phase I—Linguistic and Cultural Adaptation of the Instrument In the translation and cultural adaptation of the ICE-EFFQ into European Portuguese, the guidelines proposed by Sousa and Rojjanasrirat (2011) were adopted. The process was developed in five steps (see Figure 1 ). Step 1. Forward Translation of the Original Instrument Into European Portuguese The instrument's adaptation into European Portuguese started with the linguistic component, through a cycle of translation and back-translation ( Sousa & Rojjanasrirat, 2011 ). The original instrument in English was translated into Portuguese by two bilingual experts who are independent, certified, native Portuguese speakers with distinct backgrounds. The first expert was familiar with the terminology used in the field of health and with the instrument construct's contents in Portuguese. The second expert was familiar with the cultural and linguistic characteristics of the population and the Portuguese language, although having no knowledge of medical terminology or the instrument construct. Two provisional translations of the original instrument were produced, covering both medical terminology and the language usually spoken by the target population, while considering its cultural characteristics. Step 2. Comparison of the Two Translated Versions of the Instrument: Synthesis I Upon receipt of the two translations, a third-party, bilingual, independent expert was brought in who is a native Portuguese speaker and has good knowledge of the English language and of the instrument construct's contents, in both Portuguese and English.
This expert then compared the instructions, items, and response format of the two translated versions with one another and with the original instrument, looking for ambiguities and discrepancies in words, sentences, and meaning, and developed a synthesis of the translated versions (synthesis I). A first meeting of experts was then held, with the participation of the main research team (MRT) and the three bilingual experts, who analyzed the main differences between the two translated versions, synthesis I, and the original instrument. In addition, questions and differences related to semantics, concepts, and cultural aspects were discussed, and, by consensus, the preliminary initial translated version of the instrument into Portuguese was produced. Step 3. Blind Back-Translation of the Preliminary Initial Translated Version of the Instrument Into English The questionnaire was subsequently back-translated into the original language by two bilingual, independent, certified experts, who are native English speakers with the same characteristics as the experts in Step 1. None of the experts had prior knowledge of the instrument to be back-translated. Based on the preliminary initial translated version of the instrument into Portuguese, two independent back-translated versions in the original language were produced by the native English-speaking experts. Step 4. Comparison of the Two Back-Translated Versions of the Instrument: Synthesis II Next, a multidisciplinary committee consisting of the MRT and all bilingual and bicultural translators involved in the previous steps compared each of the two back-translations with the original instrument with respect to the similarity of the instructions, items, and response format, wording, phrasal structure, meaning, and relevance of sentences. All ambiguities and discrepancies regarding cultural meaning and idioms in words and sentences were discussed by the committee and resolved by consensus. A synthesis of the back-translated versions was then produced (synthesis II) and sent along with both back-translations to the first author of the original instrument, who provided insights on the construct of the instrument and clarified the meaning of some words and expressions. Minor linguistic adjustments to the preliminary initial translation of the instrument into Portuguese and to synthesis II in English were made. Two meetings followed with a panel of three experts in family nursing, mental health and psychiatric nursing, and community nursing, whose dual function was to assess, review, and consolidate the instructions, items, and answer format of the two back-translations and synthesis II for conceptual, semantic, and content equivalence, and to develop the pre-final version of the instrument in the target language for pilot testing and psychometric assessment. The expert panel carefully compared the two back-translations with one another and with the original version, synthesis II, and the preliminary initial translated version into Portuguese, regarding text format, phrasal and grammatical structure, colloquial parlance, language, similarity of meanings, cultural significance, and relevance. Minor wording changes were made to ensure cultural and conceptual equivalence, and all ambiguities and discrepancies were discussed and resolved by consensus.
The expert panel assessed the conceptual equivalence of the instructions, items, and response format by completing a dichotomous scale (clear/unclear), which obtained 100% agreement among the evaluators. This process resulted in the pre-final version of the instrument in Portuguese, which was called the “Questionário do Funcionamento Expressivo da Família (QFEF)” (“Questionnaire on the Expressive Family Functioning (QEFF)”). Step 5. Content Validity Assessment and Pilot Testing of the Pre-Final Version of the Instrument in Portuguese To analyze the content validity of the pre-final version of the instrument in Portuguese (QFEF), a panel of three family nursing experts with experience in academic and clinical practice was selected. To assess the relevance of each item for the underlying dimensions that the QFEF intends to measure, the 3 experts completed a content validity index (CVI) using a 4-point Likert-type scale, scored from 1—Not relevant to 4—Very relevant and succinct ( Polit & Beck, 2006 , 2017 ). The content validity of the instrument was estimated by assessing the content validity at the item level (I-CVI) and at the scale level (S-CVI). The values for I-CVI and S-CVI should not be less than 1.0 when there are fewer than 5 expert evaluators ( Polit & Beck, 2006 , 2017 ; Streiner et al., 2015 ). The content validity of the scale, calculated by the assumed mean method, yielded homogeneous results whose values evidence the strong relevance of the items in the Portuguese version of the ICE-EFFQ: mean I-CVI = 1.0; S-CVI/UA = 1.0; and S-CVI/Ave = 1.0 (an illustrative calculation of these indices is sketched after this passage). A cultural pre-test was performed with 10 participants taken from the target population, to strengthen the conceptual, semantic, and content equivalence of the translated instrument, to improve the phrasal structure of the instructions, items, and response format, and to allow for easy understanding by the target population ( Polit & Beck, 2004 , 2017 ). Each participant was invited to evaluate the clarity of the instrument's instructions, items, and response format on a dichotomous scale (clear/unclear) and to offer suggestions about how to rewrite the statements they thought were unclear ( Sousa & Rojjanasrirat, 2011 ). Agreement of 100% was obtained among the evaluators in the sample for the clarity of the instructions and items, and 90% for the clarity of the response format. Notably, 10% of the participants suggested changing the ascending order of the “generally” and “almost always” answers, to “almost always” and “generally,” or replacing the “almost always” option with “always.” The committee decided to keep the response format of the original authors, so no modifications were made to the instructions, items, or response format after the application of the pre-test. This step aimed to review and refine the items of the pre-final version of the instrument and to generate the final psychometric instrument, with adequate estimates of reliability, homogeneity, and validity, and with a stable factor structure and/or model fit ( Sousa & Rojjanasrirat, 2011 ). As a result of this step, the final European Portuguese version of the ICE-EFFQ was achieved. Phase II—Psychometric Testing This was followed by complete psychometric testing in a sample taken from the target population ( Streiner et al., 2015 ). Sample and Participants The target population of the study consisted of Portuguese families with adult members with depression, living in the Autonomous Region of Madeira (RAM).
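The content validity indices reported in Step 5 above (I-CVI, S-CVI/UA, and S-CVI/Ave) are simple agreement proportions and can be reproduced from a matrix of expert relevance ratings. The sketch below is illustrative only: the ratings matrix is hypothetical, and the computation follows the usual I-CVI/S-CVI definitions rather than any study-specific script.

```python
# Illustrative sketch of the content validity indices described above
# (I-CVI, S-CVI/UA, S-CVI/Ave). The ratings matrix is hypothetical:
# 3 experts x 5 items, each rated 1-4, where a rating of 3 or 4 counts as "relevant".
import numpy as np

ratings = np.array([
    [4, 4, 3, 4, 4],   # expert 1
    [4, 3, 4, 4, 4],   # expert 2
    [4, 4, 4, 3, 4],   # expert 3
])

relevant = ratings >= 3                 # dichotomise: relevant vs. not relevant
i_cvi = relevant.mean(axis=0)           # I-CVI per item (proportion of experts agreeing)
s_cvi_ave = i_cvi.mean()                # S-CVI/Ave: mean of the item-level CVIs
s_cvi_ua = (i_cvi == 1.0).mean()        # S-CVI/UA: share of items with universal agreement

print("I-CVI per item:", i_cvi)         # here every item reaches 1.0
print("S-CVI/Ave:", s_cvi_ave, "S-CVI/UA:", s_cvi_ua)
```

With the hypothetical ratings shown, all indices equal 1.0, which mirrors the pattern of universal agreement reported for the QFEF.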
The participants were recruited in the health centers and psychiatric inpatient facilities of the RAM after depressed patients were identified by mental health specialist nurses, general care nurses, and family doctors. The sample included depressed patients and their family members. Recruitment and data collection took place from May 2015 to February 2017. The inclusion criteria were as follows: Patients aged between 18 and 75 years; diagnosed with depression, according to the International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10), Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV ; American Psychiatric Association, 1994 ), and/or International Classification of Primary Care, 2nd edition (ICPC-2), with a score > 20 on the "Inventário de Avaliação Clínica da Depressão-IACLIDE" [Clinical Assessment Inventory of Depression] ( Serra, 1994 ), or <20 if there was a history of depressive symptomatology and a medical diagnosis of depressive disorder (according to the ICD-10, DSM-IV , and/or ICPC-2) within the year before the assessment. Family members aged 18 years or older, with or without blood ties to the depressed person, who were referred to by the member with depression as family and designated by that person to take part in the study. The exclusion criteria were as follows: depression secondary to another clinical condition; clinical history of schizophrenia or bipolar disorder; and active psychotic and/or delusional symptoms at the time of assessment. The data were collected by nurses specializing in mental and psychiatric health and by the investigator, and a non-random sample of 121 participants (7.12 per item of the scale) was formed, including 55 families with recent experience of depression. This was defined as a diagnosis of acute or chronic depression of 1 adult family member in the period of 1 year before the time of assessment ( Sveinbjarnardottir et al., 2012 ). The sample size should, as a rule, consider a certain number of subjects for each item of the scale; an acceptable subject-to-item ratio of at least 5 to 10 participants per item is suggested ( Marôco, 2021a ; Nunnally & Bernstein, 1994 ; Polit & Beck, 2017 ; Streiner et al., 2015 ). The average completion time of the Portuguese version of the ICE-EFFQ was 10 minutes, with a standard deviation of 7.5 minutes and minimum and maximum completion times of 3 and 57 minutes, respectively; 75% of the respondents completed the questionnaire within 11.5 minutes. Additional structured questions to provide information on sociodemographic and health variables (gender, age, marital status, education, family relationships, work situation, and psychiatric health status of the participants) were filled out by the nurses through a data collection interview at the nursing consultation. Ethical Considerations The study was developed according to the international ethical principles of scientific research embodied in the Helsinki Declaration ( World Medical Association, 2013 , 2018 ). Permission was requested and granted via e-mail from the authors of the ICE-EFFQ for the translation, cultural adaptation, and psychometric validation of the instrument in European Portuguese. The study was approved by the Ethics Committee of the Regional Health Service of RAM (No 51/2014). All adult participants diagnosed with depression gave their prior consent for the inclusion of their families in the study and designated family members for contact and recruitment.
The participants were informed about the study's objectives, purpose, and implications, about the confidentiality agreement and the anonymity granted by the ethical principles of research, and about the right to participate voluntarily and to withdraw at any time and without any consequences, should they wish to do so. All participants received a document containing information about the research subject and purpose and signed an informed consent form. The investigator's telephone number and e-mail were made available to the participants, to clarify any doubts during the investigation process. Construct Validity To measure the construct validity of the Portuguese version of the ICE-EFFQ, complete psychometric testing of the final version of the translated instrument was conducted. The instrument's metric properties were assessed through validity studies, involving exploratory factor analysis (EFA) by the principal component method, to determine dimensionality, and confirmatory factor analysis (CFA), to confirm the factor structure ( Marôco, 2021a , 2021b ; Pestana & Gageiro, 2014 ; Sousa & Rojjanasrirat, 2011 ). Exploratory Factor Analysis (EFA) In the EFA, the principal component analysis method was used to verify whether the variables of the Portuguese version reproduced the same factors as the original version of the ICE-EFFQ, with varimax orthogonal rotation used to determine the weight or loading of each item on the extracted factors ( Marôco, 2021b ; Pestana & Gageiro, 2014 ). To assess the adequacy of the data for factor analysis, we used the Kaiser–Meyer–Olkin (KMO) test for sampling adequacy and Bartlett's χ² test of sphericity for the factorability of the correlation matrices ( Marôco, 2021b ; Pestana & Gageiro, 2014 ). Sampling adequacy is considered good when the KMO value is between .80 and .90, and very good when this coefficient presents higher values ( Kaiser, 1974 ; Marôco, 2021b ; Pestana & Gageiro, 2014 ), and factorability is supported when Bartlett's sphericity test yields p < .001. As criteria for factor retention, the cutoff point, or saturation of items on each factor, was set at ≥.40, together with eigenvalues greater than 1 (Kaiser criterion), the total variance explained by the factors and by the instrument as a whole, and the "scree plot," or slope chart, proposed by Cattell ( Marôco, 2021b ; Pestana & Gageiro, 2014 ). Means, standard deviations, and communalities (h²) were assessed. In the extraction of factors, the underlying theoretical perspective and the results of the factor analysis were considered, with different factor structures tested. Confirmatory Factor Analysis (CFA) For the CFA, a covariance matrix was used, and maximum likelihood estimation was adopted for parameter estimation ( Marôco, 2021a , 2021b ; Pestana & Gageiro, 2014 ; Polit & Beck, 2017 ). The following statistical procedures were considered: (a) item sensitivity, evaluated by skewness ( Sk ≤ 3) and kurtosis ( Ku ≤ 7) ( Marôco, 2021a ); (b) quality of the global fit of the factorial model, evaluated using the following indices and reference values: the chi-square to degrees of freedom ratio (χ²/df), which should be less than 3 to indicate a good model fit ( Marôco, 2021a ; Soeken, 2010 ); the goodness-of-fit index (GFI), for which values greater than .90 are suitable, with GFI = 1 indicating a perfect fit ( Marôco, 2021a , 2021b ; Pestana & Gageiro, 2014 ; Soeken, 2010 ); the comparative fit index (CFI).
Values closer to 1 indicate better adjustment, and .90 is the reference to accept the model ( Hu & Bentler, 1999 ; Marôco, 2021a ); root mean square error of approximation (RMSEA) is a measure of the amount of error in the CFA ( Marôco, 2021a ). Values lower than .05 are indicative of a good fit between the proposed model and the observed matrix, although values lower than .08 are acceptable ( Hu & Bentler, 1999 ; Marôco, 2021a ; Pestana & Gageiro, 2014 ); root mean square residual (RMSR). The lower the RMSR (<.1), the better the adjustment, with RMSR = 0 indicating a perfect fit ( Marôco, 2021a , 2021b ); standardized root mean square residual (SRMR). A value of 0 indicates a perfect fit, values less than .10 are desired, and a value less than .08 is considered a good fit ( Hu & Bentler, 1999 ; Soeken, 2010 ); (c) quality of the local fit of the factorial model, assessed by factor weights ( λ ≥ .50) and the individual reliability of items (r² ≥ .25) ( Marôco, 2021a ); (d) composite reliability (CR), which estimates the internal consistency of the items relative to the factor, was assessed with the standardized Cronbach's alpha for each of the factors. CR ≥ .70 indicates appropriate construct reliability, although, for exploratory investigations, lower values may be acceptable ( Marôco, 2021a ); (e) convergent validity analysis, obtained by average variance extracted (AVE), assesses the degree to which the items that reflect a factor saturate strongly on that factor, that is, the extent to which the behavior of these items is explained by the factor. Values of AVE ≥ .50 indicate adequate convergent validity ( Marôco, 2021a ); and (f) discriminant validity (DV) analysis, assessed by comparing the AVE for each factor with the squared Pearson correlation. There is evidence of DV when the squared correlation between the factors is lower than the AVE for each factor ( Marôco, 2021a ). The refinement of the model fit was based on the modification indices indicated by Analysis of Moment Structures (AMOS) ("above 11; p < .001") and on theoretical considerations ( Marôco, 2021a ; Soeken, 2010 ). Reliability Assessment Reliability assessment, which verifies that the data are stable or consistent regardless of the context, the instrument, or the researcher ( Polit & Beck, 2017 ; Ribeiro, 2010 ; Streiner et al., 2015 ), involved determination of the internal consistency and temporal stability: (a) internal consistency, or homogeneity of items, was assessed to determine the degree to which the items of the scale were measuring the same construct. It was estimated using the following: Cronbach's alpha coefficient for each factor and the overall scale. A good internal consistency should exceed a .80 alpha ( Cronbach, 1990 ; Pestana & Gageiro, 2014 ). The reference values were those recommended by Pestana and Gageiro (2014) : >.9 very good; .8 to .9 good; .7 to .8 reasonable; .6 to .7 weak; <.6 unacceptable; Pearson's correlation coefficient of the various items, assuming a global score as a reference value, with correlations >.20 ( Marôco, 2021a ). This seeks to determine the degree of item differentiation, in the same sense as the global test, since an item is more discriminative the greater the discrepancy it reveals between the two groups (higher and lower values of the scale); (b) temporal stability, also understood as test–retest reliability, seeks to ascertain the stability of the instrument over time, that is, whether the instrument gives identical results when administered at different times. 
Test–retest reliability coefficients above .9 are considered high, and coefficients between .7 and .8 are acceptable for research tools ( Keszei et al., 2010 ; Streiner et al., 2015 ). To measure reliability, the questionnaire was administered to a subset of the sample ( n = 40), 3 weeks apart, in line with the interval between measurements recommended by the authors ( A. C. de Souza et al., 2017 ; Keszei et al., 2010 ; Streiner et al., 2015 ). Test–retest correlation was assessed by calculating the Pearson correlation coefficient and the intraclass correlation coefficient (ICC). The ICC is one of the most widely used tests to estimate the stability of continuous variables, since it takes measurement errors, such as variations over time and systematic differences, into account ( A. C. de Souza et al., 2017 ; Streiner et al., 2015 ; Streiner & Kottner, 2014 ).
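The factor-analytic steps just described were carried out in SPSS. Purely as an illustration of the procedure, the following minimal sketch shows how the same adequacy checks and a varimax-rotated four-factor extraction could be reproduced in Python with the factor_analyzer package; the DataFrame name `items` and the file name are hypothetical, and factor_analyzer's default extraction is not identical to SPSS's principal component routine, so the loadings may differ slightly.

```python
# Illustrative sketch only; the study itself used SPSS (Version 24).
# "qfef_items.csv" and the one-column-per-item layout are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("qfef_items.csv")  # 17 item scores, one row per participant

# Sampling adequacy (KMO) and factorability of the correlation matrix (Bartlett).
chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_overall = calculate_kmo(items)
print(f"Bartlett chi2 = {chi_square:.3f} (p = {p_value:.4f}); KMO = {kmo_overall:.3f}")

# Four-factor solution with varimax (orthogonal) rotation.
efa = FactorAnalyzer(n_factors=4, rotation="varimax")
efa.fit(items)

eigenvalues, _ = efa.get_eigenvalues()            # Kaiser criterion: retain eigenvalues > 1
loadings = pd.DataFrame(efa.loadings_, index=items.columns)   # item saturations (cutoff >= .40)
communalities = pd.Series(efa.get_communalities(), index=items.columns)
variance, prop_var, cum_var = efa.get_factor_variance()       # variance explained per factor
print(loadings.round(2))
print(communalities.round(2))
print(prop_var.round(3), cum_var.round(3))
```

Items loading at ≥ .40 on a factor, eigenvalues above 1, and the scree plot would then guide factor retention, as in the analysis reported in the Results below.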
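For reference, the global fit and construct-reliability quantities listed in (b), (d), and (e) are commonly expressed as follows. These are standard textbook formulations, not necessarily the exact expressions implemented in AMOS (and the authors report estimating CR through the standardized Cronbach's alpha). Here χ² and df refer to the tested model, χ²_b and df_b to the baseline (independence) model, N is the sample size, and λ_i are the standardized loadings of the k items of a factor:

```latex
\[
\frac{\chi^{2}}{df}, \qquad
\mathrm{RMSEA}=\sqrt{\frac{\max\!\bigl(\chi^{2}-df,\,0\bigr)}{df\,(N-1)}}, \qquad
\mathrm{CFI}=1-\frac{\max\!\bigl(\chi^{2}-df,\,0\bigr)}{\max\!\bigl(\chi^{2}_{b}-df_{b},\;\chi^{2}-df,\;0\bigr)}
\]

\[
\mathrm{CR}=\frac{\bigl(\sum_{i=1}^{k}\lambda_{i}\bigr)^{2}}
{\bigl(\sum_{i=1}^{k}\lambda_{i}\bigr)^{2}+\sum_{i=1}^{k}\bigl(1-\lambda_{i}^{2}\bigr)},
\qquad
\mathrm{AVE}=\frac{1}{k}\sum_{i=1}^{k}\lambda_{i}^{2}.
\]
```

Discriminant validity in this (Fornell–Larcker) sense then requires the AVE of each factor to exceed its squared correlation with every other factor, which is the comparison applied in the analysis reported below.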
Results Participants were recruited in the health centers (66.1%) and psychiatric hospitals (33.9%) of RAM, by mental health specialist nurses (61.2%), general nurses (32.1%), and family doctors (6.6%). In Table 1 , gender, age, civil status, education level, family relationship to the patient, and family type are summarized. In total, 121 subjects completed the questionnaire: 67 family members (55.4%), mostly sons/daughters, spouses, and parents (85%), and 54 patients (44.6%). Most were female (66.1%), the mean age of the participants was 44.9 years old ( SD = 14.5), and the majority ( n = 71, 58.7%) were less than 50 years old (Male n = 24, 58.5%; Female n = 47, 58.8%). The difference between the proportions of subjects in the different age groups was not statistically significant ( p = 1.000 > .05), which points to the homogeneity of the genders with respect to age group. Most participants (61.2%) had a lower level of education ( Eurostat, 2022 ; Eurydice, 2022a , 2022b ), 72.2% ( n = 88) belonged to the working population, and 24% ( n = 29) were unemployed. More than a half ( n = 68, 56.2%) had a stable source of income, while 20.7% ( n = 25) depended on social subsidies and 23.1% ( n = 28) had no source of income. Regarding current psychiatric diagnosis, most participants ( n = 68, 56.2%) had a psychiatric disorder, 50.4% ( n = 61) had depression, and less than a half had no psychiatric pathology; 46.2% ( n = 56) of the subjects showed a history of psychiatric pathology, 37.1% ( n = 45) had a history of depression, and almost 49% had no psychiatric antecedents (see Table 2 ). There were no missing values in the answers to the questionnaires, except in the descriptive data for the variables "Current psychiatric diagnosis" and "Psychiatric history," where six family members gave no information. Construct Validity Exploratory Factor Analysis (EFA) The KMO test, a measure of sampling adequacy, indicated that the sample was adequate (KMO = .834) to proceed to factor analysis ( Marôco, 2021b ; Pestana & Gageiro, 2014 ). The Bartlett sphericity test (χ² = 620.824 [df = 136]; p < .001) also indicated that the matrix was suitable for analysis ( Marôco, 2021b ). The 17 items were then subjected to EFA utilizing the principal components method with varimax orthogonal rotation, with latent roots greater than 1, using values equal to or greater than .40 as the criterion for item saturation ( Marôco, 2021b ). Table 3 shows item loadings, eigenvalues, variance accounted for by each factor, and communalities. The final factor solution allowed the extraction of 4 factors, which explained 55.6% of the total variance. Factor 1, communication, explained 17.8% of the total variance; factor 2, expression of emotions, explained 13.4% of the total variance; factor 3, problem-solving, explained 12.9% of the total variance; and factor 4, cooperation, explained 11.4% of the total variance. Furthermore, the scree plot supported the retention of the four factors, based on the inflection point of the curve. Slight changes emerged from the EFA in relation to the original scale: the order of the factors was changed; the factor "collaboration and problem-solving" was split into two different factors, "problem-solving" and "cooperation"; and the factor "behavior" was removed, since the items of this factor saturated on the remaining factors. 
Items 14, 15, and 17 were moved from the original factor 4, "behavior," to factor 1, "communication"; item 5 was moved from the original factor 2, "collaboration and problem-solving," to factor 2, "expression of emotions"; and items 4 and 16 were moved from the original factor 1, "expressing emotions," and factor 4, "behavior," respectively, to factor 3, "problem-solving." The proportion of each variable's variance explained by the factors, usually referred to as communality (h²), was above the .40 reference value ( Marôco, 2021b ) for all items, except for item 8. Confirmatory Factor Analysis (CFA) The four-factor solution of the questionnaire was assessed utilizing CFA ( Marôco, 2021a ). In the assessment of normality, the items showed response heterogeneity, with minimum and maximum indices ranging from 1 to 5. We assessed the sensitivity of each item, using skewness ( Sk ≤ 3) and kurtosis ( Ku ≤ 7). Results revealed skewness values oscillating, in absolute values, between –1.45 and –0.18, kurtosis values between –1.09 and 1.97, and a multivariate Mardia coefficient of 4.62 ( Cm ≤ 5), which indicates a normal distribution ( Marôco, 2021a ). The critical ratios, or z values, were all statistically significant ( p < .001), which led to the retention of all the items. As shown in Figure 2 , the trajectories of the items with the factors to which they correspond had high factor weights ( λ ≥ .50), except for item 1 (emotions 2 [Ee2 λ = .41]) and item 8 (cooperation 3 [Coop3 λ = .47]). The reference values may, in exploratory studies, be between .40 and .50, so it was not necessary to eliminate any items, and CFA could proceed. Individual reliability was adequate (r² ≥ .25) in the four subscales, except for the cooperation subscale in relation to item Coop3 (r² = .22). As shown in Table 4 , in the initial model, the global goodness-of-fit indices showed a good fit for χ²/df, CFI, RMSR, and SRMR and an acceptable fit for GFI and RMSEA. We then proceeded to refine the model, based on the modification index indicated by AMOS, which correlated errors 2 (commun11) and 3 (commun12). There were no multicollinearity problems, that is, no excessively high correlations between items that would indicate redundancy. We note that, after model refinement, the global fit remained unchanged for all global fit indices (see Table 4 ). Given that the high correlational values between factors were suggestive of the existence of a second-order factor, we proposed a hierarchical structure with a second-order factor, which we designated as expressive family functioning (EFF). Figure 3 illustrates the model obtained. Analysis shows the following: factor 1—communication explained 73% of global factor 5—EFF; factor 2—expression of emotions explained 50% of global factor 5—EFF; factor 3—problem-solving explained 76% of global factor 5—EFF; and factor 4—cooperation explained 62% of global factor 5—EFF. The lowest correlation with the global factor was observed for factor 2 ( r = .71) and the highest for factor 3 ( r = .87). As can be observed in Table 4 , the global fit indices remained unchanged in the second-order model, compared with those recorded in the initial model and the refined model. Table 5 presents CR, AVE, and DV. Factors 1 and 3 had adequate internal consistency (CR > .70), whereas factors 2 and 4 had poor internal consistency (CR < .70). The AVE showed that all four factors had values lower than recommended (≥.50), so none of the factors reached adequate convergent validity. 
The DV was only evident between factor 2 and factor 3 and between factor 2 and factor 4, since the squared correlational values were lower than the AVE. In addition, the stratified coefficient was high (.91), with .38 AVE ( Marôco, 2021a ). Table 6 presents the convergent/divergent validity of the items. All items had convergent validity with the corresponding factor, since each item's correlation was highest with the subscale to which it belonged, followed by the correlation of the item with the total scale. Reliability Assessment The analysis of the scale's reliability, as shown in Table 7, revealed that the mean scores were all above the midpoint, oscillating from 3.17 ± 1.35 in item 10 to 4.29 ± 0.92 in item 7. For the corrected item-total correlations, all items exceeded the .20 reference value ( Pestana & Gageiro, 2014 ), so none were excluded. The item with the lowest corrected item-total correlation ( r = .23) was item 1, and the one with the highest correlation ( r = .60) was item 15. Cronbach's alpha if the item was deleted was ≥.85 for every item, with a global alpha of .86. The test–retest correlation showed values of temporal stability for the global scale of r = .75 ( p < .001), and for the subscales: communication r = .66 ( p < .001); expression of emotions r = .55 ( p < .001); problem-solving r = .66 ( p < .001); and cooperation r = .64 ( p < .001). The calculation of the ICC indicated, as shown in Table 8, that the precision of the instrument's estimates was highly significant ( p < .001), for the total scale and for the four subscales, and that the values for temporal stability were all satisfactory within the 95% confidence interval estimates ( A. C. de Souza et al., 2017 ; Polit & Beck, 2017 ). Based on the final version of the scale, as seen in Table 9, the analysis was completed with the mean scores of the global scale and the internal consistency of each subscale and its remaining items. The mean scores for all the items, for the four factors, and for the global scale were all above the midpoint. In factor 1—communication, the mean values showed homogeneity in the responses given to the different items, since the scores obtained ranged from 3.17 ± 1.35 in item 10 to 4.10 ± 0.90 in item 14. Cronbach's alpha coefficients per item indicated reasonable internal consistency if the item was eliminated; the lowest value ( α = .75) was found for item 12 and the highest ( α = .78) for item 14. The internal consistency obtained for the communication subscale was also reasonable ( α = .79). Item 12 was the item that correlated the most with communication ( r = .58), with an explained variance of 39.9%, whereas item 14 ( r = .45) correlated the least with factor 1, with a percentage of explained variance of 26.4%. Regarding factor 2—expression of emotions, the mean values showed homogeneity in the responses given to the different items, since the scores ranged from 3.87 ± 1.16 in item 5 to 3.98 (±1.17 in item 1 and ±1.04 in item 2). Cronbach's alpha if the item was eliminated revealed weak internal consistency for items 1 ( α = .68), 3 ( α = .63), and 5 ( α = .63) and unacceptable internal consistency for item 2 ( α = .49); the subscale alpha was weak ( α = .68). The item that correlated the most with the overall results of factor 2 was item 2 ( r = .65), and the lowest correlation was for item 1 ( r = .36), with percentages of explained variance of 42.4% and 17%, respectively. 
In the analysis of the results for factor 3—problem-solving, the mean indices oscillated between 3.36 ± 1.18 in item 6 and 3.79 ± 1.20 in item 4. Cronbach's alpha if the item was eliminated indicated weak internal consistency for the items; the lowest value ( α = .60) was for item 6 and the highest ( α = .66) for item 4. The subscale featured a total alpha of .71, indicative of reasonable internal consistency. Item 6 was the item that correlated the most with problem-solving ( r = .55), with an explained variance of 30.3%, whereas item 4 ( r = .50) correlated the least with factor 3, with a percentage of explained variance of 25.2%. Regarding factor 4—cooperation, homogeneity was observed in the responses, with mean values ranging between 3.91 ± 1.16 for item 9 and 4.29 ± 0.92 for item 7. Cronbach's alpha if the item was eliminated indicated unacceptable internal consistency for items 7 ( α = .42) and 9 ( α = .43) and weak internal consistency for item 8 ( α = .64); the subscale alpha was also weak ( α = .61). The item that correlated the most with the overall results of factor 4 was item 7 ( r = .48), and the one with the lowest correlation was item 8 ( r = .32), with percentages of explained variance of 25.1% and 10%, respectively. Factor analysis of the scale was then completed by presenting the Pearson correlation matrix between the several factors and the global scale. As shown in Table 10, the different subscales showed moderate to high correlations with the total scale, explaining 49.3% to 76.7% of the total variance. The correlation matrix between the four factors and the global scale revealed that the correlations were positive and significant ( p < .001), indicating that an increase or decrease in the indices of one variable corresponded to an increase or decrease in the variable with which it correlated. The lowest correlational value between subscales was between factor 4 and factor 2 ( r = .38), with an explained variance of 14.7%, and the highest correlational value was between factor 3 and factor 1 ( r = .54), with an explained variance of 29.2%.
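The reliability coefficients summarized above (Cronbach's alpha, corrected item-total correlations, test–retest Pearson correlations, and the ICC) were obtained with SPSS. As a purely illustrative sketch, the same quantities could be computed in Python with the pingouin and SciPy libraries; the data files, DataFrame names, and column names below are hypothetical.

```python
# Illustrative sketch only; the study used SPSS. File and column names are hypothetical.
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

# Final item responses: one row per participant, one column per item.
scale_items = pd.read_csv("qfef_final_items.csv")

# Global Cronbach's alpha (per-subscale alphas would use only that subscale's columns).
alpha, ci95 = pg.cronbach_alpha(data=scale_items)
print(f"global alpha = {alpha:.2f}, 95% CI = {ci95}")

# Corrected item-total correlation for one item (the item is removed from the total);
# the squared correlation is the proportion of variance shared with the total.
item = "item10"
corrected_total = scale_items.drop(columns=item).sum(axis=1)
r_item_total, _ = pearsonr(scale_items[item], corrected_total)
print(f"{item}: r = {r_item_total:.2f}, shared variance = {r_item_total ** 2:.1%}")

# Test-retest (retest subsample): Pearson r between total scores at the two time
# points, plus an ICC computed from a long-format table with columns
# "participant", "time" (values "test"/"retest"), and "score".
totals = pd.read_csv("qfef_retest_totals.csv")
wide = totals.pivot(index="participant", columns="time", values="score")
r_tt, p_tt = pearsonr(wide["test"], wide["retest"])
icc = pg.intraclass_corr(data=totals, targets="participant",
                         raters="time", ratings="score")
print(f"test-retest r = {r_tt:.2f} (p = {p_tt:.3f})")
print(icc[["Type", "ICC", "CI95%"]])
```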
Discussion Nurses who work with families dealing with acute or chronic illnesses need to know the effects of their interventions on families. Valid and reliable instruments that can measure family functioning, including expressive functioning, therapeutic change, and the results of nursing interventions on families, are needed in both clinical and research contexts ( Chesla, 2010 ; Gisladottir & Svavarsdottir, 2016 ; Sveinbjarnardottir et al., 2012 ). This study aimed to adapt the ICE-EFFQ ( Sveinbjarnardottir et al., 2012 ) for European Portuguese and to assess the psychometric properties of the Portuguese version, "Questionário de Funcionamento Expressivo da Família (QFEF)" (Questionnaire on the Expressive Family Functioning [QFEF]). To the best of our knowledge, this is the first valid and reliable instrument in the Portuguese context designed to measure expressive family functioning in families affected by a member's acute or chronic mental illness. Regarding the translation and adaptation process, the use of a rigorous and systematic method developed in five steps ( Sousa & Rojjanasrirat, 2011 ), with the involvement of expert translators and the use of an expert panel from distinct areas of nursing (family nursing, mental health and psychiatric nursing, and community nursing), ensured the content equivalence of the instrument, improved the adaptation of the instrument to the Portuguese context, and provided the content validity of the Portuguese version. An excellent agreement among experts on the items' relevance for measuring the construct and optimal values for the CVI at the item level and at the scale level ( Polit & Beck, 2006 , 2017 ; Streiner et al., 2015 ) were achieved, evidencing a strong relevance of the items in the Portuguese version. The pre-testing strengthened the conceptual, semantic, and content equivalency ( Polit & Beck, 2004 , 2017 ) of the pre-final version of the translated instrument, ensuring the QFEF's face validity and generating the final European Portuguese version of the ICE-EFFQ for psychometric testing. The results of psychometric testing were obtained from 54 (44.6%) patients and 67 (55.4%) family members, mostly female (66.1%), with a minimum age of 18 years and a maximum of 75 years, and an average age of 44.9 years ( SD = 14.5) for the total sample. In comparison with the Icelandic ( Sveinbjarnardottir et al., 2012 ) and Danish ( Konradsen et al., 2018 ) studies, there was a predominance of females and a similar mean age, except for the Danish sample, where the mean age (61; SD = 14.1) was much higher. As in the original Icelandic study ( Sveinbjarnardottir et al., 2012 ), the family members related to the patient were mostly sons/daughters, spouses, and parents (85%), reflecting the strong Portuguese tradition of the closest relatives being caregivers of sick family members. The distribution of responses on family functioning in this study revealed high scores, which are considered to represent good family functioning. The mean indices were well centered and all above the midpoint of the rating scale, meaning that, on average, families were functioning well in all domains of family functioning. Although no score in the Portuguese version suggests optimal family functioning, the cooperation subscale was the one that scored the highest, pointing to very good functioning in this dimension of family functioning. 
The overall high scores may indicate that Madeiran families function well even when they must deal with an acute or chronic family illness, such as depression. Comparable results were found in the Danish study ( Konradsen et al., 2018 ), despite the differences between the therapeutic settings and the Danish and Portuguese cultures. Our results are not in line with those of Daches et al. (2018) , Pérez et al. (2018) , and Sell et al. (2021) , who reported an impaired family functioning, in families with a member with mental illness. This divergence of results may be related to differences in culture, sample composition, perception bias ( de Los Reyes et al., 2008 ; Haack et al., 2017 ; Sell et al., 2021 ), or the use of different assessment tools. Furthermore, as stated by Sell et al. (2021) , Portuguese family members may have tended to overestimate family functioning and conceal family problems, to protect the member with mental illness. The final factor solution derived from the EFA, with all factorial loads above .4 ( Marôco, 2021b ), confirmed the 4-factor structure of the original instrument ( Sveinbjarnardottir et al., 2012 ), with a lower explained variance (55.6%) than the original scale (60.3%). In this factor solution, six items were found to load onto factors other than those on which they had saturated on the original scale (ICE-EFFQ), resulting in the splitting of the “problem-solving and collaboration” factor into two factors, in the removal of the “behavior” factor, and in the Portuguese version being a modified version of the ICE-EFFQ. Such differences may be due to the following reasons: First, cultural characteristics of Icelandic and Portuguese populations, may have influenced the response of the participants to the questionnaire, in each context. The authors of the original instrument ( Sveinbjarnardottir et al., 2012 ) and the authors of the Danish study ( Konradsen et al., 2018 ) addressed this issue, pointing out that the cultural context might influence family functioning. Second, in the Icelandic study, the sample consisted of family members, while, in the Portuguese study, as proposed by Chesla (2010) , the sample was made up of patients and family members. As stated in the literature, the validity of an instrument should be thought of as a characteristic of the instrument itself when applied to a sample. That is, the structure of a scale can be directly influenced by the characteristics of the population under study. Third, the ICE-EFFQ ( Sveinbjarnardottir et al., 2012 ) was tested on family members of patients with various kinds of diseases (medical, surgical, pediatric, geriatric, and psychiatric), while the QFEF was assessed in a more restricted population of patients with depression and their family members. We believe that the cognitive and functional losses and emotional changes associated with depression ( de Almeida, 2018 ; World Health Organization, 2017 , 2021 ) may have influenced the interpretation and response of depressed individuals to the questionnaire ( Daches et al., 2018 ; Pérez et al., 2018 ; Sell et al., 2021 ). Furthermore, although the sample composition was 44.6% of patients, we found that 50.4% of the participants had depression, and 56.2% had a mental illness. 
The presence of such a health condition, usually associated with abnormal thoughts, perceptions, emotions, behaviors, and relationships with others ( World Health Organization, 2019 ), in such a high percentage of participants may have influenced the results of the Portuguese questionnaire, justifying the differences found in comparison with the Icelandic instrument ( Sveinbjarnardottir et al., 2012 ). In addition, there are concerns about the support to be provided to these families, since we found that, beyond the patients, 20.9% of the family members were suffering from a psychiatric illness. It is suggested that more studies with larger samples including patients and family members should be conducted in the clinical context of depression. Correlational studies should also be done to clarify to what extent the presence of mental illness may influence the scale results. All items of the QFEF presented good sensitivity, with an acceptable range of skewness ( Sk < 3) and kurtosis ( Ku < 7) values ( Marôco, 2021a ). All items had high factor weights ( λ ≥ .50) with the factors to which they corresponded, except items 1 and 8. In a more conservative analysis, items with saturations lower than .50 should be eliminated. However, the decision was made to keep them because the study was preliminary. Furthermore, a factor must have at least 3 items; if this rule were followed and item 8 were eliminated from factor 4, this factor would also have to be eliminated. Individual reliability was also adequate (r² ≥ .25) in the four subscales, showing the relevance of the factors to predict the items. CFA showed acceptable (GFI and RMSEA) to good (χ²/df, CFI, RMSR, and SRMR) values for the goodness-of-fit indices. The RMSEA is a measure of the amount of error in the CFA that should be minimal ( Marôco, 2021a ). Values <.05 show a good fit between the proposed model and the observed matrix, while values <.08 indicate an acceptable fit ( Marôco, 2021a ; Pestana & Gageiro, 2014 ). The RMSEA was found to have a perfect fit (.00) in the ICE-EFFQ ( Sveinbjarnardottir et al., 2012 ) and a poor fit (.11) in the Danish version ( Konradsen et al., 2018 ). The differences between our results and those from the Icelandic and Danish instruments might be explained by differences in sample composition and in cultural and clinical settings. The GFI showed an acceptable fit (.87), as in the Danish version (.80). Such results might be related to the small sample size, since GFI tends to increase with sample size and the number of model variables ( Marôco, 2021a , 2021b ; Pestana & Gageiro, 2014 ; Soeken, 2010 ). Further studies with a larger sample size are recommended for greater sensitivity. All fit indices showed values within the cutoff points, indicating that the four-factor model fits the data and that there is construct validity. The analysis of CR showed indices >.9 for the total scale and a range between .628 and .788 for the four domains. The CR estimates the internal consistency of the items relative to the factor, indicating the degree to which these items are consistently manifestations of the latent factor. A suitable construct reliability has a cutoff point of .7, although lower values are acceptable for exploratory investigations ( Marôco, 2021a ). It follows that, according to the principles of internal consistency, the Portuguese version of the ICE-EFFQ exhibits measurement reliability. 
The convergent validity of the 4 factors, estimated by the AVE (AVE F1 = .35; AVE F2 = .43; AVE F3 = .48; AVE F4 = .37), was lower than the reference value (≥.50). Therefore, none of the factors showed adequate convergent validity. There was evidence of DV between factor 2 (expression of emotions) and factor 3 (problem-solving), and between factor 2 (expression of emotions) and factor 4 (cooperation), since the squared correlational values between these factors were lower than the AVE, meaning that these factors measure different facets of family expressive functioning. It should be noted that, for the global scale, the stratified coefficient was high (.91), with .38 AVE. Based on these results, the instrument is appropriate for this sample, so it may be a valuable resource for the study of family expressive functioning in the Portuguese population. The adjustment of the 4-factor model was acceptable, with factor weights greater than the reference value (.40) and adjustment quality indices supporting the four-dimension structure of the modified Portuguese version: factor 1—communication (7 items); factor 2—expression of emotions (4 items); factor 3—problem-solving (3 items); and factor 4—cooperation (3 items). The QFEF reliability achieved good internal consistency for the global scale ( α = .86) and acceptable internal consistency for all four factors ( Pestana & Gageiro, 2014 ). Compared with the ICE-EFFQ ( Sveinbjarnardottir et al., 2012 ) and the Danish version of the instrument ( Konradsen et al., 2018 ), the QFEF presented lower values of Cronbach's alpha for the total scale and for the four subscales. This may be related to the small sample size, or to cultural differences, as suggested by the authors of the original questionnaire ( Sveinbjarnardottir et al., 2012 ). In that sense, more testing with larger samples and among participants from distinct cultural backgrounds and with different family illnesses is required, since cross-cultural studies are essential to strengthening the psychometric properties of a scale ( Alfaro-Díaz et al., 2020 ; Rodrigues et al., 2021 ). The test–retest correlation displayed good values of temporal stability for the global scale and satisfactory values for all subscales. Test–retest reliability analysis showed that the instrument is stable over time, meaning that it can yield a similar score when administered under the same conditions, to the same participants, at separate times. The internal consistency and test–retest reliability showed good reliability of the QFEF in the context of its application, supporting the instrument's construct validity and confirming that it is a reliable measure ( Polit & Yang, 2015 ). The QFEF presents adequate factor validity, sensitivity, and reliability, is available in Portuguese and English, and has the potential to measure family expressive functioning, before and after family nursing interventions, when family members face an acute or chronic mental illness of a close relative. The results of this study support the validity and reliability of the Portuguese version of the ICE-EFFQ, warranting its usefulness and suitability for Portuguese health care settings. Limitations One limitation of this study might be the small sample size and the non-randomization of the sample, associated with difficulties in selecting participants. A selection bias could have occurred, since participants were intentionally selected, albeit according to the inclusion and exclusion criteria. 
Considering that the questionnaire has been psychometrically tested in a specific population of depressed patients and their family members, the validity and reliability of the Portuguese version of the ICE-EFFQ were achieved for the sample under study and cannot be extrapolated to different samples or clinical settings. Further empirical testing correlating the QFEF with other measures could also have strengthened the validity and reliability of the instrument. It is worth noting that correlations between males and females, patients and family members, depressed and non-depressed participants, and participants older and younger than 50 years of age were not assessed. Therefore, it will be of interest for future studies to consider the evaluation of these correlations. Strengths Regarding the identified strengths, it is highlighted that the QFEF is the first sensitive, valid, and reliable instrument available in European Portuguese to assess expressive family functioning in the context of its application. It derives from the ICE-EFFQ ( Sveinbjarnardottir et al., 2012 ), an Icelandic questionnaire grounded in a conceptual framework of family nursing, the CFAM ( Wright & Leahey, 2013 ), whose theoretical foundations are deeply rooted in many years of clinical experience with families facing acute or chronic illnesses of their relatives. Furthermore, although there are instruments for assessing family functioning that have been psychometrically tested and applied by nurses, they focus on family functioning in healthy families and are based on conceptual frameworks from scientific areas other than nursing (sociology and other health sciences). The QFEF has good reproducibility, and a major strength is its ability to assess family functioning before and after family nursing interventions. It is a particularly useful and easy-to-apply instrument, which measures the therapeutic change and the outcomes of nursing interventions on families. This questionnaire is not intended to determine whether families are emotionally healthy or not and does not classify families as functional or dysfunctional, although it assesses expressive family functioning and the dimensions considered essential for healthy family functioning. The QFEF fills a gap in the availability of instruments to assess family functioning in Portugal and will be valuable in the clinical activity of nurses specializing in mental health and psychiatric nursing, and of all nurses whose professional practice involves family intervention, regardless of the context.
Conclusion Family functioning is a concept that has been widely studied in social science research, health care, and nursing sciences. Valid and reliable instruments from the areas of sociology and the health sciences have been applied by nurses to assess family functioning in healthy families. However, to respond to the needs of families faced with illness experiences, it is essential that, in the clinical context, there be valid and reliable instruments that assess family functioning, therapeutic change, and the effect of nursing interventions on families ( Chesla, 2010 ; Mattila et al., 2009 ; Sveinbjarnardottir et al., 2012 ). The QFEF, the European Portuguese version of the ICE-EFFQ ( Sveinbjarnardottir et al., 2012 ), is a sensitive, valid, and reliable instrument, available in Portuguese and English, to assess expressive family functioning in families facing mental illness of their members. It was rigorously translated, culturally adapted, and psychometrically assessed with robust statistical tests, which confirmed its validity and reliability in the context of Portuguese families with depressed members. Content validity is well established, exhibiting excellent agreement among experts and optimal values for the CVI. EFA confirmed the original four-factor structure ( Sveinbjarnardottir et al., 2012 ), although with slight differences in item structure, which resulted in the Portuguese version of the ICE-EFFQ being a modified version of the original instrument. CFA showed that there is construct validity, with acceptable to good values of the goodness-of-fit indices ( Marôco, 2021a ). Internal consistency and test–retest reliability also showed good reliability of the QFEF. The QFEF is an easy-to-apply self-report questionnaire, takes approximately 10 minutes to complete, and can be applied in psychiatric hospitals and community health centers. This is a useful instrument for nurse researchers, educators, managers, practice nurses, and other health professionals working with families facing mental illness of their members, to measure family functioning, evaluate therapeutic change, assess the outcomes of family interventions, improve nursing practice with families, and foster the nurses' spirit of scientific curiosity, contributing to supporting the emerging translational research. The QFEF is a powerful therapeutic and research instrument, with a large potential of application in a wide range of clinical settings. We strongly believe that the QFEF is a valuable instrument for clinical practice, as it provides a standardization of procedures and a methodological working guideline for the mental health professionals who intervene with families dealing with a member's mental illness. The practical applicability of this instrument will add value to the professional performance of health care professionals working in this area. It will also contribute to the production of health indicators (useful for management, research, education, and development), promoting the assurance of continuous quality improvement in the mental health care provided to families. With the application of this instrument, it is possible to know how families are functioning at a given time and to intervene in the overall family functioning as well as in its most vulnerable components. The QFEF will help to tailor interventions according to each family's specific needs, with the purpose of softening family suffering and improving, promoting, and/or maintaining good family functioning as well as family mental health. 
Altogether, it enhances the visibility of mental health nurses' therapeutic role in their intervention with families. Further studies with the QFEF are suggested, with larger samples, greater diversity of family health problems, and in different clinical settings, to ensure and strengthen the validity and reliability of the instrument and expand its use. Therefore, the cultural adaptation and validation of this instrument in European Portuguese leaves open to nurses and other health professionals the possibility, and the scientific curiosity, of applying, validating, and disseminating it in other clinical and cultural contexts. The applicability of this instrument in families with adult members with depression, and in other family illness contexts, may constitute an added value for better family mental health and for better general family health.
A family’s experience of mental illness can change the family’s functioning. In clinical contexts, valid and reliable instruments that assess family functioning, therapeutic changes, and the effects of family nursing interventions are needed. This study focuses on the linguistic and cultural adaptation of the Iceland-Expressive Family Functioning Questionnaire (ICE-EFFQ) to European Portuguese and examines the psychometric properties of this instrument. A non-random sample of 121 Portuguese depressed patients and their relatives completed the questionnaire. Principal components analysis extracted 4 factors, explaining 55.58% of the total variance. Confirmatory factor analysis revealed acceptable adjustment quality indices. Cronbach’s alpha coefficient was adequate for the global scale α = .86 and for the 4 subscales: communication α = .79, expression of emotions α = .68, problem-solving α = .71, and cooperation α = .61. The Portuguese version of ICE-EFFQ is a sensitive, valid, and reliable instrument for use with Portuguese families with adult members with depression and can be valuable in assessing these families’ expressive functioning, before and after intervention.
Mental illness is a family affair ( Price et al., 2021 ; Wright & Bell, 2021 ; Wright & Leahey, 2013 ), which may cause changes in family functioning ( MacFarlane, 2003 ; Marshall & Harper-Jaques, 2008 ; Sell et al., 2021 ; Sveinbjarnardottir & Svavarsdottir, 2019 ; Yorganson & Stott, 2017 ). The illness process affects both instrumental and expressive functioning in families ( Hill et al., 2022 ; Kassem et al., 2022 ; Wright & Leahey, 2013 ; Yorganson & Stott, 2017 ). Depression has been shown to have a significant effect on family functioning, specifically on communication, affective involvement, and problem-solving, as well as on overall family functioning ( J. de Souza et al., 2011 ; Dibenedetti et al., 2012 ; Park & Jung, 2019 ; Sveinbjarnardottir & Svavarsdottir, 2019 ). Family functioning is a concept that encompasses the dynamics and relationships within a family system ( Sell et al., 2021 ; Skinner et al., 2000 ) and is focused on the collective health of the family ( Daches et al., 2018 ). According to the Calgary Family Assessment Model (CFAM) by Wright and Leahey (2013) , family functioning includes both instrumental and expressive aspects. The instrumental aspects refer to daily life activities, such as dressing, eating, and hygiene, while the expressive aspects refer to communication, relationships, and problem-solving between family members ( Wright & Leahey, 2013 ). These expressive aspects include emotional and verbal communication, power dynamics, beliefs, and connections. Nurses must be educated to include families in the care of their ill family member ( Chesla, 2010 ; Duhamel et al., 2015 ; Naef et al., 2021 ) and to understand the importance of expressive functioning in evaluating family functioning ( Wright & Leahey, 2013 ). The assessment of family functioning focuses on the patterns of interaction between family members and considers each member's behavior in the context of the family system ( Papadopoulos, 1995 ; Wright & Leahey, 2013 ). The family is viewed as a system of interacting members who influence and define each other within the family context. Research has demonstrated the benefits of family interventions for both patients and family members ( Chesla, 2010 ; Konradsen et al., 2018 ; Sveinbjarnardottir & Svavarsdottir, 2019 ). Family-centered interventions are crucial and should be offered to families, adults, and children affected by mental illness, as they have been shown to improve family functioning ( Beardslee et al., 2007 ; Sveinbjarnardottir & Svavarsdottir, 2019 ). Reliable and valid instruments for assessing expressive family functioning in families facing a mental illness, particularly depression, are important for detecting family needs, improving family functioning, and evaluating the effectiveness of nursing interventions. In the assessment of family functioning, a variety of instruments have been used by health professionals ( Åstedt-Kurki et al., 2002 , 2009 ; Galán-González et al., 2021 ; Hohashi et al., 2008 ), including the Family Assessment Device ( Epstein et al., 1978 ; Miller et al., 2000 ), the Family Functioning Health and Social Support questionnaire ( Åstedt-Kurki et al., 2002 , 2009 ), the Iceland-Expressive Family Functioning Questionnaire (ICE-EFFQ) ( Konradsen et al., 2018 ; Sveinbjarnardottir et al., 2012 ), and the Feetham Family Functioning Scale ( Hohashi et al., 2008 ; Roberts & Feetham, 1982 ). 
The ICE-EFFQ ( Sveinbjarnardottir et al., 2012 ) is a highly regarded instrument that measures expressive family functioning and has been shown to be useful, valid, and reliable in various clinical settings, with families facing acute and chronic illnesses ( Dieperink et al., 2018 ; Konradsen et al., 2018 ; Sveinbjarnardottir et al., 2012 ). The ICE-EFFQ ( Sveinbjarnardottir et al., 2012 ) has been tested and successfully used to assess family functioning in families with acute and chronic illnesses ( Galán-González et al., 2021 ; Kamban & Svavarsdottir, 2013 ; Svavarsdottir et al., 2012 ), acute psychiatric patients ( Sveinbjarnardottir et al., 2013 ), and those with oncological disease ( Dieperink et al., 2018 ; Konradsen et al., 2018 ; Svavarsdottir & Sigurdardottir, 2013 ). The study aimed to adapt the ICE-EFFQ ( Sveinbjarnardottir et al., 2012 ) to the Portuguese language and culture, and to evaluate its psychometric properties. The goal was to make the adapted questionnaire available for use by health care professionals and researchers in Portuguese families dealing with acute or chronic mental illness in a family member. The decision to study families affected by acute or chronic depression was influenced by three main factors. First, mental health professionals in the Autonomous Region of Madeira identified them as a priority focus. Second, Portuguese epidemiological data showed a high prevalence of mental illness and mood disorders ( de Almeida, 2018 ; de Almeida et al., 2013 ), with depressive disorders presenting higher levels of severity compared with other groups of psychiatric pathologies ( de Almeida, 2018 ). Third, research has shown that depression affects the behavior, emotions, communication, and well-being of individuals and their families ( Källquist & Salzmann-Erikson, 2019 ; Sveinbjarnardottir & Svavarsdottir, 2019 ; World Health Organization, 2019 ) and is associated with impaired family functioning ( Daches et al., 2018 ; Pérez et al., 2018 ; Sell et al., 2021 ; Wang & Zhao, 2013 ). These factors point to the importance of addressing the needs of families affected by depression in mental health care. Given the fact that, in Portugal, there are no known instruments to measure family expressive functioning in families facing an acute or chronic mental illness of a relative, and that valid and reliable instruments able to measure the therapeutic change and the effectiveness of family interventions are greatly needed ( Chesla, 2010 ; Galán-González et al., 2021 ; Sveinbjarnardottir et al., 2012 ), we decided to translate the ICE-EFFQ ( Sveinbjarnardottir et al., 2012 ) to European Portuguese and to test its psychometric properties. Purpose of This Study The purpose of this study was to develop a linguistic and cultural adaptation of the ICE-EFFQ ( Sveinbjarnardottir et al., 2012 ) to European Portuguese and to assess its psychometric properties for future application by health professionals and researchers in Portuguese families facing acute or chronic mental illness of their members. Data Analysis The psychometric properties of the instrument were assessed, using the Statistical Package for Social Sciences (SPSS) Version 24 and the special module of SPSS AMOS Version 24. For the sociodemographic characterization of the sample, descriptive statistics were used, with measures of central tendency and dispersion, particularly absolute and relative frequencies, mean, median, minimum, maximum, standard deviation, and percentiles. 
In the application of statistical inference methods, namely, the Chi-square homogeneity test, a significance level of α = .05 was considered.
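As an illustration of this inference step (which the study ran in SPSS), the Chi-square homogeneity test, used for example to compare the age-group distribution of male and female participants, could be computed as in the sketch below; the file, DataFrame, and column names are hypothetical.

```python
# Illustrative sketch of the Chi-square homogeneity test at alpha = .05;
# the study itself performed this test in SPSS. "sample_demographics.csv",
# "gender", and "age_group" (e.g., "<50" / ">=50") are hypothetical names.
import pandas as pd
from scipy.stats import chi2_contingency

demo = pd.read_csv("sample_demographics.csv")              # one row per participant
observed = pd.crosstab(demo["gender"], demo["age_group"])  # 2 x 2 contingency table
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}")
# p >= .05 -> no evidence against homogeneity of the age-group distribution across genders.
```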
Special acknowledgment is given to the families who took part in this study, to the mental health nurses and the general care nurses who took part in the families’ recruitment and data collection, and to all the experts who contributed to the questionnaire’s translation, cultural adaptation, and content and construct validation. The authors also extend their thanks to Lénia Carina Castro Serrão, MCM, for all her support and availability to proofread the writing of the manuscript. A special appreciation is expressed to João Carvalho Duarte, RN, PhD, for all his support, contribution, and indispensable guidance, in the exploratory and confirmatory factorial analysis. They express their gratitude as well to the health institutions that allowed the data collection, namely, the Health Service of the Autonomous Region of Madeira (SESARAM E.P.E.), Irmãs Hospitaleiras—Casa de Saúde Câmara Pestana (CSCP) [Hospitaller Sisters—Câmara Pestana Health House], and Ordem Hospitaleira São João de Deus—Casa de Saúde São João de Deus (CSSJD) [Hospitaller Order Saint John of God—Saint John of God Health House]. Author Biographies Maria do Carmo Lemos Vieira Gouveia , RN, MSN, is a doctoral student at the Nursing School of Lisbon—University of Lisbon. She is an adjunct professor at the High School of Health—University of Madeira. She has done post-graduate studies in mental health and psychiatric nursing and in systemic intervention and family therapy. Her doctoral research focuses on the development of a complex nursing intervention to promote family expressive functioning in families affected by depression. Recent publications include “Intervenções promotoras do Funcionamento expressivo em famílias com membros adultos com depressão [Interventions Promoting Expressive Functioning in Families With Adult Members With Depression]” in AICA-Revista de Divulgação Científica/AICA-Journal of Scientific Dissemination (2020, with M. A. P. Botelho & M. A. P. Henriques), “Family Strengths and Difficulties in Families Affected by Depression: Mental Health Nurses’ Perception” in AICA-Revista de Divulgação Científica / AICA-Journal of Scientific Dissemination (2020, with E. K. Sveinbjarnardottir & M. A. P. Henriques), and “Perspectives: European Academy of Nursing Science Debate” in Journal of Research in Nursing (2016, with J. Taylor &. R. Olsen). Eydis Kristin Sveinbjarnardottir , RN, PhD, is a professor, Faculty of Nursing and Midwifery, School of Health Sciences, University of Iceland, and an adjunct associate research professor at the School of Health Sciences, University of Akureyri, Iceland. She served as dean at the University of Akureyri from 2016 to 2021. During Iceland’s chairmanship in the Arctic Council 2019 to 2021, she served as the chair of AHHEG (Arctic Human Health Expert Group). Recent publications include “Collaboration With Families, Networks and Communities” in Advanced Practice in Mental Health Nursing, A European Perspective (2022, with N. Kilkku), “Maintaining or Letting Go of Couplehood: Perspectives of Older Male Spousal Dementia Caregivers” in Scandinavian Journal of Caring Sciences (2021, with O. A. Stefansdottir & C. Munkejord), and “Recovery of Patients With Severe Depression in Inpatient Rural Psychiatry: A Descriptive Clinical Study” in Nordic Journal of Psychiatry (2020, with S. O. Gudjonsson & R. H. Arnardottir). Maria João Barreira Rodrigues , RN, PhD, is a coordinator professor at the High School of Health, Madeira University. 
She has completed post-graduate studies in mental health, public health, family therapy, and systemic intervention. She collaborates in national and international research projects focused on evidence-based practice, mental health, development of technologies for home care, and experiences and adaptative strategies to the COVID-19 pandemic in adults from the Autonomous Region of Madeira. She is a member of the Scientific Committees of the International Conference on Serious Games and Applications for Health (SeGAH). Recent publications include: “Emotional Experiences Associated With the Covid-19 Pandemic Situation in the Adult Population” in AICA-Revista de Divulgação Científica / AICA-Journal of Scientific Dissemination (2022, with D. Pereira, R. Silva, & I. Fragoeiro), “Experiências relacionais sociais associadas à situação pandémica do covid-19, na população adulta da Região Autónoma da Madeira [Social Relational Experiences Associated With the Covid-19 Pandemic Situation in the Adult Population of the Autonomous Region of Madeira]” in Representações sociais, saúde e qualidade de vida em tempos de pandemia covid-19: uma análise sobre Brasil e Portugal [Social Representations, Health and Quality of Life in Times of the Covid-19 Pandemic: An Analysis of Brazil and Portugal] (2022, with R. M. L. B. Silva, I. M. A. R. Fragoeiro, & D. I. F. Pereira), and “Enfermagem e Famílias: Uma visão sistémica [Nursing and Families: A Systemic View]” in Visita domiciliária (2018). Rita Maria Lemos Baptista Silva , PhD, is an adjunct professor at the Higher School of Health of the University of Madeira, Portugal. Her principal areas of interest are nursing sciences, electronic records, health and management of health services, and perioperative nursing (operating room). Recent publications include “Differential Manifestation of Teacher Self-Efficacy in Brazilian University Professors in the Health Area” in Engineering Research and Science (2020, with R. Capelo), “Electronic Records Program in Surgical Center for Integral Care to the Patient” in EHealth Technologies in the Context of Health Promotion (2020, with M. M. Martins et al.), and “The Impact of Perioperative Data Science in Hospital Knowledge Management” in Journal of Medical Systems (2019, with M. Baptista et al.). Márcia Sílvia Baptista , MBA and Bachelor’s degree in Mathematics—Scientific, is a student in Epidemiology at Lisbon School of Medicine. She works full-time at Statistics National Institute at Madeira Island in the Social Demographic Statistics and Geographic Information Department. She also performs Statistics analyses in MD and PhD projects. Her research interest is epidemiology, health and biology statistics, and multivariate statistics such as logistic regression and knowledge management. Recent publications include “The Impact of Perioperative Data Science in Hospital Knowledge Management” in Journal of Medical Systems (2019, with J. B. Vasconcelos et al.) and “The Psychological Impact on the Emergency Crews After the Disaster Event on February 20, 2010” in Journal of Health Science (2017, with H. G. Jardim, R. Silva, M. Silva, & B. R. Gouveia). Maria Adriana Pereira Henriques , PhD, MSEPI, RN, is coordinator professor of nursing at Escola Superior de Enfermagem de Lisboa (Nursing School of Lisbon) Portugal. Her main research interest is older people with chronic conditions, caregiving, and nursing at home. She has been a coordinator of the Nursing Doctoral Program at the University of Lisbon since 2021. 
She is a fellow member of the European Academy of Nursing Science (EANS) and a board scientific committee member. Recent publications include “The Fear of Falls in the Caregivers of Institutionalized Elders” in Revista Gaúcha de Enfermagem (2021, C. L. Baixinho, M. D. A. Dixe, C. Marques-Vieira, & L. Sousa), “Functional Profile of Older Adults Hospitalized in Convalescence Units of the National Network of Integrated Continuous Care of Portugal: A Longitudinal Study” in Journal of Personalized Medicine (2021, A. Ramos, C. Fonseca, L. Pinho, H. Lopes, & H. Oliveira), and “Gait Ability and Muscle Strength in Institutionalized Older Persons With and Without Cognitive Decline and Association With Falls” in International Journal of Environmental Research and Public Health (2021, with M. A. Dixe, C. Madeira, S. Alves, & C. L. Baixinho).
CC BY
no
2024-01-15 23:43:48
J Fam Nurs. 2024 Feb 1; 30(1):7-29
oa_package/c7/9c/PMC10788046.tar.gz
PMC10788047
38222191
Introduction Recurrent shoulder dislocation is a debilitating condition that significantly affects the quality of life of affected individuals. Bilateral involvement is rare but can lead to severe functional limitations [ 1 ]. While conservative management is typically attempted initially, surgical intervention may be required in cases of recurrent instability. The Latarjet procedure, introduced by Michel Latarjet in 1954, involves the transplantation of the coracoid process to the scapular neck. This surgical technique has not only demonstrated excellent long-term clinical outcomes but also remarkable return-to-sport rates [ 1 ]. It has emerged as a reliable method for managing recurrent shoulder dislocations [ 1 ]. This case report describes the successful treatment of bilateral recurrent shoulder dislocation using the bilateral shoulder open Latarjet procedure.
Discussion Although bilateral shoulder dislocation is most often posterior, a few cases of bilateral anterior shoulder dislocation have been reported in the literature. They are the result of high-energy trauma, most often during high-speed sports accidents [ 1 ]. In young patients, as in the present case, the main complication of anterior shoulder dislocation is instability of the shoulder; a prospective cohort study reported that 55.7% of young patients developed a recurrence of shoulder instability within two years [ 2 ]. Therefore, it is necessary to stabilize a young patient's shoulder with surgical treatment to prevent recurrent instability. Several options are available, but the most widely recommended are the arthroscopic Bankart repair and the open Latarjet procedure [ 3 ]. The Latarjet procedure involves the transplant of the coracoid process to the scapular neck and has demonstrated excellent long-term clinical outcomes and return-to-sport rates. Recurrent instability is reported to be as low as 0-5.4% [ 4 ]. In our case, the open Latarjet technique was used with good clinical outcomes in both shoulders. The Latarjet procedure is an established option in the treatment of recurrent anterior shoulder instability, and it is particularly indicated in young, active patients with glenoid and/or humeral bone loss. The Latarjet procedure allows for a faster return to sports after surgery, and most patients regain their preinjury level of performance with good results [ 5 , 6 ]. The bone-blocking effect of the Latarjet procedure, achieved by fixing the coracoid graft flush with the joint line, compensates for anterior glenoid bone loss and increases the anterior-posterior diameter, resulting in glenoplasty. However, while this bony augmentation contributes to the stabilization provided by the Latarjet procedure, it is not the sole factor. Other factors that contribute to stability include the effect of the conjoined tendon acting as a sling on the inferior subscapularis and anteroinferior capsule when the arm is abducted and externally rotated, as well as the repair of the capsule to the coracoacromial ligament stump. The combined effect of bony, muscular, and capsular mechanisms, known as the "triple blocking effect" initially described by Patte and Debeyre, aims to minimize the occurrence of recurrent subluxation or dislocation [ 7 ]. Bilateral shoulder instability can be synchronous or asynchronous, depending on whether both shoulders are affected at the same time or at different times. In our case, the patient had asynchronous bilateral anterior instability due to traumatic events. The treatment of bilateral shoulder instability is challenging and requires a careful evaluation of the patient's goals, expectations, and functional demands. The surgical options include simultaneous or staged procedures, arthroscopic or open techniques, and soft tissue or bone grafting procedures. The decision was made based on several factors, such as the type, direction, and severity of instability, the degree of glenoid bone loss, the presence of associated lesions, and the surgeon's experience and preference. A study conducted by Ernstbrunner and colleagues demonstrated that labral damage and greater glenoid bone loss had a substantial impact on increasing cartilage contact pressures in the shoulder, both on the glenoid and humeral sides [ 8 ]. 
While the Latarjet procedure could partially alleviate this effect, the positioning of the graft was found to be a crucial factor in determining the level of glenoid and humeral contact loading. In cases where there was a 25% loss of glenoid bone, performing the Latarjet procedure with a graft placed level with the glenoid and positioning the humerus at the midpoint of the glenoid led to a substantial rise in humeral cartilage contact pressure when compared to the preoperative condition [ 8 ]. Numerous studies have compared arthroscopic Bankart repair with remplissage and the Latarjet procedure for individuals with off-track lesions and less than 25% glenoid bone loss [ 9 ]. These studies consistently reported similar outcomes in terms of patient-reported results, range of motion, pain levels, and rates of recurrence and return to sporting activities for both surgical methods. However, Yang and colleagues' findings indicated that collision athletes and those with more than 15% bone loss derived greater advantages from the Latarjet procedure in terms of patient-reported results, reduced instability recurrences, and lower revision rates, when compared to arthroscopic Bankart repair with remplissage [ 9 ]. Postoperative rehabilitation is a crucial component of the treatment plan following the bilateral shoulder open Latarjet procedure. A structured and progressive rehabilitation program is essential to optimize outcomes and facilitate the patient's return to function. Early range-of-motion exercises, followed by strengthening and stability exercises, are implemented to ensure proper graft healing, muscle activation, and joint coordination. Close collaboration between the orthopedic team and the physical therapist is vital to tailor the rehabilitation protocol to the patient's specific needs.
Conclusions This case report underscores the successful management of recurrent bilateral shoulder dislocation through a bilateral open Latarjet procedure. This surgical intervention significantly benefited the patient by facilitating an earlier return to sports activities. It proved to be an effective solution for addressing bilateral shoulder instability, resulting in favorable clinical outcomes, including stability restoration, increased range of motion, and enhanced functional capabilities. Additional research is necessary to assess the long-term effectiveness and compare outcomes between bilateral and unilateral approaches. Nevertheless, the bilateral open Latarjet procedure stands as a valuable treatment option for well-selected patients dealing with recurrent bilateral shoulder dislocation.
Recurrent shoulder dislocation is a common orthopedic condition, but bilateral involvement is rare and presents unique challenges in management. The Latarjet procedure is an effective surgical technique that addresses instability by creating a bony block on the anterior glenoid rim. This case highlights the successful management of bilateral recurrent shoulder dislocation using the bilateral shoulder open Latarjet procedure and emphasizes the importance of early intervention in such cases.
Case presentation A 24-year-old male patient, a boxing instructor by profession, presented to our orthopedic clinic with a complaint of recurrent shoulder dislocation in both shoulders. The patient reported that the initial injury occurred during a football match five years ago when he fell onto his outstretched arms after being tackled by an opposing player, resulting in a left anterior shoulder dislocation. A few months later, the patient experienced a right anterior shoulder dislocation when he fell down a staircase while using his outstretched arms to break the fall. Following this incident, he experienced recurrent episodes of shoulder dislocation in both shoulders during various physical activities, including boxing training sessions. The patient reported severe pain, loss of function, and instability in both shoulders. Each dislocation episode required manual reduction at the emergency department, and in some instances, the patient was able to self-reduce the dislocated shoulder. Despite attempts at conservative management, including immobilization and physiotherapy, the patient continued to experience recurrent dislocations, significantly impacting his professional and personal life. On physical examination, bilateral shoulder laxity was observed, with positive apprehension, relocation, and anterior drawer tests indicating anterior instability. The range of motion was limited due to pain and apprehension. Neurovascular examination revealed no abnormalities. Radiographic evaluation, including anteroposterior and scapular Y views, as well as CT scans of both shoulders, revealed bilateral glenoid bone loss of less than 20% along with Hill-Sachs lesions (Figures 1 , 2 ). MRI further confirmed the presence of anterior labral tears, Bankart lesions, Hill-Sachs lesions, and associated soft tissue injuries in both shoulders (Figure 3 ). Based on the patient's clinical presentation, history of recurrent dislocations, and radiographic findings, a diagnosis of bilateral recurrent shoulder dislocation with concurrent glenoid bone loss and Hill-Sachs lesions was established. Given the bilateral nature of the shoulder instability and the presence of glenoid bone loss in both shoulders, surgical intervention was considered the most appropriate treatment option for this patient. After a detailed discussion of the surgical options, risks, and potential benefits, the patient provided informed consent for a bilateral shoulder open Latarjet procedure. Because the left shoulder instability was more prominent and caused greater functional impairment, the patient decided to undergo the procedure on the left shoulder first. Subsequently, the right shoulder was addressed in a separate surgical setting one month later. The bilateral shoulder open Latarjet procedure involves transferring the coracoid process to the anterior glenoid rim, creating a bony block that prevents the anterior translation of the humeral head and provides stability to the shoulder joint (Figure 4 ). The surgery was performed sequentially, with the patient under general anesthesia, in a supine position propped up 30 degrees with a sandbag under the right shoulder. The Latarjet procedure was performed in a standard manner using the deltopectoral approach, with autograft coracoid bone blocks secured to the anterior glenoid rim using screws (Figures 5 , 6 ). Additional procedures were carried out to address associated pathology, including labral repair and capsular plication.
Postoperatively, radiographs showed satisfactory results and the patient underwent a structured rehabilitation program (Figures 7 - 10 ). This program involved initial immobilization in shoulder slings followed by a progressive range of motion exercises, strengthening exercises, and proprioceptive training. The patient received regular follow-up evaluations to monitor progress, assess stability, and make necessary adjustments to the rehabilitation program. At the one-year follow-up, the patient demonstrated substantial improvement in both shoulders. He reported no recurrence of dislocation or instability, enabling him to resume his role as a boxing instructor and engage in physical activities without any restrictions. The range of motion and strength in both shoulders exhibited remarkable enhancement compared to the preoperative condition. Specifically, the patient attained a full range of motion in shoulder forward flexion and abduction and nearly achieved a full range of motion in bilateral shoulder external rotation (Figures 11 - 13 ). Moreover, the patient expressed contentment with the surgical outcome and remains under our care for ongoing follow-up at the outpatient clinic.
CC BY
no
2024-01-15 23:43:48
Cureus.; 15(12):e50569
oa_package/0d/08/PMC10788047.tar.gz
PMC10788048
38222135
Introduction Children experiencing head trauma are particularly prone to skull fractures. Skull fractures are a source of morbidity and mortality in the pediatric population [ 1 - 4 ]. Isolated skull fractures are commonly seen among head injuries presenting to the Emergency Department (ED) [ 5 ]. Young children with isolated skull fractures are often hospitalized for neurologic monitoring and observation. Despite the commonality of skull fractures, serious complications and neurosurgical interventions are rare [ 1 - 2 ]. It has been reported that less than one percent of skull fractures require neurosurgical intervention. More efforts are being made to determine if these children can be sent home, avoiding unnecessary transfers, hospital admissions, and associated costs [ 3 , 5 - 7 ]. The purpose of this study is to describe the injury characteristics and clinical outcomes in children with isolated skull fractures.
Materials and methods After institutional review board approval (FWA#00009807), we screened all patients aged 0 to 5 years who presented with head trauma to an inner-city hospital pediatric ED in the South Bronx, New York City, between the 1st of January 2015 and the 30th of December 2021, and we reviewed the medical records of all patients meeting the characteristics below. The study facility is a state-designated level 1 adult trauma center with massive transfusion capability and an onsite pediatric intensive care unit (PICU), but without an onsite pediatric surgery or neurosurgery service. The hospital serves a low socioeconomic urban minority population. The inclusion criteria were children with head trauma who had isolated skull fractures and a normal neurological examination. The exclusion criteria were evidence of intracerebral hemorrhage and an abnormal neurological examination. From chart review, we examined the patients’ demographics, mechanisms of injury, physical findings, imaging studies, fracture location (displacement/non-displacement), PICU admissions, and treatments and interventions (if any). We also reviewed their disposition, transfer decision, and length of stay in the ED. The t-test and chi-square analysis were used to evaluate differences between groups. The pediatric ED follows a systematic, multidisciplinary team approach to evaluation based on the PECARN (Pediatric Emergency Care Applied Research Network) Head Trauma Protocol, a clinical guideline designed to assist healthcare providers in assessing and managing head injuries in children [ 8 ]. Table 1 lists the diagnostic criteria used for discharge. The patients in the observation group were provided a follow-up appointment with a pediatric primary care provider (PCP) and the trauma service.
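The methods above state only that the t-test and chi-square analysis were used. As a minimal, hypothetical sketch of how such comparisons might be run on variables of the kind described here (for example, transferred versus observed children, and a continuous variable such as age), the following uses SciPy; all counts, values, and variable names are illustrative placeholders, not the study's data or analysis code.

```python
# Hypothetical sketch of the group comparisons described in the methods.
# All counts and values below are placeholders, not study data.
from scipy import stats

# 2x2 contingency table: rows = transferred / observed at the primary hospital,
# columns = depressed / non-displaced fracture (made-up counts).
table = [[3, 8],
         [1, 14]]
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"chi-square p-value: {p_chi:.3f}")

# Independent-samples t-test on a continuous variable such as age in months.
age_transferred = [4, 6, 9, 12, 18, 7, 5, 10, 3, 8, 15]          # placeholder values
age_observed = [6, 7, 2, 11, 5, 9, 14, 4, 8, 6, 10, 3, 7, 12, 5]  # placeholder values
t_stat, p_t = stats.ttest_ind(age_transferred, age_observed, equal_var=False)
print(f"t-test p-value: {p_t:.3f}")
```

With counts as small as those in this cohort, an exact test (for example, Fisher's exact test) may be preferable to chi-square; the sketch is only meant to show the general form of the comparisons.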
Results We identified 26 children with isolated skull fractures and normal neurological examination (Table 2 ). The average age of children presenting to the ED with overall head trauma (both with and without skull fracture) was 1.0±1.3 years old. In the 26 patients with skull fractures, the median age was six months old. Of those with isolated skull fracture(s), 46% were male. Demographically, 46% identified as Hispanic, 12% Black, and 42% “other”. Most patients with head trauma, regardless of fracture status, were brought into the ED by private vehicle (71%), followed by Emergency Medical Services (EMS) (17%), and others (12%). The mechanisms of injury, from most to least common, were falls (61.5%), unknown (19.2%), motor vehicle collisions (MVC) (n=1), dog bite (n=1), and sledding accidents (n=1). Falls were thus the most common mechanism of injury in this dataset. Fracture characteristics such as fracture location and description were also studied (Table 3 ). The location of the fractures varied; these included the parietal (46%), occipital (19%), temporal (15%), frontal (7.7%), occipital + parietal (7.7%), and parietal + frontal (3.8%) regions. Four fractures were depressed (15%), and the remainder were non-displaced (n=22). Additional associated injuries in the cohort included hematomas of the face and scalp and abdominal bruising. In our cohort (n=26), 11 children (42.3%) were transferred to a designated tertiary care pediatric trauma center from our ED, and 15 (57.7%) were hospitalized and monitored at our primary hospital. Of those that required transfer, CT-head findings were significant for subdural (3/11; 27%), subarachnoid (2/11; 18%), and epidural (2/11; 18%) hemorrhage, and scalp hematoma at the site of fracture (2/11; 18%). Two (18%) patients required transfer based on physician discretion. The patients in our cohort who stayed in our facility for observation only (n=15) had an average length of stay of 3.1 days (range 1 to 6 days). Of the patients admitted for observation, nine (35%) were admitted to the general pediatric inpatient service, and six (23%) were sent to the PICU for closer observation. All the hospitalized children had a Glasgow Coma Scale (GCS) of 15 on arrival. None of the children in the cohort required intubation or other advanced interventions. The results of this study were previously presented as a meeting abstract at the 2022 American College of Emergency Physicians Research Forum on October 1-4, 2022.
Discussion In the pediatric population, head trauma often results in skull fractures. Pediatric skull fractures require age-specific treatment and should not have the same treatment plan as adult fractures. Unlike the adult skull, the pediatric skull has a greater capacity to remodel; concurrently, pediatric brains are still developing [ 9 ]. Though much research has been conducted on pediatric trauma, the literature on isolated skull fractures is sparse. Our study aimed to help fill this knowledge gap by determining whether it is necessary to transfer pediatric patients with isolated skull fractures to tertiary centers for neurosurgical evaluation or whether they can be closely observed at the primary center where they are first seen. The PECARN Head Trauma Protocol is a clinical guideline designed to assist healthcare providers in assessing and managing head injuries in children [ 8 ]. It considers age-specific criteria, GCS score, and the duration of loss of consciousness (LOC) to evaluate the severity of head injuries in children systematically. This protocol stratifies patients into low-, intermediate-, or high-risk categories based on their clinical presentation, helping healthcare providers make informed decisions about the need for CT scans. For low-risk patients, the protocol discourages unnecessary CT scans, reducing radiation exposure while ensuring the timely detection of serious injuries, whereas intermediate- and high-risk patients receive clear indications for CT imaging [ 8 , 10 ]; the protocol also emphasizes parental education. PECARN prioritizes patient safety, minimizes radiation exposure, and ensures timely identification and treatment of serious head injuries while providing a structured risk assessment and management framework. Although this protocol stratifies the imaging a patient should receive, it does not delineate disposition criteria for admission versus observation. A common practice is to admit children with skull fractures to the hospital for observation. Neurologically intact children with an isolated skull fracture without intracranial hemorrhage do not require neurosurgical intervention. However, patients with worrisome findings may be referred to tertiary hospitals with pediatric neurosurgery capabilities. Recently, more efforts have been made to reduce unnecessary hospitalizations. Studies suggest that children with linear non-displaced skull fractures and no intracranial hematoma after head trauma have a very low risk of evolving other traumatic findings or requiring neurosurgical intervention, so observation in the ED may be sufficient [ 11 ]. Overall, there is no consensus on the appropriate course of action in children with isolated skull fractures, so there is considerable variability in the standard of care. In our cohort, there were 26 patients found to have a fracture via a CT scan of the head. Among these patients, 11 were transferred to another tertiary care facility. These individuals were noted to have hemorrhage in the epidural, subdural, and subarachnoid regions, and some had a hematoma along the fracture line. The decision to transfer these patients was made by the pediatric ED physician team on the basis of the clinical examination. The remaining 15 were admitted and observed in the inpatient and PICU services. A recent study by Barba et al. found that multi-level falls (MLF) accounted for upwards of 37.7% of pediatric basilar skull fractures [ 12 ]. Perheentupa et al.
found that the most common skull base fracture type was the temporal bone fracture (64%), with road traffic accidents as the primary etiology [ 13 ]. Leibu et al. also found the temporal bone to be the most common fracture location (57%) but via falls [ 14 ]. In our ED, most skull fractures were of the parietal bone (46%), and the most common etiology for isolated skull fractures was falls (62%). There is much variability in fracture location and etiology in children. There is no universal standard-of-care protocol guiding the decision to admit for observation, transfer, or discharge a patient who presents to the ED with head trauma and an isolated skull fracture, and these patients are often admitted for observation. Frequently, neurologically normal children who have an isolated (basilar) skull fracture without any intracranial hemorrhage do not require any neurosurgical intervention [ 9 ]. Since there is no consensus on the appropriate course of action in children with isolated skull fractures, there is considerable variability in their evaluation. In this study, all patients received head CT as part of the generalized trauma protocol. However, Barba et al. suggest that CT examinations detected abnormalities in only 1.9% of patients, so it is hard to know if CT scans are, in fact, necessary for every head trauma [ 8 , 12 ]. Head CT is preferred over simple radiography due to its ability to identify both skull fractures and traumatic brain injury. The threshold for obtaining a CT scan in infants younger than two years, particularly those younger than three months, should be lower than for older children [ 4 , 10 , 15 , 16 ]. Skull radiographs may be performed when trauma history is uncertain [ 14 ]. After the X-ray, one patient was directed to the CT suite for a head scan at the physician's discretion. Powell et al. described a set of criteria for admitting pediatric skull fracture patients, but this is not a universal gold standard. Under these criteria, admission is required for signs of increased intracranial pressure (such as persistent neurologic deficits, headache, or vomiting), intracranial injury, suspected child abuse, and parents or caregivers who are unreliable or unable to return if necessary [ 17 ]. Additionally, those patients with depressed skull fractures (15% in our study) do require neurosurgery consultation [ 17 ]. This process could be protocolized so that patients are evaluated and followed during admission, streamlining discharge recommendations and guidance on outpatient services. Neurologically intact children with isolated, non-displaced skull fractures, which accounted for 85% of our patient population, can be safely discharged home if specific discharge criteria are met [ 16 ]. Patients are instructed to follow up with their pediatric PCP within one to two days of injury, which raises a related issue: the child might not have a PCP, or their caretaker might not be able to take another day off from work [ 16 ]. Our patients may not fit the proposed criteria for admission, but they may still get admitted due to their social determinants of health [ 16 , 18 ]. It is possible that different protocols need to be in place as a contingency plan if adequate follow-up is not available.
Our study has limitations, including its retrospective design, a relatively small sample size, and its setting in a low socioeconomic patient population. Nevertheless, the results demonstrate no advantage to hospitalization of children with isolated skull fractures who otherwise have a normal neurologic examination, as no clinical deterioration was noted during observation. A few patients had an unclear mechanism of injury; the charts were reviewed to explore whether abuse was a factor and documented appropriate steps taken by the clinical staff, but the retrospective nature of this review remains a limitation. For the patients who were transferred to a tertiary pediatric care facility with a neurosurgical department, our team was unable to capture the overall outcome, the management plan, or the length of stay at the tertiary center, which is a further limitation of this report. All children in this study had good clinical outcomes. Therefore, inpatient observation may suffice for children with isolated skull fractures and normal neurological examinations, without transfer or admission while awaiting follow-up with a consultation service. It is important to develop protocols to guide which patients require hospitalization or transfer for a higher level of care at a tertiary care center.
Conclusions This limited dataset suggests that isolated, linear, non-displaced skull fractures with intact neurologic examination in children are at low risk for complications. This raises the question of whether these children need to be transferred to a pediatric trauma center or could be safely monitored in the primary non-pediatric trauma center where they are first seen. We believe clinically stable patients with no underlying brain injury on CT scan do not need to be admitted or transferred to a tertiary care center and can be observed safely in a non-pediatric neurosurgical center, provided there are no additional injuries. Multicenter studies are required to make a uniform recommendation and change practice, but this limited data suggests that children with isolated, linear, non-displaced skull fractures can be discharged safely from the ED after a brief period of observation.
Introduction Young children experiencing head trauma are prone to skull fractures. Pediatric skull fractures are distinct from adult fractures in that the pediatric skull has a greater capacity to undergo remodeling. The objective of this study was to evaluate whether children with isolated skull fractures without an underlying brain injury and a normal neurological exam require transfer to a tertiary hospital with a pediatric neurosurgery service. Methods A retrospective chart review was performed of children under five years old presenting to the emergency department of a non-pediatric trauma center with an isolated skull fracture resulting from head trauma without intracerebral hemorrhage between 2015 and 2021. The inclusion criteria consisted of children who had isolated skull fractures without underlying injuries and a normal neurological examination. We reviewed these patients' injury characteristics, disposition, and clinical outcomes. The t-test and chi-square analysis were used to compare the groups and to evaluate transfer to a dedicated trauma care facility. Results We identified 26 children who had isolated skull fractures with no underlying brain injury and normal neurological examination. The two most common mechanisms of injury were falls (64%) and motor vehicle collisions (MVC) (11%). The median age of patients was six months old. The location of the skull fractures was as follows: parietal (46%), occipital (19%), temporal (15%), frontal (7.7%), occipital + parietal (7.7%), and parietal + frontal (3.8%). Four fractures were depressed (15%) and the remainder were non-displaced. Eleven children with skull fractures (42%) were transferred to a designated pediatric trauma center and the remaining 58% were hospitalized for observation and monitored at the primary hospital. None of the children with skull fractures required intubation or other advanced interventions. Conclusion In this relatively limited sample, approximately one-third of the children with isolated skull fractures without brain injury were managed successfully in a non-tertiary care center. Notably, none of them required surgical intervention. Thus, we propose that patients akin to those in this study can be observed at a local hospital without being transferred to a pediatric trauma center.
CC BY
no
2024-01-15 23:43:48
Cureus.; 15(12):e50571
oa_package/f5/77/PMC10788048.tar.gz
PMC10788057
38222091
Introduction The COVID-19 pandemic, caused by severe acute respiratory coronavirus 2 (SARS-CoV-2) led to unprecedented, accelerated vaccine development ( 1 ) and expansive roll-out programs ( 2 , 3 ). Much of the global population now has some level of adaptive immunity to SARS-CoV-2 induced by exposure to the virus (natural infection), vaccination, or a combination of both (hybrid immunity). Natural infection induced by, and/or vaccination against, SARS-CoV-2 leads to the development of both binding and neutralizing antibodies (nAbs) ( 4 , 5 ), and the induction of T-cell responses during active immune reaction and clearance of infection ( 6 ). Key questions that subsequently arise relate to the duration and the level of protection an individual might expect based on their infection and vaccination history. Studies of those infected early in the pandemic documented that natural SARS-CoV-2 infection afforded some level of protection against reinfection in most individuals, and that subsequent reinfections were typically less severe than the primary episode ( Table 1 ). However, SARS-CoV-2 has high rates of mutation and heavily mutated variants have emerged ( 21 ). Most significant are the “variants of concern” (VOCs) ( 22 ), and there is now ample evidence that protection against reinfection with the B.1.1.529/21 K (Omicron) variant ( 23 , 24 ) is dramatically reduced compared with previous variants ( Table 1 ). Any descriptor of immunity based on patient history will encompass a population of individuals with vastly variable exposure to vaccines and viral variants with differing orders of immune challenge intensity. Unrecognized “silent infections,” especially in Omicron-positive subjects with underlying immunity, further complicate the assessment. Therefore derivation of potential immunity based on patient history requires assistance from a surrogate composite score to inform about protection and to aid decision making. Correlates of protection or risk In vaccinology, a correlate of protection (CoP) reflects a statistical non-causal relationship between an immune marker and protection after vaccination ( 25 ). Most accepted CoPs are based on antibody measurements ( 26 ) and vary depending on the clinical endpoint, for example protection from (symptomatic) infection or severe disease. In contrast, a correlate of risk (CoR) can be used as a measurement of an immunologic parameter that is correlated with a study endpoint ( 27 ) and can predict a clinical endpoint in a specified population with a defined future timeframe. Notably, antibody markers have been used as correlates of immune function in clinical trials of SARS-CoV-2 vaccine efficacy (VE) ( 28–33 ), and for identifying the risk of symptomatic infection by VOCs ( 34 , 35 ). In VE trials, a CoR can be a CoP if the CoR reliably predicts VE against the clinical endpoint, thereby acting not just as an intrinsic susceptibility factor or marker of pathogen exposure. In this case, the CoR could be a surrogate of the endpoint and could be useful for licensure of new vaccines. A CoR would likely comprise a measure of the immune component plus determinants that act to modify such a measure (a multi-component composite CoR). While there is no scientific evidence for an absolute humoral or cellular CoP against SARS-CoV-2, identification of a multi-component composite CoR might be useful to guide the use of vaccines or patient management. 
In general, the immune component of a composite CoR should be easily measured by widely available technologies that are amenable to automation, are scalable, cost-efficient, and have a rapid turn-around time. Given the relative complexity, cost and pre-analytic requirements for cellular immune response testing, the preferred candidate for the immune component of a CoR would be detection of humoral immune response(s) (i.e., antibody). This perspective evaluates the various elements that need to be accommodated in the development of an antibody-based composite CoR for reinfection with SARS-CoV-2 or severe COVID-19.
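The multi-component composite CoR described above (an antibody measurement combined with modifying determinants) can be made concrete with a simple illustrative model. The sketch below fits a logistic regression to simulated data: the variables, coefficients, and data are hypothetical placeholders, and this is not a model proposed by the authors; it only shows one generic way an antibody titer and host factors could be combined into a single risk score against a defined clinical endpoint.

```python
# Illustrative sketch of deriving a multi-component composite CoR:
# predict a clinical endpoint (e.g., reinfection within a defined window)
# from an antibody titer plus modifying host factors. All variable names
# and data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
log_titer = rng.normal(2.5, 1.0, n)        # log10 anti-RBD IgG (placeholder units)
age = rng.integers(18, 90, n)              # years
immunocomp = rng.integers(0, 2, n)         # 1 = immunocompromised (placeholder flag)

# Simulated endpoint: higher titers protective; age and immunosuppression add risk.
logit = -0.5 - 1.2 * (log_titer - 2.5) + 0.02 * (age - 50) + 0.8 * immunocomp
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([log_titer, age, immunocomp])
model = LogisticRegression().fit(X, y)
risk_score = model.predict_proba(X)[:, 1]  # per-individual composite risk score
print("AUC of composite score:", round(roc_auc_score(y, risk_score), 2))
```

In practice, such a score would need to be derived and validated in prospective cohorts with known infectious pressure, variant exposure, and clearly defined endpoints, which is exactly the difficulty the following discussion addresses.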
Discussion A composite CoR would be helpful particularly for high-risk groups, such as solid organ transplant recipients ( 157 ), and those in occupations with high risk of exposure to SARS-CoV-2. However, whether a composite CoR would operate at the individual or population level is yet uncertain. For health policymakers, a composite CoR could be useful for: (1) predicting the durability of protection, supporting serosurveys to determine the protection levels of individuals and populations; (2) aiding decision-making with regard to monitoring vaccination efficacy and identifying individuals who would benefit from booster vaccinations; (3) evaluating the need for extra protection of vulnerable communities in the face of new variants with low cross protection and less efficacious vaccines; (4) licensing new vaccines; and (5) developing clear immunologic vaccine trial endpoints. A previous systematic review by Perry and colleagues found mixed evidence for a serologic CoP, with the lack of standardization between laboratory methodology, differing assay targets and sampling time points, and the lack of information on the SARS-CoV-2 variant confounding interpretation ( 158 ). We have highlighted various parameters that should be controlled for in any measure of risk, some of which will be challenging to obtain (such as host genetics). Comparing different protection studies is also difficult as infectious pressure in the observation time period is often uncertain as, in reality, community data are incomplete and the number of oligosymptomatic infections is unclear. Of course, individual responses to infection and vaccination with regards to antibody production will make long-term assessment difficult, intrinsic risk will vary by age and protection will not be linear ( 139 , 159 ). To ensure an acceptable level of accuracy, it will also be important to assess the composite CoR in geographic settings where extrinsic environmental factors, host genetic backgrounds, and circulating variants contribute to the overall effect on the immune response. All the variables previously described need to be thought of in the general context of laboratory diagnostics, paying attention to sensitivity, specificity, positive/negative predictive value, reliability, precision, dilution, linearity, robustness, stability, preanalytics, scalability (automation), cost-efficiency, In Vitro Diagnostic Regulation certification, and the use of qualified standard and control materials. Laboratory quality is essential for meaningful follow-up of quantitative antibody levels. While the development of a composite CoR is a sizeable undertaking, steps can be taken to address this need. Studies need to adapt to the requirements of new variants, controlling for patient settings (vaccination types, earlier infections), and levels of disease severity. The emergence of VOCs means that a CoR will undoubtedly be variant-specific and the timing of infections and vaccination, how variants impact disease severity, antibody kinetics, and assay reactivity, must be respected. Frequently revisiting the data would be helpful as overall epidemiology changes; since almost all epidemiologic population-based studies have ended, background data is increasingly difficult to acquire, and this must be reversed. 
While serologic testing has retreated from the political agenda and public interest, we have an obligation to broaden the scientific knowledge base, and collect data to inform public health authorities, given that COVID-19 still causes a significant number of deaths and there is a considerable population of those with post-acute sequelae of SARS-CoV-2 infection [long COVID; ( 160 )]. A composite CoR will differ depending on the clinical endpoint ( 26 ). Definitions of symptomatic or severe disease are often not consistent across studies ( 100 ). Clinical outcomes must be precisely defined: an evaluation of the primary endpoints of 19 clinical trials for severe COVID-19 revealed the complexity of this task, reporting 12 different primary endpoints ( 161 ). In addition, the ideal timeframe for predictive ability is yet to be determined. While we support the development of a composite CoR and serologic testing by high- quality controlled assays, viruses such as influenza have significant strain variation and similar disease severity, so the importance of a composite CoR for SARS-CoV-2 should be judged against other pathogens of interest. Assessment of cost-effectiveness will likely inform upon the need for a composite CoR.
Much of the global population now has some level of adaptive immunity to SARS-CoV-2 induced by exposure to the virus (natural infection), vaccination, or a combination of both (hybrid immunity). Key questions that subsequently arise relate to the duration and the level of protection an individual might expect based on their infection and vaccination history. A multi-component composite correlate of risk (CoR) could inform individuals and stakeholders about protection and aid decision making. This perspective evaluates the various elements that need to be accommodated in the development of an antibody-based composite CoR for reinfection with SARS-CoV-2 or development of severe COVID-19, including variation in exposure dose, transmission route, viral genetic variation, patient factors, and vaccination status. We provide an overview of antibody dynamics to aid exploration of the specifics of SARS-CoV-2 antibody testing. We further discuss anti-SARS-CoV-2 immunoassays, sample matrices, testing formats, frequency of sampling and the optimal time point for such sampling. While the development of a composite CoR is challenging, we provide our recommendations for each of these key areas and highlight areas that require further work to be undertaken.
A composite CoR: a brief summary of extrinsic viral and intrinsic host elements that should be considered Variation in exposure dose and transmission route Viral load varies widely between infected individuals and over time ( 36 ), with viral emissions independent of symptom severity ( 37 ). Exposure to SARS-CoV-2 is tempered by the use of personal protective measures and, at the population level, adherence to public health measures that reduce exposure has been variable ( 38 , 39 ), making assessment of exposure dose complex. Controlled human infections to directly study the impact of viral inoculum and disease severity are controversial ( 40 ), and only one human challenge trial of SARS-CoV-2 using a single low inoculum dose has been reported to date ( 41 ). However, the initial infective dose of SARS-CoV-2 is thought to be associated with disease severity ( 42–44 ), since relationships between dose and severity exist for many other viral infections ( 44 ). Evidence from SARS-CoV-2 animal models suggests that the route of transmission similarly affects disease severity ( 45 ). Viral genetic variation Risk reduction depends on the dominant variant in circulation. Continued evolution of SARS-CoV-2 can lead to significant changes in viral transmission and impact reinfection rates ( 46 ). Mechanistically, the receptor binding domain (RBD) within the viral spike (S) glycoprotein engages in initiation of infection via interaction with the angiotensin converting enzyme-2 (ACE2) receptor ( 47 ). The RBD is a target for many nAbs ( 47 ) and mutations are frequently located at the RBD–ACE2 interface ( 48 ). It is therefore not surprising that changes to the viral epitope can reduce antibody binding ( 48 ), helping to drive immune escape from anti-RBD nAbs ( 49 ), decreasing previously generated protective immunity ( 50–52 ), and leading to variant-specific risks of severe illness ( 53 , 54 ). Patient factors Patient differences impact susceptibility to reinfection and disease severity. The immune response declines with increasing age ( 55 , 56 ), and age is the strongest predictor of SARS-CoV-2 infection–fatality ratio ( 57 ). Older individuals have been shown to exhibit reduced binding antibody titers and neutralization following vaccination ( 58–60 ). Pregnant women are also at high risk of severe outcomes ( 61 ). Similarly, immunocompromised or immunosuppressed individuals, or those affected by cancer or human immunodeficiency virus (HIV), exhibit reduced immune responses to infection or an increased risk of hospitalization ( 62–66 ). Other co-morbidities are frequently observed in those with severe COVID-19 ( 67 , 68 ). Vaccination status and exposure history COVID-19 vaccines include recombinant subunit, nucleic acid, viral vector and whole virus vaccines, among others, and some vaccines have been adapted for Omicron variants ( 69 ). The use of different vaccines, combinations, the number of boosters received, the interval between boosters, the occurrence of natural infection, and combinations thereof, trigger the immune system to varying degrees in depth, breadth or duration of response ( 35 , 66 , 70–83 ). Pre-existing heterotypic immunity, due to past infections with other coronaviruses, may also influence the immune response to SARS-CoV-2 ( 84 , 85 ). Following primary infection, severely ill patients exhibit higher binding and neutralizing antibody titers or activity compared with individuals with mild disease ( 86–91 ). 
Persistence of nAbs has also been associated with disease severity ( 92 ). In the event of reinfection, there is an implicit assumption that nAb titers ameliorate severe COVID-19 ( 93 , 94 ). In brief, in infection-naïve individuals, post-vaccination antibody titers (anti-S IgG and nAbs) correlate with higher vaccine efficacy ( 71 ), and post-vaccination anti-RBD IgG and nAb levels associate with protection against infection and symptomatic disease even during the Omicron era ( 95 ) or inversely correlate with risk of death (anti-S IgG below 20th percentile) ( 96 ). Generally, individuals with higher nAbs (levels or capacity) are considered increasingly protected from infection ( 97–99 ), symptomatic reinfection ( 99–101 ), severe disease ( 100 ), or death ( 102 ) compared with individuals with lower nAbs. There is evidence that neutralization capacity can be strain specific ( 103 ). In summary, viral and host elements modify the risk of reinfection or development of severe COVID-19 in various manners ( Figure 1 ). A composite CoR: antibody dynamics, serology in practice and challenges, and expert recommendations The antibody component of a composite CoR should be developed under defined conditions. To provide insight into these conditions, an understanding of antibody dynamics is required. SARS-CoV-2 antibody dynamics Natural infection with SARS-CoV-2 elicits a diversity of antibodies including those targeting S and nucleocapsid (N) antigens ( 75 , 109 ), and the development of anti-RBD IgG antibodies is associated with improved patient survival ( 110 ). A detailed systematic review of 66 studies investigated antibody responses ( 111 ). Collectively, the evidence supports the induction of IgM production in the acute phase of natural infection (peak prevalence: 20 days) followed by IgA (peak prevalence: 23 days), IgG (peak prevalence: 25 days), and nAbs (peak prevalence: 31 days) after symptom onset ( 111 ). Serum IgG has the longest half-life compared with the relatively transient IgA or IgM ( 112 ). A longitudinal analysis of 4,558 individuals, measuring total anti-N antibodies, revealed that, while total antibodies begin to decline after 90–100 days, they may persist for over 500 days after natural infection ( 113 ). Specifically measuring nAb via plaque reduction neutralization test (PRNT) shows that infection yields a robust nAb response in most individuals ( 86 ). Some studies report that anti-S antibodies show greater persistence than anti-N antibodies ( 114 , 115 ). Dramatic induction of anti-S or anti-RBD IgG antibodies is indicative of vaccination ( 75 , 116 , 117 ). Primary vaccination with some vaccines [but not all ( 118 )], or booster doses, generates high nAb titers ( 117 , 119 , 120 ) or neutralizing responses ( 116 ). Notably, nAbs wane over time ( 35 ) with a half-life of 108 days ( 100 ), although the level of decay may be assay or variant dependent ( 119 ), and multiple clinical factors affect the duration of neutralization responses after primary vaccination ( 66 ) (see also Figure 1 ). Anti-SARS-CoV-2 antibody testing Commercial high-throughput immunoassays Numerous immunoassays for the detection of antibodies against SARS-CoV-2 are available, differing in the immunoglobulin class detected, target viral antigen, format, and output [qualitative, (semi)-quantitative] [reviewed in detail ( 121 , 122 )].
Head-to-head comparisons from the pre-Omicron era reveal variable levels of performance between the assays ( 123–127 ), caused by numerous technical factors including assay methodology, format and antibodies used, timing of testing, and the targeted viral antigen. Comparison studies show that sensitivity for detecting prior infection by different serologic assays changes over time ( 128 ). Commercial assays developed early during the pandemic are based on ancestral/wild-type antigens. Consequently, there is potential for differential performance in the Omicron era: in particular, S- and RBD-specific immunoassays have shown significantly reduced performance ( 129–131 ), and decreased comparability of quantitative results ( 132 ). Most common commercial immunoassays detect both binding antibodies and nAbs without differentiating between them; however, certain assays measuring IgG or total antibodies correlate well with neutralizing capacity ( 28 , 97 , 133–139 ), acting as surrogates of neutralization. Cell-based virus neutralization tests can be used to measure neutralizing capability, but these are typically not readily available in clinical laboratories due to inherent test performance challenges associated with their methodology (including the need for biosafety level 3 containment for live-virus neutralization assays), time, and cost ( 140 ). Expert recommendations Mature immune responses are dominated by IgG. Serologic assays that measure IgG or total antibodies (if skewed toward IgG) that correlate with neutralizing activity and focus on anti-RBD should be used for the serologic component of a composite CoR; anti-N antibodies are unlikely to be neutralizing as the N protein is located within the viral envelope ( 75 ). Assays should be adapted for accurate measurement of the modified antigen, if applicable. However, frequent adaptation of assays is unlikely if several variants are circulating in parallel and due to regulatory requirements for assays. Therefore, studies are needed to determine assay applicability in the present conditions, especially since RBD mutations frequently occur and recombinant versions of RBD or S are commonly used in immunoassays ( 122 ). Accordingly, the upper and lower thresholds of any CoR may need modification. External ring trials show poor comparability of assays from different manufacturers ( 141 , 142 ) and there are significant challenges with the current binding antibody units (BAU) standardization, due to multiple factors, including different assay methods, antibody class(es) detected and target antigen used. Of note, BAU reference materials were derived from UK convalescent individuals infected in 2020 ( 143 ) (pre-Omicron), and there are vastly different BAU standardized values ( 144 ). While new reference materials include VOCs, they still contain antibodies derived during the pre-Omicron era ( 145 ). Antibody measurements should be harmonized across assays from different manufacturers, irrespective of the different epitopes utilized, to reduce variability. To support this, there is an urgent need for external quality assessment, production of robust traceable certified reference materials, standards for different variants, and improved documentation of the methods on laboratory reports. Age-specific normalization of reference intervals in defined groups, by means of z-log transformation and documentation in antibody passes, may further improve the comparability of assays.
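The z-log transformation mentioned at the end of the paragraph above standardizes a result against the log-transformed reference interval. A minimal sketch follows, based on the commonly cited formulation attributed to Hoffmann and colleagues (an assumption here, since the text does not give a formula); the reference limits used are placeholders rather than values for any particular antibody assay.

```python
# Sketch of the z-log transformation for standardizing a laboratory result
# against a (possibly age-specific) reference interval. The formulation
# follows the commonly cited approach of Hoffmann and colleagues; the
# reference limits below are placeholders, not values for any real assay.
import math

def zlog(value: float, lower_limit: float, upper_limit: float) -> float:
    """Standardized z-log value of `value` for the interval [lower, upper]."""
    mu = (math.log(lower_limit) + math.log(upper_limit)) / 2.0
    sigma = (math.log(upper_limit) - math.log(lower_limit)) / (2.0 * 1.96)
    return (math.log(value) - mu) / sigma

# Hypothetical age-specific reference interval for an antibody titer.
print(round(zlog(250.0, lower_limit=30.0, upper_limit=2000.0), 2))
# Results inside the reference interval map roughly to the range -1.96 to +1.96,
# which is what makes values comparable across assays with different intervals.
```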
Stakeholders should agree on minimum performance-based criteria to develop the gold standard for CoR, allowing validation of secondary assays. Finally, systemic cellular assays could provide a comprehensive profile of the immune response, especially in immunocompromised and susceptible individuals who are not able to mount a robust antibody response. Currently, supporting scientific evidence is lacking and their use in clinical practice remains uncertain. Sample matrices Systemic anti-SARS-CoV-2 antibody testing can be performed on blood, plasma/serum, or dried blood spots (DBS) ( 122 , 146–148 ). An advantage of whole blood or DBS collection is the ease in obtaining the sample. While many methodologies focus on systemic testing, infection with SARS-CoV-2 or vaccination against COVID-19 induces mucosal antibodies ( 149 , 150 ), thus secretions such as saliva offer another possibility. Antibody dynamics will differ depending on the material in question ( 151 ), and sample types are subject to specific idiosyncrasies, such as additional pre-processing, that need to be accounted for ( 152 ). Of note, the collection protocol (passive drool versus swab-stimulated saliva, for instance) can influence the antibody yield ( 153 ). Currently, secretion-based testing is less suitable for a composite CoR as performance is variable ( 154 ). Expert recommendations A composite CoR will likely be sample matrix-specific. Our preference is for plasma/serum, as this sample matrix has the largest evidence base, shows the least variability, experiences less interference than whole blood, and is consistent with CoRs established for other infectious diseases. DBS would also be possible, but variability is high, and few laboratories have an established workflow. Serologic testing formats Formats include high-throughput automated enzyme immunoassay/electrochemiluminescence immunoassay/enzyme-linked immunosorbent assay (certified and used in central laboratories and hospitals), point-of-care (POC) testing (used in emergency and outpatient settings), and direct-to-consumer testing (at-home use with online services). POC testing is gaining in popularity, but methodological variation is higher ( 155 ) and any method that relies upon sampling from untrained individuals is less reliable for (semi)quantitative measurements ( 156 ). Expert recommendations We recommend automated assays that are approved by location-specific regulatory agencies and performed in certified and centralized laboratories. Home sampling/DBS would contribute to a reduction in clinician workload, particularly in high-density residential facilities, but methods are not yet sufficiently robust. Currently, there is no clear benefit in POC testing as urgent results are not critical. Frequency of sampling and optimal time point Considering antibody dynamics, several important questions arise: what is the optimal time point for measurement; would the timing differ depending on the vaccine schedule, and/or the presence of previous infection of a specified severity; should antibody levels be measured once or serially? While single values can be plotted into modeled curves showing decrease rates over time, serial measurements could further refine the composite CoR. Only documented exposures, such as symptomatic disease or vaccination, are known to stabilize the curve; infections that are sufficiently mild to escape detection will impact the composite CoR model.
Expert recommendations As most individuals have experienced infection or vaccination, and titers are generally high and more stable than with single exposures, sampling should be performed annually or less. Serologic evaluation should be conducted more frequently in the older adult or immunocompromised than the general population (time interval to be defined), depending on any underlying disease and/or treatment. Data availability statement The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author. Author contributions SH: Conceptualization, Writing – original draft, Writing – review & editing. CD: Conceptualization, Writing – review & editing. JI: Conceptualization, Writing – review & editing. ET: Conceptualization, Writing – review & editing. AW: Conceptualization, Writing – original draft, Writing – review & editing.
The authors thank Corrinne Segal of Elements Communications Limited (London, United Kingdom) for editorial assistance. This article has previously been submitted to the Authorea preprint server and can be found at https://doi.org/10.22541/au.169412008.83234734/v1 . Conflict of interest SH has received grants from Roche Diagnostics, Sysmex and Volition, consulting fees from Instand e.V, EQAS, Merck KG, Roche Diagnostics and Thermo Fisher Scientific, speaker’s honoraria from BMS, Medica, Roche Diagnostics and Trillium, and has leadership roles in the International Society of Oncology and Biomarkers (board member and secretary), DGKL Competence Field Molecular Diagnostic (vice speaker) and Federal Medical Association, D5 Group (delegate of the DGKL). CD has received speaker’s honoraria from Abbott Diagnostics, Roche Diagnostics and Siemens Healthineers. JI has received grants from Roche Diagnostics. ET has received consulting fees from EUROIMMUN US, Serimmune and Roche Diagnostics, speaker’s honoraria from the American Society for Microbiology and EUROIMMUN US, and support for meetings from the American Society for Microbiology, the New York City Branch of the American Society of Microbiology and the Pan American Society for Clinical Virology. AW has received grants from numerous different public fundings, including the German Center for Infection Research, Fraunhofer Gesellschaft, and German Aif and Zim programs, royalties or licenses from Smart United GmbH, consulting fees from Roche Diagnostics and Roche Pharma, speaker’s honoraria from BÄMI and Roche Diagnostics, support for meetings from BÄMI and Roche Diagnostics, participated in advisory boards for Roche Diagnostics, declares stock or stock options in Smart United GmbH and Munich Innovative Biosolutions UG (haftungsbeschränkt), and has received reduced rates for materials and equipment from EUROIMMUN and Roche Diagnostics. SH is a founder of CEBIO and SFZ BioCoDE. The author(s) declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision. Publisher’s note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
CC BY
no
2024-01-15 23:43:49
Front Public Health. 2023 Dec 28; 11:1290402
oa_package/fd/61/PMC10788057.tar.gz
PMC10788079
38222243
Introduction The incidence of syphilis, a sexually transmitted infection (STI) caused by the spirochete bacterium Treponema pallidum , continues to rise in the United States at an alarming rate. In the year 2000, the reported rate of infection was 11.2 per 100,000; in 2021, it was 53.2 per 100,000 [ 1 , 2 ]. Most of these cases are found in men who have sex with men (MSM), patients who have human immunodeficiency virus (HIV), or patients who have other "high-risk" behaviors, such as intravenous drug use [ 1 ]. The classic lesion of primary syphilis is a solitary, painless genital ulcer or chancre. The presentation of secondary syphilis can be more variable with a fever, rash, and/or lymphadenopathy. A wide variety of less common disease manifestations are possible, earning syphilis the epithet of "the great imitator" [ 2 ]. Despite a rise in the reported cases of syphilis, infections secondary to Neisseria gonorrhoeae and Chlamydia trachomatis remain the more common, and therefore the more commonly tested for, STIs [ 3 ]. Additionally, traditional screening with nontreponemal serologic tests is known to give false negative results in the early stages of disease [ 4 ]. For these reasons, syphilis may go unrecognized by clinicians in non-classical settings. In this report, we present a rare case of anal syphilis in a young woman who presented with perianal pain. A comprehensive literature search yielded no reported cases of anal syphilis in a woman in the United States.
Discussion The incidence of sexually transmitted infections in the United States has steadily increased over the past decade [ 3 ]. Although primary and secondary (P&S) syphilitic infections reached an all-time reported low in the year 2000, by 2017, cases had increased by >400%, and this rising trend continued into the COVID-19 pandemic [ 1 ]. In general, men make up 80-90% of newly reported P&S syphilis cases, and most of these patients identify as men who have sex with men (MSM) [ 1 ]. Still, the rate of P&S syphilis in women has also been increasing, and the concerning rise in incidence across multiple populations means that syphilis should remain on the differential diagnosis for all patients who present with uncharacteristic anal lesions [ 1 , 2 ]. The differential diagnosis for an anorectal lesion is broad. A painful perianal ulcer or fissure in a midline position is most often related to trauma/constipation [ 5 ]. If there are multiple fissures, or if the fissures are in a lateral position, then inflammatory bowel disease, specifically Crohn's disease, should be considered [ 5 , 6 ]. If an ulcer or mass is present, then neoplasia may be favored, and a biopsy is required for appropriate diagnosis. Ceretti et al. reported a patient who identified as an MSM and who presented with a 3 cm ulcerated right rectal mass that was palpable on digital rectal examination and highly concerning for malignancy. A tissue biopsy confirmed primary syphilis rather than carcinoma [ 7 ]. Similar mass-like presentations causing clinical concern for neoplasia have been reported in other anatomic sites, including the stomach and oral cavity [ 8 - 9 ]. Finally, patients who present with the classic painless genital chancre of primary syphilis may receive testing for STIs; however, up to one-third of perianal syphilitic lesions may be painful. The presence of pain is not known to correlate with co-infection with either HSV or HIV [ 10 ]. Given this wide differential for perianal lesions and the variability in presentation for anal syphilis, additional testing is often necessary to reach a diagnosis of anal syphilis. If a tissue biopsy is taken, histologic clues to a diagnosis of syphilis include a plasma cell-rich inflammatory infiltrate, frequent lymphoid aggregates, and a relative lack of neutrophils and eosinophils [ 11 ]. In our case, unusual histiocytes with abundant pale cytoplasm were also a distinct morphologic feature that raised suspicion for an infectious etiology. For serologic diagnosis of syphilis, there are two primary testing algorithms. In the first traditional diagnostic pathway, nontreponemal tests, such as the venereal disease research laboratory (VDRL) assay and RPR test, are performed as screening assays due to their high sensitivity (>85% in primary and secondary syphilis). This algorithm is limited by false negative results that occur early in treponemal infection, which may have been the case with our patient [ 4 , 12 ]. The reverse screening algorithm begins with a treponemal-specific test, such as a T. pallidum particle agglutination test. Although the treponemal tests are more sensitive in early infection, they have occasional false positive results and remain positive indefinitely, negating any ability to monitor patient response to therapy or re-infection [ 12 ].
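The traditional and reverse screening algorithms described above are, in effect, small decision trees. The sketch below encodes a simplified reverse-sequence interpretation of the kind commonly summarized in guidance documents; the confirmatory steps and interpretive wording are assumptions for illustration, vary by laboratory, and should not be read as the exact algorithm applied in this case.

```python
# Simplified, illustrative decision logic for reverse-sequence syphilis
# screening: treponemal immunoassay first, then RPR, then a second treponemal
# test (e.g., TPPA) when results are discordant. Interpretive categories are
# simplified; clinical context and local guidelines govern real-world use.
def reverse_sequence(treponemal_screen, rpr_reactive=None, tppa_positive=None):
    if not treponemal_screen:
        return "Syphilis unlikely; consider retesting if early infection is suspected"
    if rpr_reactive:
        return "Consistent with current or recent syphilis; evaluate and treat"
    # Discordant: treponemal-positive but RPR-nonreactive -> second treponemal test.
    if tppa_positive:
        return "Past, latent, or very early syphilis; review history and treat if untreated"
    return "Likely false-positive screening result"

# The patient in this report was treponemal antibody and TPPA positive with a
# nonreactive RPR, a combination this simplified logic flags for further assessment.
print(reverse_sequence(treponemal_screen=True, rpr_reactive=False, tppa_positive=True))
```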
When an STI is suspected in the setting of an anogenital lesion, performing concurrent syphilis serologies and chlamydia/ gonorrhea nucleic acid amplification testing (NAAT) is prudent given the high likelihood of STI co-infection [ 3 ]. In the setting of a non-genital or perianal site of the lesion, biopsy can provide important information to rule out other etiologies and confirm the presence of spirochetes [ 11 ]. Prompt diagnosis and treatment of syphilis prevents disease progression in the patient and reduces disease transmission within a population. Penicillin is a highly effective treatment for P&S syphilis; there are no documented cases of penicillin resistance in over 60 years of use for the treatment of Treponema pallidum [ 2 ]. A single dose of 2.4 million units of benzathine penicillin G is the current treatment standard for early syphilis with a low treatment failure rate of 5% [ 2 ]. For our patient, the delay in diagnosis of syphilis led to the unnecessary treatment of HSV with valacyclovir and prolonged disease symptoms. Once diagnosed, a single dose of Benzathine penicillin G was effective in resolving the patient’s symptoms and clearing the infection.
Conclusions This case documents an atypical, painful presentation of anal syphilis in a young HIV-negative woman whose only risk factor appears to have been multiple sexual partners. Ultimately, the detection and timely treatment of P&S syphilis relies on data from a comprehensive clinical history, appropriate laboratory testing, and occasional histopathologic review. With ever increasing rates of sexually transmitted infections, clinicians can expect to see more patients with routine and atypical presentations of these pathogens. Prompt penicillin G therapy is effective and can prevent patient morbidity and mortality from advanced syphilitic disease.
Anorectal syphilis is relatively uncommon and diagnostically challenging given the wide differential diagnosis for anal lesions. Risk factors, such as male-to-male sexual contact or HIV-positive status, are especially important to elicit from patients during the clinical history. In this report, we present a rare case of painful anal syphilis diagnosed in an HIV-negative woman by tissue biopsy.
Case presentation A 23-year-old otherwise healthy woman was referred to the outpatient colorectal surgery clinic for evaluation of a painful perianal lesion that had been present for two months. The patient’s pain was localized to the anal verge and exacerbated during defecation. The patient described clear drainage coming from the lesion without associated dysuria, hematuria, rash, or ulcers in her oral cavity or genitalia. Her history was notable for sexual intercourse with multiple partners and inconsistent barrier protection. However, STI testing was unremarkable one year prior to presentation, including a treponemal antibody screening test. Repeat serologic studies were performed and were negative, as was nucleic acid amplification testing (NAAT) for Neisseria gonorrhea and Chlamydia trachomatis . She had recently completed a 3-day course of oral Bactrim antibiotics prescribed by her gynecologist, but she had not experienced symptomatic improvement. On physical exam, bilateral perianal ulcerations were identified. These were tender to palpation and weeping clear fluid. Herpes simplex virus (HSV) was presumed based on history and examination, and the patient was treated with two courses of valacyclovir, again without improvement of her symptoms. The patient returned to the clinic for a rectal exam under anesthesia and a biopsy of the perianal lesion. This examination showed a shallow ulcer at the anal verge with a sharp, raised border (Figure 1A ). The ulcer was tender but did not have purulent discharge. A biopsy from the ulcer edge was sent to pathology and showed ulcerated squamous mucosa with abundant plasma cell inflammation and atypical-appearing histiocytes with cleared cytoplasm (Figure 1B ). A syphilitic ulcer was suspected due to the clinical history and histologic features, and a spirochete immunostain confirmed Treponema pallidum infection (Figure 1C ). The patient was referred to an infectious disease specialist for further workup and management. A comprehensive serologic panel was sent and showed that the patient was Treponema antibody and Treponema pallidum particle agglutination assay (TPPA) positive, but rapid plasma reagin (RPR) and HIV negative. She was treated with one dose of intramuscular penicillin. Her perianal lesion and associated pain resolved entirely on examination (Figure 1D ).
CC BY
no
2024-01-15 23:43:49
Cureus.; 15(12):e50575
oa_package/a7/af/PMC10788079.tar.gz
PMC10788080
38222231
Introduction Ventricular septal rupture (VSR) is an uncommon mechanical complication of myocardial infarction that occurs between the third and fifth day of evolution and is usually complicated by cardiogenic shock (CS) resulting in high in-hospital mortality [ 1 , 2 ]. The optimal timing of intervention remains controversial [ 1 ], especially in patients with CS and multiorgan failure, where the early use of mechanical circulatory support (MCS) devices such as peripheral venoarterial extracorporeal membrane oxygenation (VA ECMO) in the preoperative period and delayed surgical repair of VSR have been associated with lower mortality [ 3 , 4 ]. This improved outcome with delayed surgery may be related to better myocardial tissue stability allowing for more effective repair [ 4 ]. We present the case of a 53-year-old man with CS stage D secondary to post-myocardial infarction VSR, in whom VA ECMO was used as a bridge to successful delayed surgical repair.
Discussion The ideal timing of surgical intervention for post-myocardial infarction VSR complicated by CS remains controversial, as higher mortality has been reported when surgery is performed acutely compared to delayed intervention [ 1 ]. It has been reported that surgery within the first 24 hours carries the highest mortality (>60%) and that mortality within the first seven days is 54.1%, compared with 18.4% when surgery is performed after seven days [ 5 , 6 ]. In our country, a mortality of 50% has been reported in patients with isolated VSR, and CS has been identified as one of the main complications (41.7%) in these patients [ 2 ]. Patients with post-myocardial infarction VSR with CS and multiorgan failure usually present with a large VSR or an infarct with biventricular involvement [ 1 ], requiring more efficient MCS, such as VA ECMO, to achieve hemodynamic stability, improve preoperative status, and allow delayed surgery [ 1 , 4 ]. The time elapsed between myocardial infarction and surgical repair of VSR has an impact on patient survival [ 7 ]. Ariza et al. performed a retrospective study (from 2014 to 2017) among 28 patients with post-myocardial infarction VSR complicated by CS, and found that only the group of patients who underwent early MCS with VA ECMO as a bridge to delayed surgery (17.9%) survived to hospital discharge compared to those who underwent unsupported surgery, postoperative MCS, and conservative management, whose mortality was 27.3%, 50%, and 100%, respectively [ 3 ]. Delayed surgery in patients supported with early VA ECMO was performed at a mean of 5.2 days (range = 4-6 days) after admission [ 3 ]. Arnaoutakis et al. reported that the longer the interval between myocardial infarction and surgical repair of VSR, the better the outcomes, especially when surgery was performed after seven days, highlighting that mortality after day 21 was reduced by up to 10% [ 6 , 7 ]. In our case, VA ECMO was used as a bridge to delayed surgery (12 days after infarction), which was successful after stabilizing and improving the patient’s hemodynamics and organ function. The complete maturation of the VSR edges provides a more durable and resistant tissue for the placement of sutures to secure the patch [ 7 ], which may explain the better results with delayed surgery. In addition, it is important to highlight the benefit of VA ECMO in reversing multiorgan failure. However, it is worth mentioning that the use of this type of MCS device is also associated with in-hospital complications such as bleeding and infection [ 3 ], which is why it should be used for the minimum time necessary. The mean duration of early VA ECMO support in the group of patients who survived to hospital discharge was nine days (range = 4-12 days) [ 3 ]. Our patient spent a total of 10 days on VA ECMO, and in the immediate postoperative period, weaning was successful without complications.
Conclusions The reasonable use of VA ECMO as a bridge to delayed surgery for post-myocardial infarction VSR complicated by CS has shown benefits in survival and postoperative outcome; however, the optimal timing of surgery remains controversial, reflecting the complexity of these cases. Our report highlights the usefulness of early support with VA ECMO to improve hemodynamic stability and organ function in the preoperative period and supports the trend of delayed surgery in this group of patients.
Ventricular septal rupture (VSR) after myocardial infarction is often complicated by cardiogenic shock (CS) with high in-hospital mortality rates. Early use of preoperative venoarterial extracorporeal membrane oxygenation (VA ECMO) and delayed surgical repair have demonstrated lower mortality rates; however, the optimal timing of surgical intervention remains controversial. We report the case of a 53-year-old man with CS stage D due to post-myocardial infarction VSR, who was successfully treated with VA ECMO as a bridge to delayed surgical repair. This case highlights the complexity of determining the optimal timing for surgical intervention in these patients and emphasizes the benefits of early use of VA ECMO for preoperative stabilization in patients with CS and multiorgan failure.
Case presentation A 53-year-old man was admitted to our hospital with oppressive chest pain associated with dyspnea for three days. His medical history was significant for hypertension and heavy smoking. Physical examination revealed a left parasternal holosystolic murmur and crackles in the lower third of both lungs. Blood pressure was 102/75 mmHg, pulse was 130 beats/minute, respiratory rate was 26 breaths/minute, and oxygen saturation was 96% with an FiO 2 of 0.36. The electrocardiogram showed sinus rhythm and ST-segment elevation in precordial leads. The troponin level was elevated (5.2 ng/mL; normal <0.1 ng/mL). The diagnosis of anterior ST-elevation myocardial infarction complicated by VSR was raised. Transesophageal echocardiogram (TEE) showed a left ventricular ejection fraction (LVEF) of 47%, right ventricular fractional area change (RVFAC) of 33%, and the presence of an apical VSR of 17 mm along with left-to-right shunting (Figure 1 ). Coronary angiography revealed occlusion of the mid-left anterior descending (LAD) artery and severe stenosis of the mid-right coronary artery (Figure 2 ). Right heart catheterization revealed a Qp/Qs ratio of 2.68, cardiac index of 1.25 L/minute/m 2 , pulmonary capillary wedge pressure of 39 mmHg, and right atrial pressure of 21 mmHg. His clinical condition deteriorated and was complicated by CS stage C, for which he was intubated and connected to mechanical ventilation, started on norepinephrine 0.5 μg/kg/minute, dobutamine 7.5 μg/kg/minute, and intra-aortic balloon pump (IABP) was implanted. On day two of hospitalization, renal and hepatic deterioration and lactate elevation (4.2 mmol/L) were added, progressing to CS stage D, for which it was decided to implant emergency peripheral VA ECMO guided by TEE. Clinical evolution after placement of VA ECMO was favorable, and 12 days after myocardial infarction, surgical repair with bovine pericardial patch of the VSR was performed, as well as placement of three coronary artery bypass grafts (left internal mammary artery to LAD, saphenous vein to diagonal, and saphenous vein to posterior descending). In the immediate postoperative period, we continued to wean VA ECMO and IABP with LVEF of 40% and RVFAC of 35% and maintained dobutamine support at 5 μg/kg/minute, which was gradually tapered. The pre-discharge transthoracic echocardiogram showed an LVEF of 40% and a residual interventricular defect of 3 mm adjacent to the pericardial patch, which did not cause significant hemodynamic compromise (Figure 3 ). The evolution was favorable, and he was discharged one month after hospitalization on aspirin 100 mg od, clopidogrel 75 mg od, atorvastatin 40 mg od, valsartan 80 mg bid, bisoprolol 5 mg od, dapagliflozin 10 mg od, spironolactone 50 mg od, and furosemide 40 mg od. At six months of outpatient follow-up, the patient remains in functional class II, continues to receive optimal medical therapy, and has had no new ischemic episodes or rehospitalizations.
CC BY
no
2024-01-15 23:43:49
Cureus.; 15(12):e50574
oa_package/6b/42/PMC10788080.tar.gz
PMC10788081
38222200
Introduction Dermatomyositis is an autoimmune connective tissue disorder of unknown etiology with a bimodal age distribution: juvenile dermatomyositis (JDM) and the adult form of dermatomyositis. In the absence of cutaneous changes, the term polymyositis is used [ 1 ]. Cutaneous manifestations are variable, including heliotrope rash, Gottron’s papules/sign, nailfold capillary changes, facial malar eruption, mouth/skin ulcers, gingival telangiectasia, limb edema, xerosis, poikiloderma, calcinosis, and lipodystrophy. Non-specific constitutional symptoms such as fever, lethargy, and adenopathy can be present in JDM cases. Dyspnea should raise the suspicion of interstitial lung disease and, rarely, cardiac involvement. The diagnosis can be established through the score-based EULAR/ACR classification criteria. Criteria elements represent the aforementioned classical clinical features, in addition to the age of onset, elevated muscle-derived serum enzyme levels, muscle biopsy/MRI findings, and myositis-specific antibodies [ 1 ]. JDM is treated with systemic steroids, methotrexate, and/or cyclosporin in mild to moderate disease. Intravenous immune globulin is used in refractory or recurrent disease [ 2 - 4 ]. Several other immunomodulators, including rituximab, anti-TNF agents, JAK-STAT inhibitors, and many other agents, are under investigation and show promising results [ 5 - 7 ]. JDM presenting with generalized scaly poikiloderma is an unfamiliar presentation. It is important to encourage clinicians to share their experience with atypical JDM presentations to reach the goal of easier and earlier detection of the disease.
Discussion Although both dermatomyositis and polymyositis are considered a spectrum of the same disease entity, the pathophysiology of tissue destruction was recently discovered to be different. Muscle fiber degeneration and necrosis are mediated by cytotoxic T-cells, whereas cutaneous changes are caused by humoral antibody- and complement-mediated capillary vasculopathy. The peak incidence is during the school-age years, and girls are affected at a two- to fivefold greater rate than boys. However, our patient showed onset during the infantile period. The hallmark of JDM is symmetrical proximal extensor muscle weakness, usually with myalgia. Involvement of the palatal and cricopharyngeal muscles is common, causing problems while swallowing. JDM can present with many distinct cutaneous features. The heliotrope sign and Gottron’s papules are classically considered pathognomonic for the disease; however, both were absent in our patient. Poikiloderma can affect both photodistributed and photoprotected areas; involvement of the former is very characteristic of dermatomyositis. However, our patient showed generalized scaly poikiloderma, a feature that is rarely seen in DM. Such atypical presentations can delay the diagnosis and prevent patients from the benefit of early treatment of such debilitating diseases, especially in this age group. Amyopathic JDM is very rare in the pediatric age group, accounting for about 5% of JDM cases [ 1 ]. One series that included 166 newly diagnosed children with JDM showed that skin rash was the presenting symptom in 65% of cases [ 2 ]. Our case was initially diagnosed as amyopathic DM; however, six months later she developed clinical, laboratory, and imaging features of JDM. Therefore, one should not rush to label a case as amyopathic JDM until two years from the onset of the disease, after which a definitive diagnosis of amyopathic JDM can be made if the patient has not developed muscle weakness [ 3 ]. Unlike adults with DM, children do not have an increased risk of internal malignancy, so no workup for internal malignancy was ordered. To our knowledge, our case is the first case in the literature showing JDM with generalized poikiloderma.
Conclusions The cutaneous features of JDM are variable, but generalized poikiloderma is a rare presentation. It is important to encourage clinicians to share their experience with atypical JDM presentations to reach the goal of easier and earlier detection of the disease. A definitive diagnosis of amyopathic JDM is made if the patient does not develop muscle weakness for two years after the onset of the skin rash.
Juvenile dermatomyositis (JDM) is a chronic autoimmune inflammatory disorder and is considered the most common form of idiopathic inflammatory myopathies. JDM primarily affects the skin and the skeletal muscles. Characteristic signs and symptoms include Gottron papules, heliotrope rash, calcinosis cutis, and symmetrical proximal muscle weakness. However, JDM presenting with generalized scaly poikiloderma is an unfamiliar presentation. Herein, we report a 14-month-old female toddler who presented with generalized progressive asymptomatic scaly mottled violaceous patches (poikilodermatous) that started when she was seven months old. Her lab results were unremarkable. She was diagnosed with a poikilodermatous skin rash with a differential diagnosis of amyopathic dermatomyositis, poikilodermatous genodermatosis, and patch-stage mycosis fungoides. She was prescribed moisturizer creams only. A year later, during a follow-up, she presented with a full picture of JDM, with a history of scaly poikilodermatous skin patches that had become more widespread, frequent choking during oral intake, and an inability to stand and sit unsupported. Laboratory workup was significant for a low WBC count and low hemoglobin, along with elevated CPK, LDH, ferritin, CRP, and ESR levels. MRI revealed subcutaneous edema of the right anterior thigh and vastus lateralis. Therefore, the child was diagnosed and treated as a case of JDM.
Case presentation A 14-month-old female toddler, not known to have any medical illnesses, presented to our clinic with generalized progressive asymptomatic skin lesions that started when she was seven months old. Her perinatal history was uneventful. Review of systems did not reveal a change in the child's activity. No history of (H/O) frequent choking during oral intake. No H/O irritability. No H/O fever. Family history was unremarkable, and there was no similar case in the family. Developmental milestones were reached for her age. Skin examination revealed multiple scaly mottled violaceous patches on her upper and lower extremities (Figures 1 , 2 ). Lesional skin punch biopsy showed mild perivascular dermal lymphocytic infiltration with melanin incontinence. Laboratory workup was unremarkable, including CBC, CPK, LDH, AST, ferritin, CRP, and ESR levels. ANA and dsDNA were negative. Based on the above clinicopathological findings, the baby was diagnosed with a poikilodermatous skin rash with a differential diagnosis of amyopathic dermatomyositis, poikilodermatous genodermatosis, and patch-stage mycosis fungoides. She was prescribed moisturizer creams only. A year later, during follow-up, at the age of 26 months, the scaly poikilodermatous skin patches had become more widespread. The mother stated that she was not happy with her child's activity. The baby had a history of frequent choking during oral intake. She was also not able to stand and preferred always to be carried. The mother described her as always unhappy and irritable. The baby still could not sit unsupported, nor could she bear her weight while standing. Developmental milestones, other than gross motor delay, were reached for her age. Skin examination revealed generalized scaly poikilodermatous patches all over her body with mild involvement of the trunk and face (Figures 3 , 4 ). Laboratory workup was significant for low WBC and hemoglobin counts, along with elevated CPK, LDH, ferritin, CRP, and ESR levels. ANA and dsDNA were negative. MRI revealed subcutaneous edema of the right anterior thigh and vastus lateralis. A pan-CT scan did not show any occult masses. Given the aforementioned information, the child was diagnosed with JDM. The child was admitted under rheumatology and received three doses of pulse methylprednisolone and IVIG 2 g/kg. The baby was put on prednisolone syrup 1 mg/kg once daily and methotrexate SC injection 1 mg/kg once weekly as maintenance therapy, with an excellent response.
The authors provide special thanks to Rheumatology, Radiology and Pathology Departments at King Abdulaziz Hospital for their evident interest and help in making this case report happen.
CC BY
no
2024-01-15 23:43:49
Cureus.; 15(12):e50573
oa_package/7e/2c/PMC10788081.tar.gz
PMC10788082
38222992
Introduction Thrombotic microangiopathy (TMA) is a serious pathological state where there is microvascular thrombosis leading to the mechanical destruction of red blood cells and consumption of platelets resulting in microangiopathic hemolytic anemia (MAHA) and thrombocytopenia, respectively [ 1 ]. The thrombotic occlusion of the small blood vessels preferentially affects the kidneys, brain, and heart, and this leads to organ dysfunction with significant morbidity and mortality [ 1 , 2 ]. The systemic process of TMA is coupled with a number of biochemical findings such as thrombocytopenia due to platelet aggregation and thrombi formation, anemia and presence of schistocytes due to fragmentation of red blood cells, raised lactate dehydrogenase (LDH) due to tissue ischemia and cell lysis, and low plasma haptoglobin due to intravascular hemolysis as it binds to free hemoglobin [ 3 ]. Several causes for TMA have been reported, both hereditary and acquired [ 1 ]. Pregnancy and the postpartum period are well-recognized triggers for TMA, and it is thought that this is due to the increase in the production of von Willebrand factor (VWF), which in turn increases consumption of ADAMTS13 with subsequent thrombosis [ 3 ]. The spectrum of pregnancy-associated TMAs includes disorders that are relatively common and cause secondary forms of TMA, such as pre-eclampsia, eclampsia, and HELLP (hemolysis, elevated liver enzymes, and low platelets) syndrome. These disorders are part of the same syndrome with different presentations and severity [ 4 ]. In addition, autoimmune conditions, such as systemic lupus erythematosus (SLE) and catastrophic antiphospholipid syndrome (CAPS), can also present with TMA. The activation of both classical and alternative complement pathways appears to play key roles in SLE-associated TMA [ 3 ]. During pregnancy, other serious but less common causes of TMAs are hemolytic uremic syndrome (HUS), thrombotic thrombocytopenic purpura (TTP), and atypical hemolytic uremic syndrome (aHUS). The clinical presentation of TMA in pregnancy is challenging, and the simultaneous presence of active autoimmune disorders such as SLE can make it a diagnostic dilemma. We present a case of severe acute kidney injury (AKI) due to TMA in a young female with active SLE in pregnancy and the postpartum period.
Discussion TMA is a serious systemic illness with progressively life-threatening thrombocytopenia, MAHA, and renal dysfunction [ 5 ]. TMA can be classified as primary, characterized by a complement mutation or complement autoantibodies, such as TTP and aHUS, or secondary due to infections, pregnancy, and autoimmune disorders such as SLE [ 6 ]. During pregnancy, the different causes of TMA have common clinical and laboratory findings which make it challenging to distinguish them apart. The frequency of the disorder and the timing of the presentation can serve as practical clues, as pre-eclampsia or HELLP syndrome have a relatively high incidence of one per 20 and one per 1000 pregnancies, respectively, while syndromes of primary TMA such as HUS or TTP are much less common at one per 25000 and one per 200000 pregnancies, respectively [ 7 ]. In addition, pregnancy-associated HUS is the only form of TMA to occur most frequently in the postpartum period and up to three months post-delivery in almost three-fourths of cases, and TMA starting in the postpartum of an uneventful pregnancy is very suggestive of complement-mediated aHUS [ 8 , 9 ]. Moreover, checking the activity testing of ADAMTS13 can be diagnostic in pregnancy-associated TTP [ 10 ]. However, this should be done prior to commencing the plasma exchange as the activity level of the test will change significantly and correspond to the clinical improvement once the therapy is started [ 11 ]. Unfortunately, testing might not always be available or requires sending to expert reference centers, which can extend the timing for diagnosis to days or even weeks. The PLASMIC (Platelet count; combined hemoLysis variable; absence of Active cancer; absence of Stem-cell or solid-organ transplant; Mean corpuscular volume; International normalised ratio; Creatinine) score has been developed to assist clinicians in deciding the likelihood of severe ADAMTS13 deficiency when the result of ADAMTS13 is not available [ 12 ]. In our case, the targeted testing was not done prior to the plasma exchange as there were no clear systemic signs of TMA. This highlights the need for clinicians to be aware of this diagnosis in this cohort of patients. The calculated PLASMIC score was 4 points, which is considered low and gives a 0% risk of severe ADAMTS13 deficiency. However, this didn't delay the management with plasma exchange which eventually resulted in a good recovery. The kidney biopsy showed clear evidence of TMA with glomerular capillaries filled with thrombi; however, different forms of TMA are often indistinguishable based on the kidney biopsy findings. Moreover, immunofluorescence is usually negative apart from positive staining for fibrinogen with glomerular capillaries, arterioles, and small arteries [ 13 ]. In our case, although the first biopsy had granular C3 staining, this was felt to be non-specific as only a few reports have signaled that immunostaining might indicate complement activation in TMA. In addition, no specific or sensitive markers of complement activation are yet known for this entity [ 14 ]. For some forms of pregnancy-associated TMA such as pre-eclampsia, eclampsia, and HELLP syndrome, the rapid delivery can be sufficient to control the disorder and for other forms such as TTP and HUS, it can help achieve more rapid remission [ 10 ]. 
Other lines of management such as plasma exchange should be considered, especially when there is an atypical presentation of pregnancy-associated TMAs, life-threatening neurological or cardiac findings, or profound thrombocytopenia (<30 × 10 9 /L) [ 10 ]. Expectant management with close monitoring would be reasonable if there is improvement in hemolysis markers and platelet levels and no deterioration of renal function. If a diagnosis of aHUS is made by exclusion of other possibilities, then anti-C5 monoclonal antibodies should be initiated instead of plasma exchange [ 10 ]. Although the safety of the use of anti-C5 treatment in pregnancy has not been assessed in controlled clinical trials, the limited initial data suggest its safety, especially when considering the potentially catastrophic effects of uncontrolled TMA in pregnancy. Renal TMA can happen in the context of active LN and plays an important role in its natural history [ 6 ]. In SLE, the histopathological presence of TMA in the kidneys is a hallmark of severe and active renal disease with worse outcomes [ 15 , 16 ]. In this case, the presence of SLE and markers of activity, both biochemically and clinically, posed an even bigger challenge, as SLE can also provide a potential trigger for TMA even without clear systemic markers of hemolysis. The finding of TMA on the renal biopsy without conclusive LN was helpful in changing the route of management, especially as the renal clinical abnormalities persisted prior to the plasma exchange.
Conclusions TMA in pregnancy and the postpartum period is a complex and serious disorder that requires a high index of suspicion and a prompt course of action. Other coexisting elements such as autoimmune disorders or infections can make the diagnosis a real challenge. The natural history of the illness especially in relation to delivery along with targeted testing can aid the diagnosis and management. Histopathological investigations can provide very valuable information and should be pursued, especially when renal involvement is suspected.
Thrombotic microangiopathy (TMA) is a severe systemic disorder with multiorgan manifestations due to thrombosis of the microvasculature. Pregnancy and the postpartum period are particularly high-risk periods for many forms of TMA. The disease progression is rapid and can lead to organ failure and even death; therefore, urgent recognition and treatment are paramount. The presence of other triggers such as infections or autoimmune diseases like systemic lupus erythematosus (SLE) can add further complexity, which emphasizes the need for definitive diagnostic investigations such as kidney biopsy to promptly direct further diagnosis and management. We describe a case of a 27-year-old female with postpartum severe acute kidney injury and nephrotic-range proteinuria. She had a new diagnosis of active SLE and was found to have TMA on kidney biopsy without conclusive features of lupus nephritis. She was managed successfully with plasma exchange with rapid improvement of her kidney markers.
Case presentation A 27-year-old female, gravida 5, para 4, with no history of illness prior to her last pregnancy in 2021, presented in the third trimester with arthritis, headache, and generalized fatigue. The patient was found to have hypertension, renal impairment, and proteinuria. She was managed for pre-eclampsia with antihypertensives and had an early induction of vaginal delivery at 32 weeks of gestation. Her labs revealed antinuclear antibodies (ANA) +4, positive anti-double-stranded DNA, serum creatinine of 140 μmol/L, low complement levels, 24-hour urine protein of 4377 mg, erythrocyte sedimentation rate (ESR) of 135 mm per hour, and C-reactive protein (CRP) of 9.5 mg/dl. At that point, she was diagnosed with SLE and likely lupus nephritis (LN) and started on methylprednisolone 1 gram IV for three doses, followed by oral prednisolone 1 mg/kg/day along with hydroxychloroquine 200 mg once daily. The kidney biopsy was deferred at that point due to the postpartum status. After discharge, she presented again at 40 days postpartum on February 23, 2022, with pleuritic chest pain, dyspnea, generalized fatigue, and myalgia. Her labs showed hemoglobin of 5.7 g/dL, serum platelets of 105,000 per microliter, and serum creatinine of 283 μmol/L with an estimated glomerular filtration rate (eGFR) of 20 ml/minute/1.73 m 2 by the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation, 24-hour urine protein 4972 mg, and low levels of complements 3 and 4 at 54.5 mg/dl and 10 mg/dl, respectively. During her admission, she was managed initially as a relapse of SLE with LN and was given methylprednisolone 1 g per day for four days, hydroxychloroquine 200 mg once daily, and mycophenolate mofetil 1500 mg twice a day. Interestingly, the markers of hemolysis were not elevated: the LDH was 213 units/L, there were no schistocytes on the blood film, and the total bilirubin was 2.5 μmol/L. Unfortunately, ADAMTS13 activity level, complement antibody, and gene mutation testing were not available at that time. Her kidney tests remained markedly deranged despite the initial therapy; therefore, we proceeded with kidney biopsy (Figure 1 ), which revealed several glomeruli showing capillary lumina filled with thrombi, double contour of the glomerular capillary basement membrane, and mesangiolysis. Arteriolar and arterial fibrinoid necrosis associated with apoptotic bodies was seen. These features were consistent with thrombotic microangiopathy. Also, there were a few glomeruli with endocapillary hypercellularity that were suggestive of LN; however, the direct immunofluorescence study demonstrated only mesangial granular deposits of complement C3 and no deposits of any of the immunoglobulins. Therefore, 10 days after commencing the treatment for presumed LN with maximum immunosuppression as described above, we elected to manage the patient as postpartum TMA and to commence her on plasma exchange; her serum creatinine remained markedly elevated at 252 μmol/L at that point. The patient received plasmapheresis with 1.5 plasma volumes of fresh frozen plasma on alternate days with significant improvement of her serum creatinine from 252 to 80 μmol/L. After completing four sessions, the 24-hour urine protein improved from 4972.8 mg to 2978 mg and reached 1288 mg within a month. In the following year, the level of proteinuria increased to 11036.4 mg/24 hours and serum creatinine was 104 μmol/L.
A new kidney biopsy was performed (Figure 2 ), and 17 out of 33 viable glomeruli showed endocapillary hypercellularity associated with karyorrhexis or leukocyte infiltration. Twelve glomeruli revealed global sclerosis and three glomeruli revealed cellular crescents. The direct immunofluorescence study demonstrated intense granular staining for all the immunoglobulins (IgG, IgA, and IgM) and for the complements C3 and C1q, mainly along the peripheral capillary wall. Some mesangial immune deposits were also noted for C3 and IgA. No features of TMA were seen. The biopsy was diagnosed as class III/IV LN. The patient was treated with a pulse of IV methylprednisolone 1000 mg for three days, followed by oral prednisolone at 1 mg/kg/day for two months, and mycophenolate mofetil was increased again to 1500 mg twice a day. After that, the level of proteinuria decreased to 3100 mg/24 hours, and serum creatinine improved to 93 μmol/L three months later (Figures 3 , 4 ). Her blood work during the two admissions post-delivery is summarized in Table 1 .
CC BY
no
2024-01-15 23:43:49
Cureus.; 16(1):e52248
oa_package/2d/e2/PMC10788082.tar.gz
PMC10788087
38223752
Introduction Postural stability is defined as the ability to maintain the centre of mass of a body within the base of support with minimal postural sway through somatosensory information ( Pino-Ortega et al., 2020 ), and is commonly assessed through static and dynamic balance ( Shumway-Cook & Woollacott, 2001 ). Static balance is defined as the ability to maintain the line of gravity (vertical line from the centre of mass) of a body within the base of support (BoS) with minimal postural sway. While, dynamic balance consists of the ability to move the centre of pressure (CoP) within the BoS and to move CoP from one BoS to another BoS ( Kusumoto et al., 2020 ; Reina et al., 2022 ). These assessments are routinely used in sports and clinical settings to identify balance disorders. For instance, a poor balance in sports is associated with lower limb injuries (such as muscle injuries or ligament sprains) ( McGuine et al., 2000 ; Emery & Meeuwisse, 2010 ; Brachman et al., 2017 ), while in the elderly population it is the most important factor associated with the risk of falls ( Muir et al., 2010 ). Given its importance, the use of effective lower limb-injury detection tools is needed in order to reduce the injury rate, downtime, and health care costs associated with short- and long-term treatment of lower limb injuries ( Marcoux et al., 2017 ). Monopodal postural stability is a widely used test to assess static and dynamic balance; several tools with varying levels of difficulty have been proposed in order to adapt to the target population ( Horak, 1987 ; Emery et al., 2005 ; Powden, Dodds & Gabriel, 2019 ). On the one hand, laboratory balance measures ( e.g ., stabilometry or motion analysis) provide multiple objective values related to stability, but require the use of equipment that is costly, highly technical, and often not portable ( Horak, 1987 ; Fridén et al., 1989 ; Emery et al., 2005 ; Powden, Dodds & Gabriel, 2019 ). On the other hand, other measurement tools have been developed for use in the clinical and sports setting, such as the three-directions modified Star Excursion Balance Test (mSEBT) or the Emery balance test (EBT), which are faster to perform and require less time ( Emery et al., 2005 ; Powden, Dodds & Gabriel, 2019 ). The mSEBT is the simplification in three directions of the initial eight-direction Star Excursion Balance Test described by Gray (1995) . It evaluates single-leg balance, dynamic neuromuscular control, proprioception, flexibility, core stability, ROM and strength while an individual reaches three directions (anterior, posteromedial, and posterolateral) with the non-stance leg ( Gribble, Hertel & Plisky, 2012 ). The EBT was specifically designed to assess dynamic balance on an unstable surface with eyes closed in young adults and adolescents ( Emery et al., 2005 ). The reliability and validity of these tests have been described in healthy adolescents and asymptomatic adults ( Emery et al., 2005 ; Shaffer et al., 2013 ; Powden, Dodds & Gabriel, 2019 ). These tests are reported in the literature to reflect changes after an intervention, but dissimilar results have been observed when these tests have been used simultaneously ( Blasco et al., 2019 ). 
The clinimetric analysis of measurement instruments is of great importance in the clinical and sports settings since the change in a specific measurement can reflect a change in the patient’s clinical situation, which is essential for evaluating the effectiveness of interventions ( de Yébenes Prous, Rodríguez Salvanés & Carmona Ortells, 2008 ). The metric property that analyses this effect is responsiveness, which is defined as the ability of a tool to detect meaningful clinical changes over time ( Mokkink et al., 2010 ). Even so, the responsiveness of monopodal postural stability measurements through stabilometry, mSEBT, and EBT has not been evaluated after an instability training programme or analysed using multiple statistical indicators of responsiveness. Furthermore, while studies use the dominant/non-dominant ( i.e ., trained/untrained) lower limb comparison to detect within-subject changes in stability after an intervention ( Temporiti et al., 2023 ), the external responsiveness ( i.e ., discriminative ability) of the tests has not been previously examined. Therefore, the main aim of this study was to analyse the responsiveness of the three monopodal postural stability tests.
Materials and Methods Study design A single-group pretest-posttest design was used, which involved repeated monopodal postural stability assessment of the dominant and non-dominant lower limb before and after a 4-week intervention (three weekly sessions) consisting of dominant lower limb instability training. This study was conducted from April 2020 to June 2021, starting the recruitment phase in November 2020. All measurements were performed in the clinical research laboratory of the Department of Physiotherapy (University of Valencia). A physiotherapist with experience in applying the test (M.S-B) evaluated the participants. This examiner was blinded during the measurement process, not being aware of which limb had received the intervention. Before participation, participants were informed of the study procedures and their possible associated risks. All of them provided written informed consent. This study was completed following the principles outlined in the Declaration of Helsinki, and it was approved by the Human Research Ethics Committee of the Ethics Committee on Experimental Research of the University of Valencia (Comité Ético de Investigación en Humanos de la Comisión de Ética en Investigación Experimental de la Universitat de Valencia), in Spain (1271077). Subjects Thirty healthy recreational athletes (21 males/nine females; mean age: 22.7 ± 2.7 years; weight: 70.13 ± 12.39 kg; height: 172.5 ± 8.1 cm; weekly physical activity: 438.0 ± 170.4 min) volunteered in this study, of which 27 completed the entire intervention and evaluations and were included in the analysis. Appendix S1 contains the flow chart of the study participants. Participants were physiotherapy students recruited by email using the University of Valencia Intranet. For inclusion, they had to be between 18 and 30 years old, have no history of lower limb injury or pain during the year preceding the study, and perform at least 90 min of physical activity per week. The established exclusion criteria were to have previously participated in any balance improvement or lower limb proprioception programme or presenting any known balance disorder, such as vertigo, or vestibular or central nervous system alterations. Instruments Stabilometry For the stabilometric assessment of monopodal stability, the Dinascan/IBV P600 force platform (digital signal with a sampling frequency of 1,000 Hz) was used with its software application NedSVE/IBV (Valencia, Spain). The participants were asked to place the foot of the leg to be measured on the mark on the platform, with the knee of the other leg flexed 90° and their arms alongside the body ( Fig. 1A ). The participants, with their eyes closed, were asked to maintain that position for 15 s, during which the platform recorded the variations in balance ( Romero-Franco et al., 2014 ), and rested 30 s before the next measurement. Three measurements were taken. Subsequently, the process was repeated with the contralateral leg ( Powden et al., 2019 ). The values analysed were the CoP displacement (lateral displacement and anteroposterior displacement), the swept area (mm 2 ), and the average speed (m/s). In subsequent analyses, as there is no consensus in the literature on how to process the data ( Romero-Franco et al., 2014 ; Powden et al., 2019 ), stabilometry values were analysed based on four variants: the mean of the three measurements, the first measurement, the lowest, and the highest. 
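For illustration only, the following minimal Python sketch shows how CoP summary measures of this kind (mediolateral and anteroposterior excursion, mean velocity, and sway area) are commonly derived from a raw CoP trace; it is not the NedSVE/IBV implementation, and all signal values and function names in it are assumptions made for the example.

```python
# Illustrative sketch of common CoP summary measures; NOT the NedSVE/IBV algorithm.
# The 15 s trial length and 1,000 Hz sampling follow the protocol described above;
# everything else (function names, simulated data) is assumed for illustration.
import numpy as np
from scipy.spatial import ConvexHull

def cop_measures(x_mm, y_mm, fs=1000):
    """x_mm: mediolateral CoP (mm); y_mm: anteroposterior CoP (mm)."""
    x, y = np.asarray(x_mm, float), np.asarray(y_mm, float)
    ml_range = x.max() - x.min()                     # lateral displacement (mm)
    ap_range = y.max() - y.min()                     # anteroposterior displacement (mm)
    path = np.hypot(np.diff(x), np.diff(y)).sum()    # total CoP path length (mm)
    mean_speed = (path / (len(x) / fs)) / 1000.0     # mean velocity (m/s)
    sway_area = ConvexHull(np.column_stack([x, y])).volume  # 2-D hull: .volume = area (mm^2)
    return ml_range, ap_range, mean_speed, sway_area

# Simulated 15 s single-leg trial sampled at 1,000 Hz (invented sway data)
rng = np.random.default_rng(42)
n = 15 * 1000
x = np.cumsum(rng.normal(0.0, 0.02, n))  # invented ML sway
y = np.cumsum(rng.normal(0.0, 0.02, n))  # invented AP sway
print(cop_measures(x, y))
```

The same function could then be applied to each of the three recorded trials per leg to obtain the mean, first, lowest, and highest values analysed in this study.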
mSEBT mSEBT consists of standing on one leg while, with the contralateral leg, reaching as far as possible in three different directions (anterior, posteromedial and posterolateral) ( Plisky et al., 2006 ; Gribble, Hertel & Plisky, 2012 ). Adhesive tape was placed on the floor to delimit two posterior diagonals with a 90° angle between them, with a 135° angle with respect to the anterior line ( Fig. 1B ). The distance covered in each attempt was normalised with the length of the leg, for which both lower limbs of each participant were measured in the supine position, taking as reference the anterior superior iliac spine and the internal malleolus of the same leg ( Gribble & Hertel, 2003 ). Next, each participant was allowed to make four attempts with each leg and in each direction to practice, followed by three more attempts that were registered ( Gribble & Hertel, 2003 ; Granacher et al., 2014 ). They first performed the anterior direction with their dominant leg, then the posteromedial, and finally the posterolateral. Afterwards, the same procedure was repeated with the non-dominant leg. A 15-s rest was allowed between attempts in the same position ( Granacher et al., 2014 ), resting 5 min between different directions ( Gribble & Hertel, 2003 ; Granacher et al., 2014 ). The values of the last three attempts were recorded to calculate the average value later. All measurements were made barefoot and with hands placed on hips. In turn, for the anterior measurements, the stance foot was aligned at the most distal aspect of the toes, while for the backward directions, it was aligned at the most posterior aspect of the heel ( Gribble, Hertel & Plisky, 2012 ). Attempts were not considered valid, and the movement was repeated, if the participant failed to touch the line with the mobile foot, moved the supporting foot, dropped hands from hips, lost balance at some point supporting the mobile foot, failed to maintain the start or end position for at least one second, or placed weight on the moving foot at the end of the run ( Granacher et al., 2014 ). EBT Another test used to assess the dynamic balance of a participant was the EBT, which is widely used in athletes and adolescents due to its greater complexity. Participants had to close their eyes and then stand on one leg on an Airex® Balance Pad, barefoot and with their hands placed on their hips ( Emery et al., 2005 ; Blasco et al., 2019 ). The participants were asked to remain as stable as possible for a maximum time of 180 s ( Hahn et al., 1999 ). They made three attempts with each leg and rested 15 s between them. A handheld stopwatch was used to measure the time the participant held the position. A test time of 15 s was given to the participants before starting the measurements so that they became familiar with the pad ( Emery et al., 2005 ). The supporting leg should be slightly flexed at the knee (about 30°), and the contralateral leg should be at 45° knee flexion ( Fig. 1C ) ( Granacher et al., 2014 ; Blasco et al., 2019 ). The recorded value was the best time obtained in the three attempts for each leg ( Blasco et al., 2019 ). The timer was stopped when a participant dropped hands from hips, touched the ground with the contralateral leg, moved the supporting foot, moved the pad from its original position, or opened his eyes ( Emery et al., 2005 ; Granacher et al., 2014 ). 
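As a concrete example of the leg-length normalisation described above, the short sketch below averages three reach trials per direction and expresses them as a percentage of leg length. The composite value shown simply averages the three normalised directions, which is a common convention but may not match exactly how the total mSEBT score was computed in this study, and all numbers are invented.

```python
# Minimal sketch of mSEBT reach normalisation to leg length (illustration only;
# all distances below are invented, and the composite is an assumed convention).
def normalised_reach(trials_cm, leg_length_cm):
    """Mean of the recorded trials expressed as % of leg length."""
    return 100.0 * (sum(trials_cm) / len(trials_cm)) / leg_length_cm

leg_length = 88.0  # cm, ASIS to medial malleolus (hypothetical participant)
anterior       = normalised_reach([62.0, 63.5, 64.0], leg_length)
posteromedial  = normalised_reach([95.0, 97.5, 96.0], leg_length)
posterolateral = normalised_reach([90.0, 91.0, 92.5], leg_length)
composite = (anterior + posteromedial + posterolateral) / 3  # mean of the directions

print(f"A: {anterior:.1f}%, PM: {posteromedial:.1f}%, PL: {posterolateral:.1f}%, "
      f"composite: {composite:.1f}%")
```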
Blackboard The instability device selected for the instability programme was the Blackboard (Blackboard Training, Innenstadt, Germany), which is a device designed to work on monopodal stability, consisting of two wooden boards joined together by tape. At its base, it has a Velcro surface on which half-cylindrical wooden bars can be freely placed. Depending on the position in which they are placed, one or other type of instability will be obtained ( e.g ., lateromedial or anteroposterior instability or forefoot and rearfoot only or both). The Blackboard was used in its complete instability configuration, with two bars placed in the centre of each board to create instability in both the forefoot and rearfoot ( Fig. 2B ). Procedures Before starting the instability training programme, height was measured using a 1-millimeter sensitivity flexible tape measure, while weight and body mass index (BMI) were assessed using a standardised body composition analyser (Tanita BC 418 MA; Tanita Corp, Tokyo, Japan). In that same session, monopodal postural stability was evaluated using stabilometry, mSEBT, and EBT tests performed randomly. A familiarisation session was then carried out in which the participants performed two to three repetitions of static single-leg support for 20 s, as needed, to become familiar with Blackboard ( Fig. 2A ). Next, following the same setup for the training sessions, participants performed five 40-s repetitions of training only with their dominant leg followed by 60 s of rest ( Wright, Nauman & Bosh, 2020 ). The edges of the Blackboard were allowed to contact the ground and participant could slightly shift their position, but always reaching the proposed 40 s of training. Finally, a 4-week programme including three weekly sessions of instability training in order to improve the stability of the participants was performed. The duration, frequency, and dosage of the programme sessions were based on previous literature on balance training programmes ( Cain, Garceau & Linens, 2017 ; Anguish & Sandrey, 2018 ; Powden et al., 2019 ), and it was carried out in a research laboratory of the Faculty of Physiotherapy of the University of Valencia. Statistical analysis Baseline data were summarised as means and standard deviations (SD) for continuous variables and as absolute and relative frequencies for categorical variables. Variables were checked for normality with the Kolmogorov-Smirnov test and homogeneity of variances with Levene’s test. Responsiveness was quantified based on internal and external responsiveness. On the one hand, internal responsiveness was determined by the paired t-test and supplemented with an effect size statistic, as recommended by Husted et al. (2000) and similar to what was carried out by other studies ( Liang, Fossel & Larson, 1990 ; Choi et al., 2016 ; Navarro-Pujalte et al., 2019 ; Pajari et al., 2022 ). For this analysis, we used the standardised response mean (SRM) as an effect size statistic, which estimates the magnitude of change that is not influenced by sample size ( Husted et al., 2000 ; Navarro-Pujalte et al., 2019 ). Values of 0.20, 0.50, and 0.80 or higher have been proposed in the literature to represent small, medium, and large responsiveness, respectively ( Husted et al., 2000 ). On the other hand, external responsiveness was determined by receiver operating characteristic (ROC) curves ( Husted et al., 2000 ; Rysstad et al., 2017 ; Wan et al., 2018 ; Yee et al., 2022 ). 
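Before turning to the external-responsiveness analysis, the internal-responsiveness statistics just described can be made concrete: a paired t-test on the pre-post change plus the SRM, defined as the mean change divided by the standard deviation of the change. The sketch below uses invented pre/post scores, not the study data.

```python
# Internal responsiveness: paired t-test + standardised response mean (SRM).
# SRM = mean(change) / SD(change). All scores below are invented.
import numpy as np
from scipy import stats

pre  = np.array([120.1, 115.4, 130.2, 118.7, 125.3, 110.9, 122.8, 119.5])  # e.g., EBT time (s), baseline
post = np.array([128.4, 121.0, 141.5, 119.2, 133.8, 118.6, 131.9, 127.0])  # after 4-week training

change = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)   # paired t-test
srm = change.mean() / change.std(ddof=1)       # effect size not driven by sample size

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, SRM = {srm:.2f}")
# Benchmarks used above: ~0.20 small, ~0.50 medium, >=0.80 large responsiveness
```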
We dichotomised the values for ROC curves between the dominant and non-dominant lower limb ( i.e ., experimental and control lower limb), assuming that the values for the dominant lower limb tests had changed after the intervention. This was done from the perspective of the responsiveness to observed change, which is quantified when scores are compared in situations where variation in the attribute is expected but not verified explicitly as having occurred ( Beaton et al., 2001 ). In particular, for the circumstance of change observed before and after a treatment/intervention (usually of “known efficacy”) ( Beaton et al., 2001 ). We calculated the area under the ROC curve (AUC), which represents the probability of the measure correctly classifying participants. An AUC > 0.70 was used as a generic benchmark to consider its discriminant ability acceptable ( Stratford, Binkley & Riddle, 1996 ). The person responsible for the statistical analysis for external responsiveness (R.M-SA) was blinded with respect to the limb in which the intervention was carried out. An a priori sample size calculation was developed based on a medium effect size (d = 0.50), using an α value of 0.05 and a power of 0.8. The sample size was estimated at 27 subjects. Assuming losses of 10% of the sample in the follow-up measurement, an initial sample of 30 subjects was calculated as necessary.
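For the external-responsiveness step, the same idea can be sketched in code: each limb's change score is labelled as trained (1) or untrained (0) and the AUC of the resulting ROC curve is computed; the a priori sample size can likewise be reproduced with a standard power routine. The change scores below are invented, and the exact sample size returned depends on the test assumed (a paired/one-sample, two-sided model is shown here), so it will not necessarily match the figure of 27 reported above.

```python
# External responsiveness (ROC AUC) and a priori sample size: illustrative sketch.
# Change scores are invented; label 1 = trained (dominant) limb, 0 = untrained limb.
import numpy as np
from sklearn.metrics import roc_auc_score
from statsmodels.stats.power import TTestPower

change = np.array([8.3, 5.6, 11.3, 0.5, 8.5, 7.7, 9.1, 7.5,    # trained limb
                   2.1, -1.0, 3.4, 0.8, 1.9, -0.5, 2.6, 1.2])  # untrained limb
limb = np.array([1] * 8 + [0] * 8)

auc = roc_auc_score(limb, change)           # probability of correct classification
print(f"AUC = {auc:.3f}")                   # > 0.70 taken as acceptable discrimination

# A priori sample size for a medium effect (d = 0.50), alpha = 0.05, power = 0.80,
# using a one-sample/paired t-test power model (an assumption on our part).
n = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                             alternative='two-sided')
print(f"required n ≈ {int(np.ceil(n))}")
```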
Results Changes associated with instability interventions Table 1 shows the changes associated with an instability training programme measured with three monopodal postural stability tests. Dynamic balance for the dominant lower limb, as measured with the mSEBT and EBT, showed significant improvements in distance reached and time, respectively, after the intervention. For the non-dominant lower limb, a significant change was observed in the total score of the mSEBT test and in the postero-medial and postero-lateral directions. Conversely, platform measures suggested that neither limb presented significant changes in the CoP excursions after the intervention, except for the X-axis of the first recorded measurement for the dominant lower limb. Furthermore, relative changes showed the greatest improvements for the EBT of the dominant leg, with a 46.2% improvement over baseline time. Appendix S2 shows individual values for all participants and tests (of the dominant lower limb). Internal and external responsiveness Internal responsiveness to instability training of the three monopodal stability tests is shown in Table 2 . Internal responsiveness statistics suggest that the EBT and all parameters in the mSEBT for the dominant lower limb showed large internal responsiveness (SRM > 0.8) among participants after instability training. Furthermore, mSEBT values for the non-dominant lower limb (except anterior displacement) also experienced significant changes with an associated large internal responsiveness. Finally, none of the stabilometry platform parameters showed a significant change in response after the intervention. The ability of the EBT to discriminate between the dominant and non-dominant lower limb ( i.e ., trained vs untrained, respectively) was generally acceptable (AUC = 0.708) ( Table 3 ). However, none of the parameters of the mSEBT test showed an acceptable AUC to distinguish between trained and untrained lower limbs after the intervention (AUC < 0.6). Ultimately, none of the stabilometry parameters showed an acceptable AUC either.
Discussion To our knowledge, this is the first study that analyses the responsiveness of different monopodal stability tests in healthy participants after an instability training programme. We found that only EBT showed both internal and external responsiveness, while the mSEBT showed acceptable internal responsiveness. In contrast, none of the stabilometry platform measures exhibited responsiveness. This study presents novel findings, as it is the first study that has used multiple statistical methods to assess the internal responsiveness (paired t-test and SRM) and external responsiveness (ROC) of three measures of monopodal stability in healthy recreational athletes. This study shows that the EBT is the only monopodal stability measure that detects changes after an instability training programme, with an acceptable internal and external responsiveness. Until now, no study had analysed this psychometric ability of the EBT. However, previous studies have identified changes in stability measured using this test after an instability training programme, as reported by Blasco et al. (2019) . These authors found improvements in the time of the EBT (ranging between 3.3 and 6.1 s) similar to those found in our study (5.52 s) ( Blasco et al., 2019 ). Regarding the dynamic stability measured with the mSEBT, our study shows a high internal but not external responsiveness. Both the intervention and control lower limb improved for all directions, except for the anterior direction of the control side. For the intervention lower limb, all mSEBT parameters showed significant improvements. Similar results have been reported in the total score of mSEBT by Blasco et al. (2019) , with slightly smaller improvements (ranging between 3.2% and 4.5%) than those observed in our study (5.3% intervention lower limb). Even so, the control lower limb also exhibited similar improvements (3.8%), which, together with the lack of external responsiveness, would suggest that mSEBT is not a suitable test to monitor changes in dynamic balance using the non-dominant lower limb as control. A possible explanation is that the balance intervention on the dominant lower limb favours it going further during the mSEBT when it is not the support lower limb. Another possible mechanism is the effect of cross-education, which is defined as adaptation of an untrained limb after unilateral training of the contralateral limb ( Son & Kang, 2020 ) and whose improvements appear to reflect use-dependent plasticity within the central nervous system ( i.e ., interhemispheric communication in the brain, primarily through the corpus callosum) ( Lawry-Popelka, Chung & McCann, 2022 ). Another important finding of our study is that none of the stabilometry platform measures were able to detect a change in monopodal stability after the instability training programme. This is consistent with other authors who, after instability training, have found no changes in either healthy individuals ( Blasco et al., 2019 ) or participants with chronic ankle instability (CAI) ( McKeon et al., 2008 ). In this latter case, they concluded that CoP-based measures most likely lacked the sensitivity to detect improvements in postural control associated with a balance training programme in patients with CAI ( McKeon et al., 2008 ). 
The fact that only the dynamic measurements showed responsiveness compared to the measurements obtained with the stabilometric platform could be due to the fact that a healthy participant’s capacity for improvement in static balance is minimal, and there is a ceiling effect for the measurements of the stabilometric platform. On the other hand, the improvement capacity for dynamic balance is possibly greater in those participants and therefore, dynamic balance-related tests detect changes. Among the strengths, this research primarily evaluated the responsiveness of several monopodal stability tests in healthy participants. The clinical importance of this study lies in the fact that a simple and rapid dynamic test, such as the EBT, can detect changes in healthy participants after an instability training programme. This could offer a practical application in sports, where most participants are healthy. Therefore, it could be a tool used to identify whether injury prevention programmes aimed at improving monopodal stability are efficient. This study had limitations that should be considered. First, there is a limitation associated with the lack of generalisability. Thus, the sample included only healthy and young recreational athletes, so these findings cannot be extended to identify changes concerning recovery from injuries, such as knee or ankle sprains, or extrapolated to unhealthy or older populations. Even so, in view of the studies that use such tests in healthy subjects, we consider this analysis necessary, and future studies should replicate this metric platform analysis in specific populations. Secondly, the protocol used to measure stabilometry is not standardised as there is no consensus in the literature, making it difficult to compare our findings with other studies. However, we rely on the protocol proposed by Romero-Franco et al. (2014) to assess stabilometry measurements ( Romero-Franco et al., 2014 ) while analysing stabilometry values for different variants.
Conclusions According to the results, the EBT shows positive responsiveness to changes in monopodal stability after instability training in healthy participants. In contrast, the mSEBT only showed internal responsiveness, and none of the stabilometry platform measures were able to identify these changes. Therefore, the stabilometry platform would not be recommended in healthy participants, nor would the mSEBT for cases in which intra-subject comparisons between lower limbs are carried out.
Background Stabilometry, the modified Star Excursion Balance Test (mSEBT) and the Emery balance test (EBT) are reported in the literature to reflect changes in monopodal postural stability after an intervention. Even so, the responsiveness of those tests has not been evaluated after an instability training programme or analysed using multiple statistical indicators of responsiveness. The main aim of this study was to analyse the responsiveness of stabilometry, the mSEBT and the EBT. Methods Thirty healthy recreational athletes performed a 4-week programme with three weekly sessions of instability training of the dominant lower limb and were evaluated using stabilometry, mSEBT, and EBT tests. Responsiveness was quantified based on internal and external responsiveness. Results The EBT and all mSEBT parameters for the dominant lower limb showed large internal responsiveness (SRM > 0.8). Furthermore, mSEBT values for the non-dominant lower limb (except anterior displacement) also showed significant changes with an associated large internal responsiveness. None of the stabilometry platform parameters showed a significant change after the intervention. The ability of the EBT to discriminate between the dominant and non-dominant lower limb ( i.e ., trained vs untrained, respectively) was generally acceptable (AUC = 0.708). However, none of the parameters of the mSEBT showed an acceptable AUC. Conclusions The EBT showed positive responsiveness after instability training, in contrast to the mSEBT, which only showed internal responsiveness, and to the stabilometry platform measures, none of whose parameters could identify these changes.
Supplemental Information
Additional Information and Declarations
CC BY
no
2024-01-15 23:43:49
PeerJ. 2024 Jan 11; 12:e16765
oa_package/65/2d/PMC10788087.tar.gz
PMC10788088
38223764
Introduction Birds, which encompass a remarkable diversity of over 11,000 species, are a captivating and highly valued part of the natural world ( BirdLife International, 2018 ). Their intricate variety ranges from the tiniest to the largest, and the slowest to the swiftest flyers. Each bird species possesses a unique presence, habits, and habitat preferences ( BirdLife International, 2018 ). This remarkable diversity showcases itself in both the vast numbers of some species, like the 8,421 species classified as least concern, and the scarcity of others, with a mere handful of surviving individuals ( IUCN, 2018 ). The International Union for Conservation of Nature (IUCN) red list categories further categorize birds, with 1,470 species classified as threatened, and among them, 223 critically endangered, 461 endangered, and 786 vulnerable ( IUCN, 2018 ) In this tapestry of avian diversity, Ethiopia emerges as a hotspot, harboring 872 distinct bird species, 18 of which are endemic, and another 67 represented as endemic sub-species ( Mengistu, 2002 ). With 851 of its bird species evaluated within the IUCN red list categories, Ethiopia underscores the global importance of preserving avian populations ( IUCN, 2018 ). As they traverse the world’s diverse habitats, birds leave their ecological footprints, indicating the health of ecosystems. Birds, being excellent indicators of environmental health, offer a window into the impacts of pollution and climate change ( Sekercioglu, Daily & Ehrlich, 2004 ). The interplay between birds and their habitats is fundamental in shaping distribution patterns. Habitats, often shaped by vegetation and complemented by other factors, determine where birds thrive. Recognizing the significance of this dynamic, Important Bird and Biodiversity Areas (IBAs) and Key Biodiversity Areas (KBAs) have emerged as key tools for global conservation efforts. These designated areas, which number over 13,000 across more than 200 countries, act as crucial bastions for the conservation of biodiversity ( BirdLife International, 2018 ). Beyond their ecological roles, birds provide an array of essential ecosystem services. They diligently contribute to pollination, insect pest control, seed dispersal, and nutrient cycling, all which ripple through ecosystems, benefiting both nature and human society. Bird activity knits together ecosystems and influences the abundance of other species ( Sekercioglu, Daily & Ehrlich, 2004 ; Wenny et al., 2011 ). For example, frugivorous birds maintain gene flow and enhance restoration efforts through seed dispersal. In this context, birds can be regarded as ecological engineers, shaping landscapes, and fostering ecosystem resilience ( Wenny et al., 2011 ). However, the intricate web of avian diversity and its contributions to ecosystems faces a looming threat. Birds have become bioindicators of environmental changes, and their declining populations serve as a stark warning ( Bonisoli-Alquati et al., 2022 ; Mekonen, 2017 ). The IUCN red list data reveals a steady deterioration in the status of the world’s bird species ( IUCN, 2018 ). Human activities, from agricultural expansion and logging to pollution and invasive species introduction, are driving these declines ( Malhotra, 2022 ). Furthermore, the long-term specter of climate change hovers, potentially amplifying these threats ( de Moraes et al., 2020 ). The decline in avian diversity worldwide due to human activities and climate change poses a threat to the ecosystem services that birds provide. 
Therefore, there is an urgent need for conservation efforts to preserve avian diversity and safeguard these ecosystem services for the benefit of both nature and humanity. The objective of this study was to identify species diversity and relative abundance as baseline information through a survey or census of bird populations in Dodola forest. Initial surveillance or inventory of bird species has not been specifically conducted in the study area. The area is experiencing habitat disturbance, and the status of bird populations remains largely unknown, making this a critical concern. Therefore, it is essential to assess the composition, abundance, and presence or absence of birds across different habitats. This information is crucial for ongoing monitoring and evaluation of bird statuses in the study area. This baseline information would be used to inform conservation efforts and monitor changes in bird populations over time.
Materials and Methods Description of the study area Location The Dodola natural forest habitat is part of the Adaba Dodola Jalo forest, one of the 61 National Forest Priority Areas (NFPA) of the country, which covers approximately 530 km² ( Gelashe, 2017 ). The Ericaceous sub-afro alpine habitat is found at elevations above the natural forest, while the plantation forest lies below the dry evergreen afro-montane forest. Dodola forest is located in the West Arsi zone of the Oromia regional state, southeastern Ethiopia ( Fig. 1 ). The study area is adjacent to the Bale mountains massif, about 325 km southeast of Addis Ababa and 70 km from Shashemene. The area is bordered by the Kofale district to the west, the Adaba district to the east, the Nensabo and Kokossa districts to the south, and the Asasa district to the north. The geographical location ranges between 6°39′N, 38°57′E and 7°0′N, 39°24′E. The altitude varies from 2,400 to 3,712 m a.s.l. The area is part of the tropical forest and tropical shrubland zone and consists of natural forest (dry evergreen Afromontane forest), Ericaceous vegetation (sub-afro alpine habitat) and community plantation forest, with a total area of about 738.3 km². Climate and vegetation The study area has a four-month dry season (November–February) and an eight-month wet season (March–October) ( Hundera, Bekele & Kelbessa, 2007 ). The forest is categorized as an upland dry evergreen Afromontane forest ( Friis, Rasmussen & Vollesen, 1982 ). The Dodola region’s forest landscape changes with altitude. Between 2,565 and 2,800 m, conifer forests are dominant, with Podocarpus and Juniperus as the prevailing species. In the middle altitude zone of 2,804–3,115 m, Juniperus procera takes the lead alongside other broadleaf hardwood species, while Podocarpus falcatus becomes less common and is found only sporadically at the lower boundary of this zone. In the upper elevation range of 3,120–3,400 m ( Brooks, 2009 ), the forest is similar in ecological characteristics to Bale Mountain National Park, featuring highland forest habitat and sub-afro alpine terrain with Ericaceous vegetation ( Evangelista, Swartzinski & Waltermire, 2007 ). Erica trimera dominates at higher elevations, while Erica arborea prevails at lower elevations. Additionally, the Dodola region’s forest includes native species such as Hagenia abyssinica, Hypericum lanceolatum and Erica arborea, as well as introduced exotic species such as Eucalyptus and Cupressus lusitanica in peripheral areas. Juniperus procera is noteworthy for its susceptibility to wildfires and its preference for well-drained, nearly neutral pH soils, thriving within specific altitude, precipitation, and temperature conditions in the study area ( Gelashe, 2017 ). Socioeconomic information The total population of the district is about 194,000. The urban population of 35,000 (18%) is one of the largest in the zone ( Ethiopian Central Statistical Agency, 2007 ). Subsistence agriculture and animal husbandry are the main activities in and outside of the forest delineation area. Methods Preliminary survey A preliminary assessment was carried out during September 2018 to identify key habitats and to observe habitat type, age effect, topography, and climatic factors as preconditions for the survey design. During this period, waypoints were collected with GPS in each habitat type ( QGIS.org, 2018 ). A pilot survey was also conducted to obtain sample size information.
Sampling design A point transect sampling method was used to investigate bird species composition, relative abundance, and habitat association ( Buckland et al., 1993 ). Based on the preliminary survey, the study area was stratified into three dominant habitat types using QGIS: the sub-afro alpine Ericaceous scrubland habitat, the dry evergreen Afromontane forest, and the mixed plantation forest. In each habitat type, a systematic sampling design was employed. There were eleven blocks: five Erica, five forest and one plantation. The total block area was 128.839 km², which is 17.5% of the study area. A systematic point grid with a fixed spacing of 1.5 km was randomly superimposed and rotated onto the survey region, and points were allocated proportionally to each habitat type ( Fig. 2 ; Buckland et al., 1993 ). The required number of sample points in the survey region was calculated as k = (b / {cv_t(D)}^2) × (k0 / n0), where cv_t(D) is the target coefficient of variation of the density estimate, k0 and n0 are the number of points and the number of detections roughly estimated in a pilot survey, and b = 3 ( Buckland et al., 1993 ). In the pilot survey there were five points and 54 individual bird observations. The required number of points was 111: 42 points in Erica, 64 in forest and five in plantation ( Fig. 2 ). Data collection Field guidebooks (Birds of the Horn of Africa, Birds of East Africa, Birds of Lake Tana, and Important Bird Areas of Ethiopia) were used to identify the bird species present in the area ( Redman, Stevenson & Fanshawe, 2016 ). Data collection was carried out in two seasons: July and August for the wet season, and December and January for the dry season. In each season, data collection was conducted in two sessions/visits. Counts were carried out early and late in the afternoon. Detection distances were measured from the point to the detected object ( Buckland et al., 1993 ), and all observations beyond a 70 m sighting distance were truncated. Bird songs were used to detect the most elusive forest birds ( Buckland et al., 2001 ), and identification and counting of most bird species were assisted by binoculars. Points were placed at least 200 m inside the habitat edge to avoid edge effects. Each point count lasted between 2 and 20 min ( Bibby, Jones & Marsden, 1998 ). Data analysis Information on habitat type, season, visit, block, point, cluster size and species code was organized in a single data frame. Using R data-handling functions, the data were arranged for the similarity and diversity analyses as a presence/absence data frame with species as rows and, respectively, one column per habitat for the similarity analysis and one column per sample point for the species accumulation curve analysis. Data were analyzed with the distance sampling method in Distance 7.3 ( Thomas et al., 2010 ) and the mark-recapture distance sampling (MRDS) analysis engine supplemented by R ( R Core Team, 2019 ). R was used for the ANOVA tests (car package), and similarity and diversity indices were analyzed with the simba and vegan packages ( R Core Team, 2019 ). AIC and the chi-square statistical test were applied to obtain the best-fitted models ( Buckland et al., 2001 ; Buckland et al., 1993 ). The results were based on data recorded at 111 sample points and a total effort of 222 point visits (two visits) across both seasons. The distance analysis was based on the formulas described in Eqs. (1) – (4) ( Buckland et al., 2001 ; Buckland et al., 1993 ).
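As a worked illustration of the sample-size rule used in the sampling design above (a sketch only: the target coefficient of variation is an assumption, since the article reports only the pilot values k0 = 5 points, n0 = 54 detections, b = 3 and the resulting design of 111 points):

# Required number of point-transect stations, following Buckland et al. (1993):
#   k = (b / cv_target**2) * (k0 / n0)
# where k0 and n0 come from the pilot survey and cv_target is the desired
# coefficient of variation of the density estimate (assumed here, not stated in the article).
k0, n0, b = 5, 54, 3
cv_target = 0.05

k = (b / cv_target**2) * (k0 / n0)
print(round(k))   # 111 points, matching the number of points used in the survey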
For point transect analyses, MRDS always uses the P3 estimator for the encounter rate variance, where t_i is the number of times point i was visited and n_ij is the number of objects detected at point i on visit j. The relative abundance of avian species was determined using encounter rates, calculated for each species by dividing the number of birds recorded (n) by the number of points (k) multiplied by the number of visits, or effort (t) ( Buckland et al., 2001 ); the encounter rate (ER) was therefore estimated as ER = n / (k × t). Encounter rate data were classified into crude ordinal categories of abundance (rare, uncommon, frequent, common, and abundant) ( Table 1 ). The intervals of individuals per total effort were ≤0.01, 0.01–0.2, 0.2–1, 1–4 and >4, with the corresponding labels rare, uncommon, frequent, common, and abundant, respectively. The relative abundance class of each bird species was then assigned with an Excel IF function; for example, a species with an encounter rate ≤0.01 was considered rare. The analysis used two types of data selection step in the multispecies analysis options: the first set up individual-species analyses using a data filter, while the second was not based on individual species but treated birds as a single taxonomic category, the class Aves, rather than as separate species taxa. In both steps, habitats were used as strata, whereas seasons were analyzed separately using a data filter. A two-way ANOVA was used to analyze the effects of three factors, season, habitat, and species, on density and on the number of individual observations. Type III sums of squares were used to investigate the interaction effect (Model 1), and Type II sums of squares were used in the case of no interaction effect (Model 2): Model 1: y_ijkl = μ + α_i + β_j + γ_k + δ_ij + ε_ijkl; Model 2: y_ijkl = μ + α_i + β_j + γ_k + ε_ijkl, where μ is the overall mean of species observed; α_i, β_j and γ_k are the i-th habitat, j-th season and k-th species effects, respectively; and δ_ij is the habitat × season interaction term ( Searle, Speed & Milliken, 1980 ). A post-hoc test was used for separate group analysis where an interaction effect was present, and estimated marginal means (emmeans) were used for pairwise comparisons of groups in the absence of an interaction. Differences were considered statistically significant at 5% ( Chambers & Hastie, 1992 ). The unbiased Simpson index was calculated as D = Σ n_i(n_i − 1) / [N(N − 1)] ( Hurlbert, 1971 ); the simpson option returns 1 − D and invsimpson returns 1/D. The Shannon index was calculated as H′ = −Σ (n_i/N) ln(n_i/N), where n_i denotes the number of individuals of the i-th species (n_1 + n_2 + ... + n_S = N) and S is the total number of species ( Shannon, 2001 ). In Fisher’s logarithmic series, the expected number of species f_n with n observed individuals is f_n = α x^n / n; the parameter α is used as a diversity index, while x is treated as a nuisance parameter that is not estimated separately but taken to be n/(n + α) ( Fisher, Corbet & Williams, 1943 ). The species discovery curve was used to express species richness (the number of species discovered) across sample points, based on the sample-based rarefaction formula, to check that the sample size was adequate for a multi-species survey. For a collection of n samples, the rarefaction curve is the plot of S_i = S_n − C(n, i)^(−1) Σ_{k∈G} C(n − n_k, i) against i (i = 1, ..., n), where S_i is the arithmetic mean species richness over all subsets of i samples, S_n denotes the total number of observed species, and n_k denotes the number of samples containing at least one individual of species k ∈ G ( Chiarucci et al., 2008 ). Diversity and relative abundance are presented in tables, QQ plots and detection function plots, and statistical differences are presented through ggplot2, supported by narrative descriptions.
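The encounter-rate classification and the diversity indices described above can be illustrated with the short Python sketch below; the species counts and effort are invented, the ordinal cut-offs follow Table 1, and the index formulas are the standard Simpson, Shannon and Pielou forms rather than the authors' vegan/simba code.

import numpy as np

counts = np.array([120, 45, 9, 3, 1])        # hypothetical individuals per species
effort = 84                                  # e.g., 42 Erica points x 2 visits

# Encounter rate and ordinal abundance class (cut-offs of Table 1)
def abundance_class(er):
    if er <= 0.01: return "rare"
    if er <= 0.2:  return "uncommon"
    if er <= 1.0:  return "frequent"
    if er <= 4.0:  return "common"
    return "abundant"

er = counts / effort
classes = [abundance_class(x) for x in er]

# Diversity indices
N = counts.sum()
p = counts / N
simpson_D  = np.sum(p**2)                                     # Simpson's D (1-D and 1/D also reported)
simpson_ub = np.sum(counts * (counts - 1)) / (N * (N - 1))    # finite-sample (unbiased) form
shannon_H  = -np.sum(p * np.log(p))                           # Shannon-Wiener H'
evenness_J = shannon_H / np.log(len(counts))                  # Pielou's evenness

print(list(zip(er.round(3), classes)))
print(round(1 - simpson_D, 3), round(1 - simpson_ub, 3), round(shannon_H, 3), round(evenness_J, 3))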
Habitat association of species was assessed by computing Sorensen’s similarity index (SI) among habitats in the two seasons using the following formula: SI = 2a/(2a + b + c), where a = the number of species common to the two habitats, b = the number of species found only in the first habitat, and c = the number of species found only in the second habitat ( Sorensen, 1948 ).
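A minimal sketch of the Sorensen calculation from presence/absence lists (hypothetical species sets, shown only to make the formula concrete; these are not the surveyed habitat lists):

# Sorensen similarity between two habitats from presence/absence data
erica  = {"Red-winged Starling", "Scarce Swift", "Thekla Lark", "Common Buzzard"}
forest = {"Red-winged Starling", "Montane White-eye", "Common Buzzard"}

a = len(erica & forest)          # species shared by both habitats
b = len(erica - forest)          # species only in the first habitat
c = len(forest - erica)          # species only in the second habitat

SI = 2 * a / (2 * a + b + c)
print(round(SI, 2))              # 1 = identical species lists, 0 = no species shared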
Results Species composition Over the course of two distinct climatic periods (dry and wet), a total of 78 bird species were recorded. Among the recorded species, the Abyssinian Catbird ( Parophasma galinieri ), Ethiopian Siskin ( Serinus nigriceps ), and Yellow-fronted Parrot ( Poicephalus flavifrons ) were identified as endemic to Ethiopia. Furthermore, a subset of ten species, including the Wattled Ibis ( Bostrychia carunculata ), the Black-winged Lovebird ( Agapornis taranta ) and Rouget’s Rail ( Rougetius rougetii ), are recognized as endemic to both Ethiopia and Eritrea ( Appendix S1 ). Based on the lowest AIC value in the MRDS analysis engine, the fitted model was a single-observer distance model with a half-normal key function and a constant scale parameter (CDS). Figure 3 shows the species discovery curve and Fig. 4 the extrapolation curve, with the number of species on the y-axis and the number of sample points on the x-axis; the asymptotic shape of the curve indicates that species discovery was adequate. The asymptote predicts 86 species, which means that over 90% of the species in the area were discovered, with a slope of 2.62 (the closer the slope is to zero, the fewer species in the area remain undetected) ( Fig. 3 ). The Quantile-Quantile (QQ) plot, which shows the fitted cumulative distribution function (cdf) against the empirical distribution function (edf), represents the observed bird detections: the dots correspond to the observations, and the line represents the expected distribution if the model fit were perfect. The proximity of the dots to the line indicates the fit of the model; in this case, the dots lying close around the line suggest that the model is well fitted ( Figs. 5 and 6 ). The detection function plots illustrate the expected probability density function of detection frequencies against distance; the curve represents the expected distribution, while the histograms display the number of observations. The unweighted Cramér-von Mises test p-value was less than 0.001 in both seasons ( Figs. 7 and 8 ). It is important to note that the detection functions depicted in Figs. 6 and 8 represent the overall class Aves and do not account for individual bird species observed in the study. The species composition of birds did not differ significantly between the wet and dry seasons (F = 0.004, p = 0.95). In contrast, there was a significant difference among habitats (F = 12.78, p = 7.466e−06). There was no habitat × season interaction effect (F = 2.28, p = 0.11). The estimated marginal means, also known as least-squares means, revealed differences in the mean number of species among the habitat types: the mean number of species was 24 (±3.16 SE) in the Erica habitat and 22 (±2.33 SE) in the forest habitat, whereas in the plantation habitat the estimated marginal mean was −0.8 (±4.3 SE) ( Fig. 9 ). According to a Tukey pairwise comparison test at the 95% confidence level, there was no significant difference in the mean number of species between the Erica and forest habitats (p > 0.05), while plantation had the lowest mean number of species (Erica vs plantation and forest vs plantation, p < 0.01).
The highest species diversity (D) during the wet and dry seasons was observed in the forest habitat (dry evergreen Afromontane forest) (0.951 and 0.949), followed by the Erica (sub-afroalpine) habitat (0.929 and 0.926), while the mixed plantation habitat had the lowest diversity (0.905 and 0.887). The highest species evenness was observed in the Erica habitat. Over the entire study period, the forest habitat had the highest species diversity (0.943), while the Erica habitat had the highest species evenness (0.85) ( Table 2 ). Species relative abundance In the dry season, a total of 2,639 individual birds were recorded, while in the wet season 2,410 individual birds of the 78 species were observed ( Table 3 ). According to the 2018 IUCN Red List categories, six species are globally threatened, three are near threatened, and 69 are classified as least concern. The mixed plantation forest habitat recorded the highest relative abundance of Aves, with 15.7 and 16.7 during the dry and wet seasons, respectively. This was followed by Erica in the dry season with 13.14 and the forest in the wet season with 10.67. The dry season exhibited the higher overall seasonal relative abundance, at 11.89 ( Table 3 ). The relative abundance of individual species in the stratified habitats is shown in Tables 4 , 5 and 6 for the Erica, forest and plantation habitats, respectively. In the Erica (sub-afroalpine) habitat, the encounter rate was calculated as the number of individual observations in Erica divided by the number of Erica sample points multiplied by the number of visits (n/84). In the Erica habitat, the Red-winged Starling had the highest relative abundance during the dry season (1.95), while the Scarce Swift had the highest relative abundance in the wet season (1.02). The Chestnut-naped Francolin and Common Buzzard were not recorded in the dry season, and similarly the Yellow-billed Kite and White-headed Vulture were not recorded in the wet season. During the dry season, four, 10 and 21 species were classified as common, frequent, and uncommon, respectively; during the wet season, one, 17 and 18 species were common, frequent, and uncommon, respectively. No species were recorded as rare or abundant in either season ( Table 4 ). In the dry Afromontane forest habitat, the encounter rate was calculated as the number of individual observations divided by the forest habitat effort (n/128). The Montane White-eye had the highest relative abundance in both seasons (1.88 and 1.57). The Mouse-colored Penduline Tit, Variable Sunbird, Abyssinian Owl, African Stonechat and Common Buzzard were not recorded in the dry season, while the Yellow-billed Kite and White-headed Vulture were not recorded in the wet season. During the dry season, two, 14, 39 and two species were common, frequent, uncommon, and rare, respectively. During the wet season, one, 19 and 34 species were common, frequent and uncommon, respectively; no rare or abundant species were recorded in the wet season ( Table 5 ). In the plantation forest habitat, the encounter rate was calculated as the number of individual observations divided by the plantation forest habitat effort (n/10). The Groundscraper Thrush had the highest relative abundance during the dry season (3.1), and the Yellow-crowned Canary was the highest in the wet season (2.00). During the dry season, two, 10 and seven species were common, frequent and uncommon, respectively.
During the wet season, two, four and 14 species were common, frequent and uncommon, respectively. No rare or abundant species were recorded in either season. Six species in the dry season and five species in the wet season were recorded in only one season ( Table 6 ). Habitat association of bird species Not all species were distributed across all habitat types. Of the 39 bird species recorded in the Ericaceous sub-afro alpine scrubland, 15 were specific to that vegetation ( Table 4 ); in the Afromontane forest habitat, 59 bird species were found, of which about 26 were specific to the habitat ( Table 5 ); and in the plantation forest habitat, 26 bird species were recorded, of which three were specific to the community plantation forest ( Table 6 ). The remaining 34 species were recorded in all three habitats or in two of them ( Appendix S1 ). The forest habitat accounted for both the highest number of bird species and the highest number of habitat-specific species. The Erica and forest habitats shared the most species in the dry season ( Fig. 10 and Table 7 ), plantation and forest shared the most species in the wet season ( Fig. 11 and Table 7 ), and plantation and Erica shared the fewest species ( Table 7 ).
Discussion Low detection frequencies and detection probabilities Some species, including the African Black Swift, Pied Crow, and Common Buzzard, exhibited low detection frequencies, falling below the standard recommended threshold of 60–80 observations. This could introduce bias into results, as insufficient data for these species may hinder robust analyses ( Buckland et al., 1993 ). Additionally, certain species, particularly those specialized in woodland habitats, displayed lower detectability. This lower detectability, especially for species in closed habitats, may be influenced by various factors, including habitat structure and observer bias ( Johnston et al., 2014 ). For multispecies surveys, it is crucial to account for local habitat effects on all species, not just those with abundant data ( Zipkin et al., 2010 ). Seasonal variability The research unveiled substantial seasonal fluctuations in bird abundance, with marked differences between the dry and wet seasons. While data collection was successful in both seasons, some species exhibited stronger presence during the wet season. This observation aligns with existing research emphasizing the influence of seasonal changes in resource availability and weather conditions on avian populations ( French & Rockwell, 2011 ; Li et al., 2022 ). However, the effect of seasonality on avian species composition may be less pronounced in tropical regions. Seasonal changes in bird populations, feeding habits, and migration patterns are more prominent in temperate regions ( Ward, 1969 ; White, Warren & Baines, 2015 ). Many birds in the study area are resident breeders, with limited migration during seasonal shifts, possibly contributing to the insignificant effect of seasons on bird species composition ( Appendix S1 ). Migratory birds in Ethiopia are primarily associated with aquatic, wetland, and riverine habitats ( Brooks, 2009 ). Habitat influence on bird composition and structure Habitat strongly influenced bird species composition, with distinct preferences observed among different species. Some species displayed specific habitat associations, such as the White-backed Black Tit in forest habitat, Thekla Lark in Erica, and Semi-colored Flycatcher in plantation habitat ( Tables 4 , 5 and 6 ). This suggests that avian community composition in the Ethiopian Highlands is intricately linked to habitat types. These findings align with previous studies highlighting the importance of habitat characteristics in shaping avian communities ( Aynalem & Bekele, 2008 ). This suggests that various bird species have adapted to distinct ecological niches within the Ethiopian Highlands. The high species abundance in natural forest habitats further underscores their significance in avian biodiversity conservation. These findings resonate with studies by Hendershot et al. (2020) , which underscored the importance of preserving diverse habitat types for effective avian biodiversity conservation. The heightened encounter rate observed within plantation forests ( Table 6 ) suggests the presence of edge effects, particularly as influenced by adjacent agricultural areas. These edge effects are known to attract generalist bird species ( Khamcha et al., 2018 ), which typically exploit transitional zones. However, it is noteworthy that the elevated encounter rate is primarily attributed to a select few bird species that have specifically adapted to the plantation forest habitat. 
Microclimate and habitat structure emerged as major drivers influencing avian community composition within specific habitats ( Rajpar & Zakaria, 2015 ). The relationship between habitat and species composition was further evident in the similarity index results, which indicated higher similarity between neighboring habitats ( Table 5 ). Microclimate, habitat structure, and environmental gradients likely contribute to species distribution patterns and preferences within the Ethiopian Highlands. Altitudinal gradients played a role in avian diversity, with the highest species composition recorded in middle elevation zones, primarily within forest habitats ( Quintero & Jetz, 2018 ). Decreases in diversity at higher altitudes may be attributed to factors such as lower speciation or higher extinction rates, potentially influenced by smaller areas or lower temperatures ( Quintero & Jetz, 2018 ). The size of the habitat patch and edge effects may also influence avian species composition. Edge-sensitive, neutral, and preferring species respond differently to habitat edges ( Brand & George, 2001 ). Forest species exhibit sensitivity to the contrast between natural and anthropogenic habitats ( Zurita et al., 2012 ). Plantation habitats, characterized by smaller areas, displayed lower species composition estimates ( Quintero & Jetz, 2018 ). As habitat destruction is a significant concern, particularly in forested areas, preserving diverse habitats and their associated bird species should be a conservation priority ( Girma et al., 2017 ; Wang et al., 2017 ). Future research in this region should address the limitations of our study. Long-term monitoring with extended survey periods, including intermediate seasons, can provide a more comprehensive understanding of avian population dynamics. In addition, expanding taxonomic coverage and accounting for external factors such as climate change and invasive species will enhance our understanding of avian biodiversity in the Ethiopian Highlands.
Conclusion The study revealed the presence of three unique endemic bird species, constituting a notable 20% of Ethiopia’s endemic avian population. Moreover, within the study area, an impressive 71% of Ethiopia and Eritrea’s endemic bird species call this region home, highlighting the exceptional levels of endemism present. These findings create an opportunity for the development of community-based ecotourism initiatives. Significantly, bird observation within various blocks of the study area is a crucial aspect. Rather than being influenced by seasonal fluctuations, these blocks are distinguished by differences in elevation, vegetation types, and the presence or absence of bird species. These variations in elevation generate microclimates, each nurturing distinct bird communities. However, this localized endemism also presents challenges, including the concentration of endemic species and potential resource constraints that could pose risks to specific bird populations. The study underscores the critical need for sustained surveillance and conservation strategies, particularly targeting forest-dependent and Erica-specific avian species. These proactive measures are imperative to address the potential risks associated with resource limitations and to safeguard the continued existence of these distinct bird communities. Our findings serve as a call to action for conservationists and policy makers, emphasizing the importance of preserving these unique ecosystems for future generations. Thus, it is vital to prioritize dedicated conservation efforts, incorporating a multifaceted approach involving community-based ecotourism development and landscape restoration projects. This study represents the inaugural avian survey within this ecologically significant region. It establishes the groundwork for future research endeavors that can explore various aspects of this ecosystem. These future inquiries may investigate relationships between forest fragmentation and bird density, the impact of human disturbance on bird populations, and the intricate interplay between vegetation and bird communities. As this initial exploration concludes, it opens doors to a wealth of forthcoming insights and discoveries aimed at preserving the Dodola dry afromontane forest and ericaceous scrubland ecosystems.
Background Birds’ functional groups are useful for maintaining fundamental ecological processes, ecosystem services, and economic benefits, and the negative consequences of losing functional groups are substantial. Birds are usually found at a high trophic level in food webs and are relatively sensitive to environmental change. Methods The first bird surveillance study was carried out in southeastern Ethiopia, adjacent to Bale Mountain National Park, with the aim of investigating the composition, relative abundance, and distribution of Aves. Using a regular systematic point transect sampling design, density and species composition were analyzed with the mark-recapture distance sampling engine assisted by R statistical software. Results This study recorded a total of seventy-eight bird species over two distinct seasons. Among these, fifteen species were exclusive to the Erica habitat, twenty-six to the natural forest habitat, and three to the plantation forest habitat. The study also recorded three endemic species. Based on the 2018 IUCN Red List categories, six of the species are globally threatened, three are near threatened, and the remaining sixty-nine are classified as least concern. The relative abundance of birds did not differ significantly across habitats and seasons, but variations were observed among blocks. Bird density fluctuated across the three habitats and two seasons; however, these habitat differences were not driven by seasonal changes. Conclusion The findings of this study reveal that the differences in composition and relative abundance in the forest and Erica habitats are not merely seasonal changes. Instead, these habitats create microclimates that cater to specific bird species. However, this localized endemism also presents challenges: the concentration of endemic species and potential resource constraints could pose a threat to these habitat-specialist birds.
Supplemental Information
Additional Information and Declarations
CC BY
no
2024-01-15 23:43:49
PeerJ. 2024 Jan 11; 12:e16775
oa_package/c5/3e/PMC10788088.tar.gz
PMC10788089
38223754
Introduction The systematization of taxa of endemic wild mountain plant species is an urgent issue in the latest taxonomy of the Rosaceae Juss family; one of the most prominent examples is the systematization of representatives of endemic mountain shrubs of the almond subgenus. The almond is one of the most essential cultivated and wild plant species worldwide. Shrub forms are often used in introductory and subsequent material for landscaping large cities in Kazakhstan. Studying isolated endemic populations will allow for the comparison of genetic variation among the genus’ general distribution area species, as some medicinal properties of almonds are also known. Specimens from pristine mountain populations are potential carriers of valuable biological and chemical compounds ( Gradziel, 2011 ). Almond plants are members of the genus Prunus L. (tribe Amygdaleae ), one of 65 genera of the subfamily Prunoideae of the complex family Rosaceae Juss. The genus is represented by four subgenera (subgen. Amygdalus (L.) Focke., Cerasus (Mill.) A.Gray., Emplectocladus (Torr.) A.Gray., and Prunus L.) and includes about 254 species. The subgenus Amygdalus consists of six sections: Amygdalopsis (Carr.) Linsz., Cerasioides (Carr.) Linsz., Chamaeamygdalus Spach, Euamygdalus Spach., Lycioides Spach., and Spartioides Spach. One of the least studied sections is the pygmy almond Chamaeamygdalus , which has low yield, a special protection status (endemic and rare plant species), and ornamental properties ( Artemov et al., 2009 ; Browicz & Zohary, 1996 ). According to the list of vascular plants in Kazakhstan ( Abdulina, 1999 ; Komorov, 1941 ), there are three species in the flora of Kazakhstan: Chamaeamygdalus —steppe almond ( Prunus tenella Batsch syn. Amygdalus nana L.), Ledebour’s almond ( Prunus ledebouriana (Schlecht.) YY Yao syn. Amygdalus ledebouriana ( Schlechtendal, 1854 )), and Pettunnikov’s almond ( Prunus petunnikowii (Litv.) Rehder syn. Amygdalus petunnikowii Litw.). In addition, the list of flora of Kazakhstan includes one species of the section Lycioides ( Amygdalus communis L. syn. Prunus dulcis (Mill.) D.A. Webb.) and one species of the section Euamygdalus ( Amygdalus spinosissima Bunge. syn. Prunus spinosissima (Bunge) Franch.). P. tenella is widespread in Southern Europe and the European part of Asia, mainly in the steppe zones, and is available for cultivation. According to the flora of the Kazakh Soviet Socialist Republic (Kazakh SSR), P. ledebouriana is an endemic species for Kazakhstan, growing in the Altai, Tarbagatai, and Dzungarian Alatau mountains and replacing P. tenella in east Kazakhstan ( Pavlov, 1961 ). P. ledebouriana is listed in the Red Book of the Republic of Kazakhstan ( Baitulin, 2014 ) and the Book of Woody Plants of Central Asia ( Eastwood, Lazkov & Newton, 2009 ). Various international databases have interpreted the status of P. ledebouriana ( The Plant List (TPL) and World Flora Online (WFO)) and attempted to determine the taxonomy of this species ( GBIF, 2020 ). P. ledebouriana is closely related to the steppe almond P. tenella , as they have similar morphological features and are distinguished by a relatively large habitus (plant height), sizes of leaves, and fruits ( Orazov et al., 2020 ). P. tenella is the southernmost species of section Chamaeamygdalus , has a wide distribution from the northern Balkans to Kazakhstan and China ( Ladizinsky, 1999 ), and is often used for cultivation as an ornamental species. 
These two related species have no boundaries and the genetic differences are poorly understood. According to a report on the flora of China, P. tenella has a synonymous name, A. nana (syn.) ( Lu et al., 2003 ). Across various literary sources, two synonymous species have been recorded in the territory of East Kazakhstan (an administrative region of Kazakhstan bordering Russia and China), adding complexity to identifying the species. The distribution of P. tenella among studied territories is not uniform since the area consists of several isolated populations. According to various sources, P. tenella predominates in the steppes and low hills of the low mountains of the Kalba and Ulba ranges in the Altai Mountains ( Planetarium ). P. ledebouriana is found in the cold and xerophytic mountain areas adjacent to Russia (Narym Range of the Altai Mountains) and China (foothills of the Tarbagatai Range) ( Orazov et al., 2020 ). The population in the foothills of Tarbagatai is the most extensive and is included in the Red Book of Kazakhstan ( Stepanova, 1962 ; Baitulin, 2014 ). Natural populations of P. ledebouriana are declining due to habitat degradation, frequent droughts, changes in the fire regime (succession), overgrazing, and urbanization ( Sumbembaev, 2018 ; Sumbembayev et al., 2021 ; Aidarkhanova et al., 2022 ; Kusmangazinov et al., 2023 ). Therefore, by the Decree of the Government of the Republic of Kazakhstan in 2018, the Tarbagatai State National Natural Park (East Kazakhstan region, Urdzhar district) was adopted and the preservation of this rare and endangered species was recommended ( Republic of Kazakhstan, 2018 ). The morphological similarity of the two species complicates the protection of this endemic plant species ( Potter et al., 2002 ). In this regard, making a clear distinction between the two species of wild almond populations in Eastern Kazakhstan is crucial. P. ledebouriana primarily reproduces vegetatively in addition to sexual reproduction. It usually flowers from April to May and bears fruit from June to July ( Pavlov, 1961 ; Bin et al., 2008 ; Orazov et al., 2022 ). Isolating factors include the presence of Lake Zaisan and the Irtysh River that feeds it; the location of the Zaisan basin between Altai and Tarbagatai and the non-contiguous Ulba, Kalba, Narym ridges of the Altai mountains; and the remote location of the Tarbagatai ridge of the Saur-Tarbagatai mountains ( Egorina, Zinchenko & Zinchenko, 2003 ). Applying molecular methods in botany and plant systematics has provided opportunities for identifying and confirming species and their taxonomic position in the genus ( Andersen & Lübberstedt, 2003 ). Various types of DNA markers have been successfully used to assess the genetic diversity of species of Prunus . These studies included the use of random amplified polymorphism of DNA (RAPD) ( Casas et al., 1999 ), inter simple simples sequence repeats (ISSR) ( Martins, Tenreiro & Oliveira, 2003 ), amplified fragment length polymorphism (AFLP) ( Struss et al., 2003 ), and simple sequence repeats (SSR) ( Aranzana et al., 2003 ) markers. One of the most informative types of DNA markers are microsatellite markers (SSR), which are characterized by a high level of polymorphism and codominant inheritance ( Kalendar, 2011 ; Genievskaya et al., 2020 ). 
Analysis of population genetics using microsatellite markers provides information on the overall levels of genetic diversity, genetic structure, and effective population size, which are typically critical when developing effective management strategies for the research of genetic resources of endemic plant species ( Turuspekov & Abugalieva, 2015 ; Abugalieva & Turuspekov, 2017 ; Almerekova et al., 2018 ; Almerekova et al., 2020 ; Genievskaya et al., 2020 ). Various microsatellite markers have been successfully used to study the phylogenetic relationships between various cultivated almonds (common almond P. dulcis (Mill.) DA Webb.) and their wild relatives ( Xu et al., 2004 ; Xie et al., 2006 ; Shiran et al., 2007 ; Sorkheh et al., 2007 ; Zhang et al., 2018 ; Zargar et al., 2023 ). However, there have been limited studies on the genetic analysis of natural populations of wild species in the subgenus Amygdalus ( Varshney, Graner & Sorrells, 2005 ; Tahan et al., 2009 ). This study is one of the first to explore the mountain populations of P. ledebouriana and obtain genetic information. The application of SSR markers in population genetic analysis for the narrowly endemic species P. ledebouriana can be successfully used to study the genetic diversity and population structure of the natural population in Eastern Kazakhstan.
Materials & Methods Study of genetic structure using SSR This study investigated three isolated populations of P. ledebouriana from two mountain geographic ranges (Altai and Tarbagatai) and one P. tenella from Eastern Kazakhstan. Among these four populations, the first one (1-UR) was collected on several isolated gorges in the state national natural park “Tarbagatai” at the height of the shrub belt. The materials of the second and third populations were collected in the cold and xerophytic mountain regions of the Kalba (2-KO) and Narym (3-KA) ridges. The population of P. tenella (4-UK) was collected from a small hilly plain (steppe) zone in the outlines of the Kalbinskiy and Ulbinskiy ridges in the border zone of the city of Ust-Kamenogorsk and the village of Novo-Akhmirovo ( Table S1 ). The plant height of P. ledebouriana was measured according to Goloskokov (1972) . In total, 20 leaves from each of the 60 P. ledebouriana and 20 P. tenella plant populations were collected. The distances between populations were at least 100 kilometers, and plants within a population were selected at a distance of at least 50 m from each other. Fresh plant leaves were used for DNA extraction. Total DNA was isolated from crushed leaf powder according to the Cetyl trimethyl ammonium bromide (CTAB) protocol with double purification with chloroform ( Doyle & Doyle, 1987 ). The quality and concentration of DNA were assessed using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) and electrophoresis in 1% agarose gel. DNA concentration was normalized to the working concentration for further analysis. Twenty-two SSR markers of the nuclear genome were used as DNA markers and selected according to Mnejja et al. (2005) . PCR amplification was performed in 10 ul reaction volume containing 20 ng template DNA, one PCR buffer, 1.5 mM of MgCl2, 0.2 mM of dNTPs, 0.4 mM of each primer, and one unit of Taq DNA polymerase (Sileks, Badenweiler, Germany). PCR conditions were set at 95 °C for 1 min, followed by 35 cycles of 94 °C for 30 s, 50–65 °C for 30 s, 72 °C for 30 s, and 5 min at 72 °C for final elongation. PCR products were separated on a 6% polyacrylamide gel using 0.5x Tris-borate-EDTA buffer. DNA fragments were identified using an ethidium bromide staining procedure. Alleles were determined using 100-bp ladders (Thermo Fisher Scientific). Visualization was performed using the GelDoc XR+ gel documentation system (Bio-Rad, Hercules, CA, USA). Statistical analysis and polymorphism information content Descriptive statistics and t-tests (SPSS Statistics v.27.0; https://www.ibm.com/products/spss-statistics ) were used to describe morphological features and identify population differences (plant height across different populations of P. ledebouriana and P. tenella ). Each DNA fragment obtained was treated as a separate character and evaluated as a discrete variable. Accordingly, rectangular binary data matrices were obtained for SSR markers. To assess the effectiveness of markers, the following polymorphism indices were used: marker index (MI), resolving power (Rp), observed heterozygosity (HO), expected heterozygosity (HE), polymorphism information content (PIC values), inbreeding coefficient (FIS), and fixation index, (Fst) as estimated by Weir & Cockerham (1984) . The genetic diversity of P. ledebouriana , including the Shannon diversity index (I), was assessed using PopGene ( Yeh et al., 2000 ). 
Wright’s F-statistics were calculated for each SSR locus using the GenAlEx 6.5 program ( Peakall & Smouse, 2006 ). Analysis of molecular variance (AMOVA) was performed among populations using GenAlEx 6.5 ( Peakall & Smouse, 2006 ). Principal coordinate analysis (PCoA) was performed using the Numerical Taxonomy and Multivariate Analysis System (NTSYS-pc) program ( Rohlf, 1998 ). Bayesian cluster analysis was also applied using the STRUCTURE program ( Pritchard, Stephens & Donnelly, 2000 ). The dendrogram was constructed in the PAST program using the unweighted pair group method with arithmetic mean (UPGMA) algorithm and 1,000 bootstrap replicates ( Joshi et al., 2000 ). Cluster analysis of P. ledebouriana and P. tenella was performed with STRUCTURE using an admixture model with correlated allele frequencies, which made it possible to analyze admixture proportions. Five independent simulations were run, each including 100,000 burn-in steps and a subsequent 100,000 Markov chain Monte Carlo (MCMC) iterations.
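The clustering and ordination steps can be sketched generically as follows (a toy Python example with an invented distance matrix; the authors used GenAlEx, NTSYS-pc, PAST and STRUCTURE rather than this code). UPGMA corresponds to average-linkage hierarchical clustering, and PCoA to classical metric scaling of the distance matrix.

import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# Toy pairwise genetic distance matrix for five individuals
D = np.array([[0.00, 0.10, 0.42, 0.45, 0.40],
              [0.10, 0.00, 0.44, 0.46, 0.41],
              [0.42, 0.44, 0.00, 0.12, 0.48],
              [0.45, 0.46, 0.12, 0.00, 0.47],
              [0.40, 0.41, 0.48, 0.47, 0.00]])

# UPGMA = hierarchical clustering with average linkage on the condensed distances
Z = linkage(squareform(D), method="average")

# Principal coordinate analysis (classical MDS): double-center -D^2/2 and eigendecompose
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D**2) @ J
eigval, eigvec = np.linalg.eigh(B)
order = np.argsort(eigval)[::-1]
coords = eigvec[:, order[:2]] * np.sqrt(np.maximum(eigval[order[:2]], 0))

print(Z)          # UPGMA merge history
print(coords)     # first two principal coordinates per individual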
Results Variability of plant height among populations The samples of the three P. ledebouriana populations and the one population of P. tenella were assessed for plant height ( Fig. 1 ). The greatest plant height was recorded for the 3-KA population (2.09 ± 0.06 m), which was the mountain population at the highest elevation above sea level ( Table S1 ). Similar plant heights were recorded in representatives of the two other mountain populations, 2-KO and 1-UR, at 1.79 ± 0.03 m and 1.78 ± 0.03 m, respectively. The lowest plant heights were recorded in the steppe population of P. tenella , 4-UK (1.41 ± 1.04 m). The t -test confirmed a statistically significant difference in plant height among the three populations of P. ledebouriana : 3-KA, 2-KO, and 1-UR ( P = 2.3e−15). The t -test also suggested that all three mountainous populations had significantly taller plants than the steppe population of P. tenella , 4-UK ( P < 0.0001). Allelic variation of SSRs The assessment of allelic variation in P. ledebouriana and P. tenella showed that 19 of the 22 studied markers were polymorphic ( Table S2 ). Figure 2 shows two gel snapshots of different populations with high polymorphism. It appeared that CPDCT038 had three loci. Information on these 19 polymorphic SSR loci, including the sizes of bands, is presented in Table 1 . Three SSR loci had two alleles, 11 SSR loci had three, three SSR markers had four, and two loci had five. The results of genotyping using the 19 SSR loci and their statistical details are shown in Table 2 . According to their level of polymorphism (PIC values), the 19 loci could be separated into three groups: the first group comprised five SSR loci with a PIC above 0.5, the second group eight loci with a PIC between 0.3 and 0.5, and the third group the remaining six loci with a PIC below 0.3. The mean PIC value was 0.38 ( Table 3 ). The genetic diversity assessment using Nei’s index suggested that the highest genetic diversity was in P. tenella 4-UK (0.622), followed by the three populations of P. ledebouriana : 1-UR, 2-KO, and 3-KA ( Table 2 ). The average genetic diversity index for the three populations of P. ledebouriana was 0.501. To compare inter-population diversity, two main scenarios were used. The first assumed that all four populations were representatives of the species P. ledebouriana , and the second assumed the presence of three populations of P. ledebouriana and one population of P. tenella . The AMOVA suggested that the total genetic diversity in P. ledebouriana could be partitioned as 73% within populations and 27% between populations. The evaluation of the partitioning of genetic variation across the four populations (including 4-UK P. tenella ) resulted in a decrease in the level of variation within populations (63%) and an increase in variation between populations (37%). The t -test was applied to test associations between SSR markers and plant height in samples of both species. The results showed that nine SSR loci were statistically associated with plant height ( Table 4 ). Population structure in samples of the four studied populations using SSR markers All P. ledebouriana and P. tenella samples were analyzed for population structure using the STRUCTURE package based on genotyping data from the 19 polymorphic SSR markers. The structure was assessed using results from K = 2 to K = 10. The assessment of the K plots suggested that, starting from K = 3 and K = 4 ( Fig. 3 ), plants from population 4-UK were separated from the three populations of P. ledebouriana . Interestingly, population 3-KA, growing at the highest elevation and characterized by the lowest level of genetic diversity among the four studied populations ( Table 2 ), already drifted apart from the other groups of plants at K = 2. In the analysis of principal coordinates (PCoA), the first and second principal coordinates described 49.05% and 41.16% of the variability, respectively ( Fig. 4 ). PC1 effectively separated 3-KA from the other three populations, while PC2 allowed for the differentiation of 4-UK from 1-UR and 2-KO. In addition, a UPGMA dendrogram was built based on the genotyping results for the samples of the four populations. The results suggested that population 4-UK formed a distinct cluster, and only one sample from that population (4-UK_07) was positioned close to the cluster dominated by samples from population 3-KA. As in the PCoA analysis, the UPGMA dendrogram distinguished 3-KA from 1-UR and 2-KO, while the latter two populations had a mix of samples in several clades ( Fig. 5 ).
Discussion The phylogenetic relationship between P. tenella and P. ledebouriana The phylogeny of the genus Prunus is complex. Previous reports provided contradictory results on the relationships of plum species ( Badenes & Parfitt, 1995 ). Nevertheless, it has been well established that P. tenella belongs to the section Amygdalopsis within the subgenus Amygdalus ( Avdeev, 2016 ). The complexity of taxonomy in species within this section could be more precise, as there are questions about the relationship among different taxa. These questions include the taxonomic relationship between P. tenella and P. ledebouriana , where the former is widespread in the Eurasian continent, and the latter is limited to mountainous populations of East Kazakhstan, particularly in the Altai mountains ( Zhukovsky, 1971 ; Dzhangaliev, Salova & Turekhanova, 2003 ; Myrzagalieva et al., 2015 ; Myrzagalieva & Orazov, 2018 ). Several reports proposed that these two species have minor differences and could be considered one species ( Sokolov, Svyazeva & Kubli, 1980 ; Qiu et al., 2012 ). Conversely, publications suggest that P. ledebouriana and P. tenella have sufficient morphological differences to separate them into two distinct species ( Zaurov et al., 2015 ; Orazov et al., 2020 ). Plant height is one of the most prominent traits used to differentiate these species ( Mushegyan, 1962 ; Vintereoller, 1976 ). In this work, all plants from the three P. ledebouriana populations and one population of P. tenella were measured for plant height. The results clearly showed that samples from the two species have a distinct separation based on this trait ( P < 0.0001). Similar results for different morphological characteristics confirm the difference among other representatives of the genus ( Devi, Singh & Thakur, 2018 ). Hence, evaluating plant height can be a reliable way of distinguishing between two closely genetically related species within the section Amygdalopsis . The performance of this trait in plants is reliably related to the elevation above sea level (a.s.l.), as the lowest altitude for P. ledebouriana samples was higher than 1.7 m a.s.l. and the highest altitude for P. tenella samples was lower than 1.5 m a.s.l. ( Table S1 ). We assessed samples from two species using 19 polymorphic SSR loci to confirm the conclusion based on the plant height study. The application of SSR markers resulted in a constructed UPGMA phylogenetic tree that separated 20 samples of P. tenella (4-UK) from 60 samples of P. ledebouriana (1-UR, 2-KO, 3-KA) ( Fig. 4 ). This result was also supported by the PCoA plot, where PC2 (41.2%) split 4-UK samples from 1-UR and 2-KO. In contrast, PC1 (49.1%) separated 3-KA from the remaining three populations ( Fig. 3 ). Interestingly, the clusterization in Fig. 4 suggests that, generally, there is a low level of admixture between populations, which supports the model of “isolation by distance” ( Meirmans, 2012 ). The genetic heterozygosity index (Nei’s index) assessment showed that the highest genetic diversity was registered in the population of P. tenella (0.606). In contrast, the lowest index was recorded in population 3-KA (0.449), representing the area with the highest sampling elevation. High altitude is a sufficiently solid environmental factor that negatively influences genetic variation in P. ledebouriana . Nevertheless, the separation of 3-KA from 1-UR and 2-KO supported a more significant genetic variation within the species. 
The high genetic differentiation between mountain and steppe populations is most likely due to the factor of the steppe zone and the presence of anthropogenic pressure on the P. tenella (4-UK) population. In turn, mountain populations are distinguished by mountain isolation of ridges. The analysis of samples by DNA genotyping using SSR markers suggested that five loci were characterized as markers with the highest polymorphism level ( Table 3 ). These five SSR loci could also be recommended for the discrimination of Prunus species in other studies. In addition, a t -test was applied to test the association of 19 polymorphic SSR loci with plant height ( Table 4 ). It was concluded that nine out of 19 SSR loci were significantly associated with plant height. This result may not be a direct reflection of associations between SSRs and plant height but an indication of the genetic differences between P. ledebouriana and P. tenella, as these two species have significantly differed in plant height ( Fig. 1 ). Therefore, these nine SSR loci can be efficiently used in further studies of discrimination between P. ledebouriana and P. tenella . The assessment of the population structure using the STRUCTURE package suggested that populations of two species started separating at steps K = 3 and K = 4, which is another indication that P. ledebouriana and P. tenella are two different species. The evaluation of samples in four clusters at K = 4 ( Fig. 3 ) showed little admixture level, supporting the model of isolation by distance with a limited gene flow among the populations. Mantel tests revealed a positive correlation between geographic and genetic distance among populations ( r = 0.4387), demonstrating consistency with the isolation-by-distance model.
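For readers unfamiliar with the Mantel test reported above, a generic permutation-based sketch is given below; the matrices are random toy data, not the study's geographic or genetic distances, and the function is a minimal illustration rather than the implementation used by the authors.

import numpy as np

def mantel(A, B, n_perm=999, seed=1):
    # Pearson correlation between two distance matrices with a permutation p-value
    iu = np.triu_indices_from(A, k=1)
    r_obs = np.corrcoef(A[iu], B[iu])[0, 1]
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(A.shape[0])
        r_perm = np.corrcoef(A[perm][:, perm][iu], B[iu])[0, 1]
        if r_perm >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Toy symmetric matrices standing in for geographic and genetic distances among populations
rng = np.random.default_rng(0)
pts = rng.random((6, 2))
geo = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
gen = geo * 0.5 + rng.random((6, 6)) * 0.05
gen = (gen + gen.T) / 2
np.fill_diagonal(gen, 0)

r, p = mantel(geo, gen)
print(round(r, 3), round(p, 3))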
Conclusion Discriminating the endemic species P. ledebouriana from the wild almond P. tenella has remained a poorly studied issue within the genus Prunus . In this work, two different approaches were used to analyze the genetic relationship between these two closely related species. In the first approach, the detailed analysis of plant height from one population of P. tenella and three populations of P. ledebouriana showed a significant separation ( P < 0.0001) between the two species. In the second approach, the samples of these four populations were genotyped using 19 polymorphic SSR loci. The phylogenetic tree and PCoA plot also showed a clear separation of the two species into two groups of clusters. In addition, the UPGMA dendrogram and PCoA plot demonstrated that, within P. ledebouriana , population 3-KA differed sharply from populations 1-UR and 2-KO, supporting a high diversity level within the species. The assessment of the connections between SSR loci and plant height showed that nine out of 19 loci were associated with the studied morphological trait, suggesting that these loci can be efficiently used for DNA-based discrimination of the two species. The population structure analysis suggested that samples of the two species were separated starting from K = 3, and the assessment of plants in clusters at K = 3 and K = 4 suggested a limited level of admixture between populations, supporting the model of isolation by distance. Thus, the analysis of plant height and the application of SSR markers were successfully used to discriminate P. tenella and P. ledebouriana and to study the genetic diversity and population structure of the endemic species P. ledebouriana . A clear distinction between similar plants from different populations makes it possible to delineate the boundary of mutual replacement of the P. tenella and P. ledebouriana species. It will be possible to accurately separate the precious endemic populations of P. ledebouriana from the widespread P. tenella , and then to clarify the taxonomy of the genus and organize conservation measures in the mountainous zones of Eastern Kazakhstan. The SSR data obtained here will enable further research at higher resolution; markers such as ITS, s6pdh, trnL-trnF, and trnS-trnG, as well as whole-genome methods, are proposed for comparing several shrub almond species from Central Asia.
Background Genetic differences between isolated endemic populations of plant species and their widely distributed twin species are relevant for conserving the biological diversity of our planet’s flora. Prunus ledebouriana (Schlecht.) YY Yao is an endangered and endemic species of shrub almond from Central Asia. Few studies have explored this species, which is closely related and morphologically similar to the well-known Prunus tenella Batsch. In this article, we present a comparative analysis of three P. ledebouriana populations and one nearby population of P. tenella in Eastern Kazakhstan in order to determine the geographic boundary of mutual replacement of the two species. Methods The populations were collected from different ecological niches, including one steppe population near Ust-Kamenogorsk ( P. tenella ) and three populations ( P. ledebouriana ) in the mountainous area. Comparison of plant height using a t -test showed a statistically significant difference between the populations of the two species ( P < 0.0001). DNA simple sequence repeat (SSR) markers were applied to study the two species’ genetic diversity and population structure. Results A total of 19 polymorphic SSR loci were analyzed, and the results showed that the populations collected in mountainous areas had a lower level of variation than the steppe population. The highest level of Nei’s genetic diversity index was found in the 4-UK population (0.622) of P. tenella . The lowest was recorded in population 3-KA (0.461) of P. ledebouriana , collected at the highest altitude of the four populations (2,086 meters above sea level). The total genetic variation of P. ledebouriana was distributed 73% within populations and 27% between populations. STRUCTURE results showed that the two morphologically similar species diverged starting at K = 3, with limited population mixing. The results confirmed the morphological and genetic differences between P. tenella and P. ledebouriana and described the level of genetic variation within P. ledebouriana . They also indicate that the steppe zone and mountain altitude are key factors differentiating P. tenella from the isolated mountain samples of P. ledebouriana .
Supplemental Information
The authors express their gratitude to the head of the Department of Science, Information, and Monitoring of the State National Natural Park, Tarbagatai Alemseitova Janylkan Kabikyzy, for organizing field scientific expeditions in the territory of the National Park. Additional Information and Declarations
CC BY
no
2024-01-15 23:43:49
PeerJ. 2024 Jan 11; 12:e16735
oa_package/ed/f3/PMC10788089.tar.gz
PMC10788090
38223759
Introduction Faecal indicator bacteria (FIB) are a group of bacteria used to evaluate faecal contamination of water. Ideally, FIB should be of faecal origin only and should not grow in the extraintestinal environment ( Rochelle-Newall et al., 2015 ). Furthermore, the abundance of FIB should correlate with the presence of faecal contamination-related pathogens. Compared with direct detection of these pathogens, FIB are more abundant in the water and thus easier to detect ( Tortora, Funke & Case, 2013 ). Globally, Escherichia coli has been used as a FIB since the last century ( USEPA, 1986 ). Due to its wide application, extensive studies have been done on its survival in water. The survival of E. coli in aquatic habitats is affected by both biotic and abiotic factors ( Jang et al., 2017 ). For example, biotic factors include biofilm formation and the presence of other microorganisms ( Korajkic et al., 2014 ; Stocker et al., 2019 ), whereas abiotic factors include temperature, pH, salinity, sunlight and nutrient availability ( Petersen & Hubbart, 2020 ; Moon et al., 2023 ). Therefore, seasonal variations with changes in temperature, precipitation and anthropogenic activity could also affect E. coli abundance and survival. An, Kampbell & Peter Breidenbach (2002) reported the lowest E. coli density in summer and attributed this to the lower loading of faecal material, more vigorous grazing, and poor survival of E. coli in warm water. However, Durham et al. (2016) reported the highest E. coli abundance in summer, suggesting that site-specific factors are also relevant. Nevertheless, there remain doubts about E. coli ’s reliability as a FIB, as studies have revealed that sediments are an environmental reservoir of E. coli in freshwater habitats ( Ishii et al., 2006 ; Ishii & Sadowsky, 2008 ; Cho et al., 2010 ; Garzio-Hadzick et al., 2010 ; Tymensen et al., 2015 ; Fluke, González-Pinzón & Thomson, 2019 ). Relative to the water column, sediments generally have higher nutrient levels, lower dissolved oxygen, and lower UV intensity, which helps E. coli survive in sediments ( Jamieson et al., 2005 ; Koirala et al., 2008 ; Lorke & MacIntyre, 2009 ; Rochelle-Newall et al., 2015 ). Studies have also reported that habitat transition of sediment E. coli to the water column increases E. coli abundance in the water, for example during resuspension of sediment by mechanical effects such as precipitation or water flow ( Whitman, Nevers & Byappanahalli, 2006 ; Cho et al., 2010 ; Abia et al., 2017 ). Apart from resuspension, habitat transition should theoretically also occur as E. coli grows in the sediments ( Ishii et al., 2006 ). For instance, E. coli that thrives in sediment biofilms can be released into the water due to biofilm sloughing ( Mackowiak et al., 2018 ). Previous habitat transition studies focused mainly on sediment resuspension induced by mechanical effects, such as anthropogenic vessel activity and precipitation caused by seasonal variation. An, Kampbell & Peter Breidenbach (2002) revealed that sediment resuspension caused by motorboats leads to water quality deterioration. Precipitation can also cause resuspension of sediment, causing E. coli habitat transition from the sediment to the upper water column ( Li, Filippelli & Wang, 2023 ). However, increases in E. coli abundance due to sediment resuspension quickly return to pre-resuspension concentrations ( Whitman, Nevers & Byappanahalli, 2006 ; Abia et al., 2017 ). 
Moreover, our literature review revealed no report measuring the rate at which sediment E. coli transition to the overlying water. As habitat transition could be an important process contributing to E. coli prevalence in the water, we designed experiments to measure the habitat transition rate of E. coli in sediment samples from lakes. In this study, five tropical urban lakes were selected, as lake waters are generally more static and have less sediment resuspension ( Lim et al., 2018 ; Bong et al., 2020 ). The absence of mechanical effects in the lake waters helps clarify the role of E. coli habitat transition. Since the abundance of E. coli in the upper water column is also affected by its growth or decay, we carried out habitat transition experiments concurrently with size-fractionation decay experiments according to Lee et al. (2011) . Our results help shed light on the possible reasons for the persistence of E. coli in urban tropical lakes, as shown earlier by Wong et al. (2022) , and could help improve current water surveillance strategies.
Materials & Methods Study sites and environmental variables A total of 35 water and 21 sediment samples were collected regularly at five independent urban lakes (Tasik Varsiti, Tasik Taman Jaya, Tasik Aman, Tasik Kelana and Tasik Central Park Bandar Utama), located 2–7 km between each other in the Klang Valley, Peninsular Malaysia, from May 2022 until November 2022 ( Fig. 1 ). The sampling dates with respective coordinates and experiments conducted for each sampling are listed in Table S1 . To avoid effects of precipitation, sampling was carried out when there was no rain. Surface water samples (≈ 0.1 m) were collected using autoclaved bottles (121 °C at 15 psi for 15 min) whereas surface sediment samples (≈ three cm depth) were taken with a shovel and collected using UV sterilized (at 245 nm wavelength for 20 min, intensity 550 μW cm −2 ) plastic zip lock bags. All samples were transferred on ice to the laboratory within 3 h for further analysis. A conductivity probe (YSI Pro 30, Yellow Springs, OH, USA) and a pH meter (Hach HQ11d, Loveland, CO, USA) were used to measure in-situ water temperature and pH, respectively. For dissolved oxygen (DO), water samples were collected with DO bottles in triplicates, and fixed with manganese chloride and alkaline iodide solution, before titration with sodium thiosulphate solution according to Winkler’s method ( Grasshoff, Kremling & Ehrhardt, 1999 ). Total suspended solids (TSS) was determined by filtering a known volume of water sample through a pre-combusted glass fibre filter (GF/F) (Sartorius, Goettingen, Germany) and measuring the weight increase after drying at 70 °C for a week. Particulate organic matter (POM) was determined by the weight loss after combustion at 500 °C for 2 h (HYSC MF-05, Seoul, Korea). Chlorophyll a (Chl a ) was also concentrated on the GF/F filter and extracted with 90% (v/v) ice-cold acetone at −20 °C overnight. Chl a concentration was then measured via a spectrofluorometer (PerkinElmer LS55, Waltham, MA, USA) ( Parsons, Maita & Lalli, 1984 ). The filtrate from the filtration was kept frozen until the determination of ammonium (NH 4 ) and phosphate (PO 4 ). These dissolved inorganic nutrients were determined on a spectrophotometer (Hitachi U-1900, Tokyo, Japan) via methods described by Parsons, Maita & Lalli (1984) . The sediment sample collected was dried in a freeze dryer (Labconco FreeZone 6 Liter, Kansas City, MO, USA). For sediment particle sizing, about 10 cm 3 of dried sediments were mixed with distilled water until a final volume of 40 mL. Then 10 mL of sodium hexametaphosphate (20% final concentration) was added to disperse the sediment particles ( Mil-Homens et al., 2006 ). The prepared sample was then homogenized and left overnight before analysis with the Beckman Coulter LS230 Particle Size Analyzer (Brea, CA, USA). For sediment organic matter content, the freeze-dried sediment was combusted at 500 °C for 3 h, and the organic matter content was measured via the loss on ignition method ( Heiri, Lotter & Lemcke, 2001 ). Enumeration of coliform and E. coli in water and sediment samples For water samples, both coliform and E. coli were measured whereas for sediment samples, only E. coli was measured. The additional coliform measurement in the water samples helped in the classification of the lake waters according to the National Water Quality Standards for Malaysia ( Department of Environment, 2008 ). Membrane filter technique (MFT) was used to enumerate both coliform and E. 
coli in water where a known volume of water sample (0.01 mL to 10 mL) was filtered through a sterile 47 mm diameter, 0.45 μm pore-size nitrocellulose membrane filter (Millipore, Burlington, MA, USA). For volumes <1.0 mL, the filtration vessel was filled with 5 mL sterile saline (0.85% sodium chloride (NaCl) final concentration) before addition of sample. After filtration, the membrane filter was placed on the CHROMagarTM ECC agar (CHROMagar, Paris, France) and incubated at 37 °C for 24 h. All blue and mauve-coloured colonies were counted as total coliform, whereas only blue colonies were counted as E. coli ( Chromagar, 2019 ). For sediment samples, 2 g of fresh sediment sample was mixed with 18 mL of sterile saline and then sonicated for 50 s with an ultrasonicator (220 W, 2 mm probe; SASTEC ST-JY98-IIIN, Subang Jaya, Malaysia) ( Epstein & Rossel, 1995 ). After allowing the mixture to settle for 10 min, the suspension was pipetted and used as inoculum in the MFT described above. The membrane filter was then placed on m-TEC agar (Sigma-Aldrich, Burlington, MA, USA) and incubated at 44.5 °C for 24 h. Purple- or magenta-coloured colonies were counted as E. coli ( Merck KGaA, 2018 ). Measuring E. coli decay or growth rates Using the size fractionation method, the water sample was divided into three size fractions: total or unfiltered, <20 μm and <0.2 μm fractions ( Lee et al., 2011 ). The <20 μm fraction was collected after filtration through a nylon net with a 20 μm mesh opening size, whereas the <0.2 μm fraction was collected after filtration with a 0.2 μm pore-size membrane filter (Millipore GTTP filter, Burlington, MA, USA). As E. coli counts in the water was sometimes too low, we used a laboratory strain of E. coli (isolated from Tasik Varsiti) for the decay or growth experiment. A fresh E. coli culture was adjusted to 0.5 McFarland standard (about 1.5 ×10 8 cfu mL −1 ) before further serial dilution to 10 5 cfu mL −1 . About 198 mL of each size fraction was then inoculated with 2 mL of 10 5 cfu mL −1 E. coli culture for a final concentration of about 10 3 cfu mL −1 . Inoculated size fractions were then incubated at 30 ° C for 24 h in the dark. The abundance of E. coli was determined as cfu mL −1 every 6 h through MFT on m-TEC agar. The cfu data was then transformed via natural logarithm and plotted against incubation time. A positive gradient of the best-fit regression line indicates E. coli growth whereas a negative gradient shows decay rate ( Lee et al., 2011 ). As protists are the major bacterial predators ( Enzinger & Cooper, 1976 ), we also enumerated protists ( Caron, 1983 ). A 50 mL water sample was preserved with glutaraldehyde (1% final concentration) during each sampling. At the laboratory, 1 to 2 mL preserved sample was filtered onto a black 0.8 μm polycarbonate filter (Millipore ATTP filter, Burlington, MA, USA) with a GF/A filter (Whatman, Little Chalfont, UK) as a backing filter. Filters were then rinsed twice with 0.1 M pH 4.0 Trizma-hydrochloride before being flooded with two mL of primulin solution (250 mg L −1 ) for 15 mins. After staining, the solution was removed gently by vacuum filtration. The black filter was then placed on one drop of immersion oil on a clean glass slide, and the prepared slide was observed under an epifluorescence microscope (Olympus BX60F-3, Tokyo, Japan) with U-MWU filter cassette (excitor 330–385 nm, dichroic mirror 400 nm, barrier 420 nm). Habitat transition experiment for E. 
coli We added 2 g of fresh sediment sample to the bottom of an autoclaved universal bottle (121 °C at 15 psi for 15 min), carefully avoiding any contact with the inner wall of the bottle. Then 0.4% (w/v) sterile soft agar (Difco, East Rutherford, NJ, USA) (kept at 45 °C) was added to the bottle until it covers approximately one cm level above the sediment. After the agar solidified, 18 mL of sterile saline was added slowly ( Fig. 2 ). To check for contamination, a blank without addition of sediment sample was also carried out. The habitat transition experiment was then incubated at 30 °C for 24 h in dark. The abundance of E. coli in the overlying saline was enumerated every 6 h via MFT on m-TEC agar. The cfu data was then natural logarithm transformed and plotted against incubation time where the gradient of the best-fit regression line was determined as E. coli increase rate (μ increase ). As E. coli also undergoes intrinsic growth in the saline environment ( Hrenović & Ivanković, 2009 ), we setup a microcosm experiment by replacing raw sediment sample with autoclaved sediment sample from Tasik Kelana ( n = 2) and Tasik Central Park Bandar Utama ( n = 2). The sediment was autoclaved to replicate possible nutrient contribution from the sediment but prevent adding bacteria to the microcosm. We then inoculated 10 3 cfu mL −1 of E. coli to the sterile saline. The microcosm was then incubated at 30 °C for 24 h in the dark and the abundance of E. coli in the saline was enumerated every 6 h via MFT on m-TEC. The best-fit linear slope was determined as E. coli intrinsic growth (μ intrinsic ), and the habitat transition rate was finally estimated by the following equation: μ increase –μ intrinsic . Data analysis All data were reported in this study as mean ± standard deviation (SD) unless stated otherwise. Values beyond mean ± 2 × SD were determined as outliers, and the coefficient of variation ( CV ) was used to measure the dispersion of data. Before statistical analysis, bacterial cfus were transformed by log (cfu + 1), whereas for growth or decay rate estimations, bacterial cfus were transformed via natural logarithm. Correlation analysis was carried out to identify relationships among variables, whereas linear regression was used for rate analysis. Student’s t -test was used to compare between groups, whereas one-way ANOVA (analysis of variance) with Tukey’s post-hoc analysis was used to determine the differences among lakes, and p ≤ 0.05 was considered significant. PAST (PAleontological STatistics) software (version 4.09) for Windows ( Hammer, Harpper & Ryan, 2001 ) was used to perform the statistical analyses, whereas plots were made in GraphPad Prism (version 9.5.1.733) for Windows ( Swift, 1997 ).
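To illustrate the rate calculations described in the Methods, here is a minimal sketch (assuming hypothetical colony counts, not data from this study) of how a growth or decay rate can be obtained as the slope of ln(cfu) versus time, and how the habitat transition rate follows as μ increase − μ intrinsic.

```python
# Illustrative sketch of the rate estimation described above; the cfu counts are hypothetical.
# The slope of the best-fit line of ln(cfu) vs incubation time gives a rate in h^-1
# (positive = growth/increase, negative = decay); habitat transition = mu_increase - mu_intrinsic.
import numpy as np

def rate_per_hour(hours, cfu):
    """Slope of ln(cfu) against time (h), i.e., an exponential rate in h^-1."""
    return np.polyfit(hours, np.log(cfu), 1)[0]

hours = np.array([0, 6, 12, 18, 24])
raw_sediment_overlay = np.array([1.0e3, 1.5e4, 2.0e5, 3.5e6, 4.0e7])      # overlying saline, raw sediment
sterile_sediment_overlay = np.array([1.0e3, 8.0e3, 7.0e4, 6.0e5, 5.0e6])  # autoclaved-sediment control

mu_increase = rate_per_hour(hours, raw_sediment_overlay)
mu_intrinsic = rate_per_hour(hours, sterile_sediment_overlay)
print(f"mu_increase = {mu_increase:.2f} h^-1, mu_intrinsic = {mu_intrinsic:.2f} h^-1")
print(f"habitat transition rate = {mu_increase - mu_intrinsic:.2f} h^-1")
```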
Results Environmental variables Table 1 lists the size and land use of five lakes, and physico-chemical variables measured in the water samples collected at the five lakes in this study. Surface water temperature and pH varied little among the five lakes and ranged from 28.7 ± 0.8 °C to 29.9 ± 1.2 °C ( CV = 3%) and 6.8 ± 0.4 to 7.2 ± 0.3 ( CV = 5%), respectively. In contrast, DO levels varied among the five lakes, from 2.65 ± 1.78 mg L −1 at Tasik Taman Jaya to 9.65 ± 2.00 mg L −1 at Tasik Central Park Bandar Utama (ANOVA: n = 18, F (4,13) = 3.08, p = 0.05). In contrast, TSS and POM concentrations were different among the lakes, and ranged from 21 ± 7 mg L −1 to 65 ± 13 mg L −1 (ANOVA: n = 18, F (4,13) = 14.06, p < 0.001), and from 13 ± 2 mg L −1 to 43 ± 5 mg L −1 (ANOVA: n = 18, F (4,13) = 34.8, p < 0.001), respectively. TSS and POM concentrations were highest at Tasik Central Park Bandar Utama (Tukey’s HSD: TSS: q > 6.65, p < 0.01; POM: q > 6.41, p < 0.01). Chl a concentration also varied among lakes (ANOVA: n = 18, F (4,13) = 14.14, p < 0.001), and was highest at Tasik Aman (90.63 ± 15.14 μg L −1 ) (Tukey’s HSD: q > 5.06, p <0.03). For dissolved inorganic nutrients, NH 4 varied among five lakes (ANOVA: n = 18, F (4,13) = 16.85, p < 0.001), ranged from 0.30 to 98.83 μM and was highest at Tasik Taman Jaya (Tukey’s HSD: q > 4.70, p < 0.04), whereas PO 4 was similar among the lakes (ANOVA: n = 18, F (4,13) = 1.80, p = 0.19), and varied from 0.15 to 0.60 μM. For the physico-chemical properties of sediments ( Table 2 ), average particle size ranged from 55.2 ± 26.2 to 613.4 ± 124.2 μm, and were different among the five lakes (ANOVA: n = 18, F (4,13) = 21.62, p < 0.001) with the largest average particle size at Tasik Kelana (Tukey’s HSD: q > 5.52, p < 0.02). The sediment texture at Tasik Varsiti was mainly loam, whereas in other lakes were mainly sand. Average sediment organic matter measured ranged from 3 to 72 mg g −1 and was not different among the five lakes (ANOVA: n = 18, F (4,13) = 2.96, p = 0.06). Biotic variables Total coliform and E. coli were detected in all five urban lakes ( Fig. 3 , Table S2 ). Total coliform ranged from 21 to 4,600 cfu mL −1 , and E. coli ranged from 1 to 2,300 cfu mL −1 . Total coliform in the water was different among the five lakes (ANOVA: n = 20, F (4,15) = 3.58, p = 0.03) but E. coli count was not different (ANOVA: n = 20, F (4,15) = 2.52, p = 0.09). The highest total coliform was detected at Tasik Kelana (Tukey’s HSD: q = 4.97, p = 0.02). For the urban lake sediments, E. coli was present in all five lake sediments ( Fig. 4 ). Its abundance ranged from below detection to 12,000 cfu g −1 , and there was no difference among the five lakes (ANOVA: n = 20, F (4,15) = 2.69, p = 0.07). E. coli decay or growth rates Generally, the abundance of E. coli in the larger fractions (total and <20 μm fraction) decreased with incubation time, while the <0.2 μm fraction increased ( Figs. 5 and 6 , Table S3 ). Decay rates among the five lakes in the total fraction (ANOVA: n = 19, F (4,14) = 7.85, p < 0.01) and in the <20 μm fraction (ANOVA: n = 18, F (4,13) = 4.89, p = 0.01) were different ( Fig. 7 , Table S4 ). The highest decay rates in both the total fraction (Tukey’s HSD: q > 4.65, p < 0.04) and <20 μm fraction (Tukey’s HSD: q = 5.77, p = 0.01) were observed at Tasik Taman Jaya. The decay rates measured in the total fraction also did not differ from those in the <20 μm fraction (Student’s t -test: n = 37, t (35) = 0.43, p = 0.67). 
As decay was most likely attributed to protistan grazers ( Lee et al., 2011 ), we measured protists abundance in the water samples, and observed that protists counts ranged from 3.04 ×10 4 cells mL −1 to 6.93 ×10 4 cells mL −1 but showed no differences among the five lakes (ANOVA: n = 15, F (4,10) = 2.18, p = 0.14). In contrast, E. coli grew in the <0.2 μm fraction, and the E. coli growth rates varied among five lakes (ANOVA: n = 18, F (4,13) = 3.65, p = 0.03). Habitat transition experiment for E. coli In the habitat transition experiment, E. coli abundance generally increased with time ( Fig. 8 , Table S5 ). E. coli increase rates (μ increase ) in the water column ( p < 0.05) ranged from 0.40 to 0.59 h −1 at Tasik Varsiti, 0.46 to 0.62 h −1 at Tasik Taman Jaya, 0.41 to 0.74 h −1 at Tasik Aman, 0.61 to 0.71 h −1 at Tasik Kelana and 0.69 to 0.78 h −1 at Tasik Central Park Bandar Utama ( Table S6 ). As the E. coli increase rate is a sum of both transition and intrinsic growth, we also measured E. coli intrinsic growth rates ( Table S7 ). The intrinsic growth rates using sterile sediments from Tasik Kelana were 0.39 h −1 and 0.32 h −1 , and were similar to Tasik Central Park Bandar Utama (0.41 h −1 and 0.36 h −1 ). Although sediments at Tasik Kelana had the lowest organic matter content (3 to 8 mg g −1 ), whereas Tasik Central Park Bandar Utama had the highest (19 to 72 mg g −1 ) among the five lakes, their E. coli intrinsic growth rates were not different (ANOVA: n = 4, F (1,2) = 0.48, p = 0.56). Therefore, for the calculation of habitat transition rates, we assumed the average intrinsic growth rate (0.37 ± 0.04 h −1 ) for all five lakes ( Fig. 9 , Table S8 ). The habitat transition rates were different among five lakes (ANOVA: n = 18, F (4,13) = 4.01, p = 0.02), with the highest at Tasik Central Park Bandar Utama (Tukey’s HSD: q = 4.67, p = 0.04).
Discussion Environmental condition of the urban lakes The surface water temperatures recorded at the five lakes were relatively high with low variability, and is typical of tropical waters ( Lim et al., 2018 ). The DO concentrations measured at Tasik Varsiti, Tasik Aman, Tasik Kelana and Tasik Central Park Bandar Utama were at healthy levels, and within the range previously reported for tropical freshwater ( Wong et al., 2022 ). However, for Tasik Taman Jaya, we observed hypoxic levels (2.65 ± 1.78 mg L −1 ) ( Farrell & Richards, 2009 ), which was not surprising as Wong et al. (2022) had previously classified Tasik Taman Jaya at Class III for total coliform and Class V for faecal coliform, indicative of extensive treatment required for the suitability of water supply ( Department of Environment , 2008 ). All lakes were also observed with high Chl a , indicating varying levels of eutrophication ( Lim et al., 2018 ). Total coliform and E. coli in water and sediment of urban lakes The total coliform abundance enumerated in five lake waters were within the range previously reported in Malaysia ( Wong et al., 2022 ). According to National Water Quality Standards for Malaysia, Tasik Varsiti, Tasik Taman Jaya and Tasik Kelana were categorized as Class V for total coliform, whereas Tasik Aman and Tasik Central Park Bandar Utama as Class III. Relative to Wong et al. (2022) , the water quality in these lakes had deteriorated over the last two years. In our study, the abundance of E. coli in sediment and water were correlated (Pearson correlation: n = 40, r (38) = 0.53, p = 0.02) ( Fig. 10 ), similar to Whitman, Nevers & Byappanahalli (2006) and Fluke, González-Pinzón & Thomson (2019) . Although we did not use the same agar medium to enumerate E. coli abundance in water and sediment, we had previously shown that E. coli counts on m-TEC and CHROMagar ECC agar were strongly correlated (Regression: n = 18, R 2 = 0.99, F (1,7) = 561.35, p < 0.001) ( Fig. 11 , Table S9 ), and that the abundance of E. coli (log cfu mL −1 ) obtained on m-TEC was on average 149% higher than that on CHROMagar ECC. In order to compare E. coli abundance in water and sediment, we corrected the abundance obtained, and found that the abundance of E. coli in the sediment was still higher than in the water column ( Stephenson & Rychert, 1982 ; An, Kampbell & Peter Breidenbach, 2002 ; Garzio-Hadzick et al., 2010 ; Pandey et al., 2018 ; Fluke, González-Pinzón & Thomson, 2019 ). Relative to the water column, sediment provides E. coli with more nutrients ( Jamieson et al., 2005 ); lower UV intensity ( Koirala et al., 2008 ); lesser bacterivore grazing ( Wright et al., 1995 ) and lower oxygen ( Lorke & MacIntyre, 2009 ). This host intestinal-like environment helps E. coli to survive better in sediments, even at varying climates ( Ishii et al., 2006 ; Ishii & Sadowsky, 2008 ; Garzio-Hadzick et al., 2010 ; Rochelle-Newall et al., 2015 ; Tymensen et al., 2015 ; Fluke, González-Pinzón & Thomson, 2019 ). As E. coli is dependent upon sediment organic matter for growth ( Ishii et al., 2006 ), higher abundance of E. coli in sediments with higher organic matter has been reported ( Lee et al., 2006 ). However in our study, sediment organic matter and sediment E. coli abundance were not correlated (Pearson correlation: n = 36, r (34) = −0.11, p = 0.66). One possible reason could be the organic matter replete state in our lakes. 
The sediment organic matter from this study was relatively higher (Organic matter content: Tasik Varsiti 3%, Tasik Taman Jaya 3.3%, Tasik Aman 2.7%, Tasik Kelana 0.5% and Tasik Central Park Bandar Utama 3.3%), than that reported by Lee et al. (2006) ( i.e., 0.7%–1.1%). In this study, the sediment E. coli abundance also did not correlate with particle size (Pearson correlation: n = 36, r (34) = −0.07, p = 0.78), further substantiating that sediment texture may be less important for the survival of sediment E. coli ( Lee et al., 2006 ). Although sediments could act as reservoirs of E. coli , and contribute to the water column E. coli , E. coli dynamics in the water column is also dependent upon E. coli decay or growth rates in the water ( Lee et al., 2011 ). The decay rate in total and <20 μm fractions were higher than the smaller fraction and consistent with previous reports ( Lee et al., 2011 ; Wong et al., 2022 ). In the total fraction, the E. coli decay rates ranged from 0.02 to 0.16 h −1 (or 0.50 to 3.94 d −1 ). These decay rates were generally within the range reported by Flint (1987) measured at 37 °C, and higher than in subtropical water ( Bitton et al., 1983 ). Temperature may explain the generally higher rates observed in this study, as microbial activity is at its optimum in tropical aquatic habitats ( White et al., 1991 ). In our study, the decay of E. coli in the larger fraction is mainly due to protistan bacterivory ( Enzinger & Cooper, 1976 ; Lee et al., 2011 ; Wong et al., 2022 ). However, we found no correlation between decay rate and protist counts (Pearson correlation: n = 10, r (8) = −0.3, p = 0.62). As E. coli only accounts for small fraction of the total bacterial community, at about 4.48% of total culturable gram-negative rod in freshwater ( Goñi Urriza et al., 1999 ), this could explain the uncoupling between protists and E. coli decay rates. Lee et al. (2011) have also reported that E. coli decay rates have a relatively small impact on the overall bacterivory rate. Although viral lysis could also cause E. coli mortality, previous studies have shown that its role is generally minimal ( Lee et al., 2011 ). Moreover in the <0.2 μm fraction, where protists were removed, E. coli did not decrease but increased against incubation time, suggesting that viral lysis was not significant ( Lee et al., 2011 ). The role of habitat transition rates for E. coli persistence in the water column Habitat transition experiments in this study showed that E. coli in sediments could transition from the sediment to the overlying water column without mechanical effects i.e., turbulence and resuspension. Although seasonal change in precipitation and turbulence can cause sediment resuspension and an increase in E. coli abundance in the upper water column ( Li, Filippelli & Wang, 2023 ), the effect should be minimal in lakes as E. coli abundance quickly return to pre-resuspension level ( Whitman, Nevers & Byappanahalli, 2006 ; Abia et al., 2017 ). Moreover, we have shown net transition rates in laboratory experiments without mechanical effects. Therefore, any precipitation will only increase the impact of habitat transition, and not affect the conclusion from this study. The habitat transition rates fluctuated among lakes ( CV = 53%) and was not correlated with sediment particle size (Pearson correlation: n = 32, r (30) = 0.38, p = 0.14) and organic matter (Pearson correlation: n = 32, r (30) = −0.08, p = 0.78). Although the habitat transition of E. 
coli from sediment to water may be associated with biofilm sloughing ( Lee et al., 2006 ), the mechanisms that drive dispersal in E. coli biofilms are complicated, and some are still unknown ( McDougald et al., 2012 ). In this study, we observed the presence of E. coli in all five tropical urban lake waters. Although previous studies have also reported on the survival of E. coli in the water column, they did not include the effects of sediment ( Lee et al., 2011 ; Wong et al., 2022 ). Given the higher abundance of E. coli in the sediments ( Garzio-Hadzick et al., 2010 ; Fluke, González-Pinzón & Thomson, 2019 ), and that these E. coli can transition to the water column, the effects of sediment on the abundance of E. coli in the water column could be important. In order to evaluate the effect of habitat transition on the abundance of E. coli in the water column, we compared E. coli habitat transition rates with total fraction decay rates. We found that in most cases (>80%), the habitat transition rates were higher than the total fraction decay rates ( Fig. 12 ). Thus, there was a net increase rate of E. coli in the water column, calculated as μ habitat transition − μ total fraction decay . The E. coli net increase rates ranged up to 0.36 h −1 (0.16 ± 0.13 h −1 ) in our study. When the habitat transition rate exceeds the total fraction decay rate, using E. coli as a faecal indicator could overestimate the level of faecal contamination. However, these rates were from microcosm-based experiments that did not include other biotic ( e.g., biofilm and competition) and abiotic (sunlight) factors that can also affect E. coli survivability in situ ( Korajkic et al., 2014 ; Stocker et al., 2019 ; Petersen & Hubbart, 2020 ; Moon et al., 2023 ). Further studies are therefore needed to understand the role of these factors in E. coli habitat transition. In this study, we showed the role of sediments as reservoirs and habitat transition as a possible explanation for the persistence of E. coli in tropical aquatic habitats ( Wong et al., 2022 ).
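As a worked illustration of the comparison above, the following sketch (hypothetical rate values chosen within the ranges reported here, not the measured data) computes the net increase rate, μ habitat transition − μ total fraction decay, and the share of cases where transition exceeds decay.

```python
# Illustrative sketch of the net increase calculation above, using hypothetical per-lake
# rates chosen within the ranges reported in this study (h^-1); not the measured values.
import numpy as np

habitat_transition = np.array([0.10, 0.22, 0.30, 0.35, 0.41])
total_fraction_decay = np.array([0.16, 0.05, 0.08, 0.12, 0.02])

net_increase = habitat_transition - total_fraction_decay
share_positive = np.mean(habitat_transition > total_fraction_decay) * 100

print("net increase rates (h^-1):", np.round(net_increase, 2))
print(f"transition exceeded decay in {share_positive:.0f}% of cases")
```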
Conclusions Sediments acted as a reservoir of E. coli in tropical urban lakes with a higher abundance of E. coli than in the water column. The habitat transition of E. coli from sediment to the water column affects its abundance in the water column, and could be one of the reasons for the persistence of E. coli in tropical urban lakes.
Background Escherichia coli is a commonly used faecal indicator bacterium to assess the level of faecal contamination in aquatic habitats. However, extensive studies have reported that sediment acts as a natural reservoir of E. coli in the extraintestinal environment. E. coli can be released from the sediment, and this may lead to overestimating the level of faecal contamination during water quality surveillance. Thus, we aimed to investigate the effects of E. coli habitat transition from sediment to water on its abundance in the water column. Methods This study enumerated the abundance of E. coli in the water and sediment at five urban lakes in the Kuala Lumpur-Petaling Jaya area, state of Selangor, Malaysia. We developed a novel method for measuring habitat transition rate of sediment E. coli to the water column, and evaluated the effects of habitat transition on E. coli abundance in the water column after accounting for its decay in the water column. Results The abundance of E. coli in the sediment ranged from below detection to 12,000 cfu g –1 , and was about one order higher than in the water column (1 to 2,300 cfu mL –1 ). The habitat transition rates ranged from 0.03 to 0.41 h –1 . In contrast, the E. coli decay rates ranged from 0.02 to 0.16 h −1 . In most cases (>80%), the habitat transition rates were higher than the decay rates in our study. Discussion Our study provided a possible explanation for the persistence of E. coli in tropical lakes. To the best of our knowledge, this is the first quantitative study on habitat transition of E. coli from sediments to water column.
Supplemental Information
We thank Yi You Wong, Kyle Young Low, Walter Aaron and Ee Lean Thiang for their assistance with sampling. Additional Information and Declarations
CC BY
no
2024-01-15 23:43:49
PeerJ. 2024 Jan 11; 12:e16556
oa_package/fd/e6/PMC10788090.tar.gz
PMC10788091
38222249
1. Background For thousands of years, narcotics have been used for medicinal and palliative purposes and still have an important role in relieving pain, diarrhea, cough, and other symptoms. Narcotics abuse has risen dramatically in recent years. For instance, the Golestan Cohort Study, conducted in Golestan province, Iran, reported that 17% (n = 8,487) of participants misused opium, with a mean duration of 12.7 years ( 1 ). Another study conducted in Fars province, Iran, reported that 8% (n = 339) of participants misused opium ( 2 ). In the United States, 3% to 4% of adults receive long-term opioid treatment ( 3 ). Opioid use and misuse are accompanied by unfavorable side effects, including endocrinopathies due to long-term opioid use ( 3 ). Opioid-related endocrinopathy should be considered in any patient using the equivalent of 100 mg of morphine per day or more. Measuring the plasma cortisol response to intravenous or intramuscular injection of ACTH is a common screening test for detecting adrenal insufficiency. Various diagnostic criteria have been set according to baseline cortisol levels, stimulated cortisol levels, or their difference ( 4 ). Methadone is a synthetic opioid and a complete mu (μ) receptor agonist that may mimic endogenous opioids, enkephalins, and endorphins ( 5 ). Methadone is frequently used to relieve pain, especially in the intensive care unit (ICU), and to ease quitting opium addiction. This drug is qualitatively equivalent to morphine but has a longer half-life. The plasma half-life of methadone is very long and variable (13 - 100 hours). Despite this feature, many patients need methadone every 4 - 8 hours to maintain the analgesic effects ( 6 ). Methadone maintenance therapy (MMT) has shown excellent results in managing heroin-dependent patients. However, researchers have questioned whether MMT could improve the function of the hypothalamic-pituitary-adrenal (HPA) axis, which is damaged by heroin dependence, and restore baseline cortisol levels. In this regard, studies are limited and contradictory. For instance, in a 2006 study by Aouizerate et al., methadone reduced serum cortisol levels. However, a 2016 study by Young et al. found increased body cortisol levels following methadone administration ( 6 , 7 ). Detecting adrenal insufficiency is critical, especially in ICU patients and patients undergoing major surgeries. To the best of our knowledge, adrenal insufficiency in opium-addicted patients on MMT has not been evaluated with a cosyntropin test (ACTH stimulation test) ( 8 ).
3. Methods This study was conducted in November 2019 at Imam Reza Hospital Rehab Center, Birjand, Iran. The patients were on methadone to ease quitting opium addiction. Our inclusion criteria were addiction to opium for at least six months, no corticosteroid use in the past year, age of 20 to 45 years, no significant comorbidities such as diabetes or cancer, and no previous history of quitting opium addiction. According to a study by Annane et al. ( 9 ), which reported a mean cortisol level of 13.9 ± 10.3 in its population, a sample size of 42 was calculated with α = 0.01 and β = 0.1. Convenience sampling was used to select patients. Eighty patients were assessed for eligibility, 42 of whom were enrolled in the study based on our inclusion criteria. The study procedure was explained to the patients; those who filled out informed consent and met the inclusion criteria were enrolled. A questionnaire was filled out to gather demographic characteristics. Cosyntropin tests were performed between 8 and 9 AM to minimize the effect of circadian rhythm on cortisol levels. Initially, a 5 mL blood sample was obtained to measure baseline cortisol. Afterward, 250 micrograms of cosyntropin were injected intramuscularly, and blood samples were taken at 30 and 60 minutes. The samples were analyzed at the central laboratory of Imam Reza Hospital. Chemiluminescence detection was used to measure cortisol levels with kits from Saluggia company, Italy. According to Henry's Clinical Diagnosis and Management by Laboratory Methods ( 10 ), following the cosyntropin test, cortisol levels should be higher than 18 μg/dL, and lower levels indicate adrenal insufficiency. Also, according to a study by Annane et al. ( 9 ), a cortisol level change of less than 9 μg/dL is considered adrenal insufficiency. We used these definitions of adrenal insufficiency in our study. Also, based on the reference values of the chemiluminescence device used to measure cortisol, the mean cortisol level in the standard population is 14 μg/dL ( 11 ). We compared our population with this value. All statistical analyses were performed using SPSS version 16 software (SPSS Inc., Chicago, Illinois, USA). The normal distribution of variables was evaluated using the Kolmogorov–Smirnov test. Descriptive data are shown as the mean ± standard deviation or number (%). One-way analysis of variance (ANOVA) was applied to compare the demographic and clinical features between the groups. Repeated-measures ANOVA was used to assess the effect of cosyntropin on cortisol levels. Sphericity was assessed with Mauchly's W test, and P-values were adjusted with the Greenhouse-Geisser correction.
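As an illustration of the kind of sample-size calculation described above, the following generic one-sample sketch uses the standard normal-approximation formula n = ((z1−α/2 + z1−β)·σ/δ)². It is not necessarily the authors' exact formula or inputs; the detectable difference delta is a hypothetical value chosen only for illustration.

```python
# Generic one-sample sample-size sketch; sigma is the SD reported by Annane et al. as
# cited above, while delta (minimum detectable difference) is a hypothetical value
# chosen only for illustration -- this is not necessarily the authors' exact calculation.
from math import ceil
from statistics import NormalDist

alpha, power = 0.01, 0.90          # alpha = 0.01, beta = 0.1 as stated above
sigma = 10.3                       # SD of cortisol from Annane et al.
delta = 6.2                        # hypothetical detectable mean difference

z = NormalDist()
n = ceil(((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * sigma / delta) ** 2)
print(f"required sample size: {n}")   # ~42 with these illustrative inputs
```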
4. Results As shown in Table 1 , the mean age of the participants was 34.4 ± 5.2 years, and most were men (90.5%). Eight of them were cigarette smokers. Cortisol levels and response to the cosyntropin test had a normal distribution (P-value = 0.44 and 0.28, respectively). The mean serum cortisol level at baseline was 9.46 ± 5.42 μg/dL, significantly different from its normal value of 14 μg/dL (P < 0.001). The mean response to the cosyntropin test (difference from baseline) was 9.34 ± 8.11 μg/dL. According to Henry's Clinical Diagnosis and Management by Laboratory Methods ( 10 ), 21 (50.0%) participants had adrenal insufficiency, and according to the study of Annane et al. ( 9 ), 24 (57.1%) participants had adrenal insufficiency. There was a significant difference between baseline cortisol levels and cortisol levels at 30 and 60 minutes (P-values < 0.001) ( Figure 1 ). Mean baseline cortisol levels and response to cosyntropin were not associated with age, dose, or duration of methadone use ( Table 2 ).
5. Discussion This study investigated changes in cortisol levels and the response to ACTH in former opium addicts on methadone treatment. Taking adrenal insufficiency into account can affect the management of these patients. Fifty-five percent of our participants had cortisol levels lower than 18 μg/dL following the cosyntropin test, indicating adrenal insufficiency. Many earlier studies show that chronic opioid misuse can lead to HPA axis suppression. In this regard, a review by Donegan and Bancos concluded that 9 to 29% of patients receiving long-term opiate therapy may experience opioid-induced adrenal insufficiency. However, the prevalence of adrenal insufficiency in our study was significantly higher ( 3 ). Whether MMT restores the disrupted HPA axis is still unknown, and studies in this regard are inconsistent. Some studies have shown an overactivated HPA axis in MMT patients compared to controls ( 12 - 14 ). A study by Yang et al. on 52 MMT patients and 41 age-matched controls showed that MMT patients had significantly higher hair cortisol levels than the controls. Likewise, MMT patients showed significantly higher perceived stress levels. The authors imply that this higher stress level may have masked the suppressed HPA axis ( 7 ). In contrast, one study showed that heroin users had normal HPA activation in response to metyrapone, an 11-beta-hydroxylase inhibitor. The same study showed that patients on MMT addicted to cocaine had a hyperactivated HPA response to metyrapone ( 15 ). Dackis et al. studied five methadone misusers and 12 controls and observed a decreased response to ACTH stimulation in methadone misusers ( 16 ). Some case reports have also shown that chronic use of opioids can cause adrenal insufficiency ( 17 , 18 ). As mentioned, studies on cortisol levels and HPA axis function in patients on MMT are contradictory, and to date, the reasons for this discrepancy are unclear ( 19 ). One possible explanation may be that studies have used plasma, saliva, and urine cortisol levels as biological markers to assess basal cortisol levels. These biological markers are affected by circadian rhythm and by events occurring before sampling. Recently, endogenous cortisol levels in human hair have been proposed to overcome these limitations, as they reflect cortisol exposure over a period of up to six months ( 7 ). Another explanation may be that participants in previous studies have been at different stages of the detoxification reaction. In addition, the activity of the HPA axis in patients on MMT may be affected by negative emotions. For example, patients with depressive symptoms may have higher basal cortisol levels. In addition, psychological and MMT factors may have synergistic effects on HPA axis function ( 20 , 21 ). Differences in opioid receptor affinity due to polymorphisms in different individuals may be another explanation ( 22 ). The duration of MMT can also affect study results. Kreek et al. showed that metyrapone and ACTH stimulation tests were abnormal in the first two months of MMT but normal after two months ( 23 ). In line with this, the response to the cosyntropin test tended to increase with longer MMT duration in our study ( Table 2 ); however, this finding was not statistically significant (P-value = 0.40). The mechanism of HPA axis normalization is not clear. One explanation is given by Kling et al., who used positron emission tomography (PET) to study opiate receptors in MMT patients. They observed that only 19 - 32% of opiate receptors were occupied, and the remaining receptors could function normally in the HPA axis ( 24 ). 
Adrenal insufficiency can cause hemodynamic disturbances, changes in consciousness, hypoxemia, and ileus. It can be life-threatening if not managed properly ( 25 ). However, most of the patients have non-specific symptoms that may mislead clinicians. Therefore, knowing that many opioid abusers and MMT patients may suffer from adrenal insufficiency can help prevent serious complications in case of major medical stress. Cortisol helps maintain the balance of the cardiovascular system during surgical trauma by facilitating the activity of catecholamines. In this regard, Baghaei Wadji et al. examined the effects of opium addiction on the response to the stress of major surgeries. The serum cortisol level of the addict group showed a significant increase compared to the non-addict group 24 hours after surgery, indicating a stronger response of opium addicts to surgical stress ( 26 ). Cortisol levels during and after surgery are proportional to the severity of the operation, and any disturbances, whether an inappropriate increase like in the mentioned study or an inappropriate decrease like in our study, can be life-threatening ( 27 ). Most studies confirm that opioid misuse suppresses the HPA axis. However, whether long-term MMT can normalize the HPA axis is still unknown. Larger studies with control groups are needed to answer this question.
Background Opium has been used for thousands of years for medical and analgesic purposes, and its misuse has also increased in recent years. Methadone, a synthetic opioid, has been used as an analgesic and to help patients quit opium addiction. However, some evidence suggests that long-term use of opioids can affect the hypothalamic-pituitary-adrenal axis. Objectives We aimed to evaluate the serum cortisol level and response to the cosyntropin stimulation test in opium addicts on methadone treatment. Methods The study was conducted in November 2019 at Imam Reza Hospital Rehab Center, Birjand, Iran. Thirty-eight methadone-treated opium addicts participated in the study. A blood sample was initially obtained, then 250 μg intramuscular cosyntropin was injected. After 30 and 60 minutes, two other blood samples were obtained. The data were analyzed using SPSS. Results There was a significant difference between serum cortisol levels and the normal value in methadone users (9.46 ± 5.42 vs. 14 μg/dL) (P < 0.001). The mean response to the cosyntropin stimulation test in methadone users was 9.34 ± 8.11 μg/dL. Also, 55% of the participants had adrenal insufficiency. Conclusions Serum cortisol levels significantly differed from normal values in methadone-treated patients. Therefore, we recommend measuring serum cortisol levels in methadone-treated patients before major medical procedures to consider the stress doses of corticosteroids.
2. Objectives Regarding the high prevalence of methadone use, we aimed to measure the changes in cortisol levels and the response to the cosyntropin test to determine the extent and prevalence of adrenal insufficiency in opium-addicted patients on methadone treatment.
We would like to thank Mr. Keivan Kalali, who contributed to writing the manuscript and analyzing the data. Authors' Contribution: Study concept and design: F. Z., M. G, and A. B; Acquisition of data: F. Z., A. K., and A. B; Analysis and interpretation of data: A. B, SA. E., and A. K.; Drafting of the manuscript: F. Z. and SA. E.; Critical revision of the manuscript for important intellectual content: F. Z. and M. G; Statistical analysis: A. B and SA. E.; Administrative, technical, and material support: M. G and A. K.; Study supervision: M. G, A. B, and A. K. Conflict of Interests Statement: Birjand University of Medical Sciences completely funded the study. The authors are academics, are not involved in any business related to the content of this research, and have no financial interest in favor of any possible result of the study. Ethical Approval: This study was approved under the ethical approval code of IR.BUMS.REC.1398.129 . Funding/Support: Birjand University of Medical Sciences supported the study; no specific fund was granted. Informed Consent: The study procedure was explained to the patients; those who filled out informed consent and met the inclusion criteria were enrolled.
CC BY
no
2024-01-15 23:43:49
Anesth Pain Med. 2023 Jun 3; 13(3):e135206
oa_package/46/9b/PMC10788091.tar.gz
PMC10788092
38222996
Introduction Streptococcus pneumoniae ( S. pneumoniae ) is an aerobic gram-positive coccus that causes a broad variety of infections. Non-invasive pneumococcal infections include bronchitis, otitis media, and sinusitis. An infection in which S. pneumoniae is isolated from a typically sterile body site is known as invasive pneumococcal disease. The most frequent presentation is pneumonia, followed by bacteremia (in which no source is identified), meningitis, septic arthritis, spontaneous peritonitis, endocarditis, osteomyelitis, and soft tissue infection. It has long been one of the most relevant bacterial causes of disease in humans, but since 2000, its impact has been blunted by the widespread use of vaccines that largely prevent infection and colonisation in young children [ 1 ]. Since there are over 90 distinct S. pneumoniae serotypes, research aims at developing vaccines that deliver broad immunity. Pneumococcal conjugate vaccine (PCV) and pneumococcal polysaccharide vaccine (PPSV) are the two forms of pneumococcal vaccines available for clinical use. In both, the active components are capsular polysaccharides from pneumococcal serotypes that commonly cause invasive disease [ 2 ]. Pneumococcal vaccination is indicated for all adults 65 years of age or older, as well as for those under 65 who are at risk of pneumococcal infection or severe complications, namely the immunocompromised, those with long-term predisposing illnesses (such as lung disease), functional or anatomic asplenia, or a history of invasive pneumococcal disease. In Portugal, the recommended vaccines are the PPSV23 (Pneumovax 23 ® ), which includes partially purified capsular polysaccharides of 23 serotypes (1, 2, 3, 4, 5, 6B, 7F, 8, 9N, 9V, 10A, 11A, 12F, 14, 15B, 17F, 18C, 19A, 19F, 20, 22F, 23F, and 33F), and the 13-valent PCV13 (Prevnar 13 ® ), which contains capsular polysaccharide antigens covalently linked to a nontoxic protein (covering serotypes 1, 3, 4, 5, 6A, 6B, 7F, 9V, 14, 18C, 19A, 19F, 23F) [ 3 ]. As the serotypes that cause pneumococcal disease continue to change, in the summer of 2021 two new PCVs for use in adults emerged: PCV15 and PCV20 [ 4 ]. These are approved and commercialised in Portugal, although they are not yet cited in the national guidelines.
Discussion The risk of infection, sepsis, and sepsis-related mortality appears to be approximately two to three times higher in asplenic patients when compared with the general population [ 5 - 6 ]. Patients with impaired splenic function are at risk for severe and overwhelming infections with encapsulated bacteria, bloodborne parasites, and other infections in which the spleen plays an important role. This organ has an abundance of lymphoid tissue, as well as splenic macrophages that are responsible for the opsonisation and phagocytosis of encapsulated organisms such as S. pneumoniae , Haemophilus influenzae ( H. influenzae ), and Neisseria meningitidis (N. meningitidis) . The spleen is also a major site of early immunoglobulin M production, which is important in the acute clearance of pathogens from the bloodstream. The overall case-fatality rate for S. pneumoniae bacteraemia is about 20%, but it may be as high as 60% among patients with asplenia [ 6 - 7 ]. Reviewing this case, the symptoms reported by the patient (fever, myalgia, and diarrhoea) in a splenectomised individual were the clue to an underlying serious process. Management of fever in asplenic patients includes immediate empiric intravenous antibiotic administration. Ceftriaxone and cefotaxime are both active against most S. pneumoniae, H. influenzae type b, and N. meningitidis isolates; vancomycin should be added if there is a risk of beta-lactam-resistant S. pneumoniae . Because progression to septic shock and respiratory distress can occur rapidly, preparations for fluid resuscitation, vasopressor support, and airway management should be made [ 6 ]. IVIG is controversial in sepsis and not recommended for the general population. However, as IVIG has the potential to offset the immune deficits of splenectomised patients, it is reasonable to give IVIG to selected patients with sepsis who have impaired splenic function [ 7 - 8 ]. S. pneumoniae has long been one of the most prominent bacterial causes of disease in humans and was one of the first to be identified as a cause of human infection. In spite of the widespread use of vaccines for more than 20 years, this disease is still responsible for approximately 1.6 million deaths worldwide each year [ 1 ]. In Portugal, epidemiological studies have documented a dynamic evolution in the serotypes, with an increase in cases of invasive pneumococcal disease caused by serotypes not included in PCV13. Most of them, however, are included in PPSV23. The most common serotypes in Portugal from 2015 to 2018 were 8 (19%), 3 (15%), 22F (7%), 14 (6%), and 19A (5%). As such, during this period, PCV13 covered 44% of the circulating serotypes, and PPSV23 covered 80%. In the population of Northern Portugal, 22F is the most common serotype not covered by PPSV23 or PCV13 [ 9 ]. During the summer of 2021, the United States Food and Drug Administration licensed two new PCVs for use in adults: PCV15 and PCV20. These vaccines target common serotypes causing invasive pneumococcal disease and pneumococcal pneumonia in the United States. PCV15 contains all PCV13 serotypes (1, 3, 4, 5, 6A, 6B, 7F, 9V, 14, 18C, 19A, 19F, 23F) plus 22F and 33F. PCV20 contains all PCV15 serotypes plus 8, 10A, 11A, 12F, and 15B [ 10 ]. In this case, the patient had already received two doses of PPSV23 (15 and 10 years earlier) and one dose of PCV13 approximately 11 months before the fatal event. 
According to the new Centers for Disease Control and Prevention guidelines, the recommendation for adults 19 through 64 years old with immunocompromising conditions who have received PCV13 and two doses of PPSV23 is to give no additional pneumococcal vaccine or to give one dose of PCV20 at least five years after the last pneumococcal vaccine [ 11 ]. This patient had received PCV13 about a year earlier, so no additional vaccine was recommended. Because the guidelines are recent and offer multiple recommendation options, the authors advise that, when available, local epidemiology should be considered in order to cover the most common serotypes, especially in high-risk individuals. Reviewing European epidemiology, the last report by the European Centre for Disease Prevention and Control dates to 2018. At that time, the proportions of the five most frequent serotypes of S. pneumoniae that caused invasive pneumococcal disease in adults aged 65 or older were 3 (14.7%), 8 (14.0%), 19A (7.6%), 22F (7.4%), and 9N (5.4%). As such, the proportion of serotypes covered by the PCV13 vaccine was 29%, and the proportion covered by PPSV23 was 73%. Therefore, an effort must be made to update epidemiological data, as the frequency of serotypes not covered by PCV13 or PPSV23 should trigger the implementation of the new vaccines [ 12 ].
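To illustrate how the vaccine-coverage percentages cited above are typically derived from serotype surveillance data, here is a minimal sketch; the isolate counts are hypothetical and are not intended to reproduce the Portuguese or European figures.

```python
# Illustrative sketch of serotype coverage: coverage = share of typed invasive isolates
# whose serotype is included in a given vaccine. Isolate counts below are hypothetical.
PCV13 = {"1", "3", "4", "5", "6A", "6B", "7F", "9V", "14", "18C", "19A", "19F", "23F"}
PPSV23 = {"1", "2", "3", "4", "5", "6B", "7F", "8", "9N", "9V", "10A", "11A", "12F",
          "14", "15B", "17F", "18C", "19A", "19F", "20", "22F", "23F", "33F"}

isolates = {"3": 147, "8": 140, "19A": 76, "22F": 74, "9N": 54, "14": 60, "6C": 49,
            "23B": 40, "15A": 35, "11A": 30}   # hypothetical surveillance counts

total = sum(isolates.values())
for name, vaccine in (("PCV13", PCV13), ("PPSV23", PPSV23)):
    covered = sum(n for serotype, n in isolates.items() if serotype in vaccine)
    print(f"{name} coverage: {100 * covered / total:.0f}%")
```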
Conclusions Fever in a patient with impaired splenic function is an emergency, and it should be promptly and adequately identified and managed, as it may have a fulminant course. Despite the success of PCVs in reducing PCV-serotype nasopharyngeal colonisation rates in children, leading to herd immunity and a reduced incidence of invasive disease, serotypes continue to change as vaccine serotypes disappear from the community and non-vaccine serotypes take their place. These two population-level phenomena, indirect effects (or herd immunity) and the emergence of replacement strains, contribute to a cycle that is difficult to interrupt. The much-awaited broadly serotype-independent vaccine may be the “holy grail” of pneumococcal vaccine development. Until then, local epidemiology should help tailor the vaccination scheme where the recommendations are ambiguous. Reports of cases like this one demonstrate the need for continuous serotype surveillance and vaccine development, as even with vaccination and all the available therapeutic measures, there are still lives that cannot be saved.
Invasive pneumococcal disease is a serious infection with an elevated case-fatality rate that can be even higher among patients with asplenia. Its impact has been blunted by the widespread use of vaccines, and as recently as 2021 two new pneumococcal conjugate vaccines were licensed. The authors present the case of a 58-year-old male, splenectomised and with a complete immunisation schedule, who died of invasive pneumococcal disease with a fulminant course. It is highlighted that fever in a patient with impaired splenic function is an emergency and that, despite the success of immunisation in reducing pneumococcal carriage and invasive disease, serotypes continue to change. Local epidemiology may also help guide decisions on the implementation of the new vaccines where the immunisation recommendations are ambiguous.
Case presentation We report a case of a 58-year-old Caucasian male with a history of surgical splenectomy due to abdominal trauma at the age of 18. He had been previously hospitalised in July 2021 with pneumococcal shock with an unknown portal of entry; he was discharged after 18 days, making a full recovery. He had received only two doses of the 23-valent PPSV at that point (in 2007 and 2012), so he completed the immunisation schedule with the 13-valent PCV one month after being released from the hospital. After 11 months, he returned to the emergency department, reporting fever, shivering, myalgia, abdominal discomfort, and diarrhoea. Objectively, he was normotensive, slightly tachycardic, febrile, and eupnoeic, did not need supplemental oxygen, and had no other specific findings on physical examination. Laboratory tests showed a white blood cell (WBC) count of 12.12 × 10⁹/L with 93% neutrophilia, C-reactive protein (CRP) 7.2 mg/L (reference range <0.3 mg/L), platelet count 239 × 10⁹/L, and serum creatinine (sCr) 1.36 mg/dL (Table 1). The chest X-ray was normal (Figure 1), and the arterial blood gas showed no signs of hyperlactatemia or respiratory failure. The pneumococcal urinary antigen test was positive, and he was discharged home and medicated with amoxicillin/clavulanic acid. After six hours, he returned to the emergency room in shock: he had altered mental status, was very agitated, with a Glasgow Coma Scale score of 12 (E2 V4 M6), hypotensive, tachycardic, with evident signs of hypoperfusion, and tachypnoeic with acute respiratory failure, and he had an exuberant erythematous rash (Figure 2). Laboratory tests revealed haemoconcentration with haemoglobin 21 g/dL, WBC 12 × 10⁹/L, and CRP 157 mg/L, worsening acute kidney injury with sCr 3.60 mg/dL, severe thrombocytopenia with a platelet count of 28 × 10⁹/L, and hypoglycaemia (Table 1). Thoracic-abdominopelvic CT showed extensive bilateral lung consolidation consistent with congestion, without any other alterations (Figure 3). A transthoracic echocardiogram confirmed ventricular hyperkinesia with small hypercontractile ventricles and a small inferior vena cava with marked respiratory variation, without any other major alteration. He was intubated and ventilated, and fluid therapy with crystalloids was initiated together with vasopressor support with norepinephrine (maximum 3.12 μg/kg/min). Due to the severity of the shock, hydrocortisone 200 mg and albumin 40 g were administered, and epinephrine was added. Regarding antibiotherapy, he was started on piperacillin/tazobactam and vancomycin. Due to the exuberant presentation and the suspicion of streptococcal toxic shock, clindamycin 900 mg and intravenous immunoglobulin (IVIG) 1 g/kg (80 g) were also administered. Three units of fresh frozen plasma, one platelet pool, and 10 mg of vitamin K were given, and he was put on continuous veno-venous haemofiltration. Despite all of these therapeutic interventions, he remained with refractory hypotension, respiratory failure with a PaO2/FiO2 ratio of 60, disseminated intravascular coagulation with uncontrollable bleeding from intravenous lines, catheters, and mucosal surfaces, and metabolic acidaemia. A real-time polymerase chain reaction assay detected the DNA of S. pneumoniae in blood and endotracheal aspirate. Detection of first-line serotypes (3, 5, 7, 9, 14, 15, 16, 19, 20, 23, 33, 38) was negative. The sample volume was not sufficient to test for the second-line serotypes (7C, 8, 10, 11, 12, 15A, 17F, 18, 19F, 22F, 31, 34, 35B, 35F).
CC BY
no
2024-01-15 23:43:49
Cureus.; 16(1):e52255
oa_package/ab/9c/PMC10788092.tar.gz
PMC10788094
38222154
Introduction Tuberculosis (TB) is a major health problem. The World Health Organization has defined latent tuberculosis infection (LTBI) as a state of persistent immune response to stimulation by Mycobacterium tuberculosis antigens without evidence of clinically manifested active TB [ 1 ]. TB bacteria can remain dormant for years, and in 10% of individuals with LTBI, the infection may progress to active TB. In half of these individuals, the progression occurs within the first two years of acquiring the infection, and in the other half, progression occurs after two years. The overall prevalence rates of LTBI in the Middle East and North African regions are 41.78% and 43.81% of the adult population, respectively [ 2 ]. Many immunosuppressive drugs are potent T-cell inhibitors that can impair the interferon response. Studies have shown that patients undergoing immunosuppressive treatment, particularly therapy directed against tumor necrosis factor alpha (TNF-α), have an increased risk of TB; for example, the relative risk is 29.3 for patients taking adalimumab and 18.6 for those taking infliximab [ 3 , 4 ]. Furthermore, a study by the British Society for Rheumatology Biologics Register reported a 3- to 4-fold higher rate of TB with infliximab (144 events/100,000 person-years) and adalimumab (136/100,000 person-years) in comparison with the etanercept group (39/100,000 person-years) [ 5 ]. Screening for LTBI and treating individuals who test positive are the cornerstones of TB prevention and are particularly important in high-risk patients, especially those with autoimmune disorders [ 3 ]. This study aimed to evaluate the frequency of positive and indeterminate interferon-gamma release assay (IGRA) tests, the management approach, and the risk of TB reactivation in rheumatologic patients at a tertiary hospital in the United Arab Emirates (UAE).
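The "3- to 4-fold higher rate" quoted from the register follows directly from the incidence rates cited above; a minimal arithmetic check (the only inputs are the per-100,000 person-year figures already given in the text):

```python
# Incidence rates (events per 100,000 person-years) cited from the
# British Society for Rheumatology Biologics Register figures above.
infliximab, adalimumab, etanercept = 144, 136, 39

# Rate ratios versus etanercept, consistent with the "3- to 4-fold" statement.
print(f"Infliximab vs etanercept: {infliximab / etanercept:.1f}-fold")  # ~3.7
print(f"Adalimumab vs etanercept: {adalimumab / etanercept:.1f}-fold")  # ~3.5
```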
Materials and methods A single-center retrospective observational study was performed at Tawam Hospital, Abu Dhabi, UAE. Ethical approval for this study was obtained from the Tawam Human Research Ethics Committee. The department record system was searched to identify all patients on immunosuppression therapy for recruitment into the study. All adult patients (aged ≥16 years) attending the rheumatology clinic during a 12-year period (October 2010-April 2022) were enrolled. Those with positive or indeterminate IGRA results were included in the analysis. Patients with negative IGRA results, those lost to follow-up, and those with active TB at the time of diagnosis of an autoimmune disease were excluded. A chart review was performed to gather demographic, radiological, and clinical data and management outcomes of patients with positive and indeterminate IGRA tests. The need for infectious disease (ID) referral and the use of anti-TB medications were evaluated. Moreover, long-term follow-up data were collected to determine the risk of TB reactivation in the cohort. Statistical analysis Descriptive data are expressed as mean ± standard deviation (SD), median (range), or number and frequency, as applicable. Quantitative variables are expressed as mean and SD or median and quartiles. Groups were compared using the Wilcoxon or Mann-Whitney test, depending on the results of the normality test. Univariate and multivariate logistic regression analyses were performed to identify the factors correlated with positive and indeterminate tests. The significance level was set at P < 0.05. Statistical analysis was performed using the Jamovi 2.3.21.0 program (Jamovi Project, https://www.jamovi.org). RStudio (Version 2023.03.0+386) and R (Version 4.2.3) were employed for data cleaning and logistic regression modeling.
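The univariate and multivariate logistic regressions described above were fitted in R; the sketch below shows an equivalent workflow in Python with statsmodels. It is a hedged illustration only: the file name, column names, and variable coding are hypothetical placeholders, not the study's actual dataset.

```python
# Hedged sketch of the univariate/multivariate logistic regression workflow
# described in the Methods; columns and file are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("igra_cohort.csv")  # hypothetical extract of the chart review

# Outcome: 1 = positive IGRA, 0 = indeterminate IGRA (the two analysed groups).
df["positive_igra"] = (df["igra_result"] == "positive").astype(int)

# Univariate models: one candidate predictor at a time (assumed 0/1 or numeric columns).
for predictor in ["age", "male_sex", "rheumatoid_arthritis", "current_prednisolone"]:
    uni = smf.logit(f"positive_igra ~ {predictor}", data=df).fit(disp=False)
    or_ = np.exp(uni.params[predictor])
    ci = np.exp(uni.conf_int().loc[predictor])
    print(f"{predictor}: OR={or_:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")

# Multivariate model: predictors entered together, as in Table 2.
multi = smf.logit(
    "positive_igra ~ age + male_sex + rheumatoid_arthritis + current_prednisolone",
    data=df,
).fit(disp=False)
print(np.exp(multi.params))      # adjusted odds ratios
print(np.exp(multi.conf_int()))  # 95% confidence intervals
```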
Results A total of 1,012 positive and 223 indeterminate LTBI tests were identified in the 12-year period, of which 39 indeterminate and 123 positive results met the inclusion criteria. Indeterminate IGRA results Thirty-nine rheumatologic patients had indeterminate IGRA results. Twenty-four (61.5%) were women and 22 (56.4%) were UAE nationals, and their mean age was 38.6 years (SD = 17.1). The predominant rheumatologic conditions in the cohort were systemic lupus erythematosus (SLE) (n = 21, 53.8%), rheumatoid arthritis (RA) (n = 4, 10.3%), psoriatic arthritis (PSA) (n = 4, 10.3%), and small vessel vasculitis (n = 5, 12.8%). The median duration of rheumatologic disease since diagnosis was 7.75 years (6 months-15 years). Conventional synthetic disease-modifying antirheumatic drugs (csDMARDs) were used in 13 (33%) of the patients. Corticosteroids were used in 26 (66.7%), and the mean prednisolone dose at the time of the IGRA test was 91 ± 241 mg. Moreover, four (10.3%) of the patients were receiving biologics at the time of the IGRA test. The tests had been requested for various reasons, including switching of immunosuppression, in-patient work-up owing to symptoms, contact with patients having active TB, periodic medical check-ups, or no documented indication. Table 1 provides a comparison between the characteristics of patients with positive and indeterminate IGRA results. In more than one-third of the patients (n = 14, 35.9%), the IGRA tests were repeated, which revealed indeterminate results in seven patients, negative results in four patients, and positive results in three patients. Chest radiographs were acquired for two-thirds of the patients (n = 26, 66.7%), and only one-third of the patients (n = 13, 33%) required chest computed tomography (CT). Additionally, ID consultations were sought for 43.6% (n = 17) of the cases. A total of eight (20.5%) patients received anti-TB medications owing to a diagnosis of LTBI (isoniazid monotherapy (INH) and vitamin B6 for nine months). The majority of patients were on maintenance immunosuppression medications and biologics during the follow-up period without evidence of reactivation of TB infection. Positive IGRA results Positive IGRA results were obtained in 123 rheumatologic patients. Their mean age was 55.7 years (SD = 16.5), 78 patients (63.4%) were UAE nationals, and the female-to-male ratio was 3:1. The most common rheumatologic conditions were RA (n = 69, 56%), SLE (n = 17, 13.8%), PSA (n = 10, 8.1%), and Behçet disease (n = 6, 4.8%). csDMARDs were used in 65 (52.8%) of the patients. Corticosteroids were used in 43 (34.9%), and the mean dose at the time of the IGRA test was 40 ± 166 mg. The IGRA test was repeated in 28 (22.8%) patients, chest radiographs were acquired for half of the patients (n = 67, 54.5%), and chest CT was performed in one-fifth of the cases (n = 25, 20.3%). ID consultation was required for 60 patients (48.8%), and five patients had active TB infection. Seventy-four (60%) of the patients were treated with anti-TB medications, including those with active TB infection (n = 5) and those with LTBI (n = 69). Of note, three patients had a previous history of treatment for TB infection. These patients did not receive repeat courses despite positive IGRA testing, and there was no evidence of active infection during follow-up. The other two patients with active TB were on a TNF-α inhibitor (Figure 1). In univariate analysis, RA, current prednisolone use, and prednisolone dose ≥ 15 mg were each associated with positive IGRA results (P < 0.05).
In multivariate analysis, male sex (odds ratio [OR] = 7.27; 95% confidence interval [CI]: 2.24-27.83, P = 0.002) and current prednisolone use (OR = 4.31; 95% CI: 1.2-16.14, P = 0.026) were associated with positive IGRA results (Table 2 ).
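The odds ratios and confidence intervals reported above are exponentiated logistic regression coefficients. A small, purely illustrative conversion is shown below; the coefficient and standard error are back-calculated from the reported values rather than taken from the study's model output.

```python
import numpy as np

# Odds ratios reported for the multivariate model (Table 2).
reported_or = {"male_sex": 7.27, "current_prednisolone": 4.31}

for name, odds_ratio in reported_or.items():
    beta = np.log(odds_ratio)  # coefficient on the log-odds scale
    print(f"{name}: beta = {beta:.2f}, OR = exp(beta) = {np.exp(beta):.2f}")

# A 95% CI comes from the coefficient and its standard error: exp(beta +/- 1.96*SE).
# For male_sex, the reported interval 2.24-27.83 implies SE of roughly 0.64.
se_male_sex = (np.log(27.83) - np.log(2.24)) / (2 * 1.96)
print(f"Implied SE for male_sex: {se_male_sex:.2f}")
```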
Discussion In this study, 123 positive and 39 indeterminate IGRA results were obtained in rheumatologic patients over a period of 12 years, representing 12.2% and 17.5%, respectively, of all positive and indeterminate results across all subspecialties. These rates are much lower than the rate for the general population in the Middle East (41.78%) [ 2 ]. The prevalence rate of LTBI in patients with rheumatic diseases differs from country to country, with 20.4% in India [ 6 ], 21.6% in Morocco [ 7 ], and 29.5% in Brazil [ 6 ]. These discrepancies in the outcomes could be attributed to variations in the study design and the impact of immunosuppression and corticosteroid usage on interpretation [ 6 , 7 ]. This study found an inverse association between older age and positive IGRA results. With every one-year increase in age, the odds of being in the positive IGRA group decreased by approximately 6% in the univariable analysis (OR = 0.94, P < 0.001) and by approximately 4% in the multivariable analysis (OR = 0.96, P = 0.007). This finding is in line with some studies suggesting a strong association between old age (≥65 years of age) and indeterminate results [ 8 , 9 ]. In our cohort, patients with RA were observed to exhibit higher rates of positive IGRA results, which agrees with previous research. RA itself increases the risk of TB infection compared with the general population, and this association is independent of the use of biologics [ 10 , 11 ]. In multivariate analysis, no significant association was found between RA and positive versus indeterminate IGRA results (OR = 0.44, 95% CI: 0.09-2.17, P = 0.307). In contrast, SLE was more common in the indeterminate group, which is similar to reports in the literature. Maharani et al. [ 12 ] demonstrated that active disease status resulted in indeterminate IGRA results in 12.66% of the patients in the SLE group, which was validated by another study [ 13 ]. In the present study, disease activity status was not assessed, as it was outside the scope of the study. Regarding the subsequent management approach, only half of the positive IGRA group had a chest x-ray (CXR), and approximately one-third proceeded to chest CT. Although there is limited evidence for the usefulness of CXR in screening, it is still recommended to improve the specificity. However, chest CT is considered more effective in identifying active TB in patients with a positive IGRA test, when deemed necessary [ 14 , 15 ]. Owing to uncertainties in the subsequent steps following an indeterminate IGRA result, repeating the test is recommended. More than one-third of the patients in our cohort (n = 14, 35.9%) had a repeat test. Throughout the study duration, two-thirds of the patients had a CXR and one-third required chest CT. It is important to note that clear guidelines regarding the use of either imaging approach are lacking unless the repeat test is positive [ 16 ]. Immunosuppressants and corticosteroids can further impair an already abnormal immune response. In this study, two factors contributed to the likelihood of obtaining an indeterminate result: alongside a diagnosis of SLE, the use of corticosteroids negatively influenced the outcome of the IGRA test. Two-thirds (n = 14) of the SLE patients in the indeterminate group were treated with corticosteroids (doses of 15 mg or higher), which could explain the indeterminate results.
Previous studies have documented that corticosteroid use increases the likelihood of having an indeterminate result, but some studies have reported conflicting outcomes [ 17 - 19 ]. Nevertheless, in this study, no differences were seen in the use of corticosteroid doses ≥15 mg between the two studied groups, which could be a consequence of the small sample size. With long-term follow-up, two cases (1.6%) of active TB infection were identified in patients with positive IGRA tests. Both patients were being treated with a TNF-α inhibitor (adalimumab, 40 mg every two weeks) in conjunction with methotrexate (10-20 mg weekly) for durations of 208 and 312 weeks, respectively. In contrast, there were no cases of active TB in the indeterminate group. TNF plays an important role in the host’s response to infection by maintaining the integrity of the granuloma that forms as a result of the infection. Thus, the use of TNF antagonists disrupts granuloma integrity, resulting in mycobacterial growth and activation. This supports our findings, whereas data from clinical trials have reported negligible TB reactivation with non-anti-TNF-α biologics [ 6 , 20 ]. Infectious disease standards of care Screening for LTBI using the IGRA or tuberculin skin test (TST) is recommended in high-risk patients, including HIV-positive patients, solid organ transplant recipients, stem cell transplant recipients, and immunocompromised patients receiving biological therapy, especially TNF-α antagonists [ 21 ]. The IGRA test performs better than the TST in previously BCG-vaccinated immunosuppressed patients, with an estimated sensitivity of 67%-75% and specificity of 93%-99% [ 22 , 23 ]. A detailed medical history of signs and symptoms suggestive of active TB infection, history of exposure to patients with active TB, travel or migration from endemic areas, type of immunosuppression medications, and medical comorbid conditions should be obtained. LTBI is diagnosed based on positive IGRA/TST testing, a negative CXR or chest CT, and no evidence of active TB infection [ 21 , 24 ]. Consultation with the ID team is required for LTBI treatment in immunosuppressed patients. For several years, isoniazid (INH; 5 mg/kg, 300 mg/d) supplemented with pyridoxine (vitamin B6) for nine months was the standard treatment for latent TB. However, this treatment is associated with a risk of hepatic injury and noncompliance with the therapy duration. Regular monitoring for hepatotoxicity is critical in patients receiving INH therapy, and the medication should be withheld whenever indicated. INH-related hepatotoxicity is defined as an increase in liver aminotransferases >5 times the upper normal limit, or >3 times the normal limit with symptoms. Alternatively, a 4-month course of rifampin carries a lower risk for liver injury than INH [ 25 ]. Nevertheless, rifampin is a potent cytochrome P450 inducer and can accelerate the metabolism of some immunosuppressive agents (calcineurin inhibitors, including cyclosporine and tacrolimus). Hence, drug-drug interactions should be monitored. Three months of daily INH plus rifampin has also been approved for LTBI therapy [ 26 ]. Sterling et al. reported comparable effectiveness for directly observed, once-weekly therapy with rifapentine plus INH for three months and a self-administered nine-month course of daily INH [ 27 ]. The three-month therapy was well tolerated, with lower rates of adverse events and higher compliance.
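The INH hepatotoxicity definition quoted above is a simple threshold rule. The sketch below is an illustration of that rule only; the function and parameter names are invented for this example and are not part of any clinical software or guideline tool.

```python
def meets_inh_hepatotoxicity_definition(aminotransferase_x_uln: float, symptomatic: bool) -> bool:
    """Encodes the cited definition: aminotransferases >5x the upper limit of
    normal (ULN), or >3x ULN in the presence of symptoms. Illustrative only;
    not a substitute for clinical judgement."""
    return aminotransferase_x_uln > 5 or (aminotransferase_x_uln > 3 and symptomatic)

# Examples: 4x ULN without symptoms does not meet the definition; with symptoms it does.
print(meets_inh_hepatotoxicity_definition(4.0, symptomatic=False))  # False
print(meets_inh_hepatotoxicity_definition(4.0, symptomatic=True))   # True
```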
The National Tuberculosis Controllers Association and Centers for Disease Control and Prevention 2020 LTBI treatment guidelines recommend the use of rifamycin-based regimens, including three months of once-weekly INH plus rifapentine, four months of daily rifampin, and three months of daily INH plus rifampin. These are the preferred regimens because of their effectiveness, safety, and high treatment completion rates. Alternative LTBI therapeutic regimens are 6 or 9 months of daily INH [ 24 ]. Consensus is lacking on the safe period for starting biological or immunosuppressive therapy in patients with LTBI. Some advocate ruling out active TB infection and commencing LTBI treatment 3 weeks to 2 months prior to the initiation of immunosuppressive medications [ 21 ]. Potential drug-drug interactions should be monitored. How frequently patients undergoing long-term biological or immunosuppressive therapy should be screened for LTBI is not well established. Immunosuppressed patients who were treated for LTBI previously will require further evaluation and ID referral for optimal risk assessment and for determining whether repeat LTBI therapy is needed [ 21 ]. Rheumatology standards of care According to the 2018 guidelines of the British Society for Rheumatology [ 28 ], all patients must be screened for TB before commencing treatment with biologics. This screening involves a clinical examination, a CXR, and either a TST or an IGRA test. If a positive result is obtained for LTBI, the patient must begin treatment at least 1 month before starting biologic therapy, with monitoring at 3-month intervals. Etanercept should be the first line of treatment for patients who require anti-TNF therapy and are at a high risk of TB reactivation, because anti-TNF monoclonal antibody medications (particularly adalimumab and infliximab) carry a higher risk of TB reactivation than etanercept [ 28 ]. The European Alliance of Associations for Rheumatology [ 14 ] recommends that all patients be screened for LTBI before starting treatment with biologic DMARDs or targeted synthetic DMARDs. Additionally, if a patient is deemed to be at high risk owing to factors such as alcohol abuse, smoking, living with people who have TB, or living in endemic countries, screening should also be performed when csDMARDs and/or glucocorticoids are being considered. No consensus exists on the recommended dose or duration of glucocorticoid usage, but based on previous studies, screening should preferably be done if the glucocorticoid dose is ≥15 mg/d and if the treatment period exceeds four weeks. This screening can be accomplished via CXR and IGRA. However, guidance on how frequently the test should be performed or when it should be repeated has not been updated. The American College of Rheumatology recommends annual testing for high-risk patients who live in or travel to endemic countries [ 14 , 29 ]. Such recommendations must nevertheless be regularly updated, especially as new medications are developed. Management is based on international guidelines and the most commonly utilized regimens mentioned previously. This is the first study in the UAE to address IGRA test results in rheumatologic conditions over the course of more than one decade. The limitations of this study include the small sample size, the single-center design, and the lack of a comparative group.
Conclusions Long-term data on the risk of TB reactivation in patients with rheumatological conditions and positive or indeterminate IGRA results are limited. It is recommended to reassess the choice of anti-TNF-α therapy in patients with a positive IGRA result, unless no other feasible alternative can be offered. Our findings stress the importance of age, underlying diseases, and immunosuppressive treatments in interpreting IGRA results and guiding patient management. A large multicenter study is needed to understand the differences and outcomes of such patients in TB endemic and nonendemic geographical areas.
Introduction Prior to immunosuppression, rheumatology patients are routinely screened for latent tuberculosis (TB) infection using interferon-gamma release assays (IGRAs). Variability in the management of positive and indeterminate IGRA results across institutions has limited long-term outcome data. A retrospective study was conducted at Tawam Hospital, United Arab Emirates, to investigate the incidence and management protocols associated with positive and indeterminate IGRA results, as well as TB infection, among patients with rheumatic conditions. Methods A single-center retrospective observational study was performed at Tawam Hospital, Abu Dhabi, UAE. Ethical approval for this study was obtained from the Tawam Human Research Ethics Committee. Laboratory records and the hospital's electronic medical system were used to obtain information about IGRA results over a 12-year period (April 2010-April 2022). The hospital's electronic medical system was used to obtain patient information and the subsequent management approaches for positive and indeterminate IGRAs. Moreover, long-term follow-up data were collected to determine the risk of TB reactivation in the cohort. Results We found a total of 1,012 positive and 223 indeterminate IGRA test results within the 12-year period. Within the rheumatology department, 123 positive and 39 indeterminate IGRA results were identified. In the indeterminate IGRA group, the majority were women (n = 24, 61.5%) and UAE nationals (n = 22, 56.4%), and their mean age was 38.6 years. Systemic lupus erythematosus was the most prevalent rheumatologic condition (n = 21, 53.8%). Thirteen (33.3%) were on disease-modifying anti-rheumatic drugs (DMARDs) and 26 (66.7%) were on corticosteroids during IGRA testing. A total of eight patients (20.5%) received anti-TB medications. In the positive IGRA group, the mean age was 55.7 years and the female-to-male ratio was 3:1. The most common rheumatologic condition was rheumatoid arthritis (n = 69, 56%). Sixty-five (52.8%) patients were on conventional DMARDs, 43 (34.9%) were on corticosteroids during IGRA testing, and 74 (60%) received anti-TB medications. Two cases (1.6%) of active TB infection were detected among patients with positive IGRA tests, both of whom were receiving tumor necrosis factor alpha inhibitor treatment in combination with methotrexate. No cases of active TB infection were observed in the indeterminate IGRA group. Conclusion Long-term data on the risk of TB reactivation in patients with rheumatological conditions and positive or indeterminate IGRA results are limited. It is recommended to reassess the choice of anti-TNF-α therapy in patients with a positive IGRA result, unless no other feasible alternative can be offered. Our findings stress the importance of age, underlying diseases, and immunosuppressive treatments in interpreting IGRA results and guiding patient management. A large multicenter study is needed to understand the differences and outcomes of such patients in TB endemic and nonendemic geographical areas.
CC BY
no
2024-01-15 23:43:49
Cureus.; 15(12):e50581
oa_package/6f/6a/PMC10788094.tar.gz
PMC10788095
38222167
Introduction Lemierre syndrome is classically characterized by bacterial invasion of the pharyngeal mucosa, often preceded by a bacterial or viral pharyngeal infection [ 1 , 2 ], leading to the development of internal jugular vein (IJV) thrombophlebitis and disseminated septic emboli [ 1 - 3 ]. The most frequent causative organism is Fusobacterium necrophorum , an anaerobic gram-negative rod, which has become synonymous with the disease. However, various other bacteria have been isolated in cases of Lemierre syndrome and should be considered when beginning empiric therapy. We describe such a case here.
Discussion Here we report a case of Lemierre syndrome in an otherwise healthy male without obvious signs of oropharyngeal involvement on initial presentation and with initial findings consistent with pneumonia rather than septic thrombophlebitis. Lemierre syndrome was first reported by French physicians Courmont and Cade in 1900 [ 4 ] and was described by French bacteriologist Andre-Alfred Lemierre in 1936 [ 5 , 6 ]. The syndrome was far more prevalent in the “pre-antibiotic” era, when treatments were often limited to IJV excision or ligation [ 7 ]. Due to the widespread use of antibiotics, rates of Lemierre syndrome have declined so significantly that some have labeled it a “forgotten disease” [ 8 ]. However, in recent decades, the rates of Lemierre syndrome have increased [ 9 , 10 ], possibly due to reduced antibiotic use for pharyngitis and improvement in imaging techniques [ 11 ]. Incidence ranges from 3 to 14 cases per million persons, depending on the population studied [ 2 , 6 ]. Incidence rates are higher in adolescent and young adult patients [ 2 ]. Despite antibiotics, it remains a serious pathology, with mortality rates as high as 18% [ 7 ]. Lemierre syndrome typically begins as an infection in the palatine tonsils and peritonsillar tissues [ 7 ], although other primary sources, such as the sinuses, mastoid, oral cavity, and auricular structures, have been reported [ 12 ]. The mechanism of local tissue invasion is not fully understood but may involve an initial insult from viral or bacterial pharyngitis combined with bacteria-specific factors [ 6 , 7 ]. In many cases, there is no obvious inciting illness or injury, as observed in the case discussed here. The resulting bacteremia leads to thrombophlebitis of the IJV [ 7 ]. From here, the thrombi embolize to multiple tissues, most frequently the lungs, joints, or brain [ 13 , 14 ]. In rare cases, the thrombus may propagate to the subclavian vein or cranial sinuses [ 7 ]. Septic emboli in the lungs may result in abscesses, sterile effusions, empyema, and cavitation [ 2 , 15 ]. Indeed, most investigations for Lemierre syndrome begin with chest X-rays, possibly due to associated lung pathology from septic emboli [ 7 ]. Lemierre syndrome occurs most frequently in healthy young adults, often males, in the second and third decades of life [ 2 ]. The reasons for this are unclear but may be due to the frequency of tonsillitis and pharyngitis in this demographic [ 16 ]. Diagnosis of Lemierre syndrome relies on imaging to identify IJV thrombophlebitis, with CT being the most common non-plain-film modality [ 14 ]. Several authors note that a high degree of clinical suspicion is often needed to appropriately identify this condition [ 2 ], especially as the disease may initially be treated as pharyngitis or pneumonia. As mentioned by Lee et al., the presence of deep neck infections, septicemia, IJV thrombophlebitis, and signs of metastatic infection (such as septic emboli) should raise suspicion for Lemierre syndrome [ 2 ], especially if present in an otherwise healthy young adult. Early clinical signs and radiologic findings are crucial, as the prolonged culture time of anaerobic gram-negative bacteria, such as F. necrophorum , may delay diagnosis [ 2 ]. While F. necrophorum is the most frequent causative organism (81.7% of cases according to Chirinos et al. [ 13 ]), various bacteria have been isolated, though at far lower rates. The S.
anginosus group (SAG) typically colonizes the reproductive and digestive tracts as well as the respiratory tract and can cause visceral suppurative infections [ 17 ]. These organisms are notable for their tendency to form abscesses and empyema. However, determining whether they are causal in a given infection can be difficult since they are resident oral cavity and respiratory tract flora [ 17 ]. A very small number of Lemierre syndrome cases involving SAG species have been reported in the literature. As such, SAG appears to represent an uncommon group of pathogens in this syndrome [ 18 ]. Polymicrobial infections account for up to 30% of cases and, in many instances, occur in combination with F. necrophorum [ 12 ]. Treatment involves empiric therapy, which is narrowed once the causative bacteria are identified. Given the prevalence of F. necrophorum resistance to β-lactams, macrolides, fluoroquinolones, and aminoglycosides, β-lactamase-resistant antibiotics are often the recommended treatment [ 3 ]. Treatment length has not been established with randomized controlled trials, but treatment for several weeks is often recommended [ 19 ]. Anticoagulation has been debated. Some have supported anticoagulation in cases where there are recurrent emboli, thrombus extension, or a lack of improvement with antibiotic therapy [ 7 ], while others have opposed it due to the risk of bleeding. Multiple retrospective analyses have shown no benefit to anticoagulation; for example, a retrospective study of 394 patients found no difference in mortality [ 20 ]. Unfortunately, Lemierre syndrome can result in long-term complications; one study noted serious sequelae, such as neurologic deficits, in >10% of patients with Lemierre syndrome, possibly due to complications from septic emboli [ 6 ].
Conclusions In sum, we present a unique case of Lemierre syndrome with blood culture positive for S. constellatus . Clinicians should be cognizant of Lemierre syndrome as a cause of septic emboli in young, healthy adults and recognize that a variety of pathogens may be causative. In some cases, such as this one, patients may lack obvious clinical signs of oropharyngeal infection on initial presentation. As demonstrated here, infection and emboli of unknown origin may warrant imaging of the neck vasculature for thrombi.
Lemierre syndrome is characterized by thrombophlebitis of the internal jugular vein (IJV) secondary to bacterial pharyngitis or tonsillitis. Though antibiotic use has made this a rarer syndrome, it can nevertheless manifest in patients presenting with pharyngitis. Herein, we describe a 20-year-old male patient with no relevant medical history who presented with signs concerning for pneumonia and was ultimately diagnosed with Lemierre syndrome with Streptococcus constellatus bacteremia. Complications included an IJV thrombus with presumed septic emboli to the lungs. The patient was discharged on ampicillin/sulbactam with plans to transition to amoxicillin/clavulanate.
Case presentation A 20-year-old male with no significant past medical history presented to the emergency department with a four-day history of cough, shortness of breath, non-bloody diarrhea, non-bloody emesis, decreased appetite, body aches, sweats, fevers up to 103 °F, and significant fatigue. He also reported a recent sore throat, which had resolved prior to presentation. No signs of neck or oropharyngeal pathology were noted by the emergency medicine team. During this initial encounter, he was noted to have a leukocytosis (12.5 K/uL), mild anemia (12.7 g/dL), and an elevated D-dimer level (5.8 FEU/mL) (Table 1). Although no remarkable findings were seen on the chest X-ray, a CT scan of the chest showed multifocal consolidative changes throughout the middle and bilateral lobes, with no evidence of deep vein thrombosis (DVT). The patient had undergone several COVID-19 tests prior to presentation, all of which were negative. He was started on doxycycline for presumed community-acquired pneumonia and was sent home with instructions to return if symptoms worsened. Over the next 24 hours, the patient’s shortness of breath progressed and he returned to the emergency department, where labs revealed a worsening leukocytosis (16.2 K/uL) and an elevated pro-BNP (3292 pg/mL). A viral panel including testing for SARS-CoV-2, influenza A/B, and RSV was negative. Hazy bilateral infiltrates were now evident on the chest X-ray. A CT scan of the chest continued to demonstrate bilateral multifocal infiltrates consistent with atypical pneumonia and concerning for possible septic emboli (Figure 1), raising concern for sepsis. He was started on empiric antibiotic coverage with vancomycin, cefepime, and azithromycin. A transthoracic echocardiogram showed a left ventricular ejection fraction of 50% without any vegetation. A CT of the head and brain with and without contrast was unremarkable and without signs of emboli. The following day, worsening bilateral nodular infiltrates were seen on a repeat chest X-ray. Laboratory results showed worsening leukocytosis (17.4 K/uL), anemia (hemoglobin 10.9 g/dL) with a normal haptoglobin and slightly elevated LDH (300 IU/L), thrombocytopenia (platelets 49 K/uL), elevated procalcitonin (27.70), and elevated ferritin (592.9 ng/mL). He was subsequently transferred to the intensive care unit (ICU) for a higher level of care given the worsening pneumonia and risk of decompensation. Upon presentation to the ICU, the patient was febrile (103.1 °F), tachycardic (122 bpm), and tachypneic (RR 31), with an oxygen saturation of 94% on 2 liters by nasal cannula. Physical exam was notable for bilateral cervical lymphadenopathy and bilateral wheezes throughout the upper lung fields. Oral examination was unremarkable, though the sore throat prior to presentation raised concern for a possible oral source of infection. Antibiotic coverage was subsequently changed to amoxicillin/clavulanic acid and doxycycline to cover atypical pneumonia, anaerobes, and tick-borne illnesses. The infectious diseases team was consulted and initiated an extensive work-up in addition to the previous studies, given the infection of unknown etiology, elevated inflammatory markers, and thrombocytopenia (Table 2). Preliminary blood culture results identified the Streptococcus anginosus group by Verigene. A CT scan of the neck with contrast demonstrated a thrombus in the left IJV (Figure 2). Antibiotics were then narrowed to ampicillin/sulbactam to cover Streptococcus species.
Given the presence of septic emboli, anticoagulation was discussed with the infectious disease team and was ultimately decided against. A subsequent transesophageal echocardiogram noted an improved ejection fraction (60-65%) without any vegetations or intracardiac shunts. Finalized blood culture results revealed Streptococcus constellatus . At this time, the findings were most consistent with an S. constellatus infection with multifocal pneumonia secondary to septic thrombophlebitis (Lemierre syndrome). The patient subsequently began to defervesce, with decreasing frequency of fevers and improvement of his leukocytosis and other laboratory parameters, including resolution of the thrombocytopenia. A repeat chest X-ray continued to show multifocal opacities consistent with septic emboli, but with overall interval improvement in aeration. The patient was discharged on an additional three weeks of intravenous (IV) ampicillin/sulbactam, with a plan to transition to oral amoxicillin/clavulanic acid for a further three weeks.
We would like to thank Dr. David C. Keyes for his assistance with radiological imaging interpretation.
CC BY
no
2024-01-15 23:43:49
Cureus.; 15(12):e50580
oa_package/d1/da/PMC10788095.tar.gz
PMC10788096
38222126
Introduction There is an increase in the burden of cancer globally due to population growth, aging, and an increase in risk factors such as obesity, smoking, and diet [ 1 ]. Head and neck cancers most commonly occur in the oral cavity. Oral cancer's prognosis and survival rates are poor despite significant advancements in its treatment [ 2 , 3 ]. Cancer in the maxillary arch is an uncommon tumor with higher mortality, and 10% of all oral cancers develop in the oral cavity subsites of the upper gingiva and hard palate [ 4 ]. Based on the tissue from whence they originated, malignant tumors of the maxilla can be categorized as squamous cell carcinoma, salivary gland tumors such as mucoepidermoid carcinomas, mesenchymal tumors such as chondrosarcomas, and other malignancies, including basal cell carcinoma and malignant schwannoma [ 5 ]. Using free flaps and advances in microvascular surgery, many oncology patients with palatal tumors have been able to have their tumors resected and immediately reconstructed after the surgery. A flap with vascularized bone is an ideal option to optimize the future prosthetic bearing area. In the event that the resection site cannot be closed surgically, an obturator must be provided. In addition to improving chewing, swallowing, speech, dental aesthetics, and facial support, the obturator restores the partition between the nasal and oral cavities, thus improving quality of life [ 6 ]. Postsurgical maxillary defects can result in several problems, such as hypernasal speech, nasal fluid leakage, the high potential for aspiration, poor aesthetics, and impaired masticatory function [ 7 ]. Therefore, treatment of the maxillary defects through surgery or prosthodontics is crucial to these patients' recovery. Some oncology patients may require conventional rehabilitation with an obturator following surgery [ 8 ]. Patients who have had a maxillectomy typically undergo several stages of prosthetic treatment. First, a surgical obturator is made and worn for the first one to four weeks after the procedure. Next, an interim obturator is made and worn for three to six months until the defect is improved, and finally, a long-term obturator is made [ 9 ]. In the initial postoperative phase, a surgical obturator acts as a partition between the oral and nasal cavities, enabling relatively normal speaking and deglutition and minimizing the psychological effects of the operation and the hospital stay. Additionally, it offers a matrix for surgical packing and lowers the chance of surgical wound contamination [ 10 ]. After the surgery, the surgical obturator can be adjusted to accommodate changes in the defect and surrounding tissues. In the meantime, an interim obturator can assist with oral functions until the wound has fully healed and the defect has achieved stability in terms of shape and size. Once the maxillary defect has healed and become stable, a permanent obturator can be used for long-term restoration. An effective seal of the defect is crucial to preventing liquid leakage into the nasal canal. Removable prostheses must be constructed with adequate support, retention, and stability to ensure proper functionality. The type and size of the defect, the presence of supporting palatal shelves, and the condition of the remaining dentition are essential factors that influence the movement of the prosthesis during use. In cases of incomplete dentition, the remaining teeth can serve as abutments, improving the prognosis of the prosthesis [ 10 ]. 
Care must be taken to prevent overload of the remaining dentition and to retain these teeth to the best of their ability. Various types of obturators have been used, such as hollow bulbs, full bulbs, and two-piece obturators. Obturators with a hollow design are often preferred for their light weight [ 11 ]. This case report discusses the step-by-step process of creating a cobalt chromium obturator, which is a special type of dental device used to close a gap in the palatal bone of the upper jaw. The report focuses on the clinical stages involved in making a one-part hollow box obturator.
Discussion Individuals who have undergone a maxillectomy often encounter recurring challenges in prosthodontic treatment related explicitly to insufficient support, retention, and stability. The extent of the defect, the number of remaining teeth, the amount of remaining bone structure, the condition of the surrounding mucosa, the impact of radiation therapy, and the patient's ability to adjust to the prosthetic device all play a role in determining the outlook for prosthodontic treatment in these individuals [ 13 ]. Saving as many remaining teeth as feasible for patients undergoing unilateral maxillectomy may be vital for optimal prosthesis design and performance [ 14 ]. The other components are subjected to continual pressure from such a massive, hefty obturator, impairing tissue health, patient function, and comfort. After the obturator has been processed into acrylic resin, the bulb component is frequently hollowed out to reduce the overall weight of the prosthesis. The extent of the maxillary defect determines whether a hollow maxillary obturator is appropriate. By incorporating a hollow design, the weight of the prosthesis can be reduced by as much as 33% [ 15 ]. The obturator prosthesis is critical to recovering oral function in postsurgical maxillectomy patients. Framework designs for obturators may differ depending on the defect classification system [ 16 ]. Removable obturator prostheses should adhere to fundamental prosthodontic principles, which include distributing stress over a wide area, employing a rigid major connector for cross-arch stabilization, and incorporating stabilizing and retaining components at strategic locations within the arch to minimize the risk of displacement due to functional forces. In this case, a tripodal design was chosen. The remaining teeth, palate, and specifically prepared rests offered support for the prosthesis. Rests were created on the left first and second premolars, the first and second molars, and the right canine on the right side. The complete palate was designed to ensure optimal distribution of functional loads across the underlying tissue [ 16 ]. In patients with remaining natural teeth, these teeth play a crucial role in maintaining, supporting, and stabilizing the obturator. Retention can be achieved through various means, such as utilizing the remaining teeth or ridge, the lateral aspect of the defect, the undercut in the soft tissue, and the scar tissue. Components for stabilization and indirect retention need to be carefully positioned to prevent movement of the portion of the prosthesis that covers the defect. Occlusion is the key factor in achieving stability for prostheses. It is crucial to ensure that occlusal forces are evenly distributed in both centric and eccentric jaw positions to minimize prosthesis movement and the resulting forces on individual structures. To reduce stress caused by lateral forces, proper selection of an occlusal scheme, elimination of premature occlusal contacts, and the use of stabilizing components that provide broad distribution are essential [ 12 , 17 ]. A metal framework obturator prosthesis offers several advantages, including its durability and ability to conduct heat, allowing normal stimulation of the supporting structure [ 18 ]. It is essential to wait for the defect site's complete healing and dimensional stability before constructing the definitive obturator. 
The timeframe for this can vary between 3 and 6 months following the surgery, depending on various factors, including the tumor’s prognosis, the defect's size, the progress of healing, and whether teeth are present [ 16 ]. The designs of obturators can differ depending on the classification system used to categorize the defects. A tripodal design was chosen in this specific case, considering the support provided by the remaining teeth and palate. The molars, first premolar, and canine were all incorporated into the design to enhance stability. The remaining palate was also covered to ensure proper distribution of functional loads during oral functions. Dental implants have revolutionized the field of prosthodontics, playing a crucial role in removable [ 19 - 21 ], fixed [ 22 - 24 ], and maxillofacial prostheses [ 25 , 26 ]. With their ability to provide stability, functionality, and aesthetic appeal, dental implants have transformed the lives of countless individuals, restoring their oral health and overall well-being. Enhancing the quality of life of hemimaxillectomy patients is a more difficult task than it is for patients with conventional prostheses. However, specialists with expertise, knowledge, and experience can achieve this goal. By implementing a team approach, utilizing skills and experience at each stage, and regularly evaluating the patient, the challenges faced by hemimaxillectomy patients can be effectively overcome [ 27 ].
Conclusions The primary challenge in a maxillectomy patient's recovery is ensuring adequate retention, stability, and support. A thorough understanding of the patient's needs and extensive expertise is critical in effectively rehabilitating these individuals. The patient's masticatory abilities, speech intelligibility, and overall quality of life can be significantly improved by designing a definitive obturator prosthesis with maximum coverage and appropriate design.
Maxillectomy defects can lead to oroantral communication, causing difficulties with chewing, swallowing, speech, and facial appearance. Prosthodontists play a crucial role in rehabilitating such defects using obturators. This case report presents the fabrication of a definitive obturator with a cast metal framework for a patient who had an acquired maxillary defect and previously experienced issues with an ill-fitting obturator. In this clinical report, the patient's canine teeth on both sides and the premolars and molars on the left side were used for rest placement. Retention was achieved by utilizing the remaining teeth, employing two embrasure Aker clasps on the left molars and premolars and a C-wrought wire clasp on the right canine. A complete palate was designed as the major connector to ensure optimal load distribution to the surrounding tissues. Additionally, an indirect retainer was planned for the right canine. This definitive prosthesis rehabilitated the patient, improving masticatory efficiency, enhancing speech clarity, and improving quality of life.
Case presentation A 70-year-old male patient was referred to the Department of Prosthodontics, Taibah University Dental Hospital, Madinah, Saudi Arabia, with a chief complaint regarding a previously fabricated, ill-fitting acrylic maxillary obturator. The Research Ethical Committee of the College of Dentistry, Taibah University, Madinah, Saudi Arabia, approved this study (approval # 14032022). The specific issues reported by the patient were inadequate retention and stability of the old obturator, leakage, and food accumulation underneath it. As a result, the patient desired to replace the obturator with a more suitable alternative. The patient had undergone a right maxillectomy due to the surgical removal of squamous cell carcinoma from the right maxillary sinus. Following the surgery, the patient received postoperative radiotherapy. Approximately six years ago, an obturator was fabricated for the patient to close the defect caused by the maxillectomy. The extra-oral examination revealed a Class III skeletal base, with no abnormalities detected in the examined lymph nodes, temporomandibular joint (TMJ), or face. The intra-oral examination revealed a surgical defect on the right side of the hard palate resulting from the right maxillectomy. According to Aramany's classification of maxillary defects, this defect is classified as Class II [ 12 ]. The gingiva on the intact side and the lower arch appeared healthy and pink, but with generalized recession. The remaining teeth exhibited a 16% bleeding index and a 34% plaque index. The occlusal examination revealed a Class III malocclusion, characterized by a 0.5 mm anterior open bite and a group function occlusion when the obturator was in place during both centric and eccentric occlusions. Additionally, there was a slight midline shift to the right side (Figure 1). The patient's diagnoses included an acquired palatal defect resulting from the surgical removal of a tumor, generalized plaque-induced gingivitis, acquired tooth loss, and a sub-optimal maxillary obturator that was causing leakage. The primary goal of the treatment was to close the communication between the oral and nasal cavities using an obturator. This would artificially block the unrestricted transfer of speech sounds, food, and liquids between these cavities. Additionally, the treatment aimed to enhance the aesthetics and function of the patient's oral cavity. The proposed course of treatment involved giving the patient oral health instructions (OHI), performing both supra- and subgingival scaling and polishing, offering guidance on using floss and interdental brushes, recommending a fluoridated mouthwash with 0.05% sodium fluoride (NaF), and suggesting the use of a toothpaste with a minimum of 1350 parts per million (ppm) of fluoride. Following these interventions, the plan was to provide the patient with a removable cobalt-chrome partial obturator for the maxilla. The maxillary and mandibular impressions were taken using an irreversible fast-setting hydrocolloid (Tropicalgin, Zhermack) after modifying the upper stock tray to ensure a better fit and blocking out undercuts with petrolatum-laden gauze. These impressions were poured with type IV dental stone to produce study casts (Figure 2). The maxillary cast was duplicated for future reference. The study casts were accurately surveyed to determine the design of the metal framework. Considering his functional and aesthetic requirements, a removable cobalt-chrome partial obturator for the maxillary arch was planned.
Following the jaw relation record, the casts were mounted on a semi-adjustable articulator. The remaining teeth and palate provided the necessary support. Both sides' canines, left-side premolars, and molars were used for cingulum and occlusal rest placement. Retention was achieved by utilizing the remaining teeth, with two embrasure Aker clasps on the left molars and premolars and a C-wrought wire clasp on the right canine. The placement of cingulum rest as an indirect retainer was planned in the right canine tooth. To ensure the functional load was evenly distributed, it was determined that the remaining palate should be fully covered (Table 1 and Figure 3 ). A special tray was made using cold-cure acrylic resin (Acrostone, Egypt) on the primary cast. Border molding was done using green stick compound (Dental Kerr Impression Compound, USA), and the final impression was taken using polyvinyl siloxane (PVS) material (Addition Silicon, Aquasil, Dentsply). The impression was poured with extra-hard type IV dental stone (Kimberlit, Type IV Dental Stone, Protechno-Spain) to generate the master cast. This master cast was duplicated to generate the refractory cast made of investment material, on which the framework wax-up was performed. The framework was then cast using cobalt-chromium alloy (Metal Brealloy, CO-CR alloy, Breadent-Germany) (Figure 4 ). The modified cast technique used PVS impression material to create a precise impression (Figure 5 ). The fit of the framework with the underlying structures was evaluated by placing it in the patient's mouth and using a pressure indicator paste to assist in the assessment. Occlusion rims were fabricated on the framework, and the centric jaw relation was recorded (Figure 6 ). The casts were then mounted on a semi-adjustable articulator (Bio-art semi-adjustable articulator. SM66297. Brazil). Acrylic denture teeth (Trubyte, Dentsply, Gloucestershire, England) were arranged, and the obturator was tested to ensure proper occlusion with the mandibular teeth, aesthetic appearance, and support for the underlying tissues. Subsequently, the obturator was processed, finished, and polished following standard procedures (Figure 7 ). During the insertion, pressure indicator paste (PIP) was used to identify any areas of excessive pressure. The denture was placed in the patient's mouth (Figure 8 ), and instructions were provided on the care and usage of the obturator. The patient underwent monthly evaluations for the first three months, followed by visits every three months for two years.
CC BY
no
2024-01-15 23:43:49
Cureus.; 15(12):e50578
oa_package/a9/16/PMC10788096.tar.gz
PMC10788114
38222994
Introduction Schizophrenia symptoms are described as positive, such as hallucinations and delusions, and negative, like decreased emotional expression, lack of motivation, and cognitive decline. Over time, schizophrenia may interfere with an individual's social life, increase the likelihood of unemployment, and decrease life expectancy by 10-20 years. Patients who have poor adherence to their medication regimen are more likely to discontinue and tend to relapse repeatedly. Although investigations into the causes of schizophrenia have been conducted over several decades, the condition remains poorly understood. Genetic involvement is suspected because the incidence of schizophrenia in identical twins is approximately 50% and the incidence of schizophrenia in the offspring is approximately 10 times higher when both parents have schizophrenia. Although various theories regarding the causes of schizophrenia have been reported, they have not yet led to direct clinical application [ 1 ]. The prevalence of schizophrenia is approximately 1%; however, environmental factors are thought to play a role in the onset of the disease. For example, the incidence of schizophrenia is higher in individuals who reside at high latitudes or in densely populated urban areas [ 2 ]. Patients with schizophrenia also present with various symptoms, including positive and negative symptoms. Therefore, in terms of treatment, it is difficult to objectively assess the severity of symptoms and determine the appropriate treatment options. In cases of severe psychomotor agitation, it is important to determine the appropriate treatment method, such as the intravenous (IV) administration of antipsychotic drugs and modified electroconvulsive therapy. Therefore, in the acute treatment of schizophrenia, prompt and appropriate management strategies should be selected, and biomarkers must be used to objectively determine the severity of symptoms and to predict treatment outcomes. One etiological theory of schizophrenia suggests a relationship with neuroinflammation. Neuroinflammation involves the activation of microglial cells and increased peripheral benzodiazepine receptor expression. Postmortem studies have reported that schizophrenia is associated with increased numbers of activated microglial cells. Recent studies measuring peripheral benzodiazepine receptors using positron emission tomography (PET) scans have also reported that neuroinflammation is an important factor affecting the onset of schizophrenia symptoms [ 3 ]. Inflammation may increase the permeability of the blood-brain barrier (BBB), permitting localized central nervous system lesions to spread to the periphery and allowing infectious pathogens to invade, further damaging the central nervous system [ 4 ]. Brain imaging tests are used to evaluate central nervous system functioning. However, such tests are challenging for acutely agitated patients to complete. Moreover, some psychiatric treatment facilities lack brain imaging equipment. Importantly, most treatment facilities have the necessary equipment to carry out blood tests to determine biomarkers. Among the various blood tests that could be useful, tests that examine immunological and inflammatory mechanisms may help examine patients with schizophrenia. For example, previous studies have examined changes in white blood cells (WBCs), particularly lymphocytes; however, no consensus has been reached. The neutrophil-lymphocyte ratio (NLR) has gained recent attention as a new biomarker for many diseases. 
NLR comprises a simple ratio of neutrophil-to-lymphocyte counts obtained from peripheral blood. NLR is a biomarker that links two aspects of the immune system: the innate immune response mediated by neutrophils and adaptive immunity mediated by lymphocytes. Furthermore, NLR is a prognostic predictor known to correlate independently with mortality in various diseases such as sepsis, coronavirus disease 2019 (COVID-19), and cancer [ 4 ]. The neutrophil-albumin ratio (NAR), platelet-lymphocyte ratio (PLR), and C-reactive protein (CRP)-albumin ratio (CAR) are other promising biomarkers for use in patients with cancer, sepsis, and heart failure [ 5 - 7 ]. Individuals with schizophrenia commonly have higher NLRs than healthy individuals [ 8 ]. Furthermore, they present with low lymphocyte counts and high neutrophil counts, indicating an imbalance in leukocyte distribution. Lymphocyte depletion is observed in inflammatory conditions due to the increased apoptosis of lymphocytes [ 9 ]. Therefore, these findings may contribute to our understanding of the inflammatory mechanisms associated with schizophrenia. Furthermore, the use of antipsychotic medications does not appear to affect the NLR [ 10 ]. Past studies found that the NLR did not decrease in patients with treatment-resistant schizophrenia but did decrease with treatment among patients with schizophrenia who were treatment-responsive. These findings suggest that changes in NLR may reflect the response of schizophrenia symptoms to treatment [ 11 - 13 ]. NLR has also been shown to correlate significantly with the Positive and Negative Syndrome Scale (PANSS) and Clinical Global Impression-Severity (CGI-S) scores, as well as with aggression, clinical symptoms, and disease severity [ 14 , 15 ]. We hypothesized that inpatients with more severe psychomotor agitation would have a higher NLR and examined the NLR as a biomarker for determining acute severity and selecting acute treatments for patients with schizophrenia. We compared patients admitted for acute treatment of schizophrenia according to whether or not they were treated with IV haloperidol, which is used for severe symptoms, to evaluate the NLR as a biomarker for assessing severity and determining the appropriate treatment. We retrospectively studied the medical records of patients with acute schizophrenia who required hospitalization and who had severe psychomotor agitation, refused oral medication, and required IV haloperidol treatment.
Materials and methods Participants The participants were selected from patients admitted to the psychiatric emergency unit of Showa University Northern Yokohama Hospital in Kanagawa, Japan, between January 2014 and December 2019. This hospital provides acute psychiatric inpatient treatment. All patients with schizophrenia were diagnosed according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) [ 16 ]. Patients younger than 16 years were excluded because it is difficult to confirm the diagnosis and adjust the appropriate dosage of antipsychotics in children and adolescents. Patients who were not being treated for a physical disease that could significantly affect blood counts were recruited. However, patients who died or were discharged, as well as those with inflammatory diseases (such as infections, metabolic diseases, advanced-stage cancer, trauma, and collagen diseases) or hematopoietic diseases (such as leukemia), were excluded to avoid the confounding effects of changes in WBC count and CRP levels. We retrospectively analyzed clinical data from the patients' electronic medical records. Following data extraction, we categorized the participants into two groups based on their antipsychotic treatment: those who required IV haloperidol upon admission and those who were administered oral antipsychotic drugs. Patients were prescribed IV haloperidol if they refused oral antipsychotic drugs or demonstrated extremely severe symptoms, including (1) an imminent risk of suicide attempts or self-harm; (2) pronounced hyperactivity or restlessness; and/or (3) a condition that, without treatment, would be life-threatening. Clinical data We analyzed various demographic and social factors, including age, sex, education, and marital and cohabitation status. We additionally gathered clinical data about the duration of the illness, the number of previous hospitalizations, CGI-S score at admission, acute psychiatric symptoms, chlorpromazine equivalent dose, and blood test results at admission. To evaluate the patient's psychiatric symptoms and condition, we used the criteria outlined in the hospitalization form prescribed in the Act on Mental Health and Welfare for Persons with Mental Disorders or Disabilities. The evaluation items included auditory hallucinations, visual hallucinations, delusions, association loosening, disorganized thinking, flat affect, depressed mood, restlessness, increased irritability, agitation, and stupor. Because this was a retrospective study, it had some limitations: the possibility of psychotropic medication bias, such as the use of other antipsychotics, mood stabilizers, and anxiolytics, and the presence of missing data on symptom rating scale scores and blood test results could not be ruled out. Blood cell indexes The following blood test results were analyzed: CRP and blood cell count (including WBC count, platelet count, neutrophil count, lymphocyte count, monocyte count, eosinophil count, and basophil count). We also calculated the NLR (neutrophil count/lymphocyte count), NAR (neutrophil count/albumin (g/dL)), CAR (CRP (mg/dL)/albumin (g/dL)), and PLR (platelet count/lymphocyte count) from the same blood test results. Statistical analysis All data analyses were performed using IBM SPSS Statistics for Windows, Version 28.0 (Released 2021; IBM Corp., Armonk, New York, United States). The Shapiro-Wilk test was used to check the normality of the study variables' distributions.
Pearson's chi-squared test was used to compare categorical variables. Student's t-test was used to compare normally distributed variables, while the Mann-Whitney U test was used for non-normally distributed variables. Continuous variables are presented as mean±standard deviation (SD), and categorical variables are presented as numbers and percentages. Statistical significance was set at p<0.05. Ethical compliance This research was carried out using inpatient medical records reflecting routine treatment protocols. We used deidentified IDs and computers disconnected from external networks to ensure patient confidentiality. All identifiable data were strictly concealed. In place of obtaining individual informed consent, we provided an "opt-out" option on our website, which informed users that their medical data would be used for research purposes and allowed them to decline participation. Patients were also given the option to consent or opt out, with assurances that their decision would not affect the clinical care they received. The study was designed according to the principles of the Declaration of Helsinki and was approved by the Institutional Review Board of Showa University Northern Yokohama Hospital (approval number: 19H042).
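For readers who want to see the blood cell indexes and the non-parametric group comparison described above in concrete form, the following minimal Python sketch is provided; it is an illustration under stated assumptions, not the authors' SPSS analysis, and every numeric value in it is a hypothetical placeholder rather than study data.

```python
# Minimal sketch (not the authors' code): computing the blood cell indexes
# defined in the Methods and comparing two groups with the Mann-Whitney U test.
# All numbers below are made-up placeholders, not study data.
from scipy.stats import mannwhitneyu

def blood_cell_indexes(neutrophils, lymphocytes, platelets, albumin_g_dl, crp_mg_dl):
    """Return NLR, NAR, CAR, and PLR for one blood panel."""
    return {
        "NLR": neutrophils / lymphocytes,    # neutrophil-lymphocyte ratio
        "NAR": neutrophils / albumin_g_dl,   # neutrophil-albumin ratio
        "CAR": crp_mg_dl / albumin_g_dl,     # CRP-albumin ratio
        "PLR": platelets / lymphocytes,      # platelet-lymphocyte ratio
    }

# Example panel (hypothetical values).
print(blood_cell_indexes(neutrophils=6000, lymphocytes=1500,
                         platelets=250_000, albumin_g_dl=4.2, crp_mg_dl=0.3))

# Non-parametric comparison of NLR between two hypothetical groups,
# mirroring the Mann-Whitney U test used for non-normally distributed variables.
iv_group_nlr = [4.1, 3.8, 5.2, 2.9, 6.0]
oral_group_nlr = [2.5, 3.1, 2.8, 3.4, 2.2, 3.0]
stat, p_value = mannwhitneyu(iv_group_nlr, oral_group_nlr, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
```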
Results We enrolled 262 inpatients diagnosed with schizophrenia, of whom 10 were excluded due to complications of infection and one was excluded due to death. Therefore, we analyzed 251 patients, of whom 102 were males and 149 were females. The mean±SD for NLR was 3.27±2.91, and for NAR, it was 1099.17±546. The study had 43 patients in the IV haloperidol group and 208 patients in the oral antipsychotic group (Figure 1 ). A post-hoc analysis indicated a statistical power of 0.845 for rejecting the null hypothesis that the population means of these two groups were equal. Demographic characteristics and background information of the participants The study cohort consisted of two groups: an IV haloperidol group (n=43; 17 males and 26 females) with a mean age of 42.65±13.12 years and an oral antipsychotic group (n=208; 85 males and 123 females) with a mean age of 41.34±13.83 years (Table 1 ). The groups were similar in age, sex, and other characteristics such as marital status, smoking habits, family history, and living situation. However, the IV haloperidol group showed significantly worse psychiatric symptoms at admission than the oral antipsychotic group, as indicated by the CGI-S score (p<0.05, as shown in Table 1 ). Additionally, the chlorpromazine equivalent dose was significantly lower in the IV haloperidol group than in the oral antipsychotic group (p<0.05, as shown in Table 1 ). Acute psychiatric symptoms There were significant between-group differences for the following symptoms: delusions (53.49% IV vs. 72.6% oral), disorganized thinking (32.56% IV vs. 12.98% oral), depressed mood (0% IV vs. 7.69% oral), restlessness (0% IV vs. 11.06% oral), and agitation (25.58% IV vs. 11.06% oral) (Table 2 ). The differences were statistically significant (p<0.05) for delusions, depressed mood, restlessness, and agitation and highly significant (p<0.01) for disorganized thinking (Table 2 ). However, there were no significant between-group differences for auditory hallucinations, visual hallucinations, association loosening, flat affect, increased irritability, and stupor (Table 2 ). Comparison of the blood cell indexes There were significant differences between the IV haloperidol and oral antipsychotic groups for WBC count (8175.35±3441.82 IV vs. 6680.58±2566.97 oral, Mann-Whitney U test, p<0.01) and neutrophil count (5992.96±2960.96 IV vs. 4337.54±2221.31 oral, Mann-Whitney U test, p<0.01) (Table 3 ). The group that received IV haloperidol also had a significantly higher NLR (4.03±3.55 vs. 3.11±2.89, Mann-Whitney U test, p<0.05) and NAR (1383.07±698.64 vs. 1037.67±546, Mann-Whitney U test, p<0.05) than those who received oral antipsychotics. However, the two groups did not significantly differ in terms of lymphocyte, CRP, CAR, or PLR levels (Table 3 ). Because lymphocyte counts and albumin levels did not differ between the two groups, the significant differences in NLR and NAR could be attributed to the neutrophil counts.
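The reported post-hoc power of 0.845 can be approximated with a standard two-sample power calculation; the sketch below is illustrative only and assumes a medium standardized effect size (Cohen's d = 0.5), since the effect size used in the original analysis is not stated in the text, and with group sizes of 43 and 208 this assumption reproduces a power close to the reported figure.

```python
# Rough reproduction of the reported post-hoc power (not the authors' calculation).
# Assumption: a medium effect size (Cohen's d = 0.5); the actual effect size used
# in the original analysis is not stated in the text.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(effect_size=0.5,      # assumed Cohen's d
                       nobs1=43,             # IV haloperidol group
                       ratio=208 / 43,       # oral antipsychotic group / IV group
                       alpha=0.05,
                       alternative="two-sided")
print(f"Estimated power: {power:.3f}")       # approximately 0.85
```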
Discussion This study examined potential biomarkers of imminent psychomotor arousal in patients with schizophrenia. Patients who refused oral antipsychotic drugs were offered IV haloperidol treatment. IV haloperidol was also used with patients who demonstrated extremely severe symptoms, including imminent suicide attempts or self-injurious behavior, hyperactivity, or restlessness, particularly if nontreatment would be life-threatening. In the IV haloperidol group, symptoms such as disorganized thinking and agitation were pronounced, and the CGI-S score was higher than that in the oral antipsychotic group at the time of admission. Similarly, NLR and NAR were higher in the IV haloperidol group than in the oral antipsychotic group. These patients demonstrated psychomotor agitation so imminent that they did not fully understand the need for antipsychotic treatment and, therefore, were administered IV haloperidol. We considered blood cell indexes as biomarkers of schizophrenia severity and treatment selection because such tests are easily performed in inpatient facilities. NLR and NAR can be calculated using only biochemistry and blood counts, commonly performed in blood tests and attracting attention as simple, inexpensive markers that can be tested at any facility [ 4 ]. Previous studies have suggested that NLR is higher in patients with schizophrenia than in healthy controls [ 8 , 15 ]. In some meta-analyses of NLR in schizophrenia, patients' NLR values were 2.63, and healthy controls' NLR values were 1.78 [ 17 ]. Patients' mean NLR was 2.03-3.24, whereas healthy controls had a mean NLR range of 1.6-2 [ 18 ]. Our study's mean of patients' NLR was 3.28±2.9. Compared with previous studies, this study's participants were considered typical patients with schizophrenia. Previous studies reported on relationships between NLR and schizophrenia symptom severity. In a study of 22 patients with schizophrenia, NLR was significantly correlated with PANSS-total, PANSS-positive, PANSS-general, and CGI scores and was reduced after long-term antipsychotic treatment. Increased NLR is associated with severe schizophrenia symptoms [ 14 ]. Labonté et al. found that patients with schizophrenia who were treatment-resistant showed no decrease in NLR; however, NLR did decrease following treatment among treatment-responsive patients with schizophrenia. NLR may, therefore, be useful for determining treatment response and assessing a patient's symptoms [ 12 ]. A positive correlation between aggression and NLR has also been reported. Patients with schizophrenia who are highly aggressive have higher pretreatment NLR than their nonaggressive counterparts. Therefore, NLR could be used as a biomarker to assess aggressive behavior [ 15 ]. Kulaksizoglu and Kulaksizoglu showed significant correlations between PANSS-total and NLR, indicating that NLR is related not only to the pathophysiology of schizophrenia but also to clinical symptoms [ 19 ]. Patients who required IV haloperidol treatment due to their psychomotor agitation had higher NLR values, CGI-S scores, and rates of symptoms such as perplexing thoughts and agitation. These findings are consistent with previous studies that showed a relationship between NLR and severity. Additionally, the study revealed new insights into the types of symptoms that are severe enough to require IV haloperidol treatment. NAR reflects hyperinflammation and is used to predict pathologic complete remission in patients with pancreatic and rectal cancer [ 6 , 20 , 21 ]. 
A previous study found that patients' NAR was higher compared to that of healthy controls; the authors considered NAR useful in diagnosing schizophrenia [ 21 ]. However, there are few studies of NAR in psychiatric disorders. The mean NAR in our participants was higher in the IV haloperidol group than in the oral antipsychotic group. Higher values for NAR and NLR appear to be associated with more severe psychiatric symptoms. Therefore, evaluating NLR and NAR from blood tests and seeing how the values fluctuate relative to psychotic symptom severity may help physicians with treatment planning. NLR values may be useful in determining the need for invasive therapeutic intervention when oral treatment is unavailable. Of course, our results should be considered alongside several study limitations. Firstly, this was a single-center study. Our cohort was from the psychiatric emergency unit of a general hospital; hence, it is an incomplete representation of the general population. Moreover, there was a bias toward patients with schizophrenia who had chronic illnesses, and the area had only a few elderly patients with schizophrenia. Secondly, our data were collected retrospectively. Therefore, we could not exclude the possibility of psychotropic medication bias, including the use of other antipsychotics, mood stabilizers, and anxiolytic drugs. Furthermore, some important data could have been missing. Because the data were collected over an extended period, there was a possibility of heterogeneity in patient information, such as changes in the tests and treatments used during that time. Thirdly, some symptom assessment data were missing. Importantly, only the CGI-S was administered to patients to assess psychiatric symptoms on admission and not after that; therefore, no pre-to-post comparisons were carried out. The Brief Psychiatric Rating Scale and PANSS were not measured in all patients and were not available to assess schizophrenia symptoms. Lastly, we only assessed NLR and NAR at admission because the timing of blood tests performed after hospitalization was irregular. Moreover, we did not analyze changes in NLR and NAR after symptom improvement and did not attempt to correlate NLR and NAR levels with changes in symptom severity. This study did not examine patients with schizophrenia who were aged <16 years. Recent studies have reported that patients with schizophrenia who are aged <18 years have a higher NLR than healthy controls and adults [ 22 , 23 ]. Therefore, a study design without age restrictions should be considered. To address these issues, additional studies that examine the psychiatric symptoms of schizophrenia are needed. Such studies should include the results of blood tests taken at admission, during treatment, and at discharge. Furthermore, the research should analyze the correlation between changes in NLR and NAR and changes in symptom severity. Our study found that patients with higher levels of psychomotor arousal had higher NLR and NAR values. This implies that blood tests may be useful for objectively assessing the level of psychomotor arousal during psychiatric exacerbations. However, we recommend validating these findings within the context of adequately powered, prospective studies.
Conclusions Herein, patients with acute schizophrenia and severe psychomotor agitation who could not receive oral antipsychotics and required hospitalization for IV haloperidol had higher CGI-S scores and increased NLR and NAR compared with patients who could receive oral treatment. In the IV haloperidol group, patients also had higher rates of psychiatric symptoms such as disorganized thinking and agitation compared with patients who could be orally treated. In clinical practice, there are many situations in which the lack of objective biomarkers of psychiatric symptoms makes it difficult to decide whether or not to provide IV antipsychotic treatment. In such cases, elevated NLR and NAR may be useful biomarkers for selecting IV haloperidol treatment, as they can be easily measured in patients with psychomotor agitation who require treatment. This study and previous reports showed that NLR and NAR could be objective indicators that are useful for disease diagnosis, severity determination, and treatment selection in patients with schizophrenia. Nevertheless, larger, multicenter, prospective studies should be performed to validate our results.
Introduction Schizophrenia symptom severity is linked to neuroinflammation. Certain blood cell indexes, such as the neutrophil-lymphocyte ratio (NLR) and neutrophil-albumin ratio (NAR), have been used as biomarkers in various diseases, including schizophrenia. In acute clinical practice, it is challenging to decide whether to provide intravenous antipsychotic treatment in some cases due to the lack of objective biomarkers of psychiatric symptoms. The NLR of individuals with schizophrenia is thought to be associated with disease severity, and changes in NLR may reflect a patient's response to antipsychotic treatment. We investigated the application of NLR as a biomarker for identifying acute severity and determining acute treatment response in patients with schizophrenia. Methods We retrospectively examined 251 inpatients diagnosed with schizophrenia, classified them according to treatment (intravenous haloperidol vs. oral antipsychotic medication during the acute phase), and investigated their NLR and NAR while receiving inpatient care. Results A total of 43 inpatients were given intravenous haloperidol to manage their acute symptoms; 208 were given oral antipsychotics. The intravenous haloperidol group experienced more severe symptoms, such as agitation and disorganized thinking, during the acute phase. Further, those who received intravenous haloperidol had significantly higher Clinical Global Impression-Severity (CGI-S) scores than the oral antipsychotic group. NLR and NAR were also significantly higher in the intravenous haloperidol group. Conclusion Elevated NLR and NAR can be easily measured at any facility in patients with psychomotor agitation who require treatment. Further, they are useful biomarkers for determining disease severity and the effects of treatment on psychomotor excitement in patients who require intravenous haloperidol.
CC BY
no
2024-01-15 23:43:50
Cureus.; 16(1):e52181
oa_package/1f/b3/PMC10788114.tar.gz
PMC10788115
38222236
Introduction Arthritis robustus (rheumatoid robustus) commonly occurs in men over the age of 50, particularly those who are physically active and involved in manual labor. They do not complain of pain, stiffness, disability, or distress, though clinical signs of inflammation, deformity, and radiological erosions are present. Synovial proliferation, subcutaneous nodules, periarticular erosions, and subchondral cysts are common, while periarticular osteopenia is rare compared to classical rheumatoid arthritis (RA) [ 1 ]. We detail a unique case of arthritis robustus with tenosynovitis, review published literature, discuss potential reasons for the unique presentation, and explore the clinical and therapeutic implications.
Discussion Arthritis robustus primarily affects elderly males actively involved in physical labor. Despite the presence of an inflammatory joint disease that aligns with this patient’s clinical profile, the level of pain and stiffness is minimal. Also, a history of smoking was a risk factor for the development of RA [ 2 ]. Our patient is a milkman who uses small hand joints during milking. Continuous use of these joints would affect the perception of pain and stiffness, reducing awareness of symptoms. De Haas et al. described nine male patients with classical RA who had subcutaneous nodules as well as high titers of seropositivity of both RF/anti-CCP [ 3 ]. Despite these characteristics, they were “robust, healthy, and working normally.” The duration of arthritis robustus and joint involvement showed no significant difference compared to controls, who were males experiencing both active disease and remission. However, all of them were involved in strenuous physical work, had a mesomorphic body structure, and demonstrated a tendency toward independence during psychological interviews. These authors interrogated if these clinical features could be explained by the “soft-hearted” treatment of some patients. It was reported, however, that “RA, typus robustus” men needed fewer analgesics and less physiotherapy. Chopra and Chib reported a case of arthritis robustus, masquerading as gout [ 4 ]. The term “arthritis robustus variant of RA” was applied to categorize four out of 20 young servicemen experiencing chronic inflammatory polyarthritides. It must be noted that these were physically active youthful men, and the prevalence of “robustness” was four out of 15 RA patients [ 5 ]. More recently, Prasad et al. reported a 58-year-old telephone wireman with an active lifestyle who had clinically evident arthritis without arthralgia, diagnosed as arthritis robustus only when he presented with myocardial infarction [ 6 ]. Thompson and Carr reported 10 patients, out of a randomly selected list of 100 RA patients, who did not complain of pain despite having clinical and biochemical evidence of inflammation [ 7 ]. Jones takes this discussion forward to propose the role of psychosocial factors in pain perception and management [ 8 ]. Earlier, authors have highlighted the correlation between pain threshold and analgesic usage in men with RA, as opposed to those with ankylosing spondylitis, where pain threshold does not influence analgesic requirement [ 9 ]. A large study looked into the discordance of self-reported symptoms with objective disease activity scores/inflammatory markers. This study looked at three cohorts, namely (1) the Early Rheumatoid Arthritis Network (ERAN), (2) the British Society for Rheumatology Biologics Registry (BSRBR), starting therapy with tumor necrosis factor (TNF) inhibitors, as well as (3) those on non-biologic medications. A subset of patients with discordantly better patient-reported outcomes (PRO) compared to inflammation was identified, including 11% in the ERAN cohort, 23% in the BSRBR cohort of TNF inhibitors, and 10% in (BSBR) non-biologic medications. This suggested that non-inflammatory factors may influence the interpretation of inflammation/pain, acknowledging the presence of the typus robustus RA phenotype in this subset. It is worth noting that the authors place a greater emphasis on the needs of another subset that reported discordantly worse (in contrast to arthritis robustus) PRO compared to inflammation (12%, 40%, and 21% in the three cohorts) [ 10 ]. 
The pathogenesis of RA is multifactorial, with putative inflammatory and non-inflammatory pathways contributing to its clinical phenotype. Individuals with arthritis robustus may have lower interleukin-6-mediated dysfunction, which alters their pain perception [ 11 ] (Table 2 ). Healthcare-seeking behavior, too, may influence the presentation of arthritis. Patients with less access to rheumatology care may tend to downplay their concerns; the same applies to individuals who cannot afford quality care. One differential diagnosis of arthritis robustus is leprosy, which can present with deforming arthritis and lack of symptoms due to sensory impairment [ 12 ]. Arthritis associated with sensory neuropathy, e.g., in syphilis, diabetes, and acromegaly, may also mimic arthritis robustus. Endemic conditions, such as nutritional and toxic neuropathies, may alter patients' perception of what is normal versus abnormal, delaying the recognition of symptoms and signs [ 13 - 15 ]. Thus, arthritis robustus can be explained by the biopsychosocial model of health [ 16 ] (Table 2 ). Arthritis robustus has important clinical implications. Delayed recognition may allow the disease to progress and complications, including deformities, to develop. It may also result in delayed identification of comorbid conditions, such as autoimmune disorders, osteopenia, and sarcopenia, potentially leading to further morbidity. An early diagnosis allows the timely institution of disease-modifying anti-rheumatic drugs (DMARDs) and appropriate lifestyle changes, which may optimize long-term health. The construct of arthritis robustus also allows healthcare professionals to individualize therapy in a person-centered manner [ 16 ]. Patients with arthritis robustus require motivation to adhere to treatment, encouraging a proactive approach to managing their condition and controlling disease activity.
Conclusions The current case of arthritis robustus has associated tenosynovitis, which is rare. Apart from highlighting the existence of this syndrome, this discussion also underscores the need to spread awareness about this variant of RA while understanding its clinical presentation, differential diagnosis, and management.
Rheumatoid arthritis (RA) commonly presents as a chronic additive symmetric inflammatory polyarthritis involving the small and large joints. Rarely do patients present with few or no clinical symptoms, despite apparent signs of inflammation. This condition, known as arthritis robustus, typically occurs in elderly males who are manual laborers with an active lifestyle. It is essential to diagnose arthritis robustus and start treatment promptly to avoid the development of deformities and other complications in the future.
Case presentation A 45-year-old male was referred to the rheumatology clinic for complaints of swelling in both wrists for a six-month duration. The swelling persisted without progressing, and there was no associated joint pain. However, the patient experienced 30 minutes of early morning stiffness in the wrists and small joints of the hands. There was no associated history of fever, back pain, skin lesions, diarrhea, or urethral discharge. He worked as a milkman and was a former smoker who smoked 20 hand-rolled cigarettes (bidi) daily for ~ 10 years. Past medical and surgical history was unremarkable. On examination, swelling with fluctuation was present on both wrists and the left third proximal interphalangeal (PIP) joint, associated with warmth but without tenderness. A firm, non-tender swelling of size 1 x 1 cm was present over the left distal ulna (Figure 1 ). Radiographs of the bilateral wrist confirmed joint space narrowing with erosions but without significant periarticular osteopenia suggesting long-standing inflammatory disease. Investigations were remarkable for high rheumatoid factor (RF >90 IU/mL), anti-cyclic citrullinated peptide (CCP) antibodies (>80 IU/mL), and serum C-reactive protein (CRP) (8.2 mg/L) values (Table 1 ). Musculoskeletal ultrasonography revealed tenosynovitis of the left extensor carpi ulnaris tendon and cortical irregularity in the phalanges. The patient fulfilled the 2010 ACR/EULAR classification criteria with a DAS28-CRP score of 3.25, suggesting moderate disease activity. The patient was counseled and educated regarding this condition and was initiated on methotrexate, bridge therapy with low-dose prednisolone, and nonsteroidal anti-inflammatory drugs (NSAIDs) for RA. He was provided with physical and occupational therapy. The dose of prednisolone was tapered on follow-up and discontinued over a period of two months. The patient did not report a need for analgesics on follow-up visits. At the third-month follow-up visit, the patient’s swollen joint count was reduced to zero with disease activity measure DAS28-CRP 1.64 (suggestive of remission).
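For context on the disease activity score cited in this case, the sketch below implements a commonly cited DAS28-CRP formula and its usual activity thresholds; only the CRP value (8.2 mg/L) is taken directly from the case report, the swollen joint count of three is inferred from the examination findings, and the tender joint count and patient global health score are illustrative assumptions chosen so that the result lands near the reported baseline score of 3.25.

```python
# Illustrative DAS28-CRP calculation (a commonly cited formula, not necessarily
# the exact calculator used by the authors). Inputs other than CRP are assumed.
import math

def das28_crp(tender28, swollen28, crp_mg_l, global_health_0_100):
    """DAS28-CRP = 0.56*sqrt(TJC28) + 0.28*sqrt(SJC28) + 0.36*ln(CRP+1) + 0.014*GH + 0.96"""
    return (0.56 * math.sqrt(tender28)
            + 0.28 * math.sqrt(swollen28)
            + 0.36 * math.log(crp_mg_l + 1)
            + 0.014 * global_health_0_100
            + 0.96)

def activity_category(score):
    # Conventional DAS28 cut-offs.
    if score < 2.6:
        return "remission"
    if score <= 3.2:
        return "low"
    if score <= 5.1:
        return "moderate"
    return "high"

# Hypothetical baseline inputs: no tender joints, three swollen joints (both wrists
# and one PIP), CRP 8.2 mg/L from the case, and an assumed global health of 72/100.
baseline = das28_crp(tender28=0, swollen28=3, crp_mg_l=8.2, global_health_0_100=72)
print(f"DAS28-CRP = {baseline:.2f} ({activity_category(baseline)})")
```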
Vijay Karthik Bhogaraju and Arnav Kalra equally contributed to this work and should be considered co-first authors.
CC BY
no
2024-01-15 23:43:50
Cureus.; 15(12):e50583
oa_package/a1/dd/PMC10788115.tar.gz
PMC10788116
38222203
Introduction The symbiotic relationship between host and microorganisms is extremely complicated, and it is important to better understand these cellular interactions [ 1 ]. Humans are colonized by the microbiota containing bacteria, archaea, fungi, and viruses; different body surfaces contain distinct members. Internal organs, such as the heart, once thought to be sterile, harbor organ-specific microorganisms [ 2 ]. Blood from healthy individuals has diverse microbiota [ 3 ], and specific bacteria even exist in immune cells [ 4 ]. The colon is home to the most diverse microbiota composed of 100 trillion bacteria containing about 25 times the number of genes as the Homo sapiens genome [ 5 ]. The gut microbiota plays an important role in crosstalk between gut microbes and host cells as microbial metabolites can cross the intestinal wall and enter the bloodstream. Over 200 microbial metabolites have been identified in local and distal systems of the body [ 6 ]. The gut is also home to about 70-80% of human immune cells [ 7 ], and the symbiotic relationships between the microbiota and cells of the immune system are important in numerous medical disorders [ 8 ]. It is important to better understand how bacteria and immune cells coexist in the gut. It is commonly assumed that mixing bacteria with human cells in culture would result in the bacteria overgrowing the cells. We assumed that the oxygen in cell culture media would inhibit the growth or kill the anaerobic bacteria, allowing cellular reactions to be measured. Experiments adding a commercial mixture of probiotic anaerobic bacteria to peripheral blood mononuclear cells (PBMC) in culture with atmosphere air, which contains about 18% oxygen, did not show gross signs of bacterial contamination, and the growth rates were similar to control cultures at 24 hours. Here, we demonstrate that the four probiotics induced small amounts of cytokines/chemokines in PBMC, and cultures containing phytohemagglutinin (PHA) showed higher cytokine concentrations. There is a lot to learn from these mixed cultures, as the combination of PHA and probiotic bacteria resulted in significantly higher concentrations of the pro-inflammatory cytokine IL-1β, whereas the combination of PHA and bacteria significantly decreased the production of the chemokine MCP-1.
Materials and methods Study design The study involved four commercial anaerobic probiotic bacterial products: Neuro Byome (NB), Metabo Byome (MB), Male/Female Byome (M/F), and Immuno Byome (IB) were obtained from Nutri-Biome (Ogden, Utah). The four probiotics were made by reconstituting 18 freeze-dried (Gram - and Gram +) anaerobic bacteria and growing them on individual agar plates (Table 1 ). The agar plates with bacteria were incubated under strict anaerobic conditions (90% nitrogen, 5% carbon dioxide, 5% hydrogen at 37 °C) in a Bactron EZ Anaerobic Chamber (Sheldon Manufacturing, Cornelius, OR, USA). Bacteria were scraped off the plates after one to five days of incubation depending on the growth of the particular strain. The bacteria were diluted in phosphate-buffered saline (PBS) to obtain an absorbance reading of about 1.0 at 600 nm on an Agilent 453E spectrophotometer (Santa Clara, CA, US). Equal numbers of each bacterial strain were combined to make the four probiotic products containing 4-6 × 10^8 bacteria/ml. Each strain was characterized by DNA typing with strain-specific PCR primers (Table 2 ). Human PBMCs from an individual donor were purchased from Cellular Technology Limited (CTL) (Shaker Heights, Ohio, USA). PBMCs contain a mixture of immune cells including T-cells, B-cells, NK cells, monocytes (blood macrophages), and dendritic cells. The complex mixture of immune cells in systemic PBMC makes it possible to study many immune interactions in vitro. The PBMCs were stored in the vapor phase of liquid nitrogen until the day of use. PBMC culture reagents were obtained from CTL, and company protocols were followed. A vial of PBMCs contains about 1 × 10^7 cells, which were plated at 2 × 10^5 cells/well in 200 μL of CTL media in flat-bottom 96-well plates. Ten microliters containing 4-6 × 10^6 of the four probiotic bacteria products were added to each well in quadruplicate and allowed to incubate overnight with PBMC cultures under room air containing about 18% oxygen. At 24 hours, there were no signs of contamination or bacterial overgrowth (no cytotoxic granules or anomalous morphologies). To compare the amount of inflammation introduced by these anaerobic bacteria, PBMC control cultures were inflamed with 10 μg/ml PHA. PHA is a well-known mitogen that binds to toll-like receptor 2 (TLR2) on T-cells and monocytes and causes the production of high levels of inflammatory cytokines, which are associated with inflammation [ 24 ]. The HIEC-6 normal small intestine epithelial cell line obtained from American Type Culture Collection (ATCC) #CCL-3266 was cultured in Minimum Essential Media (MEM) with GlutaMAX™ supplemented with 10 ng/ml EGF and 5% fetal bovine serum (FBS). HIEC-6 cells were plated at a density of 10,000 cells per well in 200 μl of the appropriate media in flat-bottom 96-well culture plates in quadruplicate, which reached about 80% confluency in 48 hours. Ten microliters containing 4-6 × 10^6 of the four probiotic bacteria were added to each well and allowed to incubate overnight at 37 °C in room air containing about 18% oxygen. Viability assay The XTT assay (Biotium, Fremont, CA, USA) was used to evaluate cell viability at the end of culture experiments. XTT is a colorimetric detection assay utilizing tetrazolium dye to measure cell viability by enzymatic activity in the mitochondria of living cells.
After the appropriate incubation time, 100 μl of the culture media from each well was removed for cytokine and chemokine analysis leaving 100 μl, to which 25 μl of the XTT reagent was added. The XTT plates were incubated at 37 °C for 120 minutes and the absorbance was recorded using a Tecan Genios plate reader (Mannedorf, Switzerland) that detects the absorption maximum (492 nm) of XTT. One hundred microliters of PBMC culture supernatants (absent of cells) of the quadruplicate wells were combined for a total of 400 μl and frozen. The cell culture supernatants were sent on dry ice to Quansys Bioscience (Logan, Utah, USA) for the determination of cytokines and chemokine concentrations using an enzyme-linked immunosorbent assay (ELISA) chemiluminescent immunoassay. Cytokines Pro-inflammatory cytokines include interleukin-6 (IL-6), interleukin-1β (IL-1β), granulocyte-macrophage colony-stimulating factor (GMCSF), and tumor necrosis factor-alpha (TNFα). Interleukin-8 (IL-8) and monocyte chemoattractant protein (MCP-1) are chemokines that attract innate immune cells, especially granulocytes to areas of inflammation. The 15-cytokine multiplex assay done at Quansys measures all cytokines at the same time by highly sensitive chemiluminescence. Statistical analysis One-way analysis of variance (ANOVA) followed by Dunnett’s multiple comparisons test was performed using GraphPad Prism version 10.0.0 for Mac (GraphPad Software, Boston, MA, USA, www.graphpad.com ).
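As a rough illustration of the statistical workflow described above (one-way ANOVA followed by Dunnett's multiple comparisons against a control), the following Python sketch reproduces the same steps with SciPy on invented XTT absorbance readings; it mirrors the GraphPad Prism analysis conceptually but is not the authors' code or data.

```python
# Sketch of one-way ANOVA followed by Dunnett's multiple comparisons against a
# control group, mirroring the GraphPad workflow described in the Methods.
# The absorbance values are invented placeholders, not experimental data.
# Requires SciPy >= 1.11 for scipy.stats.dunnett.
from scipy.stats import f_oneway, dunnett

control = [0.82, 0.79, 0.85, 0.81]      # PBMC alone (quadruplicate wells)
nb      = [0.78, 0.80, 0.76, 0.79]      # PBMC + Neuro Byome
mb      = [0.75, 0.77, 0.74, 0.78]      # PBMC + Metabo Byome
mf      = [0.80, 0.81, 0.79, 0.83]      # PBMC + Male/Female Byome
ib      = [0.73, 0.76, 0.75, 0.74]      # PBMC + Immuno Byome

# Omnibus test across all five conditions.
f_stat, p_anova = f_oneway(control, nb, mb, mf, ib)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test: each probiotic condition vs. the no-bacteria control.
res = dunnett(nb, mb, mf, ib, control=control)
for name, p in zip(["NB", "MB", "M/F", "IB"], res.pvalue):
    print(f"{name} vs control: adjusted p = {p:.4f}")
```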
Results We chose to study cellular viability and cytokine production after incubating PBMC with probiotic mixtures of anaerobic bacteria. With 70-80% of immune cells residing in the gut, it is important to understand the intricate interactions between the local microbiota and immune cells. First, there was no sign of contamination or bacterial overgrowth in any of the PBMC or HIEC-6 cultures. Adding the four probiotic samples containing different anaerobic bacteria stimulated the PBMC to produce up to a few thousand picograms/ml of the various cytokines/chemokines (Table 3 ). Adding PHA to the PBMC cultures stimulated the cells to produce higher amounts of the various cytokines, and combining PHA with anaerobic bacteria produced even higher levels of several cytokines. However, the PHA-bacteria combination induced significantly lower levels of MCP-1 compared to PHA alone (Table 4 ). Adding bacteria to the HIEC-6 cultures did not induce the production of pro-inflammatory cytokines or chemokines (data not shown).
Discussion There was a minor difference in cell viability between the control cultures with no bacteria and the cultures with the four probiotic mixtures, as measured by the XTT assay (Figures 1A , 1B ). This is not surprising, as bacteria have molecules on their surfaces that bind to pattern recognition receptors (PRR) on immune cells even if the bacteria are dead. These interactions between PRR on host cells and pathogen-associated molecular patterns (PAMPs) on bacteria are well recognized, and PAMPs exist in non-pathogenic bacteria [ 25 ]. PBMCs have toll-like receptors (TLRs) and C-type lectin receptors, which recognize bacterial PAMPs. Additionally, it is well known that cytokines are produced by microbe-immune cell interactions [ 26 ]. Cytokines are important chemical messengers that affect many cellular functions such as inflammation, cellular activation, and cellular proliferation [ 27 ]. The innate immune system has several defense mechanisms to detect and respond to invading microorganisms [ 26 ]. One important response is the induction of inflammation to eliminate the invading microorganism. However, one must remember that there are trillions of bacteria living in our bodies, and the countless strains of bacteria coexisting within us do not elicit the full-blown inflammation seen with certain pathogenic infections. Therefore, inflammation must be carefully controlled or regulated to prevent excess tissue damage [ 28 , 29 ]. The inflammatory cytokine (IL-1β) production was significantly higher in PHA plus bacteria compared to the PHA control (Figure 2A ), whereas the combination of PHA and bacteria significantly decreased the chemokine MCP-1 (Figure 2B ). Although this research strongly suggests that anaerobic bacteria do not overgrow PBMC when cultured overnight in media containing oxygen, there are many unanswered questions. For example, in the same experiment, the mixture of the four anaerobic bacteria with PHA resulted in a 4-7-fold increase in the production of IL-1β (Figure 2A ) and, conversely, a 2-4-fold decrease in the production of MCP-1 (Figure 2B ) relative to PHA alone. We also examined a non-immune cell line to determine whether the anaerobic bacteria affected growth patterns. HIEC-6 is a normal intestinal epithelial cell line whose in vivo counterparts reside in the vicinity of the bulk of intestinal bacteria. The HIEC-6 growth patterns with the four anaerobic probiotic bacteria were not significantly different from control cultures without bacteria (Figure 1B ), and no evidence of contamination was noted under careful microscopic examination. This is further evidence that the anaerobic bacteria do not rapidly expand in oxygen-containing media. The ability to examine anaerobic bacteria in overnight culture with living cells suggests numerous experimental possibilities. The PBMC used in these experiments contained a mixture of white cells, and it is unclear which cells are responding to the bacteria. Purified monocytes, T-cells, B-cells, dendritic cells, etc. should be examined to determine anaerobic bacterial effects. Also, PBMC or white cells from individuals with specific diseases could be examined for cytokine responses. Single anaerobic bacteria should be examined to determine specific cytokine effects. Culture experiments longer than overnight may show differences, and there may be stimulation of other cytokines or growth factors not measured in our current assays. Gene expression experiments could also be useful for determining bacterial effects on a particular cell line.
In our hands, the mixing of anaerobic bacteria with human cells in culture showed some interesting results. Limitations in new areas of research become apparent when different laboratories repeat similar experiments. The research presented here evaluated four probiotic mixtures of anaerobic bacteria. The first limitation of this work is the limited amount of data generated from this novel approach of combining anaerobic bacteria with cells in culture. The second limitation is that we examined only a small number of cytokines. Thirdly, we do not know which individual bacteria in the four mixtures are causing the cytokine effects, suggesting that cell culture experiments should be done using individual anaerobic bacteria. The last limitation is that we examined only a small number of bacterial strains compared to the thousands of anaerobic bacteria that exist in the intestines.
Conclusions The data presented here clearly suggest that anaerobic bacteria do not grow rapidly under oxygen-containing conditions in the PBMC cell culture experiments. The presence of anaerobic bacteria in PBMC cultures stimulates a weak pro-inflammatory response across several cytokines. The addition of PHA, and of PHA plus anaerobic bacteria, results in a more robust cytokine response. In the same experiments, PHA plus anaerobic bacteria significantly inhibits the chemokine MCP-1 response. The growth patterns of the normal cell line HIEC-6 are not strongly affected by the four anaerobic bacteria mixtures, nor do the mixtures elicit a robust cytokine response from these cells.
In the last couple of decades, much progress has been made in studying bacteria living in humans. However, there is much more to learn about bacteria immune cell interactions. Here, we show that anaerobic bacteria do not grow when cultured overnight with human cells under atmospheric air. Air contains about 18% oxygen, which inhibits the growth of these bacteria while supporting the cultivation of human cells. The bacteria cultured with human peripheral blood mononuclear cells (PBMCs) inflamed with phytohemagglutinin (PHA) greatly increased the production of proinflammatory cytokines like tumor necrosis factor-alpha (TNFα) while inhibiting the production of monocyte chemoattractant protein-1 (MCP-1), an important chemokine.
CC BY
no
2024-01-15 23:43:50
Cureus.; 15(12):e50586
oa_package/6c/1a/PMC10788116.tar.gz
PMC10788117
38222989
Introduction Bisphosphonates (BPs) are drugs similar to pyrophosphate that have been prescribed since the 1960s to treat various bone diseases [ 1 - 4 ]. It has been shown that BPs inhibit osteoclasts, resulting in reduced bone resorption and bone remodelling, which can lead to osteonecrosis of the jaw (ONJ), later termed BP-related osteonecrosis of the jaw (BRONJ) [ 5 , 6 ]. It was first reported in 2003 by Marx [ 7 ], and subsequent cases have been extensively reported in the scientific literature, leaving a significant impact on quality of life and substantial morbidity [ 8 - 12 ]. It presents as exposed bone persisting for eight weeks or more without any history of radiation therapy [ 4 , 13 ]. In 2014, the American Association of Oral and Maxillofacial Surgeons (AAOMS) modified the old terminology to medication-related osteonecrosis of the jaw (MRONJ) because of cases of ONJ related to other antiresorptive medications [ 5 ]. The etiology of MRONJ is not entirely understood, but there are several suggested risk factors, which include the length of time a patient has been on BP therapy, the method of administration, the age of the patient, a history of dentoalveolar surgery, the use of corticosteroids, and the presence of systemic disease such as diabetes mellitus [ 4 , 14 ]. There has been a gradual increase in the occurrence of complications associated with the use of these drugs. The understanding of the mechanisms behind MRONJ is still lacking, and various hypotheses have been proposed to explain why MRONJ specifically affects the jaws. The hypotheses proposed involve various factors that may contribute to the observed effects. These factors include the excessive suppression of bone resorption, changes in bone remodelling processes, ongoing microtrauma, inhibition of angiogenesis, vitamin D deficiency, suppression of acquired or innate immunity, presence of infection or inflammation, and the potential soft tissue toxicity of BPs [ 4 ]. In MRONJ, bone resorption and remodelling decrease as osteoclast differentiation and function are inhibited and apoptosis increases. In all skeletal bones, osteoclasts play a crucial role in bone remodelling and healing. Nevertheless, ONJ occurs in the mandible 73% of the time and in the maxilla 22.5% of the time [ 15 , 16 ]. This phenomenon may be explained by the higher remodelling rate of the jaws compared to other skeletal bones. Numerous clinical studies have indicated that ONJ is triggered by infection and inflammation. Biopsy samples of necrotic bone taken from patients with ONJ have been found to contain bacteria, specifically Actinomyces spp. [ 4 ]. MRONJ is mostly a drug-related disease, with risk factors including dosage, administration method, duration, and therapeutic indication. Other risk factors for this disease include surgical procedures, such as tooth extraction; comorbidities, such as diabetes; and the concurrent use of corticosteroids. Some patients receive these drugs for noncancer conditions such as osteoporosis, osteopenia, and Paget's disease and are considered low risk [ 17 ]. It has been found that cancer-free individuals without any additional risk factors who receive oral antiresorptives for less than four years have a relatively low risk of developing this disease. Low-risk patients can receive dental treatment without any modifications. However, cancer patients receiving antiresorptive therapy for multiple myeloma or bone metastases face a significantly higher risk [ 18 , 19 ].
The treatment of MRONJ involves a comprehensive approach that includes prevention, ongoing cancer care, preservation of bone health, and enhancing the quality of life for patients. Strategies involve taking proactive measures to prevent MRONJ, ensuring that individuals on specific therapies can continue their oncologic treatments without interruption, and prioritizing bone health to minimize the risk of fractures. In addition, patient education plays a crucial role in empowering individuals to actively participate in their care. Pain management, infection control, and preventive measures to stop the progression of lesions in the jaw are essential for improving comfort and minimizing potential complications [ 4 ]. Dentists have a significant impact on preventing BRONJ and MRONJ by offering preventive care and prioritizing preventive treatment before starting BP [ 4 , 11 , 20 , 21 ]. Therefore, dentists and physicians must possess sufficient knowledge about identifying potential complications and the appropriate treatment for patients who are at risk of MRONJ [ 21 ]. Guidelines for patients getting BPs on staging and treatment approaches were released by the AAOMS. These guidelines' primary goal was to give physicians a foundational understanding of BPs, MRONJ/BRONJ clinical characteristics and risk factors, and, most importantly, how to treat and prevent MRONJ/BRONJ. Regretfully, investigations have revealed that dentists have shown very poor knowledge about the care of patients receiving BP therapy, even in spite of these guidelines [ 5 , 6 ]. There have been limited studies exploring dental students' knowledge and awareness about MRONJ. However, no study has been conducted thus far to investigate their understanding of drugs associated with it. Therefore, the objective of this study was to analyse and assess knowledge about MRONJ among dental students and practitioners in the central region of Saudi Arabia.
Materials and methods An observational cross-sectional study was conducted to collect data from dental students and dentists in the central region of Saudi Arabia. Information was collected from participants with a valid and reliable questionnaire [ 22 ], using a non-probability convenience sampling method, during the period from October to December 2022. The Epi Info software (Centers for Disease Control and Prevention, Atlanta, Georgia, United States) was used to calculate the sample size, assuming a 50% incidence rate with a margin of error of 5% and a 95% level of confidence. The minimum sample required was 384; because of time constraints, we were able to collect responses from 250 participants. This study was approved by the Committee of Research Ethics of Qassim University (approval number: 21-12-03). Inclusion/exclusion criteria Dental students, graduates, and dental practitioners were included, whereas the general public, medical practitioners, and medical students were excluded from participation. Data collection method This study utilized a survey divided into five components (see Appendices section). The first section of the questionnaire contained six items pertaining to demographic information, namely, age, gender, college affiliation (graduated from or currently enrolled in), years of professional experience, and highest educational degree attained. The second component consisted of five items designed to assess participants' general knowledge of antiresorptive medications. The third component assessed participants' understanding of the therapeutic applications of antiresorptive and antiangiogenic drugs. In the fourth component, participants were assessed on their knowledge of the correct definition of MRONJ and its associated risk factors. The fifth section addressed the dental management of patients taking BPs. Statistical analysis The data were analysed using IBM SPSS Statistics for Windows, Version 22.0 (Released 2013; IBM Corp., Armonk, New York, United States). Age, gender, marital status, and educational background were represented as frequencies and percentages. A chi-squared test was applied to determine the association between dentists' and students' knowledge related to ONJ. Statistical significance was determined by a p-value of less than 0.05.
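For reference, the minimum sample of 384 quoted above follows from the standard single-proportion sample size formula n = Z²·p·(1−p)/e²; the short sketch below reproduces the Epi Info result under the stated assumptions (p = 0.5, margin of error 0.05, 95% confidence) and is an illustration, not the authors' software output.

```python
# Reproducing the quoted minimum sample size with the standard single-proportion
# formula n = Z^2 * p * (1 - p) / e^2 (same assumptions as stated in the text).
import math

z = 1.96        # z-value for 95% confidence
p = 0.50        # assumed prevalence/response proportion
e = 0.05        # margin of error

n = (z**2) * p * (1 - p) / e**2
# Prints 384.2 (rounded up, 385); the study reports 384, i.e., the truncated value.
print(f"n = {n:.1f}  -> rounded up: {math.ceil(n)}")
```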
Results A total of 250 participants were enrolled in the study. Of them, 128 (51.2%) were women, and 122 (48.8%) were men. Marital status revealed that most participants were single (198 or 79.2%) and 47 (18.8%) were married. Most participants (149 or 59.6%) were between the ages of 18 and 25, 82 (32.8%) were between the ages of 26 and 35, 13 (5.2%) were between the ages of 36 and 45, and only six (2.4%) were between the ages of 46 and 55. The colleges of dentistry at Qassim University, King Saud University, and Riyadh Elm University had 59 (23.6%), 55 (22%), and 39 (15.6%) participants, respectively. Additionally, 114 (45.6%) were students, whereas 136 (54.4%) were dentists, including dental interns, general practitioners, and specialists. Only 28 (11.2%) held a postgraduate degree (master's or PhD), as shown in Table 1 . The general knowledge of antiresorptive/antiangiogenic medications revealed that most of the dentists (119 or 87.5%) knew about BP drugs as compared to students (78 or 68.4%), with a significant difference found among them (p<0.05). Almost all of the dentists (121 or 89%) and about 81 (71.1%) students thought it was important to ask patients about their usage of antiresorptive/antiangiogenic medications, with a significant difference found between them (p=0.05). It was observed that the university was the primary source of information for both the dentists (97 or 71.3%) and students (70 or 61.4%). Regarding obtaining knowledge via variable additional sources (such as scientific journals and medical meetings), the dentists' group had a higher tendency to obtain knowledge than the students' group (p=0.136). Most of the dentists (117 or 86%) and 69 (60.5%) students believed BPs can lead to ONJ, with a significant difference between them (p<0.05). Furthermore, most dentists (115 or 84.6%) and only 79 (23.7%) students thought that patients should be checked by a dentist before starting intravenous BP treatment, with a significant difference among them (p=0.05), as shown in Table 2 . The study found that there was a general lack of knowledge regarding the therapeutic uses of antiresorptive and antiangiogenic medications in both dentists and students. Importantly, there were no significant differences between the two groups in terms of their knowledge (p=0.552). The data reveal that bone metastasis is the most commonly recognized therapeutic use of antiresorptive therapy among students, accounting for 25 (21.9%) of the responses. Furthermore, dentists primarily associate antiresorptive therapy with treating osteopenia and osteoporosis, which accounted for 28 (20.6%) of the responses. Interestingly, 62 (54.4%) students and 42 (30.9%) dentists could not identify BPs' active principle or commercial name. Out of all the listed BP medications, alendronate (Fosamax, Merck & Co., Rahway, New Jersey, United States) was the most recognized, followed by zoledronate (Zometa, Novartis, Basel, Switzerland), with a significant difference between them (p<0.05). Most of the dentists (77 or 56.6%) and students (73 or 64%) did not know that any other medications could lead to ONJ, with an insignificant difference among them (p=0.288), as shown in Table 3 . Regarding knowledge of the correct definition of ONJ, only a small proportion of dentists (30 or 22.1%) and students (25 or 21.9%) knew the correct definition of MRONJ according to the AAOMS, but an insignificant difference was observed among them (p=0.779). 
Regarding the risk factors of MRONJ, tobacco was the most recognized by 28 (20.6%) dentists and 19 (16.7%) students, with an insignificant association among them (p=0.409) as shown in Table 4 . Regarding the level of knowledge about the dental management of patients receiving BP therapy, most dentists (80 or 58.8%) and 58 students (50.9%) did not think invasive dental treatment could be performed safely on patients during intravenous BP therapy. In comparison, 32 (23.5%) dentists and 10 (8.8%) students thought that patients on intravenous BP therapy could possibly undergo invasive dental procedures without risk, with a significant difference between them (p<0.05). Conversely, there was an insignificant difference observed between dentists and students that patients who are on oral BP therapy for a duration of less than four years and in the absence of any risk factors could safely undergo invasive dental treatment (p=0.186). Additionally, 47 (34.6%) dentists and 39 (34.2%) students recognized that taking oral BP therapy less than four years will make invasive dental treatment unsafe for such patients, with an insignificant difference observed between dentists and students (p=0.851). In addition, 42 (30.9%) dentists and 20 (17.5%) students indicated that for patients who are on oral BP therapy for more than four years, invasive dental treatment could be performed safely, with a significant difference observed between dentists and students (p=0.048). Most of the dentists (111 or 81.6%) and 91 (79.8%) students wanted to learn more about the ONJ; however, an insignificant association was noticed between them (p=0.913), as shown in Table 5 .
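To make the chi-squared comparisons reported above concrete, the sketch below re-runs one of them in Python (knowledge of BP drugs: 119 of 136 dentists vs. 78 of 114 students, using the counts reported earlier in the Results); this is an illustrative re-computation, not the authors' SPSS output.

```python
# Illustrative chi-squared test on one reported 2x2 comparison:
# knowledge of BP drugs among dentists (119/136) vs. students (78/114).
# Counts are taken from the Results text; this is a re-computation, not SPSS output.
from scipy.stats import chi2_contingency

table = [
    [119, 136 - 119],   # dentists: knew / did not know
    [78, 114 - 78],     # students: knew / did not know
]

chi2, p_value, dof, expected = chi2_contingency(table)  # Yates' correction is applied by default for 2x2 tables
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")   # p well below 0.05, consistent with the reported result
```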
Discussion MRONJ is a significant and debilitating adverse medication reaction observed in individuals undergoing prolonged treatment with antiresorptive or antiangiogenic drugs, affecting the mandible more commonly than the maxilla. Having a sufficient understanding of MRONJ is essential to enhance treatment results and mitigate the problems linked to these drugs. Based on the available information, only a scant amount of research has investigated the extent of knowledge of MRONJ among dental healthcare providers and dentistry students. This study was conducted to evaluate the knowledge of dentists and students regarding MRONJ to improve patient care. In the sample of 250 participants, this study found that most of the dentists (87.5%) knew about BP drugs, which is broadly in line with previous research conducted in Saudi Arabia, although the studies by Almousa et al. and Al-Eid et al. revealed comparatively lower levels of knowledge among dentists, with percentages of 66.5% and 60.8%, respectively. Comparable findings were observed in Al-Maweri et al.'s study conducted among dentists, which showed that 70% knew about MRONJ [ 24 ], and other investigations found that 83.3% of dental professionals and 99% of students reported possessing knowledge of BPs [ 18 , 22 , 23 , 25 ]. University teaching was the primary source of knowledge attained by dentists (71.3%) and students (61.4%). In general, the group of dentists tended to gain knowledge from many external sources, including the media, scientific journals, and professional gatherings, in contrast to the student group. One possible explanation for this phenomenon is that the dentist group may have a higher likelihood of seeing patients at risk of MRONJ and actively engaging in continuing medical education (CME) programs. The data indicate that 89% of dentists consider it necessary to inquire about patients' use of antiresorptive/antiangiogenic medications, in contrast to 71.1% of students. The overall knowledge of dentists and students with regard to the therapeutic uses of antiresorptive and antiangiogenic medications was low. The most common therapeutic use recognized by students was bone metastasis (21.9%), whereas dentists most commonly recognized osteopenia and osteoporosis (20.6%). In the current study, it was observed that a significant portion of the participants lacked knowledge about the specific antiresorptive medications despite the inclusion of both the generic and brand names of these medications. The studies conducted by de Lima et al. and Almousa et al. among dentists and dental students revealed similar results, indicating that most (86%) participants could not identify the commercial brand names of BP medication [ 20 , 22 ]. Regarding BP side effects, studies have revealed that alendronate (Fosamax) and zoledronate (Zometa) can cause osteonecrosis, whereas a large number of dentists (56.6%) and students (64%) in our sample were unaware that other medications might induce ONJ; the difference in knowledge between these two groups was statistically insignificant. Rosella et al. and Almousa et al. have reported similar findings regarding the impact of well-known BP medications [ 22 , 26 ].
The process of identifying medications is crucial to minimizing the potential of providing care without fully understanding the associated risks. Regarding knowledge of the precise definition of ONJ, only a small proportion of dentists (22.1%) and students (21.9%) possessed this knowledge. The AAOMS defines MRONJ as the presence of exposed bone or bone that can be probed through a fistula in the maxillofacial region that persists for more than eight weeks in patients who have been treated with antiresorptive or antiangiogenic agents, without a history of radiation therapy to the jaws or evident metastatic disease in the jaws [ 4 ]. The results demonstrate similarity with Almousa et al.'s, Al-Eid et al.'s, and Al-Maweri et al.'s studies, emphasizing the limited understanding of the clinical characteristics of MRONJ [ 22 - 24 ]. On the contrary, Spanish dentists and dental students exhibited more significant levels of knowledge because of their greater familiarity with the accurate definition of this disease, as reported by López-Jornet et al. [ 27 ]. Lack of understanding about the definition can lead to delayed detection or unwarranted treatments, thereby heightening the likelihood of more serious complications. The participants' responses regarding the risk factors were inadequate because less than 50% of them correctly identified the risk factors. According to the data, a significant percentage of dentists (58.8%) and students (50.9%) hold the belief that invasive dental procedures may not be safe for patients undergoing intravenous BP therapy. The data indicate that a higher percentage of dentists (23.5%) compared to students (8.8%) believed that the task could be carried out without any risks. The findings from our study are consistent with the results of Almousa et al.'s study [ 22 ]. Overall, the findings indicate a lack of awareness about patient management, resulting in deferring necessary treatment when the risk is low while attempting high-risk treatments without taking the appropriate precautions. The present study acknowledges several limitations. The sample size was relatively small and limited to specific locations in Saudi Arabia. The outcomes reported may lack representativeness for dentists nationwide. Future studies should include larger sample sizes and broaden their sampling to include other regions in Saudi Arabia.
Conclusions The findings of the present study warrant increased emphasis on educating students and dentists about this disease. Dentists and students are strongly encouraged to attend continuing education courses that focus on preventing and treating this condition in patients undergoing BP therapy.
Background: Bisphosphonates (BPs) are often used in treating benign and malignant disorders. Medication-related osteonecrosis of the jaw (MRONJ) is a significant problem that arises from the long-term use of BPs. Objective: In this study, we assessed the knowledge of students and dentists about MRONJ in the central region of Saudi Arabia. Methods: A cross-sectional study was conducted to collect information from dental students and practitioners from the central region of Saudi Arabia. A valid, reliable, and structured questionnaire was used to gather data using a non-probability convenience sampling technique. IBM SPSS Statistics for Windows, Version 22.0 (Released 2013; IBM Corp., Armonk, New York, United States) was used to analyse the data. Descriptive data were expressed as frequencies and percentages, and a chi-squared test was applied to evaluate the association between dentists and students with respect to overall knowledge of osteonecrosis of the jaw. Results: In total, 250 individuals completed the questionnaire. The general knowledge of antiresorptive/antiangiogenic medications showed that most dentists (87.5%) and students (68.4%) knew about BP medications. A general lack of understanding about the therapeutic uses of antiangiogenic and antiresorptive medications was demonstrated by the participants. A significant proportion of dentists (58.8%) and students (50.9%) were not convinced that invasive dental procedures can be safely performed on patients receiving intravenous BP therapy. A significant proportion of the participants in the sample were unclear about the principal diseases that antiresorptive and antiangiogenic medications target. A mere 22% of respondents were aware of the accurate definition of MRONJ. Conclusion: There is insufficient knowledge about MRONJ among students and practitioners. Therefore, these findings suggest increased emphasis should be placed on educating dentists and students about this condition to ensure patients receive the best possible care.
Appendices
CC BY
no
2024-01-15 23:43:50
Cureus.; 16(1):e52165
oa_package/9d/7f/PMC10788117.tar.gz
PMC10788118
38222198
Introduction Leukemia is a malignancy of the bone marrow that arises from the abnormal proliferation and differentiation of hematopoietic stem cells. This results in the accumulation of immature or abnormal blood cells in the marrow and peripheral blood [ 1 ]. Ocular manifestations of leukemia can impair vision. Of all leukemic ocular involvements, leukemic retinopathy is the most common, occurring in up to 50% of patients [ 2 - 4 ]. Further classification divides leukemic retinopathy into primary and secondary retinopathy. Primary retinopathy is characterized by direct retinal infiltration of cancerous leukocytes [ 5 , 6 ]. Secondary retinopathy is a sequela of leukemic hematological abnormalities, including thrombocytopenia, anemia, and hyperviscosity [ 5 ].
Discussion Leukemia is a systemic hematological disease, with retinal involvement as the most common ocular manifestation [ 2 - 4 ]. Leukemic retinopathy may arise from direct infiltration of cancerous leukocytes. It can also be a consequence of leukemia-induced hematologic abnormalities, which manifest as intraretinal hemorrhages (e.g., dot-blot hemorrhages, flame hemorrhages, Roth spots), preretinal hemorrhages, and cotton-wool spots [ 5 , 6 ]. Besides CEL, leukemic retinopathy has been noted in other leukemias such as acute lymphoblastic leukemia (ALL), acute myeloid leukemia (AML), chronic myeloid leukemia (CML), and adult T-cell leukemia [ 1 ]. In this case, leukemic retinopathy was one of the first presenting signs of CEL, which allowed our patient to receive more immediate treatment. Regular follow-up with oncology specialists and ophthalmologists is necessary to monitor the patient’s condition and treatment efficacy. The presence of ocular involvement in leukemia indicates aggressive systemic disease and portends a poor prognosis [ 7 , 8 ]. Ohkoshi and Tsiaras reported that the five-year survival rate was significantly lower in leukemia patients with leukemic retinopathy on presentation than in those without ophthalmic involvement (21.4% vs. 45.7%) [ 7 ]. This is due to a higher likelihood of central nervous system (CNS) involvement in patients exhibiting ophthalmic manifestations of leukemic retinopathy, which is a poor prognostic factor [ 7 ]. Abu el-Asrar et al. prospectively evaluated the prognostic importance of retinopathy in adult and pediatric leukemia patients, reporting that the three-month mortality rate of patients with cotton-wool spots is eight times higher than that of patients without these retinal lesions. Cotton-wool spots are a product of occluded precapillary arterioles and resultant retinal ischemia, which signify a disease state that is clinically and hematologically active [ 8 ]. Therefore, the presence of any retinal hemorrhage or cotton-wool spot in a patient with no apparent systemic cause should prompt physicians to order a complete blood count, including a WBC differential, to rule out leukemia and other hematologic irregularities [ 9 ]. Once the diagnosis of leukemia has been established, treatment of leukemic retinopathy involves treating the underlying cause with systemic chemotherapy [ 6 ]. Imatinib, a BCR-ABL tyrosine kinase inhibitor, has shown promise in the systemic treatment of various hematological diseases, including myeloproliferative neoplasms with eosinophilia that have evidence of PDGFRA rearrangement. Allopurinol, a xanthine oxidase inhibitor, is often added to imatinib for prophylaxis against tumor lysis syndrome [ 8 ]. Treatment regimens with imatinib have not only produced hematological improvement, but cases have also shown resolution of retinal hemorrhages and retinal infiltrates on fundus photography as early as the one-month follow-up [ 10 , 11 ]. After induction chemotherapy, physicians may consider additional therapies, such as hydroxyurea and leukapheresis, for leukemic treatment. Hydroxyurea given orally at a dose of 50-100 mg/kg daily can reduce the absolute WBC count by 50-80% within 48 hours [ 12 ]. Leukapheresis, the direct removal of WBCs from circulation, is another adjunct therapy and has been shown to improve VA in patients with retinal involvement [ 13 ].
External radiation therapy may be indicated for cases of optic nerve and/or orbital involvement; however, it should be used sparingly due to the risk of radiation-induced retinopathy and cataracts [ 8 ]. In most cases, treatment of leukemia by systemic chemotherapy or radiation resolves primary and secondary leukemic retinopathy within the first two months [ 11 , 14 , 15 ]. If vitreoretinal leukemic infiltration, which can manifest as vitreous cell clumping or yellow-white subretinal infiltrates, persists despite systemic therapy, chemotherapeutic agents, such as methotrexate, can be injected intravitreally [ 15 ]. This approach may reduce systemic chemo-drug toxicity when considering additional systemic chemotherapy [ 8 ]. Sequelae of untreated leukemic retinopathy include choroidal neovascularization and tractional retinal detachments. In patients with choroidal neovascularization, intravitreal anti-vascular endothelial growth factor (VEGF) agents may be used. If persistent vitreoretinal hemorrhages, vitreomacular traction, or retinal detachments arise in the setting of leukemic retinopathy, a pars plana vitrectomy is indicated [ 9 ].
Conclusions In cases of unexplained retinal hemorrhages, a high index of suspicion for blood dyscrasias should warrant hematologic evaluation. This patient's visual complaints resulted in a workup that led to the diagnosis of eosinophilic leukemia, with prompt treatment allowing for a favorable prognosis. Ophthalmologists should thus be alert to retinal presentations of leukemia, as a comprehensive eye exam may lead to timely diagnosis and early intervention.
Leukemia is a systemic malignancy that can compromise various physiological functions, including vision. We report a case of a 37-year-old male presenting with worsening bilateral central vision loss, fatigue, shortness of breath, and ankle edema. Ophthalmic examination revealed extensive retinal hemorrhages, Roth spots, and subhyaloid hemorrhages, consistent with leukemic retinopathy. Further hematologic workup confirmed chronic eosinophilic leukemia. The patient showed systemic and visual improvement after prompt treatment with imatinib. This case highlights the importance of ophthalmological assessment in diagnosing leukemia, as ocular manifestations may often be the first sign of hematological disease.
Case presentation A 37-year-old Hispanic male presented with a two-day history of progressively worsening central vision in both eyes. The patient’s past medical history was significant only for diet-controlled hyperlipidemia. There was no past ocular history and no relevant family history. The patient worked as a forklift operator and denied alcohol and illicit drug use. Upon review of systems, the patient revealed that he had been having fatigue, shortness of breath on exertion, and bilateral ankle swelling for two weeks. He denied fever, chills, and recent weight loss. The best corrected visual acuity (BCVA) was counting fingers at 3 feet bilaterally. Pupils, intraocular pressure, confrontational visual fields, and motility were within normal limits bilaterally. Ishihara color plates were 0/8 in the right eye (OD) and 2/8 in the left eye (OS). Slit lamp examination was significant for bilateral conjunctival pallor. Dilated fundoscopic examination revealed Roth spots, macular edema, perivascular cotton-wool spots, extensive intra-retinal and pre-retinal hemorrhages, and chronic subhyaloid hemorrhages bilaterally (Figure 1 ). Vitreous was clear and optic nerves were sharp, pink, and without evidence of infiltration bilaterally. Differential diagnoses included infectious, inflammatory, and neoplastic etiologies. A hematologic workup revealed a high white blood cell (WBC) count (425 k/mm 3 ) with elevated eosinophils and myelocytes (42%), as well as anemia (hemoglobin 6.2 g/dL, hematocrit 16.8%, mean corpuscular volume (MCV) 112 fL) and thrombocytopenia (15 k/mm 3 ). Hematologic markers concerning for tumor lysis syndrome included hypocalcemia (8.2 mg/dL) and elevated lactate dehydrogenase (1063 U/L) while potassium levels were within normal limits (4.0 mEq/L). Infectious workup, blood cultures, fungal workup, and viral panels were negative. Bone marrow biopsy showed markedly increased eosinophils without an increase in blasts, consistent with chronic eosinophilic leukemia (CEL). The aspirate smears revealed a markedly increased eosinophilic component including mature segmented forms and precursors (42%). Immunochemistry showed an atypical myeloid population expressing CD13, CD33, CD11b, CD11c, CD9, and CD38. Flow cytometry was negative for HLA-DR, CD15, CD16, CD64, CD14, and immature markers (CD34, CD117). Polymerase chain reaction (PCR) testing came back negative for BCR-ABL and demonstrated a CHIC2 gene deletion, indicating a favorable prognosis with tyrosine kinase inhibitor therapy. PCR also revealed a PDGFRA gene rearrangement. The patient was admitted to the oncology service for treatment with leukapheresis, granulocyte colony-stimulating factor (G-CSF) injections, hydroxyurea, and imatinib, with allopurinol added for tumor lysis syndrome prophylaxis. He was given infection prophylaxis for seven days: acyclovir 400 mg PO BID, fluconazole 200 mg PO once a day, and ciprofloxacin 500 mg PO BID. The inpatient treatment regimen for CEL consisted of the following: two rounds of leukapheresis, several blood and platelet transfusions, one dose of G-CSF, allopurinol (300 mg PO once a day), hydroxyurea (500 mg q12 hours for 4 days), and imatinib (unspecified starting dose lowered to 100 mg PO once a day due to leukopenia). Upon discharge two weeks later, maintenance therapy of daily 100 mg imatinib led to normalized WBC count and resolution of fatigue and dyspnea. BCVA improved to 20/30 OD and 20/25 OS at the seven-month follow-up. 
Macular edema and retinal hemorrhage resolved after treatment when the patient was seen at his seven-month follow-up (Figures 2 , 3 ), with residual foveal exudate present in the right (Figures 2A , 3A ) and left (Figure 3B ) eyes.
CC BY
no
2024-01-15 23:43:50
Cureus.; 15(12):e50587
oa_package/69/f0/PMC10788118.tar.gz
PMC10788119
38222160
Introduction The term "musculoskeletal disorders" (MSDs) refers to a group of periarticular conditions that affect the musculoskeletal system and primarily cause functional discomfort and everyday pain [ 1 , 2 ]. MSDs are more prevalent among schoolteachers, who are required to spend long periods of time standing, sitting, and engaging in repetitive tasks, such as grading papers or typing on a computer [ 3 ]. These disorders can affect the teacher's physical health, causing pain, discomfort, and movement limitations, ultimately impacting their ability to perform their job effectively [ 4 ]. In addition to physical discomfort, MSDs can significantly impact a teacher's mental health and job performance. Teachers with MSDs may experience increased stress and anxiety, reduced job satisfaction, and difficulty meeting their job demands [ 5 ]. Moreover, MSDs can lead to absenteeism and decreased productivity, ultimately impacting student learning and achievement [ 1 , 6 ]. In Saudi Arabia, a study conducted in 2020 by Alqahtani et al. [ 7 ] surveyed 261 high school teachers and found that 73% of the respondents reported experiencing MSDs in at least one body part, with the highest prevalence being in the lower back (47.5%), neck (42.5%), and shoulders (33.3%). The study also found that female teachers reported a higher prevalence of MSDs compared to male teachers. These teachers reported decreased work efficiency (73.2%), increased absenteeism (69.5%), and decreased job satisfaction (68.2%) [ 7 ]. In another study done in Saudi Arabia, 79.2% of participants reported experiencing MSDs in at least one body part, with the highest prevalence being in the neck (68.3%) and lower back (59.4%); this study also reported decreased productivity (68.3%), increased absenteeism (57.4%), and decreased job satisfaction (46.5%) [ 8 ]. The high prevalence of MSDs among teachers in Saudi Arabia indicates a need for further studies identifying the specific problems teachers face so that ergonomic and other interventions can be designed to reduce this burden. To our knowledge, there are no studies exploring MSDs among schoolteachers in Buraydah City, and therefore, there are no data on the prevalence of MSDs among Buraydah City school teachers, their risk factors, or their impacts on performance. This study aimed to explore the prevalence and associated risk factors of MSD among teachers in Buraydah City, Saudi Arabia. The findings of this study could guide measures and strategies such as ergonomic interventions, physical activity programs, and education and awareness campaigns to prevent and manage MSDs among schoolteachers in Buraydah City and Saudi Arabia in general.
Materials and methods Study design and settings An analytic cross-sectional study was conducted for three months, from April 1 to June 30, 2023, in all schools in Buraydah City, Saudi Arabia, targeting all school teachers and other school workers in Buraydah City. Sample size The minimum sample size (n) was calculated as n = Z²×P×Q/D², where Z is the z-score for a 95% confidence level (1.96), P is the assumed proportion (50%, used to maximize the sample size), Q = 1 − P (50%), and D is the margin of error (0.05). Thus, n = (1.96)² × 0.5 × 0.5 / (0.05)² ≈ 384. The minimum calculated sample size to achieve a precision of ±5% with a 95% confidence interval was therefore 384 teachers. To compensate for possible inaccurate responses and incomplete questionnaires, we recruited 400 teachers. Sampling technique A multistage random sampling technique was used to select participants. All schools in Buraydah City were divided into four clusters based on their location in the city (East, West, North, and South). Teachers from each cluster were further divided into two strata based on their gender (male and female). Finally, 50 teachers were selected by systematic random sampling from each subcluster, making 400 participants in total. Data collection instrument and procedure We used a validated, self-administered questionnaire with four parts. The first part of the questionnaire had questions regarding socio-demographics, such as age, gender, marital status, duration of experience in teaching, reported weight and height, medications or physical therapy for MSD, and any other diseases. The second part inquired about MSDs using the Arabic version of the standardized Nordic Musculoskeletal Disorder Questionnaire [ 9 , 10 ]. The third part inquired about working conditions, including how many hours per day were spent standing, how many lectures were given, and what postures were adopted when teaching. The fourth part inquired about the effect of MSDs on daily life activities and work, absenteeism, and sick leaves. The investigators visited schools, sought permission from authorities, and attended teachers’ morning staff meetings to recruit them. Before data collection, participants were given all information about the study, including the study aims and objectives, and invited to participate voluntarily. Statistical analysis Both descriptive and inferential statistical analyses of the data were carried out. Simple frequencies and percentages of the sociodemographic characteristics and other categorical variables were calculated and tabulated. Percentages were also calculated for multiple-answer questions. For continuous variables, the median and IQR were reported as measures of central tendency and dispersion, respectively. To find any significant association between categorical variables, Fisher’s exact test was applied and interpreted. For continuous variables, the Kruskal-Wallis test was used to compare medians. Furthermore, to identify factors predicting MSD symptoms, a binary logistic regression model with multiple predictors was created. The results of the model were presented as adjusted odds ratios (AOR). Statistical significance was established at a p-value of 0.05 or less with a 95% confidence interval. All statistical calculations were performed using IBM SPSS Statistics for Windows, Version 27.0 (Released 2020; IBM Corp., Armonk, NY). Ethical considerations This study was approved by the ethics committee of Qassim province (Ref. No.: H-04-Q-001).
Written consent was requested from participants before data collection. The study investigators requested approval from competent authorities and permission from the selected schools. The information obtained in this study was not disclosed to the hospital, legal or financial authorities, or anyone else outside the study. The questionnaire collected anonymous information; no identifying data were collected, and participants had the right to withdraw.
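As a cross-check of the sample-size formula given in the Methods above, the short sketch below reproduces the calculation; the use of Python and scipy is an assumption for illustration only, since the study itself performed its analyses in SPSS.
from scipy.stats import norm
# n = Z^2 * P * Q / D^2 with a 95% confidence level, P = Q = 0.5 and a 5% margin of error
z = norm.ppf(0.975)                       # ~1.96
p_hat, margin = 0.50, 0.05
n = z**2 * p_hat * (1 - p_hat) / margin**2
print(round(n, 1))                        # ~384.1, reported as 384 in the Methods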
Results As indicated in Table 1 , the total number of study participants was 787; among them, 648 were teachers, and the remaining 139 were other people working in schools. The median age was 43 years, and the median years of experience were 16. The gender distribution was 65.1% female and 34.9% male. Most participants (89.2%) were married. The school level distribution was 29.9% high school, 22.2% intermediate, and 47.9% primary school. Non-smokers accounted for 94.4%. The BMI distribution included 27.6% normal, 33.4% obese, 37.4% overweight, and 1.7% underweight participants. Regular exercise was undertaken by 41.7%, while 58.3% did not exercise regularly. Most (61%) had no chronic diseases, while 14.3% had osteoarthritis. When asked about experiencing troubles such as aches, pains, discomfort, or numbness in the past 12 months, most (78.4%) of the other school staff and 83.3% of teachers responded positively. Among other school staff, 71.9% reported being prevented from carrying out normal activities due to these troubles, compared to 73.6% of teachers. In the past 12 months, 55.4% of other staff and 59% of teachers had sought medical consultation, and within the last seven days, 83.5% of other staff and 79.9% of teachers reported still having the symptoms (Table 2 ). The most prevalent musculoskeletal problem was lower back discomfort (46%), followed by neck pain (38%). Notably, shoulder, ankle, and foot troubles shared a prevalence of 26% (Figure 1 ). Among the other school staff, those who experienced MSD in the past 12 months reported a median of two days absent from work due to muscle or joint pain, compared to 0 days for those who did not experience such pain. Among teachers, those who experienced MSD in the past 12 months reported a median of three days of absence, while those without such pain reported zero days. The most used treatments were topical analgesics (40%), massage (36%), and oral analgesics (30%) (Figure 2 ). The teachers who considered changing their jobs showed a significantly higher percentage of pain compared to those who did not consider changing their jobs (p<0.001) (Table 3 ). Table 4 shows that teachers who reported being prevented from normal activities due to MSD, who reported aches, pain, discomfort, or numbness, and who sought physician consultation for MSD in the past 12 months were significantly more likely to suffer from major depressive disorders than those who did not report such problems (p<0.001, p<0.007, and p<0.018, respectively). Teachers with MSD symptoms within the last seven days were significantly more likely to have a major depressive disorder (p<0.001) (Table 4 ). The chi-square test showed that MSD prevalence significantly increased with age (p<0.001). Females had a higher prevalence of MSD (67.0%) compared to males (33.0%) (p<0.001). Working hours, including fixed rest times (sitting), significantly affected MSD prevalence (p=0.002), and years of job experience were significantly associated with MSD prevalence (p=0.041) (Table 5 ). A multiple regression model was created to evaluate the risk factors associated with MSD (Table 6 ). Age showed a significant association with MSD (aOR: 1.070, 95%CI: 1.009-1.136, p=0.025); each one-year increase in age was associated with a 7% increase in the odds of experiencing pain. Females experienced higher MSD compared to males (aOR: 2.581, 95%CI: 1.617-4.121, p<0.001). Regarding the impacts of MSD and its associated factors, females had 2.906 times higher odds of experiencing disability due to MSD compared to males (p<0.001) (Table 7 ).
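The percentage statements in the regression results follow directly from the adjusted odds ratios; the minimal sketch below (illustrative only, not part of the original SPSS analysis) shows the conversion.
# An adjusted odds ratio (aOR) of x corresponds to a (x - 1) * 100 % change in the odds.
aor_age, aor_female = 1.070, 2.581
print(f"age: {(aor_age - 1) * 100:.0f}% higher odds of MSD per additional year")   # ~7%
print(f"female vs male: {(aor_female - 1) * 100:.0f}% higher odds of MSD")         # ~158%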
Discussion The study's findings shed light on the significant prevalence of MSD among teachers and other school staff and would also help educational institutions and policymakers take measures to promote safe and healthy working conditions for teachers to prevent the development of MSDs and improve their work performance. The gender distribution displayed a majority of females (65.1%), highlighting the gender composition in the teaching profession and in teachers with MSD reported by other studies [ 1 ]. The prevalence of MSD, as indicated by the self-reported symptoms, such as pain and discomfort, varied across body regions. Neck and lower back pain were particularly common, affecting 38% and 46% of participants, respectively. These results are comparable to other Saudi Arabian studies from Abba (59.2%) [ 11 ], Dammam (63.8%) [ 6 ], and a national survey (66.9%) [ 12 ]. In contrast, a Japanese study showed a lower prevalence of back pain (20.6%) [ 13 ]. Notably, the prevalence of MSD was high across multiple regions, underscoring the need for comprehensive interventions. Our findings showed that most participants in both groups experienced symptoms such as aches, pains, discomfort, or numbness in the past 12 months. Among the other school staff, 78.4% reported such symptoms, while the teachers had a slightly higher prevalence at 83.3%, similar to the findings of other studies [ 3 , 13 , 14 ]. Furthermore, the study examined whether these symptoms prevented participants from carrying out normal activities. It was observed that 73.6% of the teachers were affected, and 59.0% of the teachers had seen a physician for their condition in the past 12 months. These findings align with another study conducted in Cairo, Egypt [ 15 ]. The findings from this study underscore the substantial impact of musculoskeletal disorder (MSD) on both work performance and individuals' job considerations. Among teachers, the median number of absent days for those with MSD was three days, while those without such pain reported none. This significant difference (p<0.001) further substantiates the impact of MSD on work attendance. A similar study conducted in Italy found that there was a greater intention to call off work and leave the job among those diagnosed with MSD [ 16 ], aligning with another study conducted in Qassim, Saudi Arabia, that also showed a significant relationship between absenteeism and pain [ 17 ]. Moreover, the association between job considerations and MSD among teachers was also significant (p<0.001). A higher percentage (94.3%) of participants contemplating job changes reported experiencing pain, highlighting the strong link between MSD and the inclination to explore alternative job options. Similar findings were reported by previous studies [ 13 , 18 ]. Altogether, these findings emphasize the multifaceted influence of MSD, encompassing both absenteeism and job-related decision-making, underscoring the importance of holistic interventions to mitigate its effects and enhance overall workplace well-being. Age exhibited a positive and significant association with MSD (p=0.025). This finding underscores the influence of age on the likelihood of experiencing pain and emphasizes the need for age-sensitive interventions. Similar findings were reported by other studies conducted in Saudi Arabia [ 6 , 18 ]. However, another study conducted in Saudi Arabia presents a contrasting result, showing no association of pain with age [ 19 ].
Gender emerges as a significant predictor, with females reporting higher MSD rates compared to their male counterparts. The substantial difference highlights the gender-based disparities in MSD experiences. These findings are comparable to the findings of other studies conducted in Saudi Arabia and Cairo [ 15 , 18 ]. Targeted interventions could enhance pain management strategies. Marital status, level of school, smoking habits, BMI categories, and fixed rest times were not significantly associated with MSD. These results suggest that these factors might not be associated with MSD. However, some of our study's findings contradict the previously conducted studies, which show that weight is significantly associated with MSD [ 6 , 17 ]. While BMI categories do not independently predict MSD, the lack of regular exercise marginally increases the odds of experiencing pain. Although not statistically significant, further studies might focus on this as a potential intervention. Interestingly, the presence of major depressive disorder significantly correlates with a higher MSD prevalence (P<0.001). This association signifies the intricate interplay between mental and physical well-being, emphasizing the need for holistic approaches to address both conditions simultaneously. Teachers with major depressive disorders reported more MSD-related symptoms, activity limitations, and medical consultations. This is in contrast to the results of a study that shows no correlation between depression and MSD [ 20 ]. However, MSD severity's association with MDD was not significant, underscoring the complex interplay between mental and physical health in educators. Age and years of experience, though showing a slight trend, do not exhibit a significant correlation with disability, aligning with another previous study indicating that these factors might not strongly influence such outcomes [ 17 ]. Gender emerges as a significant determinant, with females facing higher disability rates. This could be attributed to biological differences, differing job roles, or varied coping mechanisms. Participants with major depressive disorders exhibited significantly higher odds of experiencing disability due to MSD, emphasizing the need for comprehensive health assessments and integrated care approaches. Similar findings were also reported by the study, showing the negative impact of MSD on quality of life among elementary school teachers [ 21 ]. Some limitations of this study include its cross-sectional design, which is unable to establish causal relationships between risk factors and MSD. The reliance on self-reported data for MSD might introduce recall bias or social desirability bias, affecting the accuracy of reported pain levels and associated factors. Finally, the study's findings might not be easily generalizable to teachers in other regions or countries due to cultural, organizational, or educational system differences.
Conclusions This study showed a high prevalence of MSD among teachers, highlighting the importance of addressing this issue for the well-being of educators. The study identified key risk factors associated with MSD, including age and gender. Major depressive disorder (MDD) was also found to influence MSD. These findings emphasize the need for targeted interventions to alleviate pain and promote the overall health of teachers. Considering the high prevalence of MSD among teachers, implementing ergonomic interventions is crucial. Designing classrooms with adjustable furniture and promoting proper posture during teaching could reduce the strain on muscles and joints. Given the association between age and MSD, interventions should be tailored to different age groups. Younger teachers might benefit from preventive measures, while older teachers could benefit from pain management strategies. Recognizing the gender-based differences in MSD, interventions should be designed to address the unique needs of male and female teachers. This might involve providing targeted exercises, workshops, or resources. Since MDD is linked to higher MSD prevalence, adopting an integrated approach to mental and physical health is essential. Collaboration between healthcare professionals specializing in both domains can yield comprehensive solutions.
Introduction: Musculoskeletal disorders (MSD) pose a significant challenge to the well-being and productivity of individuals and various occupational groups, including teachers. Among teachers, the prevalence of MSD has raised concerns globally, impacting their daily activities and overall quality of life. Buraidah and Saudi Arabia, like many other regions, face the implications of this issue. This study aimed to explore the prevalence and associated risk factors of MSD among teachers in Buraydah, providing valuable insights into the extent of the problem and potential areas for intervention. Methodology: An analytic cross-sectional study was conducted for three months, from April 1 to June 30, 2023, using the Arabic version of the standardized Nordic Musculoskeletal Disorder Questionnaire. This study was conducted in all schools in Buraydah City, Saudi Arabia. The study population was all schoolteachers (including principals, vice principals, etc.) in Buraydah City. The study analyzed responses from 648 teachers and 139 school workers using statistical tests, including chi-square tests and logistic regression models. Results: The results indicated a notable prevalence of MSD among teachers, with a significant association found between age, gender, and major depressive disorder (MDD) and MSD. The study reveals that females are at higher risk of MSD compared to males, emphasizing the need for gender-specific interventions. Moreover, the presence of MDD is identified as a significant contributor to MSD among teachers. However, certain demographic and lifestyle factors, such as marital status, level of school, smoking habits, and fixed rest times, do not show significant associations with MSD. Although age and years of experience are correlated, only age is found to significantly contribute to MSD. Regular exercise and BMI also do not emerge as significant contributors, although a lack of exercise shows a marginal impact. Conclusion: This study's findings have implications for educational institutions and policymakers, highlighting the need for tailored interventions to address MSD among teachers. It underscores the importance of ergonomic interventions, gender-sensitive approaches, and mental health support.
CC BY
no
2024-01-15 23:43:50
Cureus.; 15(12):e50584
oa_package/88/00/PMC10788119.tar.gz
PMC10788122
0
INTRODUCTION Transporters are transmembrane proteins that mediate selective uptake or export of solutes, metabolites, ions and drugs across the plasma membrane (PM) or other organellar membranes. Secondary active transporters of the PM couple the transport of substrates against their concentration gradients with the transport of other solutes down their concentration gradients. All PM transporters, despite their structural, functional and evolutionary differences, operate via an alternating access model [ 1 ], where substrates bind to transporters from one side of the PM and elicit conformational changes, which lead to the opening of the transporter on the other side of the membrane, thus enabling release of substrates [ 2 – 4 ]. Mechanistic variations of the alternating access model, known as the rocker-switch [ 5 ], rocking-bundle [ 6 ] or elevator-type [ 7 ] mechanisms, reflect major structural differences among specific transporters. Notably, however, in all three mechanistic models, transporters undergo significant structural changes from an outward-facing to an inward-facing conformation, and vice versa, via a series of substrate-occluded structures [ 4 ]. The Amino Acid-Polyamine-Organocation (APC) superfamily is one of the largest and most ubiquitous families of secondary transporters [ 8 , 9 ], including well-studied transporters of biomedical interest, such as the neurotransmitter transporters DAT (dopamine) and SERT (serotonin), or transporters mediating the uptake of nucleobase-related drugs (e.g. 5-fluorouracil or 5-fluorocytosine). APC transporters are characterized by a 5+5 α-helical inverted repeat (known as the 5HIRT or LeuT fold) formed by ten continuous transmembrane segments (TMS1-10). The ten TMSs are arranged in two discrete domains, the so-called ‘hash’/scaffold domain (TMS3, TMS4, TMS8, TMS9) and the ‘bundle’/core domain (TMS1, TMS2, TMS6, TMS7). In this arrangement, TMS5 and TMS10 function as dynamic gates controlling access to and release of substrates/cations from a central binding site. The substrate binding site in APCs is made up of residues located in TMS3, TMS6, TMS8 and TMS10, at the interface of the hash and bundle domains. Based on this fold, all APC transporters function via variations of the rocking-bundle model, where the outward-facing to inward-facing conformational change occurs by the relative motion between the bundle and hash motifs, which underlies substrate accessibility and release [ 10 – 14 ]. It has been suggested that substrate binding in the outward-facing conformation is assisted by the simultaneous binding of a positively charged ion (Na+ or H+), which elicits the conformational change of the protein towards the inward-facing conformation. Recent high-resolution structures further showed that water molecules shape and stabilize the substrate-binding site and affect the functioning of gates in the bacterial APC-type transporter AdiC [ 15 ]. Notably also, the cytosolic N- and C-terminal regions of several APC transporters have been shown to be involved in intramolecular interactions that are critical for function, substrate specificity or transporter turnover [ 16 ]. Most APC transporters possess two ‘extra’ TMSs at their C-terminal part (e.g. TMS11 and TMS12), the role of which does not seem to be directly related to substrate transport catalysis.
Although the majority of APC structures have been resolved as monomeric transporters, in some cases it has been proposed that these two extra C-terminal TMSs might be critical for APC oligomerization. For example, the AdiC transporter has been crystallized as a stable dimer where the homodimer interface is formed by non-polar amino acids from TMS11 and TMS12 [ 15 ]. In this case, however, the role of APC oligomerization remains unclear, as each monomer seems to be a self-contained transporter [ 15 , 17 ]. LeuT has also been crystallized as a dimer via TMS9 and TMS12, and possibly TMS11 [ 18 , 19 ], but to our knowledge there is no information on whether dimerization is necessary for transport activity. There is also evidence, via co-immunoprecipitation [ 20 ], crosslinking studies [ 21 ] and FRET experiments [ 22 , 23 ], that DAT and SERT transporters oligomerize. Similar results supporting oligomerization have been reported for rGAT1 and glycine transporters [ 24 , 25 ]. However, among the aforementioned cases, only in SERT have TMS11 and TMS12 been shown to be implicated in oligomerization in vivo [ 26 ]. The APC superfamily includes the Nucleobase Cation Symporter 1 (NCS1) group, of which several fungal and plant transporters have been extensively studied at the genetic and functional level [ 27 – 34 ]. In particular, work performed in Aspergillus nidulans , a filamentous fungus that has been developed into a model system for studying transporters [ 35 – 37 ], has unveiled important knowledge on the regulation of expression, subcellular trafficking, turnover, transport kinetics and substrate specificity of NCS1 transporters [ 27 – 31 ]. All A. nidulans NCS1s function as H+ symporters selective for uracil, cytosine, allantoin, uridine, thiamine or nicotinamide riboside and secondarily for uric acid and xanthine. Previous studies have modeled A. nidulans NCS1s using the homologous prokaryotic Mhp1 benzyl-hydantoin/Na+ transporter [ 10 ] as a structural template, and assessed structure-function relationships via extensive mutational analyses. These studies have identified the substrate binding site and substrate translocation trajectory, and revealed important roles of the cytosolic N- and C-terminal segments in regulating endocytic turnover, transport kinetics and, surprisingly, substrate specificity of NCS1 transporters [ 27 – 31 ]. Noticeably, all characterized functional mutations in NCS1 transporters map to specific TMSs of the 5+5 inverted repeat fold or to the cytosolic terminal regions of these transporters. Here, we systematically investigate the role of TMS11 and TMS12 in the most extensively studied NCS1 transporter of A. nidulans , namely the allantoin-uracil-uric acid/H+ FurE symporter. We show that two specific aromatic residues in TMS12 are essential for ER-exit and traffic to the PM, apparently via structural interactions with specific residues in the core domain (TMS1-10) that catalyzes transport. We subsequently provide genetic evidence that TMS11-12 is essential for oligomerization and/or partitioning of FurE into specific membrane microdomains to achieve ER-exit and traffic to the PM.
MATERIAL AND METHODS Media, strains and growth conditions Standard complete (CM) and minimal media (MM) for A. nidulans growth were used. Media and supplemented auxotrophies were used at the concentrations given in http://www.fgsc.net [ 49 ]. Glucose 1% (w/v) was used as carbon source. 10 mM sodium nitrate (NO 3 ) or 10 mM ammonium tartrate were used as standard nitrogen sources. Allantoin, uric acid and 5FU were used at the following final concentrations: 5FU at 100 μM; uric acid and allantoin at 0.5 mM. All media and chemical reagents were obtained from Sigma-Aldrich (Life Science Chemilab SA, Hellas) or AppliChem (Bioline Scientific SA, Hellas). A Δ furD::riboB Δ furA::riboB Δ fcyB::argB Δ azgA Δ uapA Δ uapC::AfpyrG Δ cntA::riboB pabaA1 pantoB100 mutant strain, named Δ7, was the recipient strain in transformations with plasmids carrying furE alleles, based on complementation of the pantothenic acid auxotrophy pantoB100 and/or the pabaA1 auxotrophy [ 50 ]. A pabaA1 strain (para-aminobenzoic acid auxotrophy) was used as a wt control. A. nidulans protoplast isolation and transformation were performed as previously described [ 51 ]. Growth tests were performed at 37°C for 48 h, at pH 6.8. Standard molecular biology manipulations and plasmid construction Genomic DNA extraction from A. nidulans was performed as described in FGSC ( http://www.fgsc.net ). Plasmids, prepared in E. coli , and DNA restriction or PCR fragments were purified from agarose 1% gels with the Nucleospin Plasmid Kit or Nucleospin ExtractII kit, according to the manufacturer's instructions (Macherey-Nagel, Lab Supplies Scientific SA, Hellas). Standard PCR reactions were performed using KAPATaq DNA polymerase (Kapa Biosystems). PCR products used for cloning, sequencing and re-introduction by transformation in A. nidulans were amplified by the high-fidelity KAPA HiFi HotStart Ready Mix (Kapa Biosystems) polymerase. DNA sequences were determined by VBC-Genomics (Vienna, Austria). Site-directed mutagenesis was carried out according to the instructions accompanying the Quik-Change® Site-Directed Mutagenesis Kit (Agilent Technologies, Stratagene). The principal vector used for most A. nidulans mutants is a modified pGEM-T-easy vector carrying a version of the gpdA promoter, the trpC 3' termination region and the panB (and pabaA for co-transformations) selection marker. Mutations were constructed by oligonucleotide-directed mutagenesis using appropriate forward and reverse primers (see Supplementary Table S6 ). Protein Model Construction The FurE model was constructed by homology modeling using Prime 2018-4 (Schrödinger, LLC, New York, NY, 2018) on the Maestro platform (Maestro, version 2018-4, Schrödinger, LLC, New York, NY, 2018). Mhp1, which shares 35% similarity with FurE, was used as the template in three conformations: outward-facing (2JLN), occluded (4D1C) and inward-facing open (2X79). The models shown here are presented with PyMOL 2.5 ( https://pymol.org ). Molecular Dynamics (MD) Protein model construction and MD simulations are described in detail elsewhere [ 52 ]. In brief, homology models of FurE were constructed based on Mhp1 crystal structures 2JLN, 4D1B, 2X79. Each model was inserted into a lipid bilayer using the CHARMM-GUI tool and the resulting system was solvated using the TIP3P water model with a final NaCl concentration of 150 mM. Calculations were conducted using GROMACS software, version 2019.2, and the CHARMM36m force field [ 53 , 54 ].
The protein orientation in the membrane was calculated using the PPM server ( http://amber.manchester.ac.uk , [ 55 ]). The system was first minimized to obtain stable structures and then equilibrated for 20 ns by gradually heating and releasing the restraints. The resulting equilibrated structures were then used as initial conditions for production runs of 100 ns at a constant pressure of 1 atm and a constant target temperature of 300 K, using the Nose-Hoover thermostat and Parrinello-Rahman semi-isotropic pressure coupling. Transport assays Kinetic analysis of wt and mutant FurE was performed by estimating initial uptake rates of [3H]-uracil (40 Ci mmol−1, Moravek Biochemicals, CA, USA), as previously described [ 50 ]. In brief, [3H]-uracil uptake was assayed in A. nidulans conidiospores germinating for 4 h at 37°C, at 140 rpm, in liquid MM, pH 6.8. Initial velocities were measured on 10^7 conidiospores/100 μL incubated with 0.75 μM [3H]-uracil at 37°C. Initial velocities (rates) were measured at time points of 1 or 2 min. All transport assays were carried out in at least two independent experiments, with measurements in triplicate. Results were analysed in GraphPad Prism software. Epifluorescence microscopy Samples for standard epifluorescence microscopy were prepared as previously described [ 56 ]. In brief, sterile 35 mm μ-dishes with a glass bottom (Ibidi, Germany) containing liquid minimal media supplemented with NaNO 3 and 1% glucose were inoculated from a spore solution and incubated for 16 h at 25°C. The images were obtained using an inverted Zeiss Axio Observer Z1 equipped with an Axio Cam HR R3 camera. Image processing and contrast adjustment were made using the ZEN 2012 software, while further processing of the TIFF files was made using Adobe Photoshop CS3 software for brightness adjustment, rotation, alignment and annotation. The GFP-fluorescence intensity ratio (PM/cytosolic) was calculated using the ICY software [ 57 ]. The areas of the plasma membrane and the cytosol were manually highlighted and the intensity of GFP-fluorescence was measured. For nuclear staining, the DAPI dye was added to the growth medium at a final concentration of 0.002 mg/ml. The strains of interest were incubated with the dye for 20 minutes (25°C) and then washed with liquid minimal medium before observation. Data Availability Statement Strains and plasmids are available upon request. The authors affirm that all data necessary for confirming the conclusions of the article are present within the article, figures, and tables.
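For illustration, the PM/cytosolic fluorescence ratio described above can be expressed as a simple mean-intensity ratio over the two manually outlined regions. The sketch below is a hypothetical reimplementation in Python/NumPy; in the study the regions were drawn and measured in ICY, and the masks here are placeholders.
import numpy as np
def pm_to_cytosol_ratio(image, pm_mask, cyto_mask):
    """Mean GFP intensity in the PM region divided by that in the cytosolic region."""
    return image[pm_mask].mean() / image[cyto_mask].mean()
# Toy example with placeholder masks (in practice, masks come from manually drawn ROIs).
rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, size=(64, 64))
pm_mask = np.zeros((64, 64), dtype=bool); pm_mask[0, :] = True
cyto_mask = np.zeros((64, 64), dtype=bool); cyto_mask[20:40, 20:40] = True
print(round(pm_to_cytosol_ratio(img, pm_mask, cyto_mask), 2))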
RESULTS Most residues in TMS11 and TMS12 of FurE are not critical for PM localization and have a moderate role in transport Previous studies concerning NCS1 transporters failed to show a functional role of residues of the last two TMSs (TMS11-12) in transport kinetics or substrate specificity [ 28 , 31 ]. In FurE specifically, where we have employed several unbiased genetic screens to select functional mutants, we have never obtained any mutation located in TMS11 or TMS12 affecting FurE function. In line with this, the reported distinct crystal structures (outward-facing, substrate-occluded or inward-facing) of the Mhp1 bacterial NCS1 homologue strongly suggested that TMS11 and TMS12 do not participate in transport catalysis [ 13 , 38 – 42 ]. In order to investigate whether the last two TMSs of FurE play any role in transport activity, substrate specificity, subcellular trafficking or turnover, we constructed strains expressing triple alanine (Ala) substitutions of residues predicted to form the helices of TMS11 and TMS12 ( Figure 1A ; for details see Materials and methods). The choice of Ala substitution is based on the rationale that Ala residues conserve the hydrophobic nature of these transmembrane segments but replace specific amino acid side chains that might be important for function. The predicted structures of FurE, shown in Figure 1B , have been constructed via homology modeling with Mhp1. By comparing the outward-facing or occluded structures to the inward-facing conformation of FurE, what becomes immediately apparent is a significant distancing of TMS11-12 from the main body of the transporter (TMS1-10), also associated with a tilt of the cytosolic-facing half of TMS12, which is now exposed towards the lipid bilayer (marked in red in Figure 1 ). The significance of this observation becomes apparent later. Figure 2A shows growth tests relevant to FurE function of all triple Ala mutants and control strains. As also shown previously [ 30 , 31 ], wild-type (wt) FurE expressed via the gpdA promoter in a genetic background lacking all other major nucleobase transporters (e.g., Δ7) confers growth on allantoin or uric acid as sole nitrogen sources and leads to sensitivity to 5-fluorouracil (5FU). The ‘empty’ Δ7 isogenic strain lacking FurE expression (negative control) cannot grow on allantoin or uric acid and is resistant to 5FU. Notice that uracil, although it is also a FurE substrate, is not used as a nitrogen source in A. nidulans . Most of the triple mutations did not significantly affect FurE-dependent growth on allantoin or sensitivity to 5FU, resembling the growth phenotype of the strain expressing wt FurE. Some mutants, mostly those concerning TMS12, showed reduced growth on uric acid, which is characteristic of a lower transport capacity of FurE. Notably, two triple Ala replacements in TMS12, those affecting residues 472–474 and 484–486, led to total loss of FurE function. Substitutions of residues 448–450, which mark the end of TMS11, but also of 487–490 and 490–492, led to partial loss-of-function of the FurE transporter, evident through the growth defects exhibited by the respective strains on uric acid. Finally, the very last triple Ala mutant (496–498), which corresponds to the beginning of the cytosolic C-tail, also led to a drastic reduction in FurE function (reduced growth on allantoin, significant sensitivity to 5FU and loss of growth on uric acid).
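As a schematic illustration of the triple-Ala scanning design described above (not code used in the study), consecutive three-residue windows across the TMS11-TMS12 region can be enumerated as below; the actual window boundaries analyzed were dictated by the predicted helix limits and are therefore not perfectly uniform.
def triple_windows(start, end, step=3):
    """Yield (first, last) residue numbers for consecutive substitution windows."""
    for first in range(start, end + 1, step):
        yield first, min(first + step - 1, end)
# e.g. hypothetical uniform windows spanning part of TMS12 and the start of the C-tail
print(list(triple_windows(469, 498)))   # [(469, 471), (472, 474), ..., (496, 498)]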
To investigate whether the loss or reduction of FurE function in specific mutants is due to problematic translocation to the PM, reduced protein stability, or defective transport activity per se, we took advantage of the fact that all FurE alleles made were functionally fused at their C-terminus to a GFP epitope. Figure 2A (middle panel) shows the in vivo subcellular localization of wt and mutant versions of FurE, as followed by widefield epifluorescence microscopy. The two TMS12 mutations leading to apparently total FurE loss-of-function (472–474 and 484–486) led to retention of the transporter in the ER (notice the prominent rings, typical of nuclear ER in fungi; see also Supplementary Figure S1 ). Mutation 493–495, which led to nearly total loss of FurE function, also showed prominent ER-retention. Also, substitutions 448–450 and 490–492 led to partial ER-retention of FurE, rationalizing the growth phenotypes of the corresponding strains. In all other cases, FurE mutant versions label the PM, septa and vacuoles, similar to a correctly folded wt FurE transporter [ 30 , 31 ]. Quantification of the ratio of PM-associated to cytosolic FurE-GFP fluorescence shows that most functional mutants give a result similar to that obtained with wt FurE, suggesting that the level of FurE expression in these strains is comparable ( Figure 2A , right panel and Supplementary Figure S2 ). Only mutations concerning residues 448–450 and 490–492 showed a ∼5-fold reduced quantity of PM-associated FurE relative to the wt control, concomitant with partial ER-retention of the transporter. We performed direct uptake assays of all the mutants made to test whether the mutant versions of FurE conserve transport capacity for radiolabeled uracil (radiolabeled allantoin is not available and radiolabeled uric acid is unstable). Figure 2B shows that most mutants conserve detectable transport rates, albeit often at a reduced degree. Transport rates measured ranged from those similar to wt FurE (i.e., close to 100%, in 438–440, 466–468 and 487–489) to moderate (∼44–55%, in 441–443, 444–447, 469–471 and 496–498), low (∼15–30%, in 475–477, 478–480, 481–483) or extremely low (just detectable, <10%, in 448–450, 490–492 and 493–495). As might have been expected, the 472–474 and 484–486 Ala triple mutants, which showed ER-retention of FurE, did not exhibit detectable FurE transport capacity. Thus, uptake assays are in good agreement with growth tests and microscopy, which showed that most mutations, except those retained in the ER, could confer normal or reduced FurE-mediated growth on allantoin or uric acid and were sensitive to 5FU. Notice that for most transporters of nitrogen sources (e.g., amino acids, purines, nitrate, urea, etc.) studied in our laboratory, analogous mutations allowing transport rates >10% of the respective wt transporter are sufficient to confer detectable growth, while mutations allowing transport rates >25% of wt can support normal growth. In summary, our results showed that most of the residues in TMS11 and TMS12 are not essential for proper folding and localization of FurE to the PM and are not absolutely essential for transport. The exception concerns three amino acid triplets, namely 472–474, 484–486 and, to a lesser degree, 493–495, which seem to contain sequence-specific information essential for proper ER-exit and trafficking to the PM, and thus for function. Among these mutations, 472–474 (Ser-Trp-Leu) and 484–486 (Tyr-Tyr-Leu) include conserved aromatic residues, Trp473 and Tyr484, respectively.
Especially Tyr484 is absolutely conserved in all Fur-like transporters, being replaced by aliphatic hydrophobic residues (Met or Val) in the homologous Fcy-like subgroup of NCS1 transporters (see Figure 1A ). Noticeably also, Tyr484 is one helix turn downstream from the tilt-point (Gly481) where TMS12 ‘breaks’ during the transition from the outward-facing or occluded to the inward-facing conformation (see Figure 1B ). Thus, we decided to investigate the role of these two aromatic residues in more detail. Tyr484 is irreplaceable for proper ER-exit of FurE possibly due to interactions with specific residues in TMS3, TMS10 and TMS12 We constructed single Phe, Ser or Met replacements of Tyr484, the only well conserved residue of the triplet 484–486. The rationale for constructing and analyzing the Y484M mutation is based on the observation that Met is present in the Fcy-type subgroup of NCS1 transporters, which exhibits distinct and non-overlapping substrate specificity (see Figure 1A ). All three mutations led to inability to grow on allantoin and uric acid and resistance to 5FU ( Figure 3B , left panel). This was shown to be due to ER retention, similar to what was found with the triple Ala replacement of residues 484–486 ( Figure 3B , right panel). In line with growth tests and microscopy, direct uptake studies of radiolabeled uracil showed that Phe, Ser or Met replacements of Tyr484 led to FurE loss of function ( Figure 3C ). Thus, neither the presence of a hydroxyl group (Ser) nor of an aromatic ring (Phe) could functionally replace Tyr484. In conclusion, Tyr484 is shown to be irreplaceable for the proper function of FurE, apparently due to its essential role in ER-exit and translocation of the transporter to the PM. Defective ER-exit could be due to improper intramolecular folding, defective interaction with annular lipids, or abortive interactions with Sec24, the main membrane cargo receptor, or specific ER chaperones [ 43 ]. To investigate these possibilities, we tried to select genetic revertants suppressing the lack of growth on allantoin of strains expressing FurE-Y484F or FurE-Y484S. We failed to obtain any, after repeated trials. This might suggest that Tyr484 is involved in complex interactions, be they intramolecular or with lipids or other proteins. We tried to identify, by modeling, the location of Tyr484 with respect to other domains of the transporter or the lipid bilayer. In the outward-facing and occluded conformations Tyr484 faces the interior of FurE and seems to be at interacting distance with Phe111, located at the beginning of TMS3, and with Asp406, located towards the last turn of TMS10 ( Figure 3A ). More specifically, Tyr484 is predicted to interact via an H-bond with Asp406 and through pi-pi stacking with Phe111. Interestingly, in the inward-facing conformation, as a consequence of a tilt of half of the TMS12 helix, the side chain of Tyr484, although still oriented towards Asp406, also gains direct contacts with the lipid bilayer. Consequently, the interaction with Asp406 and Phe111 is weakened ( Figure 3A and Supplementary Figure S3 ). This local conformational change is probably related to the final step of the transport cycle. Noticeably, Tyr484 is also in close proximity, especially in the inward-facing conformation, to a series of aromatic residues at the end of TMS12, namely Tyr485, Phe488, Phe489, Trp491 and Phe493, which come into contact with the lipid bilayer ( Supplementary Figure S3 ).
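To make the modeled contacts measurable, distances such as the Tyr484-Asp406 hydrogen bond or the Tyr484-Phe111 ring stacking can be extracted from a model coordinate file. The sketch below is a hedged example using Biopython; the file name "furE_model.pdb", the chain identifier and the residue numbering are placeholders, and the published analysis relied on Prime-built models inspected in PyMOL and on MD trajectories rather than on this code.
import numpy as np
from Bio.PDB import PDBParser
parser = PDBParser(QUIET=True)
chain = parser.get_structure("FurE", "furE_model.pdb")[0]["A"]   # placeholder model/chain
# Putative H-bond: Tyr484 hydroxyl oxygen to an Asp406 carboxylate oxygen
# (subtracting two Biopython Atom objects returns their distance in angstroms).
d_hbond = chain[484]["OH"] - chain[406]["OD1"]
# Putative pi-pi stacking: distance between the Tyr484 and Phe111 ring centroids.
ring_atoms = ["CG", "CD1", "CD2", "CE1", "CE2", "CZ"]
centroid = lambda res: np.mean([res[name].coord for name in ring_atoms], axis=0)
d_stack = np.linalg.norm(centroid(chain[484]) - centroid(chain[111]))
print(f"Tyr484-Asp406 O-O: {d_hbond:.1f} A; Tyr484-Phe111 ring centroids: {d_stack:.1f} A")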
The above findings suggested that the essentiality of Tyr484 might be associated with dynamic interactions with specific residues in TMS3 and TMS10, but also with downstream aromatic residues at the end of TMS12, close to the cytoplasmic interface with the lipid bilayer. If so, we reasoned that mutations in the interacting partners of Tyr484, namely residues Phe111 and Asp406, but also Phe488, Phe489, Trp491 and Phe493, might lead to functional defects similar to those of mutations in Tyr484. Notably, Phe111 and Asp406 are nearly absolutely conserved in NCS1 transporters, while the other aromatic residues are only partially conserved in eukaryotic homologues. To further investigate the idea of a network of functional interactions of Tyr484 with Phe111 and Asp406, we constructed and analysed strains carrying the following FurE mutations: F111A, F111Y, D406A, D406E, F111Y/D406E, F111Y/Y484F, D406E/Y484F and F111Y/D406E/Y484F. Notice that the double mutation F111Y/Y484F inverts the topology of the aromatic residues involved, while some of the other mutations involve very conservative changes (e.g., F111Y or D406E). Nearly all of these amino acid substitutions scored as loss-of-function mutations associated with retention of FurE in the ER, very similar to Y484F ( Figure 3B ). The only exception was F111Y, one of the most conservative changes, which did not affect trafficking to the PM, but still led to a small defect in activity, reflected in reduced growth on allantoin and no growth on uric acid. Direct uptake assays were in excellent agreement with growth tests and microscopy ( Figure 3C ). The similarity of defects in ER-exit and FurE activity caused by mutations in Tyr484, Phe111 and Asp406 is in good agreement with the network of interactions of these residues proposed via structural modeling. Finally, notice that Ala mutations in amino acid triplets in TMS12 that include Trp491 and Phe493 also showed partial (490–492) or significant (493–495) ER retention, resembling the effect of replacing Tyr484 (see Figure 2A ). Altogether, it seems that a network of interactions between residues in TMS12 and TMS3 and/or TMS10 might indeed be necessary for proper folding of FurE, and thus essential for its proper ER-exit and trafficking to the PM. An aromatic residue at position 473 is necessary for FurE ER-exit We also constructed and analysed W473A, W473F and W473Y substitutions of the well-conserved Trp473 ( Figure 4A ; see also Figure 1A ). W473A conferred a FurE-dependent growth defect, which, similarly to the Tyr484 mutations, was shown to be related to ER-retention ( Figure 4B ). In contrast, both W473F and W473Y substitutions were functional, conferring growth on allantoin or uric acid and sensitivity to 5FU, and also showing proper localization of FurE to the PM ( Figure 4B ). In fact, W473F seems to enhance the presence of FurE protein in the PM, as the respective mutant not only shows increased PM-associated fluorescence, but also a relatively reduced fraction detected in cytosolic foci originating from endocytosis (vacuoles and endosomes), compared with wt FurE. Direct uptake measurements of radiolabeled uracil were in agreement with growth tests and microscopy, as W473A showed no transport activity, while W473F and W473Y exhibited increased (3-fold) or similar to wt transport rates, respectively (see Figure 4C ). Thus, it seems that the presence of an aromatic residue at position 473, not necessarily a Trp, is sufficient for proper ER-exit and FurE function.
Notice that an aromatic residue is conserved at this position in all Fur-like homologues. Molecular Dynamics (MD) simulations were employed to identify putative interacting residues and to determine whether Trp473 faces the membrane lipids. This showed that Trp473 lies exactly in the middle of the lipid bilayer plane and might interact with Ile87, Pro88 and principally Leu91 in TMS2, Ala400 and Val404 in TMS10, and less so with Tyr469 and Ile477 in TMS12 ( Figure 4A ). The positions corresponding to Leu91, Ala400, Val404 and Ile477 are conserved as aliphatic residues in other FurE homologues, while Tyr469 is conserved as Tyr or Phe in all Fur proteins. The predicted interactions with the aforementioned residues appear rather stable. Trp473 also makes contacts with lipid chains, as its indole moiety is oriented towards the lipid-protein interface in several structures. In both the outward- and inward-facing structures there are lipid chains parallel to the aromatic ring, but in the inward-facing structure TMS12 appears slightly tilted, exposing Trp473 further to lipid contacts ( Supplementary Figure S4 ). Based on the above observations, we mutated Leu91 and Val404 into Ala or Phe residues. The functional analysis of mutations L91A, L91F, V404A and V404F showed that none of them affects FurE localization to the PM. However, the Phe substitutions led to reduced growth on allantoin and uric acid and increased resistance to 5FU, whereas the Ala-substituted mutants behaved as wt FurE in growth tests ( Figure 4B ). Uracil uptake assays confirmed that the Ala substitutions did not affect FurE transport capacity, while the Phe substitutions led to variably reduced FurE activity ( Figure 4C ). Substitution Y469A, concerning another residue possibly interacting with Trp473, is included in the already analyzed triple Ala mutation of residues 469–471, which showed no FurE defect. Overall, unlike the case of Tyr484, we could not identify putative intramolecular partners of Trp473 critical for FurE trafficking to the PM. Thus, the molecular basis underlying the essentiality of an aromatic residue at position 473 for proper FurE folding and ER-exit remains elusive, although it might be related to dynamic interactions of TMS12 with TMS2 and/or TMS10, and probably with the membrane lipids too, as judged by MD. Evidence for oligomerization or concentrative partitioning of FurE molecules at ER-exit sites Previous reports have suggested that APC transporters might dimerize or oligomerize via their TMS11 and TMS12 domains (see Introduction). To address this issue in FurE and to better understand the role of these segments in ER-exit and transport activity, we co-expressed ER-retained mutant versions of FurE with wt FurE. Co-expression was achieved by co-transformation of the Δ7 strain with plasmids carrying different FurE alleles expressed via the gpdA promoter (for details see Materials and methods). Figure 5 shows results obtained with selected transformants co-expressing FurE-Y484F or FurE-W473A with wt FurE. In both cases, we analyzed transformants in which the GFP epitope was fused to either the ER-retained mutant or wt FurE. This allowed us to address the effect of the ER-retained version on the subcellular localization of wt FurE, and vice versa. As already discussed, neither FurE-Y484F nor FurE-W473A can confer growth on allantoin and uric acid or accumulate the toxic analogue 5FU, while the strain expressing wt FurE grows on allantoin and uric acid and is sensitive to 5FU.
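As a side note on the statement above that the predicted Trp473 contacts "appear rather stable": contact stability across an MD trajectory is commonly summarized as the fraction of frames in which the minimum inter-residue distance stays below a cutoff. The sketch below illustrates this bookkeeping with randomly generated placeholder distances and an assumed 4.5 Å cutoff; it does not reproduce the actual simulation output.

```python
# Hedged sketch: contact persistence of Trp473 with candidate partners across
# MD frames. The per-frame minimum distances are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_frames = 500
CONTACT_CUTOFF = 4.5  # angstroms, assumed heavy-atom minimum-distance cutoff

# hypothetical per-frame minimum distances (angstroms) for three partners
min_dist = {
    "Leu91":  rng.normal(3.9, 0.3, n_frames),  # mostly below cutoff -> persistent
    "Val404": rng.normal(4.2, 0.4, n_frames),
    "Tyr469": rng.normal(5.1, 0.6, n_frames),  # mostly above cutoff -> transient
}

for residue, d in min_dist.items():
    persistence = np.mean(d < CONTACT_CUTOFF)  # fraction of frames in contact
    print(f"Trp473-{residue}: contact in {persistence:.0%} of frames")
```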
Strains co-expressing ER-retained FurE-Y484F or FurE-W473A with wt FurE showed intermediate growth phenotypes ( Figure 5 , left panel ). That is, they exhibited reduced growth on allantoin, no growth on uric acid, and intermediate resistance to 5FU. Epifluorescence microscopy of the same strains rationalized the growth phenotypes obtained ( Figure 5 , right panel ). The strains co-expressing the ER-retained versions and GFP-tagged wt FurE showed partial ER retention of wt FurE-GFP (see panels with FurE-GFP/FurE-Y484F and FurE-GFP/FurE-W473A). Strains co-expressing the GFP-tagged ER-retained versions and untagged wt FurE showed no PM labeling, similar to the original ER-retained mutants. In agreement with the growth tests, direct uptake assays with radiolabeled uracil confirmed that FurE transport rates are significantly reduced in the strains co-expressing ER-retained FurE-Y484F or FurE-W473A with wt FurE ( Figure 5B ). These findings point to the idea that FurE molecules associate, by oligomerization or by partitioning into specific ER microdomains, soon after their co-translational insertion into the ER, so that misfolded versions, such as FurE-Y484F or FurE-W473A, trap a fraction of wt FurE in aggregates incapable of ER exit. Truncation of TMS11-12 abolishes oligomerization or partitioning of FurE at ER-exit sites To investigate the role of TMS11-12 in ER-exit and PM localization of FurE, we constructed and functionally analysed truncated versions of the transporter possessing either the core domains TMS1-10 (FurE-TMS1-10) or just the last two TMSs (FurE-TMS11-12). Both constructs were fused with GFP, and both were expressed via the gpdA promoter (for details see Materials and methods). First, we tested whether these two truncated versions translocate to the PM and whether FurE-TMS1-10 is functional. Figure 6A shows that FurE-TMS1-10 is totally trapped in the ER, and consequently could not confer FurE-dependent growth on allantoin or uric acid or sensitivity to 5FU, whereas FurE-TMS11-12 seems to be rapidly sorted to the vacuole for degradation. We then co-expressed GFP-untagged FurE-TMS1-10 with FurE-TMS11-12-GFP and tested whether the two parts of FurE are somehow functionally reconstituted or whether their co-expression promotes translocation to the PM. We found that ‘split’ FurE could not be functionally or cellularly reconstituted (not shown). We also tried to employ bifluorescence (BiF) assays using split YFP tags in all combinations (i.e., YFPn or YFPc N-terminally or C-terminally fused to the truncated parts of FurE), but still did not obtain any evidence of reconstituted FurE parts (not shown). Subsequently, we co-expressed FurE-TMS1-10 with wt FurE and functionally analysed the respective transformants. The results, summarized in Figure 6A , show no evidence for a dominant negative effect with respect to FurE function or localization, as was seen when using non-truncated FurE versions. To obtain further evidence for concentrative oligomerization or partitioning of FurE in ERes (ER exit sites) and the role of TMS11-12, we also employed a distinct version of FurE, unrelated to TMS11-12, that shows tight retention in the ER and consequently no transport activity. This is a FurE mutant in which N-terminal residues 30–32 (Leu-Asp-Ser) have been replaced by alanines [ 31 ] (named here FurE-Δ30-32). When full-length FurE-Δ30-32 was co-expressed with wt FurE, this led to synthetic phenotypes, essentially a diminished transport capacity for uric acid and 5-fluorouracil ( Figure 6A ).
However, when we used, in similar co-expression experiments, a truncated version of FurE-Δ30-32 lacking TMS11-12 (FurE-Δ30-32-TMS1-10), the dominant negative effect on growth was ‘lost’ ( Figure 6A ). In line with the above observations, direct uptake assays with radiolabeled uracil confirmed that FurE transport rates are significantly reduced in strains co-expressing ER-retained FurE-Δ30-32 with wt FurE, while the uptake rates of strains co-expressing FurE-TMS1-10 or FurE-Δ30-32-TMS1-10 with wt FurE remained mostly unaffected ( Figure 6B ). Our findings strongly suggest that TMS11-12 are necessary for exit from the ER through their essential role in the folding and oligomerization or partitioning of FurE in nascent ER exit sites.
DISCUSSION We showed that two specific aromatic residues in TMS12, namely Trp473 and Tyr484, are essential for ER-exit of FurE. In line with their functional importance, these residues are highly (Tyr484) or well (Trp473) conserved in FurE homologues. Structural modelling and MD provided evidence that these residues, and especially Tyr484, might change their intramolecular topology relative to specific residues in other TMSs and to aromatic residues at the end of TMS12, but also with respect to the lipid bilayer, during the transport cycle. This is a direct consequence of major topological changes that TMS12 undergoes during the transition from the outward- to the inward-facing conformation, as shown for Mhp1 [ 11 , 13 ] and predicted here by homology modelling of FurE. More specifically, in the inward-facing conformation, TMS12 is shown to move away from the 5+5-fold core of the transporter and be directed towards annular lipids. Given the positioning of Trp473 in the middle of the bilayer and of Tyr484 at the interface of the transporter with lipids, the associated lack of proper ER-exit in the respective mutants suggests a structural defect in FurE folding due to modified intramolecular interactions and altered association with membrane lipids. In the case of Tyr484, we provided in silico evidence, supported by genetics, that this defect might be due to modified interactions of Tyr484 with two specific and highly conserved residues in TMS10 (Asp406) and TMS3 (Phe111), as well as with downstream aromatic residues in TMS12 (mainly Phe493). Previously studied mutations in the N-terminal part of TMS10 (named TMS10a) have been shown to affect the function or specificity of FurE, albeit without affecting ER-exit (e.g., mutations in M389) [ 28 ]. Asp406, shown here to affect ER-exit, is located in the less flexible C-terminal part of TMS10 (named TMS10b). MD of Mhp1 has shown that the other half of TMS10 (TMS10a) is a dynamic gate able to occlude access to the major substrate binding site via local tilting in the middle of TMS10 [ 41 ]. It is thus probable that the interaction of Tyr484 (TMS12) with Asp406 in TMS10b is a structural interaction necessary for proper FurE folding, but this interaction seems to also affect gating and transport via stabilization of the neighbouring dynamic movements of TMS10a. In conclusion, we have identified a putative dynamic network of structural interactions necessary for proper FurE folding and thus for ER-exit. Our findings further show that specific TMS12 residues are involved principally in structural intramolecular interactions crucial for folding, proper ER-exit and localization to the PM, rather than being directly essential for substrate recognition and transport catalysis. Given the high conservation of the residues involved in the network of interactions of TMS12, we predict that the ‘intramolecular chaperoning’ role of TMS12 revealed herein might extend to all NCS1 or structurally similar APC-type transporters. In addition, the finding that the side-chain identity of most residues in TMS11 and TMS12, except Trp473 and Tyr484, is of little importance for ER-exit further suggests that the packing of these helices with the 5+5 TMS core of the FurE transporter occurs via hydrophobic interactions of helical backbones, which are apparently strengthened by the specific interactions involving Trp473 and Tyr484.
In our opinion, an original finding of this work is the discovery that co-expression of wt and ER-retained FurE leads to a synthetic dominant negative phenotype, which can be best explained by FurE oligomerization or partitioning in common ERes, and that this phenomenon is dependent on TMS11 and TMS12. We exclude the possibility that overexpression of ER-retained versions of FurE in some transformants causes a general defect in cargo trafficking, as we did not observe any growth defects in the respective strains in any of the growth media tested, other than those revealing a defect in FurE activity (e.g., growth on allantoin or uric acid or sensitivity to 5FU; see Supplementary Figure S5 ). The observed inter-molecular interactions of FurE versions might be explained by tight oligomerization, as is the case in other transporters (e.g., the UapA uric acid-xanthine transporter of A. nidulans ; [ 44 ]). However, we failed to obtain any evidence of FurE oligomerization at the ER or the PM, using a BiF approach or blue native gel electrophoresis (not shown). This lack of evidence for in vivo oligomerization also holds for other NCS1 transporters and several APC transporters. An alternative explanation of the observed dominant negative or positive phenotypes is that FurE molecules, soon after their co-translational insertion into the ER membrane, partition laterally into common microdomains, which associate with ERes. As a consequence, distinct FurE versions are concentratively packaged into common COPII vesicles. Concentrative ER-exit of membrane cargoes has been previously reported, suggesting that a multimeric sorting ‘code’ drives selectivity in cargo sorting [ 45 ]. Why are FurE mutants, such as those carrying mutations Y484F or W473A, unable to exit the ER when expressed by themselves? We believe these are partially misfolded FurE versions that do not provide the proper multimeric structure to partition properly in ERes. Misfolding of these mutants seems only partial, as, at least in the case of Y484F and W473A, the respective FurE versions exhibit extremely low but still measurable transport in some transformants (not shown). This hypothesis suggests that properly folded and misfolded FurE molecules are capable of self-associating and partitioning into common microdomains, probably prior to sorting to ERes. In this case, there might be an intrinsic propensity for self-association or loose oligomerization of FurE molecules immediately after their biogenesis. Alternatively, an ER-associated adaptor protein might exist that specifically recognizes nascent FurE molecules and promotes their association into a common microdomain or into ERes. Such cargo-specific ER adaptors for membrane cargoes exist, with the Saccharomyces cerevisiae Erv14 being the most extensively studied [ 46 – 48 ]. Mutants unable to exit the ER might then not be recognized efficiently by this adaptor, or their association with the adaptor might be inadequate for proper exit from ERes. An important finding related to the synthetic dominant negative phenotypes observed and the hypothesized association/oligomerization of FurE molecules in specific microdomains in the ER is the essential role of TMS11 and TMS12. We showed that these helices are essential for self-association of FurE molecules and/or partitioning to ERes.
It is thus most reasonable to suggest that truncation of these helices leads to an unfolded FurE version, which, unlike versions carrying missense mutations in TMS3 (Phe111), TMS10 (Asp406), TMS12 (Trp473, Tyr484) or Δ30-32, cannot associate with co-expressed wt FurE molecules and thus fails to partition into ERes. This strengthens the notion that TMS11-12 function as an “intramolecular chaperone” essential for proper FurE folding, which in turn provides a structural code for FurE association, concentrative ER-exit and trafficking to the PM. As already discussed in the Introduction, there is in vitro and in vivo evidence that several members of the APC superfamily oligomerize, and in some cases oligomerization was shown to depend on TMS11 and TMS12. For example, in AdiC, which has been shown to form homodimers in vitro and possibly in vivo , the homodimer interface is formed by non-polar amino acids from TMS11 and TMS12, where residues of TMS11 from one monomer interdigitate with residues of TMS12 from the other monomer. Further interactions between the two monomers are mediated by the loops between TMS2 and TMS3, the cytoplasmic ends of TMS2 and TMS3, the cytoplasmic halves of TMS10, and the C-termini, the latter embracing the neighbouring monomer [ 15 ]. However, given that monomers of AdiC have been reported to be transport-active, the role of AdiC oligomerization remains unknown [ 15 , 17 ]. Similarly, hSERT protomers have been shown to form oligomers via TMS11 and TMS12 [ 26 ]. Oligomeric states have also been proposed for rGAT1 and glycine transporters [ 24 , 25 ]. Finally, the LeuT dimeric structure has also been shown to involve interactions of TMS9 and TMS12, and possibly TMS11 [ 18 , 19 ]. In conclusion, the findings presented in this work, concerning the essential role of TMS11 and TMS12 in FurE folding, self-association and/or concentrative sorting to nascent ERes, might well extend to other NCS1 and similar transporters of the APC superfamily.
Conflict of Interest: The authors declare no conflict of interest. Please cite this article as: Yiannis Pyrris, Georgia F. Papadaki, Emmanuel Mikros and George Diallinas (2024). The last two transmembrane helices in the APC-type FurE transporter act as an intramolecular chaperone essential for concentrative ER-exit. Microbial Cell 11: 1-15. doi: 10.15698/mic2024.01.811 FurE is a H+ symporter specific for the cellular uptake of uric acid, allantoin, uracil, and toxic nucleobase analogues in the fungus Aspergillus nidulans. Being a member of the NCS1 protein family, FurE is structurally related to the APC superfamily of transporters. APC-type transporters are characterised by a 5+5 inverted repeat fold made of ten transmembrane segments (TMS1-10) and function through the rocking-bundle mechanism. Most APC-type transporters possess two extra C-terminal TMS segments (TMS11-12), the function of which remains elusive. Here we present a systematic mutational analysis of TMS11-12 of FurE and show that two specific aromatic residues in TMS12, Trp473 and Tyr484, are essential for ER-exit and trafficking to the plasma membrane (PM). Molecular modeling shows that Trp473 and Tyr484 might be essential through dynamic interactions with residues in TMS2 (Leu91), TMS3 (Phe111), TMS10 (Val404, Asp406) and other aromatic residues in TMS12. Genetic analysis confirms the essential role of Phe111, Asp406 and TMS12 aromatic residues in FurE ER-exit. We further show that co-expression of FurE-Y484F or FurE-W473A with wild-type FurE leads to a dominant negative phenotype, compatible with the concept that FurE molecules oligomerize or partition in specific microdomains to achieve concentrative ER-exit and traffic to the PM. Importantly, truncated FurE versions lacking TMS11-12 are unable to reproduce a negative effect on the trafficking of co-expressed wild-type FurE. Overall, we show that TMS11-12 acts as an intramolecular chaperone for proper FurE folding, which seems to provide a structural code for FurE partitioning at ER-exit sites.
SUPPLEMENTAL MATERIAL All supplemental data for this article are available online at www.microbialcell.com/researcharticles/2023a-pyrris-microbial-cell/ .
Abbreviations: 5FU – 5-fluorouracil, APC – amino acid-polyamine-organocation, BiF – bifluorescence, MD – molecular dynamics, PM – plasma membrane, TMS – transmembrane segment, wt – wild-type
CC BY
no
2024-01-15 23:43:50
Microb Cell.; 11:1-15
oa_package/33/68/PMC10788122.tar.gz
PMC10788123
38222150
Introduction Status epilepticus (SE) is defined as five minutes or more of continuous clinical and/or electrographic seizure activity, or recurrent seizure activity without recovery of consciousness between seizures. It is a common, life-threatening neurologic disorder with high morbidity and mortality rates [ 1 ]. Its consequences can include alterations of the neuronal network or death; its incidence ranges from 10 to 40 cases per 100,000, with mortality ranging from 7.6% to 39% [ 2 ]. Rapid management is a priority for improving patient outcomes, while etiologic investigation can be challenging in people with SE. In Morocco, most people with epilepsy have no access to treatment, and traditional customs lead to very late patient management. Data on SE are lacking, and recognizing the burden of SE morbidity, we conducted a single-center retrospective study. The objective was to analyze the clinical characteristics of SE and its etiology. We also report mortality rates and risk factors.
Materials and methods Study design We conducted a single-center retrospective study in the A1 intensive care unit at the Hassan II University Hospital of Fez in Morocco. Three years of study data were obtained, from January 2019 to December 2021. We included patients with SE according to defined inclusion and exclusion criteria. Inclusion and exclusion criteria for the study population Adult patients (age > 17 years) admitted to the intensive care unit with a diagnosis of SE were included in the study, and data were obtained from medical records. Pregnant and lactating women, children, and patients with incomplete clinical data were excluded. Definition of variables All pre- and in-hospital records of SE patients were reviewed and data were collected using standardized forms. SE was defined according to the latest criteria of the International League Against Epilepsy (ILAE): a convulsive seizure lasting 5 minutes or more, or nonconvulsive status with impaired consciousness lasting longer than 10 minutes, while refractory status epilepticus (RSE) was defined as seizures persisting after the failure of a sufficient benzodiazepine dose as a first-line treatment and an antiseizure medication (ASM) as a second-line treatment, irrespective of time [ 3 ]. Baseline variables These included gender, age, family history of epilepsy, SE type, SE duration, symptomatology, electroencephalogram (EEG), neuroimaging, cerebrospinal fluid results, etiology of seizures, treatment, complications, length of hospital stay, and outcome. SE duration The end of SE was defined clinically, or as cessation of seizure activity on EEG for patients with nonconvulsive status or under pharmacological sedation [ 3 ]; continuous EEG monitoring was not available, so an EEG examination was performed to confirm the cessation of SE. Neuroimaging This was performed in all cases on admission and was repeated according to clinical evolution. Etiologies of seizures The etiology was classified as unknown if no cause of SE could be identified. Management of SE According to our hospital’s management protocol, anti-seizure medication (ASM) was administered per the national guidelines, with administration of benzodiazepines in the initial phase (diazepam or midazolam), followed by intravenous phenobarbital as a second line of treatment if SE did not resolve. Patients with refractory SE required sedation. Statistical analysis Data were analyzed using SPSS Statistics software version 22.0 (IBM Corp., Armonk, NY, USA). Frequencies were calculated using descriptive statistics, means with standard deviations were used for continuous variables, and univariate comparisons of proportions were performed using the chi-square test. P values <0.05 were considered statistically significant. Predictive factors of mortality were studied using a multivariate logistic regression model.
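For illustration only, the univariate chi-square comparisons and the multivariate logistic regression described above could be reproduced in Python roughly as follows. The original analysis was run in SPSS; the table and predictors below are invented toy values, not the study data, and the variable names are assumptions made for the sketch.

```python
# Hedged sketch: univariate chi-square test plus multivariate logistic regression
# on a simulated mortality outcome. All data here are synthetic placeholders.
import numpy as np
from scipy.stats import chi2_contingency
import statsmodels.api as sm

rng = np.random.default_rng(42)

# hypothetical 2x2 table: rows = pneumonia yes/no, columns = died / survived
table = np.array([[30, 6],
                  [1, 45]])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4g}")   # p < 0.05 -> significant

# hypothetical predictors for a multivariate model of in-hospital mortality
n = 82
X = np.column_stack([
    rng.integers(0, 2, n),     # history of epilepsy (0/1), assumed coding
    rng.integers(0, 2, n),     # pneumonia (0/1), assumed coding
    rng.normal(12, 5, n),      # APACHE II score, assumed scale
])
true_logit = -3 + 1.2 * X[:, 0] + 2.0 * X[:, 1] + 0.15 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))     # simulated outcome

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(np.exp(model.params))    # odds ratios (intercept + three predictors)
```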
Results Overall, 82 patients with SE were admitted to the ICU from January 2019 to December 2021 and were included in the study. Patient characteristics Patients were aged 18 to 95 years, with a mean of 39.5 years, and included 50 males (61%) and 32 females (39%). Seventy-two percent (72%) of patients (N: 59) presented with de-novo SE, and 27.7% of patients (N: 23) had a history of epilepsy, of whom 40% (N: 9) were receiving regular antiepileptic drugs (AED) and 60% (N: 14) had poor therapeutic adherence. Sixteen patients (18%) had a history of brain injury. The predominant semiology was convulsive SE (93%, N: 77), and 7 patients had non-convulsive SE. EEG was performed in only 45 patients (54%). All patients underwent head CT scans, which showed abnormalities in 45 patients. MRI was performed in 48 patients and showed abnormalities in 37 cases. Lumbar puncture was performed in 65 cases, and cerebrospinal fluid culture tested positive in 5 cases. Metabolic abnormalities noted at admission were hypernatremia, hyponatremia, and kidney failure. Causes of SE Epilepsy of unknown cause was the most common diagnosis (41.2%, N: 34). The cause of SE in 19 patients was a reduced seizure threshold. The most common known etiology was acute/subacute cerebrovascular events (12 patients, 14.4%), followed by primary tumors of the CNS (8 patients, 9.6%), metabolic abnormalities (8 patients, 9.6%), cerebral venous thrombosis (7 patients, 8%), and encephalitis (7 patients, 8%). The range of etiologies of super-refractory status epilepticus (SRSE) is illustrated in Figure 1 . Drugs used to control SE All patients received benzodiazepines (diazepam or midazolam) as the first line of treatment, 96.4% of them received phenobarbital as the second line of treatment, and 65 patients had refractory SE requiring anesthesia. Patient characteristics are listed in Table 1 . Outcomes Fifty-two patients developed at least one complication, the most common being bloodstream infection. Thirty-one patients (38%) died; mortality was attributed to cardiovascular and infectious complications in most cases, and the median time to death was 15.23 days. Mortality risk factors Acute Physiology and Chronic Health Evaluation (APACHE) II score ≥10 (p=0.0001), ischemic stroke as an etiology of SE (80% vs 66.2%, P=0.048), history of epilepsy (93% vs 66%, P=0.005), poor therapeutic adherence (100% vs 72%, P=0.001), cardiovascular complications (90% vs 43%, P=0.0001), presence of multiple complications (P=0.0001), pneumonia (96.7% vs 11.7%, P=0.0001), and recurrence of SE (70.9% vs 58%, P=0.050) were the variables significantly associated with mortality. However, there were no significant differences in mortality in relation to the type of SE (convulsive SE, P=0.419; refractory SE, P=0.173) or tracheal intubation (P=0.321) (Table 2 ).
Discussion Our study analyzed the clinical characteristics, etiologies, outcomes, and factors associated with mortality in patients with SE in a Moroccan center. Demographic and clinical characteristics The median age was 39.5 years, whereas several studies have reported a higher incidence of SE in elderly people; 61% of patients were male, which is consistent with previous studies [ 4 - 5 ]. The proportion of SE patients with a history of epilepsy is up to 40-50% in several studies [ 6 - 7 ]. Only 27.7% of our patients had a history of epilepsy, illustrating the high rate of de-novo SE in our study, with the possibility that misdiagnosed epilepsy remains abnormally frequent in our region [ 8 ]. Most SE was convulsive (93%, N: 76) in our study; only 7% of SE was non-convulsive, consistent with other series [ 4 , 5 , 7 ]. Convulsive SE is usually easy to diagnose and EEG is not required for the initial diagnosis; however, the possibility of progression to pauci-symptomatic SE justifies daily EEG monitoring until recovery of consciousness or refractory SE [ 8 - 9 ]. In our context, access to EEG monitoring is limited, leading to an underestimated incidence of nonconvulsive SE and an increased duration of sedation in refractory SE. Causes of SE In our study, we found a high rate of unknown etiology (41.2%), considering that MRI was not always accessible in our context, particularly for unstable patients, followed by patients with an acute symptomatic etiology; cerebrovascular events were the main known cause (14.4%), in accordance with several studies [ 10 - 11 ]. It is worth mentioning that, recently, autoimmune/paraneoplastic encephalitis has become one of the best-recognized causes of SE [ 12 - 13 ]; it probably went undiagnosed and untreated in our study because anti-neuronal autoantibody testing is not yet available nationally. Treatment In our study, all patients received benzodiazepines, and 96.4% received phenobarbital as the second line to control seizure activity. Intravenous levetiracetam, phenytoin, and valproic acid, recommended for second-line treatment, are not available in our country. Sixty-five patients (79%) required anesthesia, maintained for at least one to two days to achieve burst suppression. Midazolam and propofol were the main anesthetic drugs; a recent international cohort study showed that propofol and midazolam are equivalently efficacious for refractory SE [ 14 ]. We reserved ketamine, in association with other anesthetics, for SRSE. Publications on the efficacy of ketamine in SE are heterogeneous, but ketamine appears to have a promising outlook for refractory SE and SRSE. Larger randomized prospective studies should clarify its place in controlling seizures [ 15 ]. Outcomes In contrast to several studies of SE that found mortality rates of approximately 10-20%, the outcome of patients with SE in our study was poor, with 38% mortality; this could be explained by the lack of prehospital emergency care and specialized centers, limited access to EEG monitoring, and limited diagnostic facilities. Several studies have shown that delay in treatment and a longer duration of SE contribute to poor clinical outcomes [ 16 - 17 ]. Mortality risk factors A history of epilepsy, ischemic stroke as an etiology, poor therapeutic adherence, the presence of complications, and recurrence of SE were associated with poor prognosis.
This is compatible with the findings of other researchers, who also identify patient age and duration of SE as risk factors for mortality [ 18 - 20 ]. Furthermore, studies have shown that in patients with ischemic stroke or anoxic brain injury, the occurrence of SE is an independent predictor of mortality [ 21 - 22 ]. Immediate ICU admission of patients at high risk of mortality should improve their prognosis.
Conclusions This study elucidated the clinical characteristics, etiologies, management, and outcomes of SE in our hospital. The patients in our study were young, with a high rate of de-novo SE. While cerebrovascular events were the most common known cause, the most frequent diagnosis in our study was SE of unknown etiology, likely due in part to undiagnosed autoimmune encephalitis. The majority of SE patients in our study were managed with benzodiazepines and phenobarbital; patients who required anesthesia received midazolam or propofol. The mortality rate in patients with SE remained high in our study. Rapid determination of the causative etiology and initiation of therapy, together with improved prehospital emergency care and elective ICU admission for patients at high risk, could decrease the mortality rate. In our study, a history of epilepsy, ischemic stroke as an etiology, poor therapeutic adherence, presence of complications, and recurrence of SE were associated with a poor prognosis.
Background Status epilepticus (SE) is a common neurologic emergency with high rates of mortality and morbidity. Objective To analyze the clinical characteristics, causes, management, and outcomes of patients with SE in a tertiary care hospital in Morocco. Methods A retrospective study was conducted from January 2019 to December 2021, including all patients admitted to the medico-surgical general intensive care unit (ICU) with a diagnosis of SE. We recorded demographic characteristics, SE clinical history, management, causes, and discharge outcomes. Results Overall, 82 patients with SE were included; the median age was 39.5 years (18-95), 61% of the patients were male, the predominant semiology was convulsive SE (93%, N: 77), epilepsy of unknown cause was the most common diagnosis (41.2%, N: 34), and the most common known etiology was acute/subacute cerebrovascular events (12 patients, 14.4%). All patients received benzodiazepines, 96.4% of them received phenobarbital as a second line of treatment, 65 patients required anesthesia, and 52 patients developed at least one complication, the most common being systemic infection; the mortality rate was 38% among patients with SE (N: 31). In this study, the factors associated with mortality were ischemic stroke as an etiology of SE (p=0.048), history of epilepsy (p=0.005), poor therapeutic adherence (p=0.001), cardiovascular complications, presence of multiple complications (p=0.0001), pneumonia (p=0.0001), and recurrence of SE (p=0.050). Conclusions We provide a single-center retrospective analysis of admissions for SE and note that mortality among SE patients is high in our setting. Improving prehospital emergency care and implementing elective ICU admission for patients at high risk could reduce the mortality rate.
CC BY
no
2024-01-15 23:43:50
Cureus.; 15(12):e50591
oa_package/95/4a/PMC10788123.tar.gz